id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2303.08891 | ViTO: Vision Transformer-Operator | We combine vision transformers with operator learning to solve diverse
inverse problems described by partial differential equations (PDEs). Our
approach, named ViTO, combines a U-Net based architecture with a vision
transformer. We apply ViTO to solve inverse PDE problems of increasing
complexity, namely for the wave equation, the Navier-Stokes equations and the
Darcy equation. We focus on the more challenging case of super-resolution,
where the input dataset for the inverse problem is at a significantly coarser
resolution than the output. The results we obtain are comparable to or exceed the
leading operator network benchmarks in terms of accuracy. Furthermore, ViTO's
architecture has a small number of trainable parameters (less than 10% of the
leading competitor), resulting in a performance speed-up of over 5x when
averaged over the various test cases. | Oded Ovadia, Adar Kahana, Panos Stinis, Eli Turkel, George Em Karniadakis | 2023-03-15T19:24:14Z | http://arxiv.org/abs/2303.08891v1 | # ViTO: Vision Transformer-Operator
###### Abstract
We combine vision transformers with operator learning to solve diverse inverse problems described by partial differential equations (PDEs). Our approach, named ViTO, combines a U-Net based architecture with a vision transformer. We apply ViTO to solve inverse PDE problems of increasing complexity, namely for the wave equation, the Navier-Stokes equations and the Darcy equation. We focus on the more challenging case of super-resolution, where the input dataset for the inverse problem is at a significantly coarser resolution than the output. The results we obtain are comparable to or exceed the leading operator network benchmarks in terms of accuracy. Furthermore, ViTO's architecture has a small number of trainable parameters (less than 10% of the leading competitor), resulting in a performance speed-up of over 5x when averaged over the various test cases.
Deep learning Vision transformers Scientific machine learning Inverse problems Super-resolution
## 1 Introduction
Operator learning refers to training neural networks to represent mappings between families of functions. For example, if we want to infer the acoustic wave pressure in the ocean for each and every initial source, we can define the operator as the mapping from the initial source (a function, the initial pressure at every point of the domain) to the pressures at a later time (also a function, the future pressures at every point of the domain). The main advantage of operator learning is that, after the operator has been learned, no further training is needed and the solution, e.g., of a partial differential equation (PDE), which can be very expensive to compute using classic methods, can simply be inferred (estimated) by the network with negligible computational cost in real-time. The first operator learning method, the Deep Operator Network (DeepONet), was introduced by Lu _et al._ [1]. DeepONet is composed of a branch and a trunk; the branch learns the input function space of the operator, while the trunk learns the space of functions onto which the output is projected. Multiplying the branch with the trunk provides a representation of the output function space. Another popular approach is the Fourier Neural Operator [2, 3], which replaces the kernel integral operator with a convolution operator defined in Fourier space by employing a fast Fourier transform on the input space. It uses a ResNet but does not have a trunk net.
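To make the branch-trunk structure concrete, the following minimal PyTorch sketch shows how a DeepONet-style forward pass could look. All layer sizes, names, and the choice of small MLPs here are illustrative assumptions, not the exact architecture of [1]:

```python
import torch
import torch.nn as nn

class TinyDeepONet(nn.Module):
    """Minimal DeepONet-style sketch: the branch encodes the input function
    sampled at m fixed sensor points, the trunk encodes a query coordinate,
    and the output is their dot product."""
    def __init__(self, m_sensors=100, coord_dim=2, p_latent=64):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(m_sensors, 128), nn.Tanh(), nn.Linear(128, p_latent))
        self.trunk = nn.Sequential(
            nn.Linear(coord_dim, 128), nn.Tanh(), nn.Linear(128, p_latent))

    def forward(self, v_samples, xi):
        # v_samples: (batch, m_sensors) -- input function at sensor points
        # xi:        (batch, coord_dim) -- coordinate where output is queried
        b = self.branch(v_samples)      # (batch, p_latent)
        t = self.trunk(xi)              # (batch, p_latent)
        return (b * t).sum(dim=-1)      # (batch,) -- approximates G(v)(xi)

net = TinyDeepONet()
u_hat = net(torch.randn(8, 100), torch.rand(8, 2))  # 8 pointwise queries
```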
When modeling a physical system/experiment we usually find one of two scenarios. The first scenario is the _forward problem_, when given a set of conditions, one attempts to simulate the physical process. For example, given an initial
source, find the acoustic wave amplitude in the ocean after some time. The second scenario is the opposite one, called the _inverse problem_: given the state of the physical experiment, find the causal condition that led to that state. For example, given measurements of the acoustic pressures in the ocean at some time instant, find the source that emitted the acoustic signal. The inverse problem is often considered more challenging since one has access to limited data (for example, recordings at a small set of sensors only), the data may be noisy and of low resolution, some recordings may be missing, etc. Inverse problems are often ill-posed, meaning a solution does not necessarily exist and, when it does, it is not necessarily unique. In this work, we focus on inverse problems given _relatively sparse_ data sets.
A recent innovation in the field of deep learning is the so-called transformer [4], a deep neural network built around an attention mechanism. The attention module attempts to understand context from the given input. To learn the context, the attention mechanism operates on a discrete embedding of the data that is composed of tokens. An immediate example is Natural Language Processing (NLP) using transformers, where a sentence is embedded using tokens according to a specific vocabulary. A challenge that arises when using transformers is handling non-discrete data. For example, instead of sentences (sequences of words), we would like to embed a continuous function or signal. The literature offers methods to achieve this, as introduced in the following section. One prominent method, explored in the current work, is the Vision Transformer (ViT) [5]. A vision transformer receives an image as input, for example the initial condition of a system (in the forward problem), or a future state of a system (in the inverse problem). Then, the image is split into small regions, often referred to as patches, and each region acts as a token. The vision transformer extracts the context from the tokens (regions of the image), and is thus able to utilize the attention mechanism for the continuous signal and make accurate predictions. In the original ViT paper [5], the authors demonstrated how ViT outperforms the state-of-the-art (SOTA) methods for image classification on benchmarks including ImageNet [6], CIFAR [7], Oxford pets and flowers [8], and VTAB [9]. In addition, the benchmark ViT model is up to four times more efficient than the SOTA methods used as reference. In [10], Okolo _et al._ used a ViT for X-ray image classification, while in [11] it was used for unsupervised volumetric medical image registration. ViTs are currently being adopted in various areas of research.
In the current work, we are interested in using transformers for operator learning. There have been some recent attempts to use various types of transformers for operator learning [12; 13; 14; 15; 16]. For example, in [12], Li _et al._ use transformers to approximate _forward_ solutions of PDEs with operator learning. They propose an innovative way of choosing the collocation points and iterate through time to find the solution at those points. The attention mechanism is split in such a way that the query, key, and value are handled in different parts of the forward pass, each implemented as a multi-layer perceptron (MLP). The latent encoding, which is the outcome of the combination of the three components, is the embedding of the coordinates used for the spatio-temporal input of the PDE. In [15], Cao presents a method to combine FNO with attention to improve the performance for PDE solutions, by replacing the softmax (often used in transformers, especially in classification) with a linear variant that does not involve normalization.
Here, we introduce a novel way to perform operator learning using vision transformers combined with a U-Net [17; 18] based architecture to design the Vision Transformer-Operator, or ViTO. We apply ViTO to solve inverse problems of increasing complexity, obtaining the solution at high resolution using only sparse and low-resolution data. Compared to SOTA results for operator learning, our current results exceed the leading operator network benchmarks in terms of accuracy, and they are also obtained with a significant speedup.
The paper is organized as follows. Section 2 presents the proposed methodology. Section 3 presents numerical results for a collection of inverse problems of increasing complexity. Section 4 offers a discussion of the results and directions for future work.
## 2 Methodology
### Mathematical formulation of operator learning
We first present the general problem formulation of operator learning for PDEs before focusing on the particular case of inverse problems. We follow the DeepONet theory and notation, as given by [1; 19].
#### 2.1.1 PDE operators
Typically, when tackling a forward PDE problem, our objective is to determine the solution to the PDE. Thus, we aim to approximate the PDE solution by utilizing a set of parameters that describe the PDE problem setup. Such parameters include initial and boundary conditions, forcing terms, and other physical characteristics that may vary between different PDEs. Hence, forward PDE problems can be formulated as a mapping from an input function that corresponds to these parameters to an output function representing the solution.
Mathematically, let \(v\) denote the input function defined on some physical domain \(D\subset\mathbb{R}^{d}\) and \(u\) denote the corresponding output function defined on the physical domain \(D^{\prime}\subset\mathbb{R}^{d^{\prime}}\):
\[v:D\ni x\longmapsto v(x)\in\mathbb{R},\] \[u:D^{\prime}\ni\xi\longmapsto u(\xi)\in\mathbb{R}.\]
Let \(\mathcal{V}\) and \(\mathcal{U}\) be the spaces of the functions \(v\) and \(u\), respectively. Then, the mapping from \(v\) to \(u\) is defined by an operator \(\mathcal{G}\):
\[\mathcal{G}:\mathcal{V}\ni v\longmapsto u\in\mathcal{U}.\]
This operator describes the forward problem. For example, in many applications \(v\) is the initial condition of the PDE and \(u\) is its solution at some final time. However, in this work we are interested in the inverse problem. So, the operator corresponding to the inverse problem is of the form:
\[\tilde{\mathcal{G}}:\mathcal{U}\ni u\longmapsto v\in\mathcal{V}. \tag{1}\]
Continuing the previous example, the relevant inverse problem would be to retrieve the initial condition of the PDE given a snapshot of its solution.
We note that in many cases, the inverse operator relates to an ill-posed problem [20]. This type of problem is generally considered more challenging, particularly when dealing with incomplete or noisy data.
#### 2.1.2 Super-resolution
For most applications, it is impossible to get the full analytical solution of a PDE. In some cases, it is even hard to get a discrete approximation of the solution on a fine mesh due to computational, physical, or experimental difficulties. This is especially common in the domain of inverse problems, where the input function is often derived from sensor measurements of physical phenomena. These considerations often lead to a low-resolution mesh for the discrete approximation.
Low-resolution data is challenging to use. The main goal of Super-Resolution (SR) methods is to produce high-resolution accurate results given low-resolution input data. In this work, we do not treat the SR aspect as a separate problem. Instead, we combine it with the inverse operator defined in the previous section 2.1.1 to form a unified inverse-SR framework. Using the same notation as before, instead of getting functions \(u\in\mathcal{U},v\in\mathcal{V}\), we get discrete approximations of these functions on meshes. However, \(u\in\mathcal{U}\) is discretized using a much coarser mesh in comparison to \(v\in\mathcal{V}\). For example, the input might be a low-resolution snapshot of a PDE solution, while the desired output would be a high resolution discretization of the initial condition.
### Data driven formulation of operator learning
The goal is to approximate the operator \(\tilde{\mathcal{G}}\) in (1), when the input is at low resolution and the output is at high resolution, using a ViT-based neural network. We define a dataset, where each sample is composed of pairs of discretized functions: \(\mathcal{T}=\{(\mathrm{u}^{(1)},\mathrm{v}^{(1)}),(\mathrm{u}^{(2)},\mathrm{v}^{(2)}),\ldots,(\mathrm{u}^{(N)},\mathrm{v}^{(N)})\}\) such that \(\forall n,\mathrm{u}^{(n)}\) and \(\mathrm{v}^{(n)}\) are the projections of functions \(u^{(n)}\in\mathcal{U}\) and \(v^{(n)}\in\mathcal{V}\) onto discrete meshes \(\mathcal{M}_{u}\) and \(\mathcal{M}_{v}\), respectively. We let \(\mathcal{M}_{u}\) and \(\mathcal{M}_{v}\) remain constant for all samples, and assume that the discretization is equispaced. For ease of notation and without loss of generality, we assume that the domain coordinates are positive. Then, in the two-dimensional case, which is used for all of the numerical experiments, these meshes are written as:
\[\mathcal{M}_{u}=\{(i\Delta_{x,u},\ j\Delta_{y,u})\ |\ i,j\in \mathbb{N},i\leq N_{x,u},j\leq N_{y,u}\}\] \[\mathcal{M}_{v}=\{(i\Delta_{x,v},\ j\Delta_{y,v})\ |\ i,j\in \mathbb{N},i\leq N_{x,v},j\leq N_{y,v}\},\]
where \(\Delta_{x,\cdot},\Delta_{y,\cdot},N_{x,\cdot},N_{y,\cdot}\) determine the resolution of the discretization. We use \(\#\mathcal{M}_{u}=N_{x,u}\cdot N_{y,u}\) to denote the number of points in the set \(\mathcal{M}_{u}\).
If \(\mathcal{M}_{u}\) and \(\mathcal{M}_{v}\) have the same number of points (\(\#\mathcal{M}_{u}=\#\mathcal{M}_{v}\)), SR is not performed, and instead, we are describing a standard inverse problem. However, as highlighted in the previous section 2.1.2, the focus is on scenarios where the input grid is significantly coarser than the output grid, i.e., \(\#\mathcal{M}_{u}\ll\#\mathcal{M}_{v}\). To achieve this, we can choose
a super-resolution factor \(s>1\) that specifies the relationship between the discretizations. We choose \(N_{\cdot,v}\) such that \(N_{x,v}=sN_{x,u}\) and \(N_{y,v}=sN_{y,u}\) to attain a SR factor of \(s\). This results in the output grid size becoming \(\#\mathcal{M}_{v}=s^{2}N_{x,u}N_{y,u}\). For large values of \(s\), this renders the problem considerably more challenging.
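As a sketch of the mesh definitions and the SR factor above, the following NumPy snippet builds a pair of equispaced meshes related by a hypothetical factor \(s=8\); the concrete grid sizes and spacings are assumptions for illustration:

```python
import numpy as np

def make_mesh(dx, dy, nx, ny):
    """Equispaced 2-D mesh {(i*dx, j*dy) : 1 <= i <= nx, 1 <= j <= ny},
    following the definition of M_u and M_v in the text."""
    xs = dx * np.arange(1, nx + 1)
    ys = dy * np.arange(1, ny + 1)
    return np.meshgrid(xs, ys, indexing="ij")   # two (nx, ny) arrays

s = 8                                            # super-resolution factor
nx_u, ny_u = 16, 16                              # assumed coarse input grid
Xu, Yu = make_mesh(1.0 / nx_u, 1.0 / ny_u, nx_u, ny_u)
Xv, Yv = make_mesh(1.0 / (s * nx_u), 1.0 / (s * ny_u), s * nx_u, s * ny_u)
assert Xv.size == s**2 * Xu.size                 # #M_v = s^2 * #M_u
```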
### Network architecture
A modified version of the TransUNet [18] architecture is employed in this study, wherein a U-Net [17] backbone is integrated with a ViT (see Figure 1). The U-Net model comprises an encoder-decoder structure with interconnecting skip connections. The U-Net architecture has emerged as a powerful technique in the computer vision field, particularly in the realm of segmentation problems. Given the image-to-image nature of the data-driven problem outlined in 2.2, utilizing segmentation tools is a natural choice.
The network has two inputs: the observed solution of the PDE \(u\), and its corresponding numerical grid \(\mathcal{M}_{u}\). In the two-dimensional case, \(u\) is represented by a two-dimensional matrix, where all elements are values of \(u\) at points on the grid \(\mathcal{M}_{u}\). We want the network to be exposed to the grid itself, so it can learn some relation between the \((x,y)\) values of a grid point and their corresponding solution value \(u(x,y)\). We achieve this by a simple encoding of the grid using two matrices representing discretizations in the \(x\) and \(y\) directions, as seen in Figure 2. The \(i\)-th row of the \(x\)-matrix is defined as a vector consisting of \(x_{i}=i\Delta_{x,u}\) repeated \(N_{y,u}\) times. Similarly, the \(j\)-th column of the \(y\)-matrix is defined as a column vector consisting of \(y_{j}=j\Delta_{y,u}\) repeated \(N_{x,u}\) times. We also use bilinear interpolation to modify the sizes of the inputs to the desired output shape. We make sure that the result of the interpolation operation is divisible by \(16\), so that it is compatible with the downsampling of the U-Net.
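A minimal PyTorch sketch of the grid encoding and resizing just described; the three-channel input layout and the rounding-down policy are assumptions:

```python
import torch
import torch.nn.functional as F

def encode_grid(nx, ny, dx, dy):
    """Grid encoding from the text: the i-th row of the x-matrix repeats
    x_i = i*dx, and the j-th column of the y-matrix repeats y_j = j*dy."""
    x = dx * torch.arange(1, nx + 1, dtype=torch.float32)
    y = dy * torch.arange(1, ny + 1, dtype=torch.float32)
    return x[:, None].expand(nx, ny), y[None, :].expand(nx, ny)

def resize(t, h, w):
    """Bilinear resize, with the target rounded down to a multiple of 16
    so it is compatible with the U-Net downsampling."""
    return F.interpolate(t, size=((h // 16) * 16, (w // 16) * 16),
                         mode="bilinear", align_corners=False)

u = torch.randn(1, 1, 16, 16)                      # coarse observed solution
xm, ym = encode_grid(16, 16, torch.pi / 16, torch.pi / 16)
inp = torch.cat([u, xm[None, None], ym[None, None]], dim=1)  # (1, 3, 16, 16)
inp = resize(inp, 128, 128)                        # -> (1, 3, 128, 128)
```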
The U-Net architecture we employ is composed of three convolutional blocks for both the encoder and decoder (see Figure 1). Each block is composed of three convolutional layers, equipped with a residual skip connection [21]. All convolutions are followed by a batch normalization [22] layer and a GELU activation function [23]. The final layer of each block performs either a downsampling operation for the encoder or an upsampling operation for the decoder.
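A sketch of one such encoder block is given below; the channel counts, the 1x1 skip projection, and the use of max-pooling for downsampling are assumptions, since the text does not specify them:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Three conv layers, each followed by BatchNorm and GELU, with a
    residual skip over the block and a final downsampling step."""
    def __init__(self, c_in, c_out):
        super().__init__()
        layers, c = [], c_in
        for _ in range(3):
            layers += [nn.Conv2d(c, c_out, 3, padding=1),
                       nn.BatchNorm2d(c_out), nn.GELU()]
            c = c_out
        self.body = nn.Sequential(*layers)
        self.skip = nn.Conv2d(c_in, c_out, 1)  # match channels for the residual
        self.down = nn.MaxPool2d(2)            # decoder blocks would upsample

    def forward(self, x):
        return self.down(self.body(x) + self.skip(x))

y = EncoderBlock(3, 32)(torch.randn(1, 3, 128, 128))  # -> (1, 32, 64, 64)
```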
In our approach, we utilize a ViT within the latent space of the U-Net by taking the encoded values as input. As the encoding is significantly smaller than the original inputs due to the U-Net's downsampling nature, we can use a patch size of \(1\times 1\) without any computational difficulties.
The original ViT utilizes absolute positional embedding via a linear projection layer. However, this can be problematic in PDE applications as it requires all inputs to have the same shape. In operator learning, we are often interested in models that can handle inputs of various sizes. To address this issue, we employ a form of relative conditional embedding [24; 25], where we use convolutions to learn the relationships between tokens instead of linearly projecting their absolute positions within the representation in the latent space. Specifically, we use a separable convolutional layer [26] followed by a standard convolutional layer, similar to the approach proposed in [27].
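The following sketch illustrates one plausible form of such a convolutional positional encoding: a depthwise-separable convolution followed by a standard convolution, added residually to the latent feature map. The kernel sizes and the residual addition are assumptions, not the exact design of [27]:

```python
import torch
import torch.nn as nn

class ConvPositionalEncoding(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Separable convolution: depthwise (per-channel) + pointwise (1x1)
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1,
                                   groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):  # x: (batch, channels, h, w) latent feature map
        return x + self.conv(self.pointwise(self.depthwise(x)))

z = ConvPositionalEncoding(64)(torch.randn(1, 64, 8, 8))
```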
### Training loss function
The loss function is defined as the mean relative \(L^{2}\) error:
\[\mathcal{L}=\frac{1}{N}\sum_{j=1}^{N}\frac{||\hat{\mathrm{v}}^{(j)}-\mathrm{v}^{(j)}||_{2}}{\varepsilon+||\mathrm{v}^{(j)}||_{2}}, \tag{2}\]
where \(N\) is the size of training data, \(\mathrm{v}^{(j)}\) is the \(j\)-th ground-truth sample of the training data, \(\hat{\mathrm{v}}^{(j)}\) is the \(j\)-th sample prediction, and \(\varepsilon\) is a small number to prevent a zero denominator and stabilize the loss. Note that the inputs and outputs of the model are two-dimensional, so they are flattened inside the loss function.
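Equation (2) translates directly into code; a minimal sketch, where the value of \(\varepsilon\) is an assumption:

```python
import torch

def relative_l2_loss(pred, target, eps=1e-8):
    """Mean relative L2 error of Eq. (2); 2-D predictions and targets of
    shape (N, H, W) are flattened per sample before taking the norm."""
    diff = (pred - target).flatten(start_dim=1)
    num = torch.linalg.norm(diff, dim=1)
    den = eps + torch.linalg.norm(target.flatten(start_dim=1), dim=1)
    return (num / den).mean()
```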
## 3 Numerical results
We apply the ViTO method to various ill-posed two-dimensional inverse problems. The conducted tests include three PDEs: the acoustic wave equation, the time-dependent incompressible Navier-Stokes equations, and a steady-state Darcy flow equation. In all cases we compare the results of ViTO to three other popular methods in the scientific machine learning literature: 1) DeepONet [1], 2) FNO [2], and 3) a standard ResNet [21]. For each method we compute the relative \(L^{2}\) error compared to the ground truth data. We also measure relevant information regarding the training time and the number of parameters used by each method. Details regarding the data generation process can be found in each of the following sections.
An important consideration in the domain of inverse problems is robustness to noise. Since inverse problems are often ill-posed, even a small amount of noise in the observed data can greatly amplify the numerical error [28]. To test how well ViTO can handle noise, we run all experiments twice: with noise and without noise. In both cases we train the model with the relevant amount of noise. We used zero-mean Gaussian additive noise, which is a common choice. Since different PDEs can behave quite differently, we make sure that the variance of the Gaussian noise is dependent on the input data. The operation of adding noise is given by: \(\mathrm{D}\ni x^{(n)}\longmapsto x^{(n)}+\gamma\mathcal{N}(0,\sigma_{\mathrm{ D}}^{2})\) where \(x^{(n)}\) is an input sample in the dataset \(\mathrm{D}\), \(\sigma_{\mathrm{D}}^{2}\) is the variance of the entire dataset, and \(\gamma\) is the desired noise level, e.g. \(\gamma=0.1\) is equivalent to \(10\%\) noise.
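The noise operation above can be sketched as follows; note that \(\gamma\) multiplies a draw from \(\mathcal{N}(0,\sigma_{\mathrm{D}}^{2})\), so the effective noise standard deviation is \(\gamma\sigma_{\mathrm{D}}\):

```python
import torch

def add_noise(dataset, gamma):
    """x -> x + gamma * N(0, var(D)), with the variance taken over the
    entire dataset tensor, as described in the text."""
    sigma = dataset.var().sqrt()
    return dataset + gamma * sigma * torch.randn_like(dataset)

noisy = add_noise(torch.randn(100, 16, 16), gamma=0.1)  # 10% noise
```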
For all our evaluations, we utilized a super-resolution scale factor of 8, which means, for example, that an image of size \(16\times 16\) would be mapped to an image of size \(128\times 128\). It should be noted that a magnification of \(\times 8\) is considerably high. Although lower SR factors are utilized in several benchmarking datasets, the choice of \(\times 8\) is still prevalent for certain datasets such as Urban100 [29] and the Berkeley Segmentation Dataset [30].
For the FNO we used the same architecture as described in [2] for the FNO-2D network: four Fourier layers with width of 32 and 12 modes. Each layer was followed by a GELU [23] activation function. For the ResNet we used 3 residual blocks with 16, 32, and 64 filters for each convolution within each block of depth 3. In the DeepONet case we used the ResNet described before as the branch network, 4 hidden fully connected layers for the trunk network, and 256 neurons for the latent dimension.
Following standard machine learning practice, we split all datasets into train, test, and validation sets. During training, we monitor the relative \(L^{2}\) losses and save the model with the lowest validation loss. Unless stated otherwise, all models are trained with a batch size of 100 for 500 epochs, subject to an early stopping criterion of 50 consecutive epochs with no validation loss improvement. The optimizer of choice is the Adam/AdamW optimizer [31; 32] with an initial learning rate \(10^{-3}\) and weight decay \(10^{-4}\). The learning rate is updated throughout the training process using cosine annealing [33]. The main code was implemented in PyTorch [34], and the DeepONet was implemented using DeepXDE [35] with a PyTorch backend. All computations were conducted using a single RTX-4090 GPU.
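The stated training setup maps onto standard PyTorch components roughly as follows; the epoch body, the placeholder model, and the checkpoint filename are assumptions, so this is a sketch rather than the authors' training script:

```python
import torch

model = torch.nn.Conv2d(3, 1, 3, padding=1)  # stand-in for any model above
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=500)

def run_epoch():                 # placeholder: train, then evaluate
    return torch.rand(1).item()  # pretend relative-L2 validation loss

best_val, patience, bad_epochs = float("inf"), 50, 0
for epoch in range(500):
    val_loss = run_epoch()
    sched.step()
    if val_loss < best_val:      # keep the best-validation checkpoint
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "best_model.pt")
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break                # early stopping after 50 flat epochs
```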
In Table 1 we present a computational comparison of the four models mentioned above. These results are given for a Darcy problem (see 3.3) experiment with a grid size of \(128\times 128\) and batch size 100. Memory was calculated as the peak GPU memory usage from the beginning of the training process to its end. The iterations per second metric was calculated by measuring the time it took the model to train for a single batch, averaged over 200 batches to increase consistency. Note that we do not report these two metrics for the DeepONet, since the training process was quite different from the other three models, so any direct comparison would have been misleading.
Figure 1: The architecture of the ViTO deep neural network. The inputs are the discretized function \(u(x,y)\) and the grid points, concatenated and inserted into the U-Net convolutional blocks. At the lowest level of the U-Net the ViT is employed.
ViTO was the most efficient model by a substantial margin, both in terms of memory consumption and training time. ViTO was able to produce results on par with SOTA methods, using a surprisingly small number of trainable parameters. The full details of the ViT-related weights in ViTO are shown in Table 2. Despite having a similar number of parameters to the ResNet, ViTO employs downsampling due to its U-Net architecture. Consequently, many convolutional operations occur on smaller feature maps compared to the ResNet, which explains the lower memory usage and running time for ViTO.
### Wave equation
The formulation of the acoustic wave equation in two dimensions is given by [36; 37]:
\[\begin{cases}\ddot{u}(x,y,t)=c^{2}(x,y)(u_{xx}(x,y,t)+u_{yy}(x,y,t))+f(x,y,t)&(x, y)\in(0,L)^{2};0\leq t\leq T,\\ u(x,y,0)=u_{0}(x,y)&(x,y)\in(0,L)^{2},\\ \dot{u}(x,y,0)=v_{0}(x,y)&(x,y)\in(0,L)^{2},\\ u(0,y,t)=u(L,y,t)=0&y\in(0,L),\ 0\leq t\leq T,\\ u(x,0,t)=u(x,L,t)=0&x\in(0,L),\ 0\leq t\leq T.\end{cases} \tag{3}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Method & \(\#\) of parameters (M) & Memory (GB) & Iterations per second \\ \hline \hline FNO & 2.376 & 12.74 & 3.45 \\ \hline DeepONet & 0.297 & - & - \\ \hline ResNet & **0.148** & 4.66 & 9.25 \\ \hline ViTO & 0.150 & **0.85** & **59.98** \\ \hline \end{tabular}
\end{table}
Table 1: Computational performance of the different models.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Problem & Transformer blocks & Attention heads & Embedding dimension & ViT MLP size \\ \hline \hline Wave equation & 2 & 2 & 16 & 128 \\ \hline Navier-Stokes & 4 & 8 & 16 & 64 \\ \hline Darcy Flow & 2 & 2 & 16 & 128 \\ \hline \end{tabular}
\end{table}
Table 2: ViT parameters for each scenario.
Figure 2: An example of discrete \(X,Y\) grid encoded inputs for a problem defined in \([0,1]^{2}\).
where \(u(x,y,t)\) is the wave amplitude or acoustic pressure, \(c(x,y)\) is the wave propagation speed, \(f(x,y,t)\) is the source term, \(T\) is the final propagation time, \(L\) is the size of the physical domain, and \(u_{0}(x,y),v_{0}(x,y)\) are the initial pressures and velocities, respectively. The boundary condition is a homogeneous Dirichlet boundary (fully-reflective). The inverse problem is to learn the following mapping:
\[u(x,y,T)\longmapsto u_{0}(x,y).\]
We chose a physical domain with \(L=\pi\) and propagation time \(T=0.001\). We set the initial pressure and velocity to be 0, and randomly created Gaussian-shaped sources at different locations. For each sample, we created two such Gaussian sources with random amplitudes and locations. The locations are selected using a discrete random uniform distribution on the indices of the grid. The amplitudes are sampled uniformly using \(\mathcal{U}(-1,1)\) for each source in each initial condition. The wave velocity was taken as \(c(x,y)=c_{0}\sin(x)\sin(y)\), where \(c_{0}\) was randomly sampled for each initial condition using a uniform distribution between 1,300 and 1,600 m/s, which is centered around the average acoustic wave propagation speed of 1,484 m/s in the Mediterranean Sea.
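A sketch of this data-generation recipe follows; the Gaussian source width and the grid resolution are assumed parameters, since the text does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)
L, n = np.pi, 128                         # domain size and an assumed grid
xs = np.linspace(0.0, L, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")

def random_initial_pressure(width=0.1):
    """Two Gaussian-shaped sources with amplitudes ~ U(-1, 1) and
    locations drawn uniformly from the grid indices."""
    u0 = np.zeros((n, n))
    for _ in range(2):
        a = rng.uniform(-1.0, 1.0)
        i, j = rng.integers(0, n, size=2)
        u0 += a * np.exp(-((X - xs[i])**2 + (Y - xs[j])**2) / (2 * width**2))
    return u0

c0 = rng.uniform(1300.0, 1600.0)          # sampled per initial condition
c = c0 * np.sin(X) * np.sin(Y)            # wave velocity field c(x, y)
u0 = random_initial_pressure()
```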
The dataset was generated using a standard explicit second-order finite-difference scheme [38; 39]. We generated 20,000 samples, of which 16,000 were used as the training set, while the remaining 4,000 were evenly split to form the testing and validation sets.
The results are shown in Table 3 and Figure 3. ViTO obtains the lowest error compared to the other methods, both with and without noise. It is worth noting that ViTO is able to reconstruct the initial condition even in difficult scenarios where there is a large difference between the amplitudes of the different sources (such as the first row in Figure 3).
### Navier-Stokes equations
The time-dependent two-dimensional Navier-Stokes equation for the viscous, incompressible fluid in vorticity form is given by:
\[\begin{cases}\partial_{t}\omega(x,y,t)+u(x,y,t)\cdot\nabla\omega(x,y,t)=\nu\Delta\omega(x,y,t)+f(x,y),&(x,y)\in(0,1)^{2},t\in(0,T]\\ \nabla\cdot u(x,y,t)=0,&(x,y)\in(0,1)^{2},t\in(0,T]\\ \omega(x,y,0)=\omega_{0},&(x,y)\in(0,1)^{2}\end{cases} \tag{4}\]
where \(\omega\) is the vorticity, \(u\) is the velocity field, \(\nu=10^{-3}\) is the viscosity, and \(\Delta\) is the two-dimensional Laplacian. We consider periodic boundary conditions. The source term \(f\) is set as: \(f(x,y)=0.1(\sin(2\pi(x+y))+\cos(2\pi(x+y)))\), and the initial condition \(\omega_{0}(x,y)\) is sampled from a Gaussian random field according to the following distribution: \(\mathcal{N}(0,7^{3/2}(-\Delta+49I)^{-5/2})\). The inverse problem is to learn the following mapping:
\[\omega(x,y,T)\longmapsto\omega_{0}(x,y).\]
We used the publicly available Python solver given in [2] to create two separate datasets with different final simulation times \(T=1\) and \(T=5\). Each dataset was composed of 10,000 samples, which we then split into train, test, and validation sets. The error analysis for \(T=1,5\) is shown in Table 4. ViTO and FNO obtain very similar accuracy in both cases: ViTO is slightly more accurate without noise, while FNO has a minor advantage with noise.
A visualization of the results is shown in Figure 4 and Figure 5. In the case \(T=1\) (Figure 4), the vorticities for different initial conditions are still very different from one another, and so the reconstructions are able to capture fine details. However, for \(T=5\) (Figure 5), the behavior of the vorticity becomes very similar, regardless of the choice of initial condition. In that case, some fine details are lost in all reconstructions, which explains the larger error compared to the \(T=1\) case.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Method & \(0\%\) noise & \(10\%\) noise \\ \hline \hline FNO & 0.3260 & 0.4383 \\ \hline DeepONet & 0.7128 & 0.7131 \\ \hline ResNet & 0.7892 & 0.8154 \\ \hline ViTO & **0.2678** & **0.2942** \\ \hline \end{tabular}
\end{table}
Table 3: Test relative \(L^{2}\) errors for the wave problem.
### Darcy equation
The steady-state two-dimensional Darcy flow for a porous medium is given by the following equation:
\[\begin{cases}-\nabla\cdot(K(x,y)\nabla h(x,y))=f(x,y),&(x,y)\in(0,1)^{2}\\ h(x,y)=0,&(x,y)\in\partial(0,1)^{2}\end{cases} \tag{5}\]
where \(\partial(0,1)^{2}\) is the domain boundary, \(K(x,y)\) is the permeability coefficient field, \(h(x,y)\) is the pressure, and \(f\) is a forcing function. The boundary condition used here is a homogeneous Dirichlet boundary (fully-reflective). The inverse problem is to learn the following mapping:
\[h(x,y)\longmapsto K(x,y).\]
We used the publicly available finite difference solver (written in MATLAB [40], given in [2]) to create data with piecewise smooth coefficients \(K\), with a constant forcing function \(f\equiv 1\). The coefficient was selected using a Gaussian
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{\(T=1\)} & \multicolumn{2}{c}{\(T=5\)} \\ \cline{2-5} & \(\gamma=0\) & \(\gamma=0.1\) & \(\gamma=0\) & \(\gamma=0.1\) \\ \hline FNO & 0.06449 & **0.1587** & 0.1881 & **0.3582** \\ DeepONet & 0.09424 & 0.1684 & 0.2007 & 0.4528 \\ ResNet & 0.1271 & 0.4471 & 0.4520 & 0.5745 \\ ViTO & **0.06348** & 0.1635 & **0.1757** & 0.3757 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test relative \(L^{2}\) errors for the Navier-Stokes problem with different final simulation times (\(T\)) and noise levels (\(\gamma\)).
Figure 3: Predictions for the wave problem 3.1 for 5 random samples. In the section labeled “Data” is the propagated wave at the final time using fine and coarse discretizations, alongside a high-resolution image of the ground truth sources. In the section labeled “Predictions” are the initial condition reconstructions by the various methods.
random field according to the following distribution: \(\mathcal{N}(0,(-\Delta+9I)^{-2})\). This is followed by a binarization operation that mapped positive values to 12 and negative values to 3. We created 3 such datasets for the following resolutions: \(n=128,256,512\) with a SR scale factor of 8. Hence, the super-resolution mappings were of the following dimensions: \(16\times 16\longmapsto 128\times 128\), \(32\times 32\longmapsto 256\times 256\), and \(64\times 64\longmapsto 512\times 512\). Each dataset contained \(1,000\) samples, which were split into \(800\), \(100\), and \(100\) training, validation, and testing samples, respectively. Despite being a binary problem, we still used the \(L^{2}\) loss function (2), and not a binary loss function like negative log likelihood, since \(K(x,y)\) does not have to be binary in many applications. For the two datasets with finer grids we had to use a smaller batch size of 10 to fit the models into memory. We note that ViTO was the only model we were able to run with a batch size of 100, but we kept it at 10 to make the comparison more accurate.
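The binarization step can be sketched as follows; the random input here is a stand-in for an actual Gaussian-random-field sample:

```python
import numpy as np

def binarize_coefficient(grf_sample):
    """Map positive GRF values to 12 and negative values to 3, producing
    the piecewise-constant permeability field K(x, y)."""
    return np.where(grf_sample > 0, 12.0, 3.0)

K = binarize_coefficient(np.random.randn(128, 128))
```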
The full error analysis is shown in Table 5. In all 6 cases ViTO obtained the best accuracy compared to the benchmark methods. Note that the results for the Darcy problem and the wave problem 3.1 are more decisive in comparison to the Navier-Stokes problem 3.2. This could potentially be explained by the shift from smooth functions to functions consisting of irregular interfaces and sharp features. Recall that FNOs rely on Fourier transforms, which can be very accurate for smooth functions, but face severe difficulties with discontinuities. Furthermore, as we refined the grid in the Darcy case we saw a noticeable improvement in the ViTO results, which was not observed in the FNO case. This can also be explained by Fourier analysis, since FNOs are learning a global basis of functions, which renders them grid-invariant.
Visualizations of the results for \(n=128\) and \(n=512\) are shown in Figure 6. Note that ViTO was able to capture sharp features of the coefficient \(K(x,y)\). This is especially noticeable in cases where there are very small discontinuities (cavities) in the data; while ViTO was mostly able to capture them, FNO tended to smooth them.
### Varying input size
Finally, we assessed the ability of ViTO to handle inputs of various sizes without requiring retraining. Typically, transformers are capable of handling such inputs, which are prevalent in NLP contexts. As mentioned in 2.3, we
Figure 4: Predictions for the Navier-Stokes problem 3.2 with \(T=1\) for 5 random samples. In the section labeled “Data” is the vorticity at the final time using fine and coarse discretizations, alongside a high-resolution image of the initial vorticity. In the section labeled “Predictions” are the initial condition reconstructions by the various methods.
employed relative positional encoding, which was shown to be effective for computer vision problems of this sort. Additionally, the U-Net architecture is fully convolutional [41], allowing it to handle such inputs.
To evaluate this capability, we used the Darcy example 3.3 with \(n=512\). We followed the same steps as in all other experiments, with one addition to the training process. During each training batch, we randomly selected a subsampling parameter \(r\in\{1,2,3,\ldots,9\}\) and applied it to the input image (rounding the number of grid points to the nearest integer). For instance, taking \(r=4\), an input of size \(512\times 512\) was downsampled to size \(128\times 128\). We used this process to allow ViTO to generalize better to new discretizations. This procedure can be considered a type of data augmentation. We also dropped the super-resolution part of the inverse problem (i.e. \(s=1\) in 2.2) to enable us to run tests for large grids.
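A sketch of this augmentation step; the interpolation mode and exact rounding details are assumptions:

```python
import torch
import torch.nn.functional as F

def random_subsample(batch, base=512):
    """Pick r in {1, ..., 9} and resize a (B, C, 512, 512) batch to
    roughly (512/r, 512/r), rounding to the nearest integer."""
    r = int(torch.randint(1, 10, (1,)))
    size = max(1, round(base / r))
    return F.interpolate(batch, size=(size, size),
                         mode="bilinear", align_corners=False)

small = random_subsample(torch.randn(4, 1, 512, 512))
```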
Finally, we tested the model twice. First, we evaluated it on samples from the test set with grid sizes it had encountered during training, which were: \(\{\frac{512}{r}:r=1,\ldots,9\}\), rounded to the nearest integer. Next, we created a zero-shot scenario, where the model was presented with samples having random discretizations that it had not seen during training. We created these discretizations by resizing the original samples accordingly. The results are presented in Figure 7. The results show that ViTO is capable of handling different grids without retraining, even in a zero-shot scenario. The
Figure 5: Predictions for the Navier-Stokes problem 3.2 with \(T=5\) for 5 random samples. In the section labeled “Data” is the vorticity at the final time using fine and coarse discretizations, alongside a high-resolution image of the initial vorticity. In the section labeled “Predictions” are the initial condition reconstructions by the various methods.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{\(n=128\)} & \multicolumn{2}{c}{\(n=256\)} & \multicolumn{2}{c}{\(n=512\)} \\ \cline{2-7} & \(\gamma=0\) & \(\gamma=0.1\) & \(\gamma=0\) & \(\gamma=0.1\) & \(\gamma=0\) & \(\gamma=0.1\) \\ \hline FNO & 0.1422 & 0.4502 & 0.1272 & 0.1915 & 0.1235 & 0.1683 \\ DeepONet & 0.1463 & 0.4502 & 0.1422 & 0.2090 & 0.1608 & 0.2174 \\ ResNet & 0.1603 & 0.2760 & 0.1287 & 0.3078 & 0.1416 & 0.3702 \\ ViTO & **0.1184** & **0.1943** & **0.08216** & **0.1799** & **0.05197** & **0.1623** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Test relative \(L^{2}\) errors for the Darcy flow problem with different grid sizes (\(n\)) and noise levels (\(\gamma\)).
Figure 6: Predictions for the Darcy problem 3.3 for 5 random samples for \(n=128\) and \(n=512\). In the section labeled “Data” is the PDE solution \(u(x,y)\) using fine and coarse discretizations, alongside a high-resolution image of the permeability coefficient field \(K(x,y)\). In the section labeled “Predictions” are the permeability coefficient reconstructions by the various methods.
error maps show that larger grids generally yield better results, and that the error is mostly concentrated around the discontinuities (due to the binarization of the permeability field).
Figure 7: ViTO predictions with varying input sizes. The results are presented in columns corresponding to different discretizations. The first two rows show the ground-truth values of \(u(x,y)\) and \(K(x,y)\) as given by (5). The third row presents the predictions of ViTO. The last row shows the point-wise relative \(L^{2}\) error between \(K(x,y)\) and the ViTO prediction.
## 4 Discussion and future work
We have introduced a novel approach to inverse problems and super-resolution, which incorporates vision transformers with operator learning. Our approach, named ViTO, combines a U-Net based architecture with a vision transformer. We have obtained results comparable or superior to the leading operator network benchmarks in terms of accuracy, accompanied by substantial efficiency gains.
The impressive performance of ViTO on inverse problems of considerable complexity requires thorough investigation to uncover the mathematical and algorithmic reasons behind it. In particular, we should understand the learning mechanism in the latent space and provide some theoretical background.
Extending ViTO to solve problems with three spatial dimensions is an avenue worth exploring. Moreover, we need to determine ViTO's ability to adapt to forward problems, especially those that are time-dependent.
## 5 Acknowledgements
This work was supported by the Vannevar Bush Faculty Fellowship award (GEK) from ONR (N00014-22-1-2795). The work of PS and GEK is supported by the U.S. Department of Energy, Advanced Scientific Computing Research program, under the Scalable, Efficient and Accelerated Causal Reasoning Operators, Graphs and Spikes for Earth and Embedded Systems (SEA-CROGS) project, DE-SC0023191. Pacific Northwest National Laboratory (PNNL) is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL01830.
|
2308.04301 | Electronic correlations in promising room-temperature superconductor
Pb$_9$Cu(PO$_4$)$_6$O: a DFT+DMFT study | We present results of the first investigations on the correlated nature of
electronic states that cross the Fermi level in Pb$_9$Cu(PO$_4$)$_6$O aka LK-99
obtained within the DFT + DMFT approach. Coulomb correlations between Cu-$d$
electrons led to the opening of the band gap between the extra-O $p$ and Cu
$d_{xz}/d_{yz}$ states. We state that oxygen $p$ states play a significant role
in the electronic properties of LK-99. We also assume that doping with
electrons is necessary to turn the stoichiometric Pb$_9$Cu(PO$_4$)$_6$O into
a conducting state. | Dmitry M. Korotin, Dmitry Y. Novoselov, Alexey O. Shorikov, Vladimir I. Anisimov, Artem R. Oganov | 2023-08-08T14:46:39Z | http://arxiv.org/abs/2308.04301v1 | Electronic correlations in promising room-temperature superconductor Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O: a DFT+DMFT study
###### Abstract
We present results of the first investigations on the correlated nature of electronic states that cross the Fermi level in Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O _aka_ LK-99 obtained within the DFT + DMFT approach. Coulomb correlations between Cu-\(d\) electrons led to the opening of the band gap between the extra-O \(p\) and Cu \(d_{xz}/d_{yz}\) states. We state that oxygen \(p\) states play a significant role in the electronic properties of LK-99. We also assume that doping with electrons is necessary to turn the stoichiometric Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O into a conducting state.
## I Introduction
Starting from the first report on the existence of room-temperature superconductivity in Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O [1] ("LK-99"), there have been continuing attempts to clarify which characteristics of the electronic structure could generate the reported properties of the compound [2; 3; 4; 5]. It is natural to suggest that the \(d\)-states of the Cu ion, in the formal \(d^{9}\) electronic configuration, lie at the Fermi level and that the supposed superconductivity is connected with them. Following the story of the HTSC cuprates, one can assume that LK-99 is a system with strong electron-electron correlations, and that density functional theory will fail to describe its properties. DFT+U calculations were already presented in [2; 3; 4]. An insulating band structure with long-range ferromagnetic ordering was obtained in all three works (the long-range magnetic ordering is an artifact of the DFT+U method). The importance of accounting for correlation effects has been reported for many high-T\({}_{c}\) superconductors with \(d\)- or \(f\)-atoms [6; 7; 8; 9].
In this research we employed the DFT+DMFT method, which describes the electronic structure of strongly correlated paramagnetic systems while also taking finite electronic temperature into account.
## II Methods
We performed DFT+DMFT [10] calculations following the procedure described in [11]. In the first step, DFT calculations were performed using the Quantum-ESPRESSO [12] package with pseudopotentials from the standard solid-state pseudopotential library set [13]. The exchange-correlation functional was chosen in the PBEsol form. The energy cut-offs for the plane wave function and charge density expansion were set to 50 Ry and 400 Ry, respectively. Integration in the reciprocal space was done on a regular \(4\times 4\times 5\)\(k\)-point mesh in the irreducible part of the Brillouin zone. The convergence criteria used for crystal cell relaxation within DFT are: total energy \(<10^{-8}\) Ry, total force \(<10^{-4}\) Ry/Bohr, pressure \(<0.2\) kbar.
Next, to take into account Coulomb correlations and many-body effects for the constructed small Hamiltonian, the DFT+DMFT approach [14; 15] was utilized. DFT+DMFT calculations were performed at electronic inverse temperature \(\beta=1/k_{\rm B}T=\)40 eV\({}^{-1}\), where \(k_{\rm B}\) is the Boltzmann constant and \(T\) is the absolute temperature, which corresponds to 290 K. An effective DMFT quantum impurity problem [16] was solved using the continuous-time quantum Monte Carlo method with the hybridization expansion algorithm [17] as realized in the package AMULET [18].
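As a quick sanity check of the stated inverse temperature, \(\beta=40\) eV\({}^{-1}\) indeed corresponds to roughly 290 K:

```python
k_B = 8.617333262e-5    # Boltzmann constant in eV/K
beta = 40.0             # inverse temperature in eV^-1
T = 1.0 / (k_B * beta)  # absolute temperature
print(round(T))         # -> 290
```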
## III Results
We are still lacking sufficient data regarding the fine crystal structure of the LK-99 [1] compound. It is known that it was identified as lead apatite Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O with the P6\({}_{3}\)/m space group, wherein a copper ion substitutes one of the Pb ions at position \(4f\). Consequently, selecting an appropriate crystal structure for electronic structure calculations is not straightforward.
We started our investigation using the Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O crystal structure [20], as illustrated in Figure 1. It is important to note that the oxygen ion which is not part of the PO\({}_{4}\) tetrahedra is located at the Wyckoff site \(4e\) with a partial occupation of 0.25. In our analysis, we refer to this oxygen ion as "extra-O" throughout the text.
The problem of dealing with partial occupation sites (or impurity sites) can be approached in two ways: (1) by calculating large supercells with randomly occupied \(4e\) sites by oxygen ions, or (2) by staying within a single
cell and avoiding relaxation of the internal atomic positions of the extra-O (impurity) ion. Avoiding atomic position relaxation in small cells prevents undesirable local distortions that would propagate throughout the crystal due to translation periodicity.
The full band structure and partial densities of states for the extra-O ion for the parent lead apatite compound are shown in Figure 2. Calculations were done for the experimental crystal structure with only one extra-O ion in the cell.
The left panel of Figure 2 illustrates the presence of narrow energy bands, with a width of approximately 400 meV, situated just below the Fermi level in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O. These three bands arise from the \(p\)-states of the extra-O ion. The figure provides evidence that the extra-O ion in the lead apatite can be treated as an impurity, exhibiting minimal interaction with the surrounding ions. Consequently, it forms a distinct set of narrow energy bands.
The same rationale was used for the copper ion substitution in LK-99, replacing one of the ten Pb ions. The copper ion acts as an impurity that exerts negligible influence on the average positions of the Pb ions. Consequently, to obtain the crystal cell representing Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O (\(x=1\)), we substituted one of the lead ions in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O with copper. Subsequently, we fixed the atomic positions of the Pb, Cu, and extra-O ions and performed relaxation for all other structural parameters, including the crystal shape, volume, and positions of all remaining atoms. The resulting lattice parameters \(a=9.748\,\AA\), \(c=7.218\,\AA\), \(V=594\,\AA^{3}\) agree with the already published data [1; 2; 4].
The calculated band structure and partial densities of states for the optimized cell of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O are depicted in Figure 3. The copper ion states also form impurity-like energy bands near the Fermi level, in the energy region slightly higher than the extra-O bands. The set of bands near the Fermi level is narrower than in previously presented works [2; 4] precisely because we did not allow undesirable extra hybridization of the Cu and extra-O states with those of the nearest-neighbor ions.
The crystal field of the trigonally distorted oxygen octahedra splits the Cu \(d\)-shell into a doubly degenerate subshell corresponding to the irreducible representation \(e_{g}^{\sigma}\) (\(d_{xz}\) and \(d_{yz}\) orbitals), a doubly degenerate \(e_{g}^{\pi}\) subshell (\(d_{x^{2}-y^{2}}\) and \(d_{xy}\) orbitals), and the \(d_{3z^{2}-r^{2}}\) orbital that corresponds to the representation \(a_{1g}\). This splitting is clearly seen in the partial densities of states in the right panel of Figure 3.
Two exceptionally narrow energy bands intersecting the Fermi level correspond primarily to electronic states with Cu \(d_{xz}\) and \(d_{yz}\) orbital symmetries. These partially filled bands have a width of only about 120 meV, suggesting that strong electronic correlations undoubtedly play a crucial role in their behavior. This fact was already accentuated before [2; 4].
The two bands within the energy range of [-0.15, -0.01] eV below the Fermi level are attributed to the \(p_{x}\) and \(p_{y}\) orbitals of the extra-O ion. Due to the considerable distance between the Cu and extra-O ions (approximately 5.7 \(\AA\)), there is minimal overlap between the copper ion orbitals and the extra-O \(p\)-orbitals, resulting in negligible hybridization between them. Below there are also Cu \(d_{x^{2}-y^{2}}/d_{xy}\), extra-O \(p_{z}\) and Cu \(d_{3z^{2}-r^{2}}\) related energy bands. The width of the entire band set is about 350 meV.
The eight states mentioned above (5 Cu-\(d\) and 3 extra-O-\(p\)) are the minimal basis for the model which could be used for evaluation of the strong Coulomb interaction in Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O. Using the projection procedure (see Methods section) we constructed eight Wannier functions with the symmetry of the corresponding atomic orbitals. Then the Hubbard model Hamiltonian was constructed in the basis of these Wannier functions. The five Wannier functions with Cu \(d\) symmetry were considered correlated states, and the three O-\(p\)-like Wannier functions were treated as a bath.
To take into account strong Coulomb correlations in the narrow energy bands we used the approach of Dynamical
Figure 1: Crystal structure of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O. The O4 site (referenced as extra-O in the text) has partial occupation of 0.25. In Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O copper ion substitutes one of Pb2 ions. Visualized using VESTA [19].
Figure 2: Energy bands (left panel) and densities of states (right panel) for Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O
Mean Field Theory to solve the Hubbard model for the constructed small Hamiltonian. The inverse temperature was \(\beta=40\) eV\({}^{-1}\) (\(\approx 290\) K) and the Coulomb repulsion parameter was \(U=1.8\) eV. The obtained spectral functions are shown in Fig. 4.
Even such a small \(U\) value opens a gap of about 0.4 eV and results in an insulating solution. Electronic states with \(e_{g}^{\pi}\) and \(a_{1g}\) symmetry are completely filled. The spectral function of the \(e_{g}^{\sigma}\) states demonstrates the appearance of the lower and upper Hubbard bands at -1.1 and 0.6 eV, respectively. The obtained averaged squared magnetic moment is 0.99 \(\mu_{B}^{2}\), which corresponds to one hole in the doubly degenerate \(e_{g}^{\sigma}\) orbitals. The DFT+DMFT results show that taking Coulomb correlations into account is important, since it significantly changes the band structure in the vicinity of the Fermi level obtained in DFT, and hence the Fermi surface topology, which could be very significant for the description of superconductivity if it is confirmed in this compound. Figure 4 shows that the top of the valence band is formed by extra-O \(p_{x}+p_{y}\) states, which is reminiscent of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O, since in the absence of Cu the top of the valence band is also formed by extra-O \(p\)-states. The spectral functions look similar to those of charge-transfer insulators, _e.g._, NiO or LiNiO\({}_{2}\) [21], but the situation is different in Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O. In charge-transfer insulators the energy gap is formed by the metal \(d\)-states and the \(p\)-states of the nearest ligands. In LK-99 the states that cross the Fermi level are \(p\)-states of the extra-O ion, which is at least 5.7 Å away from the copper ion.
We assume that further doping of the system with electrons or holes will lead to its metallization and the appearance of \(e_{g}^{\sigma}\) or \(e_{g}^{\pi}\) states at the Fermi level, respectively.
## IV Conclusion
We investigated the electronic structure and the role of correlation effects in Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O within the DFT+DMFT approach. It was shown that properly accounting for the disorder in the occupation of the extra-O and Cu/Pb sites in the parent lead apatite structure is especially important and can result in the appearance of narrow energy bands in the vicinity of the Fermi level in the case of the LK-99 compound. These energy bands originate from the overlap of two band groups, namely Cu-\(d\) and extra-O \(p\). Then, using the DFT+DMFT approach, we showed that accounting for Coulomb correlations leads to the opening of a band gap and drastically changes the band structure obtained in DFT. However, the physical picture in this compound is much more complicated and cannot be reduced either to a Mott insulator on a triangular lattice formed by the Cu \(d_{xz}\) and \(d_{yz}\) orbitals or to a charge-transfer insulator, due to the remarkably complicated structure of the valence band of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O, which is formed not by the ligands closest to the Cu ion but by the distant extra-O \(p\) states.
###### Acknowledgements.
The DFT parts of the study were supported by the Ministry of Science and Higher Education of the Russian Federation (No. 122021000039-4, theme "Electron"). The DMFT results were obtained within the state assignment of the Russian Science Foundation (Project 19-72-30043).
|
2301.08030 | Multi-Agent Interplay in a Competitive Survival Environment | Solving hard-exploration environments is an important challenge in
Reinforcement Learning. Several approaches have been proposed and studied, such
as Intrinsic Motivation, co-evolution of agents and tasks, and multi-agent
competition. In particular, the interplay between multiple agents has proven to
be capable of generating human-relevant emergent behaviour that would be
difficult or impossible to learn in single-agent settings. In this work, an
extensible competitive environment for multi-agent interplay was developed,
which features realistic physics and human-relevant semantics. Moreover,
several experiments on different variants of this environment were performed,
resulting in some simple emergent strategies and concrete directions for future
improvement. The content presented here is part of the author's thesis
"Multi-Agent Interplay in a Competitive Survival Environment" for the Master's
Degree in Artificial Intelligence and Robotics at Sapienza University of Rome,
2022. | Andrea Fanti | 2023-01-19T12:04:03Z | http://arxiv.org/abs/2301.08030v1 | # Multi-Agent Interplay in a Competitive Survival Environment
###### Abstract
Solving hard-exploration environments is an important challenge in Reinforcement Learning. Several approaches have been proposed and studied, such as Intrinsic Motivation, co-evolution of agents and tasks, and multi-agent competition. In particular, the interplay between multiple agents has proven to be capable of generating human-relevant emergent behaviour that would be difficult or impossible to learn in single-agent settings. In this work, an extensible competitive environment for multi-agent interplay was developed, which features realistic physics and human-relevant semantics. Moreover, several experiments on different variants of this environment were performed, resulting in some simple emergent strategies and concrete directions for future improvement. The content presented here is part of the author's thesis "Multi-Agent Interplay in a Competitive Survival Environment" for the Master's Degree in Artificial Intelligence and Robotics at Sapienza University of Rome, 2022.
## 1 Introduction
Creating artificial agents which are capable of solving complex human-related tasks is one of the main challenges of Machine Learning. When these tasks involve real-time interaction with an environment, they are usually modeled and solved with Reinforcement Learning (RL). In Reinforcement Learning, one or more agents interact with a stochastic environment through a discrete sequence of observations and actions. To guide the agent towards the goal, these agents are also provided with feedback for their decisions in the form of a reward signal; the objective of an RL agent is then to maximize its cumulative reward. Because of the generality of these concepts, Reinforcement Learning naturally applies to a large number of human-relevant tasks and to tasks involving physical interaction.
Recently, Reinforcement Learning has been leveraging the generalization power of Deep Learning, giving rise to Deep Reinforcement Learning (Deep RL) [22][4][37][29][31][14][18][15][16]. However, even though Deep Reinforcement Learning algorithms have managed to significantly improve the state-of-the-art for a wide variety of tasks, they still struggle on _hard-exploration_ environments [12]. The challenge in solving these tasks is often not only due to the intrinsic obstacles they pose, but also to the difficulty of specifying an appropriate reward signal. More specifically, it is often impractical or impossible to design a reward signal that can guide the RL agent through the relevant milestones needed to accomplish the goal. Unfortunately, many challenging human-related problems are also hard-exploration problems. This is because it is often only possible or practical to specify abstract goals for these tasks, resulting in reward functions that do not provide any clear guidance on _how_ the agent should achieve its goal [12]. Moreover, the reward signal can even be deceiving: this happens especially when optimizing performance in the short term results in agents that are less likely to achieve the main objective of the task.
Another issue with standard Deep Reinforcement Learning algorithms is that they focus on generating single solutions to specific tasks [38][2]. For this reason, they are not directly capable of producing agents with multiple skills, or sets of diverse agents that have similar performances on a single task.
To tackle these problems, a variety of methods have been developed and studied. A first example is Intrinsic Motivation, which is based on inducing curiosity in the agent with an additional reward, unrelated to the original task. This _intrinsic_ reward is designed in such a way that the agent is motivated to discover novel regions of the environment state space.
The usage of co-evolution-based algorithms has also proven to be very effective in generating and solving hard-exploration environments. One example is the Paired Open-Ended Trailblazer (POET) [38][39] algorithm, which continually evolves variants of an environment, along with agents that solve them
using Evolution Strategies [28]. POET often produces environments which are not solvable by external agents that are only trained on that environment [38], showing that there exist tasks that are not solvable by direct optimization, and instead require that the agents go through an appropriate _curriculum_.
Another notable approach is using competitiveness in multi-agent settings to produce emergent behaviour. Even if this method usually relies on standard Deep Reinforcement Learning algorithms, the competitive interplay between multiple agents can spark complex behaviour and skills not attainable without it. This happens when one or more agents pose adequate challenges to their opponents, who in turn adapt and pose a new challenge back to the original agents. If the environment allows it, this exchange can go on indefinitely, generating more and more advanced skills in the process. This approach of exploiting multi-agent interplay has been successfully applied to various complex games [17][36] and simple physically grounded environments [33][17][3][19][2]. Baker et al., in particular, focused on the human-relevance of the skills acquired by the agents, and on the physical grounding of the simple simulated world in which they were trained.

This work also focuses on these aspects, with the double goal of: (1) developing a computationally efficient and easily extensible multi-agent, competitive environment; and (2) experimenting with variants of this environment to produce interesting emergent agent behaviours. The first objective was achieved by developing an extensible and efficient framework, based on the Box2D physics simulation library, that makes it possible to produce modular environments based on realistic physics. This framework was then used to develop concrete multi-agent environments based on common survival video game semantics. This included different levels of competitiveness, environment complexity and the subdivision of agents into opposing teams. The second objective was partially achieved by using these concrete variants in several experiments, using standard Deep Reinforcement Learning techniques. These experiments resulted in some simple emergent strategies, also giving concrete directions for the improvement of these environment variants in future work.
## 2 Related Work
There are several algorithms that significantly enhance the exploration capabilities of standard Deep RL algorithms. _Intrinsic Motivation_ approaches introduce an additional reward signal, called _intrinsic_, aimed at explicitly encouraging the agent to explore the environment. In this context, the original reward signal is also called _extrinsic_. The intrinsic reward is given by an _intrinsic reward function_\(R_{i}\), which is in general non-stationary, in contrast with the extrinsic reward function. Most of the Intrinsic Motivation methods shape the intrinsic reward so that it is higher in novel states, producing "curious" agents that seek unseen states. One way to do this is with count-based approaches [5][23], in which the intrinsic reward is inversely proportional to the number of times a state has been visited. This is straightforward in environments with a finite number of states, but requires additional care when the state space is infinite. A completely different approach is to define the intrinsic reward as the prediction error for a problem related to the agent's transition [21][34][1][25][8][10][9]. For example, predicting forward or inverse dynamics of the environment, or even trivial problems like predicting a constant zero function, can in practice obtain good results. One problem of prediction-error-based Intrinsic Motivation is that the prediction error itself may be caused by several reasons, not all relating to the agent's exploration performance [9]. _Random Network Distillation_ (RND) [9][8] is a state-of-the-art Intrinsic Motivation approach that uses the prediction error on a random task, represented by a random fixed Neural Network, to compute the intrinsic reward. While Intrinsic Motivation approaches obtain significantly better results than standard Reinforcement Learning algorithms on most hard-exploration tasks, they still leave some issues unsolved, such as _detachment_ and _derailment_[12]. Go-Explore [12] is an algorithm designed to solve these issues by keeping an explicit archive of interesting states and trajectories from where exploration can be resumed.
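To make the RND mechanism concrete, the following is a minimal sketch of the idea, assuming PyTorch; the layer sizes and the class interface are illustrative assumptions rather than the exact setup of the cited works.

```python
import torch
import torch.nn as nn

class RND(nn.Module):
    """Minimal sketch of Random Network Distillation: the intrinsic reward
    is the error of a trainable predictor trying to match a fixed,
    randomly initialized target network on the same observation."""
    def __init__(self, obs_dim: int, emb_dim: int = 64):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))
        self.target, self.predictor = mlp(), mlp()
        for p in self.target.parameters():
            p.requires_grad_(False)  # the target is never trained

    def intrinsic_reward(self, obs: torch.Tensor) -> torch.Tensor:
        # Novel states are poorly predicted, so they yield larger rewards;
        # the same quantity is also minimized to train the predictor.
        with torch.no_grad():
            tgt = self.target(obs)
        return (self.predictor(obs) - tgt).pow(2).mean(dim=-1)
```

The total reward fed to the agent would then typically be the sum of the extrinsic reward and a scaled version of this intrinsic quantity.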
A different way to overcome hard-exploration problems with Deep RL algorithms is to provide the agent with a _curriculum_ of intermediate tasks, a technique called _curriculum learning_. A _curriculum_ in this context is a series of increasingly difficult tasks such that an agent that has solved one of them can be transferred to the next and solve it by directly maximizing its return. Even though a curriculum can be implemented manually, this approach does not scale well with environment complexity. Automatic generation of appropriate curricula is an active area of Reinforcement Learning research.
One possibility is leveraging multi-agent environments. The idea behind this approach is that, when one of the agents improves its policy and beats the other agents, they are in turn pressured to improve their own policies. As long as these improvements don't lead to situations in which an agent is too difficult for the others to beat, this can establish an open-ended stream of increasingly difficult challenges for all agents. This also resembles how life evolved on our planet, in that competition and co-evolution have played a fundamental role in the capabilities and diversity of living organisms. This approach was successfully applied to complex abstract games and complex multi-player video games in Sukhbaatar et al. and Tesauro. Sims, Jaderberg et al., Bansal et al., Liu et al. and Baker et al. also leveraged this approach to produce emergent behaviour using multi-agent interplay in simple physically grounded environments.
Another notable approach to automatic curriculum learning is the co-evolution of agents and tasks, as in the Paired Open-Ended Trailblazer (POET) algorithm [38][39]. POET takes inspiration from minimal criterion evolution [6] and other Open-Ended Evolution methods [24]. Dharna et al. introduced a variation of POET that evolves video-game levels together with agents that play them.
## 3 A Competitive Survival Environment
The first part of this work consisted of developing a multi-agent, survival-based Reinforcement Learning environment. The following aspects were prioritized in its design:
* physical grounding of the environment dynamics, in particular classical mechanics;
* the possibility of changing the level of semantic complexity and competitiveness between the agents or teams;
* computational efficiency, such that the environment would not be a bottleneck in the experimental setup;
* extensibility of the implementation, so that further semantics may be implemented without modifying existing code;
* modularity of the implementation, so that different mechanics could be easily "turned off" when needed, or combined in different ways.
This section specifies the dynamics and semantics of the developed environment, while appendix A discusses their implementation, along with the framework developed to make them extensible and modular.
The environment is set in a planar world where bodies follow the laws of classical mechanics. The relevant region of the plane is a _room_ enclosed by four walls. The environment can contain one or more agents, all acting simultaneously. Agents can interact with other entities in the environment in several ways, including taking damage from various sources. The semantics of these interactions are detailed in sections 3.1, 3.2 and 3.3, while section 3.4 defines the action and observation spaces for each agent. Finally, section 3.5 discusses the reward schemes and episode termination conditions. Note that a multitude of variants of this environment are possible; a small fraction of these possibilities are depicted in figures 1, 2 and 3.
### Agents
Agents are modeled as circles with a fixed radius. They can apply forces to their bodies in the directions parallel and orthogonal to their orientation, along with torque along the axis orthogonal to the room plane. All agents have an integer health; when their health becomes 0 or less, they despawn and are considered dead. Agents can attack other bodies, including other agents, by using a _melee_, a close-ranged weapon. This is modeled as a segment of fixed length and parallel to the direction the agent is facing. An attack succeeds only if the target touches the melee.
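As an illustration of this attack model, the following is a minimal sketch of a segment-versus-circle hit test, assuming the melee segment originates at the attacker's center; the function name and this exact geometry are assumptions, since in practice such checks are delegated to the physics engine.

```python
import math

def melee_hits(attacker_pos, attacker_angle, melee_length,
               target_pos, target_radius) -> bool:
    """Sketch of the melee hit test: the melee is a segment of fixed
    length pointing in the attacker's facing direction; the attack
    succeeds if the segment touches the target's circular body."""
    ax, ay = attacker_pos
    bx = ax + melee_length * math.cos(attacker_angle)
    by = ay + melee_length * math.sin(attacker_angle)
    tx, ty = target_pos
    # Project the target center onto the segment, clamping to [0, 1].
    dx, dy = bx - ax, by - ay
    t = ((tx - ax) * dx + (ty - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    # Distance from the target center to the closest point on the segment.
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(tx - cx, ty - cy) <= target_radius
```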
Agents can also carry items in a small inventory with a fixed number of slots. The item in the last slot can be either used, dropped on the ground, or given to another agent that is sufficiently close. Moreover, when an agent dies, the contents of their inventory are dropped on the ground, randomly scattered in a small circular region centered on the last position of the agent. Items are described more in detail in section 3.2.
Depending on the environment variant, agents can also be grouped into teams. In this case, agents can only attack other agents outside of their team and they can only give items to agents inside their team. Moreover, agents in the same team completely share their rewards.
### Items and Objects
Items are modeled as small, intangible circular bodies. Agents can pick them up in their inventory by touching the item body, which then despawns. Each item has a specific behaviour when it is used. This environment includes two kinds of items: _heals_ and _object items_. Heals are items that simply increase the health of their user. Object items are items that spawn other bodies, such as boxes, at the location of the user. These objects can be broken to drop an item that respawns them. In this environment, objects are rectangular boxes; the dimensions of each box are determined randomly at the start of each episode.
### The Safe Zone
The _safe zone_ is a popular mechanic in competitive "Battle-Royale" video games. It is a moving region of the map that shrinks as the game advances, hurting all players that are outside of it with a small but constant damage. Thus, not staying inside the safe zone is eventually lethal. Moreover, the safe zone usually shrinks to a void region near the end of the game, so that players cannot survive indefinitely. The safe zone does not by itself introduce competitiveness in the environment; however, if paired with an appropriate reward scheme, it indirectly forces agents to fight for their spot in the safe zone to stay alive longer.
In this environment, the safe zone is circular and alternates between a stationary phase and a shrinking-moving phase. The positions of the center for each phase are drawn randomly, such that the zone is always fully inside the room. The radius of the zone decreases linearly during each shrinking-moving phase, until it reaches zero after the last phase.
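A minimal sketch of such a schedule is given below, assuming that during a shrinking-moving phase both the center and the radius are linearly interpolated towards the next stationary phase; the exact interpolation used by the environment is an assumption here.

```python
def safe_zone(t, phase_start, phase_end, c0, r0, c1, r1, shrinking):
    """Sketch of the safe zone schedule: during a stationary phase the
    zone is fixed; during a shrinking-moving phase its center and radius
    are linearly interpolated towards the next phase."""
    if not shrinking:
        return c0, r0
    a = (t - phase_start) / (phase_end - phase_start)  # progress in [0, 1]
    cx = (1 - a) * c0[0] + a * c1[0]
    cy = (1 - a) * c0[1] + a * c1[1]
    r = (1 - a) * r0 + a * r1  # the radius decreases linearly each phase
    return (cx, cy), r
```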
### Observation and Action Spaces
The observation \(x\) available to each agent is in the form
\[x=(x_{\text{self}},x_{\text{zone}},x_{\text{entities}})\]
All the components are described in more detail below. All the scalar values of the observation are real numbers, so that the observation space is continuous.
Figure 1: Screenshot of an environment variant with heals and boxes and 2 agents. The segment originating from the agents is the melee, the green bar on top of them is their health, while the number appearing on their side is their agent ID.
\(x_{\text{self}}\) contains the information about the observing agent, and is in the form
\[x_{\text{self}}=(i,h,x,y,\theta,v_{x},v_{y},\omega)\]
where:
* \(i\) is the ID of the agent;
* \(h\) is its current health;
* \(p=(x,y)\) is its absolute position;
* \(\theta\) is its orientation;
* \(v=(v_{x},v_{y})\) is its velocity;
* \(\omega\) is its angular velocity.
Additionally, if the agents are divided into teams, \(x_{\text{self}}\) also contains the ID of the team after the agent ID.
Figure 3: Screenshot of an environment variant with heals, boxes, and agents divided into teams.
Figure 2: Screenshot of an environment variant with only heals and 2 agents.
\(x_{\text{zone}}\) observes the current center and radius of the safe zone, along with the center and radius of the next phase of the safe zone. More precisely, it is in the form
\[x_{\text{zone}}=(c_{x},c_{y},r,c_{x}^{\prime},c_{y}^{\prime},r^{\prime})\]
where \((c_{x},c_{y})\) is the center of the current safe zone, \(r\) its radius, and \((c_{x}^{\prime},c_{y}^{\prime})\) and \(r^{\prime}\) are the center and radius of the safe zone in its next stationary phase.
The remaining component of the observation, \(x_{\text{entities}}\), contains all the information about the various entities that are in the environment, and is in the form
\[x_{\text{entities}}=(x_{\text{other}},x_{\text{heal}},x_{\text{box}},x_{\text{ boxitem}},x_{\text{healslot}},x_{\text{boxslot}})\]
All the components \(x_{\text{entities}_{i}}\) of \(x_{\text{entities}}\) have a similar structure but different sizes, and each corresponds to a different type of entity in the environment. Each \(x_{\text{entities}_{i}}\) is in the form

\[x_{\text{entities}_{i}}=(x_{\text{allents}_{i}},x_{\text{mask}_{i}})\]

\(x_{\text{allents}_{i}}\) is a matrix which contains, for each entity of that type, the actual information about it, stored as rows. The \(x_{\text{mask}_{i}}\) component, instead, is a single vector that contains binary values. Each of these values corresponds to a row of \(x_{\text{allents}_{i}}\) and tells whether the observing agent can see that entity with its camera or not (a minimal sketch of how one such masked component can be assembled is given after the following list). The dimension and contents of each row of \(x_{\text{allents}_{i}}\) depend on the entity type as follows:
* For \(x_{\text{other}}\), the entities are the other agents; each row is thus in the same form as \(x_{\text{self}}\).
* For \(x_{\text{heal}}\), entities are the heal items, and each row contains the position \((x,y)\) of the item.
* For \(x_{\text{box}}\), the entities are the box objects. Each entity row contains: the 2D positions of each of its 4 vertices, as if the box was centered at the world origin; the translation vector to the actual center of the box in the world; the actual orientation of the box in the world.
* For \(x_{\text{boxitem}}\), the entities are the items corresponding to broken boxes. Each row contains: the 2D positions of each of its 4 vertices, as if the box was centered at the world origin; and the position of the box item in the world.
* \(x_{\text{healslot}}\) always contains a single entity, which represents the last slot of the agent's inventory, if it contains a heal item. Otherwise, its only row is masked out in the corresponding \(x_{\text{mask}_{\text{healslot}}}\).
* Similarly, \(x_{\text{boxslot}}\) also contains a single entity, which represents the last slot of the agent's inventory, if it contains a box item. Otherwise, its only row is masked out in the corresponding \(x_{\text{mask}_{\text{boxslot}}}\).
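Below is a minimal sketch of how one such masked component could be assembled; the fixed number of slots and the zero-padding of absent entities are assumptions made for illustration.

```python
import numpy as np

def entity_observation(rows, num_slots, row_dim):
    """Sketch of one x_entities_i component: a fixed-size matrix of
    entity rows plus a binary visibility mask, with absent or
    non-visible entities masked out (padding rows are zeroed)."""
    x_allents = np.zeros((num_slots, row_dim), dtype=np.float32)
    x_mask = np.zeros(num_slots, dtype=np.float32)
    for i, (row, visible) in enumerate(rows[:num_slots]):
        x_allents[i] = row
        x_mask[i] = 1.0 if visible else 0.0
    return x_allents, x_mask
```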
The action space of the environment is discrete, and each action \(a\) is in the form
\[a=(a_{x},a_{y},a_{\theta},a_{\text{atk}},a_{\text{use}},a_{\text{give}})\]
where:
* \(a_{x}\), \(a_{y}\) and \(a_{\theta}\) all take values in \(\{-1,0,1\}\), and give the direction of the linear and angular force that the agent can apply to itself; their value is multiplied by a constant parameter that controls the magnitude of the resulting forces;
* \(a_{\text{atk}}\) is a binary action that controls whether the agent is trying to attack with its melee at the current time step;
* \(a_{\text{use}}\) is a binary action that, when active, makes the agent try to use the last item it picked up in its inventory;
* \(a_{\text{give}}\) is a binary action, and controls whether the agent is trying to give the last item it picked up in its inventory to the nearest suitable agent or teammate (a minimal encoding sketch of this action space is given below).
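The following is a minimal sketch of one possible encoding of this action space, using Gym's `MultiDiscrete` space; the specific index-to-direction mapping is an assumption.

```python
from gym.spaces import MultiDiscrete

# Three ternary movement components followed by three binary actions
# (attack, use, give); indices {0, 1, 2} map to directions {-1, 0, +1}.
action_space = MultiDiscrete([3, 3, 3, 2, 2, 2])

def decode(a):
    ax, ay, atheta = (int(a[i]) - 1 for i in range(3))   # in {-1, 0, 1}
    attack, use, give = (bool(a[i]) for i in range(3, 6))
    return ax, ay, atheta, attack, use, give
```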
### Rewards and Episode Termination
The reward function \(R\) of a single agent or team is deterministic, depends only on the states \(s\) and \(s^{\prime}\), and is defined as
\[R_{i}(s,s^{\prime})=I_{\text{alive}}(s,s^{\prime})r_{\text{alive}}+(1-I_{\text{ alive}}(s,s^{\prime}))r_{\text{dead}}+n_{\text{kills}}(s,s^{\prime})r_{\text{kill}}+I_{ \text{death}}(s,s^{\prime})r_{\text{death}}\]
where:
* \(I_{\text{alive}}(s,s^{\prime})\) is 1 when the agent is alive, and 0 otherwise;
* \(I_{\text{death}}(s,s^{\prime})\) is 1 if the agent died in the transition from \(s\) to \(s^{\prime}\), and 0 otherwise;
* \(n_{\text{kills}}(s,s^{\prime})\) is the number of enemy agents the agent has successfully killed in the transition from \(s\) to \(s^{\prime}\);
* \(r_{\text{alive}}\) is a parameter controlling the reward obtained by live agents at each step;
* \(r_{\text{dead}}\) is a parameter controlling the reward obtained by dead agents at each time step;
* \(r_{\text{kill}}\) is the reward obtained by each successful kill;
* \(r_{\text{death}}\) is the reward obtained at the time step at which the agent dies.
Each environment variant can specify different values for these four parameters; a minimal sketch of the resulting reward computation is given below.
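This sketch mirrors the formula above; the function signature and the default values (which correspond to the low-competitiveness scheme described later) are illustrative.

```python
def step_reward(alive, died, kills, r_alive=1.0, r_dead=0.0,
                r_kill=0.0, r_death=0.0):
    """Per-step reward R_i(s, s'): the four parameters are set per
    environment variant."""
    r = r_alive if alive else r_dead   # I_alive and (1 - I_alive) terms
    r += kills * r_kill                # n_kills term
    if died:
        r += r_death                   # I_death term
    return r
```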
There are two possible termination conditions for an episode, depending on the environment variant. The first is to consider an episode terminated when all agents or teams are dead. The second is to end an episode when at most one agent or team is still alive. Along with the four reward parameters described above, this choice determines the level of competitiveness of the environment. For example, consider an environment in which only \(r_{\text{alive}}=1\), while all other parameters are 0, and the episode terminates when all agents are dead. In this environment, there is no significant advantage in adopting an aggressive strategy and attacking other agents, as long as the safe zone is big enough. The best strategy is simply to survive as long as possible. On the other hand, if \(r_{\text{kill}}\) is positive and the episode ends when only one agent or team is alive, there is considerable advantage in killing opponents to collect reward. Note that depending on the precise value of \(r_{\text{kill}}\), it may also be convenient to survive as long as possible before engaging opponents to collect the kill reward.
## 4 Experiments and Results
This section details the experimental setup and the results of all the reported experiments. The following paragraphs discuss some aspects common to all experiments. Section 4.1 describes the policy optimization approach, while subsequent sections contain experiment results.
The three main aspects that differentiate the environment variants used for these experiments are:
* whether the agents are grouped into teams or not;
* competitiveness of the reward scheme and episode ending conditions;
* complexity of the environment.
Two teaming modes were used here: free-for-all, and division of the agents into 2 opposing teams. The competitiveness varied among:
* _low_ competitiveness: \(r_{\text{alive}}=1\), all other reward parameters set to 0, and episodes ending only when all agents were dead;
* _medium_ competitiveness: \(r_{\text{alive}}=1\), \(r_{\text{dead}}=-1\), the other two reward components set to 0, and episodes ending when only one agent or team was still alive;
* _high_ competitiveness: \(r_{\text{alive}}=1\), \(r_{\text{dead}}=-1\), \(r_{\text{kill}}=100\), \(r_{\text{death}}=-100\), and episodes ending when only one agent or team was still alive.
The complexity of the environment mainly involved the presence or absence of randomly shaped boxes.
Besides the rewards obtained by every agent or team, the following variables were also recorded for each episode of experience during training:
* the number of heal items consumed;
* the number of boxes placed by consuming their item;
* the number of kills caused by every agent or team.
Performance tests were performed on all variants described below, with the results reported in table 2.
### Policy Optimization
The policies of the agents are represented with Artificial Neural Network models. Each agent has two networks associated with it: the first defines a stochastic policy by outputting action distributions; the second is used to approximate the value function. The architectures of the agent networks are based on fully-connected layers and masked self-attention to account for the varying number of entities observed by each agent. These models were optimized using the Proximal Policy Optimization algorithm [31] and Generalized Advantage Estimation [30]. A version of the StableBaselines 3 implementation of PPO [27] was used, modified to optimize multiple agents at once. This implementation features early stopping during each step of policy optimization, based on an estimate of the Kullback-Leibler divergence between the old and new policies. Early experiments in this environment showed that this was a key aspect of the policy optimization algorithm, and adopting a fixed number of SGD steps instead of early stopping produced either destructive updates to the policies, or much slower learning; a minimal sketch of this early-stopping mechanism is given below. Table 1 reports the values of the hyperparameters used for policy optimization.
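The sketch below illustrates KL-based early stopping inside the PPO epoch loop, following the approximate-KL estimator and the 1.5x-target threshold used by Stable-Baselines3-style implementations; the `policy.log_prob` interface and the batch layout are assumptions.

```python
import torch

def ppo_update(policy, optimizer, batches, n_epochs=10, target_kl=0.01):
    """Clipped PPO update with early stopping on the estimated KL
    divergence between the old and new policies."""
    for epoch in range(n_epochs):
        for old_logp, obs, act, adv in batches:
            new_logp = policy.log_prob(obs, act)  # hypothetical interface
            ratio = torch.exp(new_logp - old_logp)
            # Clipped surrogate objective with epsilon = 0.2 (Table 1).
            loss = -(torch.min(ratio * adv,
                               torch.clamp(ratio, 0.8, 1.2) * adv)).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Low-variance KL estimator: mean((r - 1) - log r).
            with torch.no_grad():
                log_ratio = new_logp - old_logp
                approx_kl = ((torch.exp(log_ratio) - 1) - log_ratio).mean()
            if approx_kl > 1.5 * target_kl:
                return  # stop early: the policy moved too far this rollout
```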
As is common in the multi-agent Reinforcement Learning literature, the parameters of the agent networks were shared between all agents [2]. Contrary to the approaches in Baker et al., Pinto et al., Lowe et al. and Foerster et al., in which agents are trained on "omniscient" observations, here non-visible entities were masked also during training. This choice was made after early experiments revealed that agents trained with omniscient observations were slightly less prone to explore their surroundings in search of items, e.g. heals.
The architectures of the policy and value networks used for the agents are very similar, with the only notable difference being the last few layers. Remember that the agent observations \(x\) are in the form
\[x=(x_{\text{self}},x_{\text{zone}},x_{\text{entities}})\]
And each \(x_{\text{entities}_{i}}\) is in the form
\[x_{\text{entities}_{i}}=(x_{\text{allents}_{i}},x_{\text{mask}_{i}})\]
The inputs \(x_{\text{self}}\) and \(x_{\text{zone}}\) are concatenated and fed to a fully-connected layer, obtaining the non-entity embedding \(z\). For each of the entity types, the corresponding rows of \(x_{\text{allents}_{i}}\) are all passed through the same fully-connected layer, with output dimension \(N_{e}\) shared between all entity types. In this way, all the entity embeddings can be concatenated into a single sequence of embedded entities \(E\). Then, the masks \(x_{\text{mask}_{i}}\) for all entity types are concatenated to obtain a mask that refers to entity rows in \(E\). \(E\) and this mask are then passed through a Self-Attention layer, which is an Attention layer where queries, keys and values all come from the same sequence. The output sequence of this Attention layer is then passed through an average pooling to obtain a vector of fixed length \(z_{\text{ent}}\), which does not depend on the number of entities. \(z\) and \(z_{\text{ent}}\) are then concatenated and fed through two fully-connected layers. The policy network then has \(N_{a}\) fully-connected heads, one for each action, each with output dimension equal to the number of possible values for that action. A softmax activation is then used to output an action distribution for each action component. The value function network, instead, simply has one fully-connected head with a single output and no activation function. All fully-connected hidden layers use ReLU activations.

\begin{table}
\begin{tabular}{|c|c|}
\hline
**Hyperparameter** & **Value** \\
\hline
Rollout buffer size \(T\) & \(2^{14}=16384\) \\
Minibatch size \(B\) & \(2^{9}=512\) \\
PPO Clip parameter \(\epsilon\) & 0.2 \\
Discount factor \(\gamma\) & 0.99 \\
GAE parameter \(\lambda\) & 0.95 \\
Target KL divergence \(\delta\) & 0.01 \\
\hline
\end{tabular}
\end{table}
Table 1: Policy optimization hyperparameters.
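A minimal sketch of the masked self-attention and pooling step is given below, assuming PyTorch's `nn.MultiheadAttention`; averaging over visible entities only, rather than over all slots, is an assumption.

```python
import torch
import torch.nn as nn

class EntityEncoder(nn.Module):
    """Sketch of the shared trunk: the sequence of embedded entities E is
    passed through masked self-attention and average-pooled into a
    fixed-length vector z_ent."""
    def __init__(self, emb_dim=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(emb_dim, n_heads,
                                          batch_first=True)

    def forward(self, entities, mask):
        # entities: (B, N, emb_dim); mask: (B, N) with 1 = visible.
        # At least one visible entity per sample is assumed.
        pad = mask == 0  # True entries are ignored by the attention
        out, _ = self.attn(entities, entities, entities,
                           key_padding_mask=pad)
        out = out.masked_fill(pad.unsqueeze(-1), 0.0)
        # Average pooling over visible entities only.
        denom = mask.sum(dim=1, keepdim=True).clamp(min=1.0)
        return out.sum(dim=1) / denom
```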
### Free-for-all
The first group of experiments performed involved variants of the environment in which each agent plays against all other agents. In all variants, the number of agents was 2. The variants are as follows:
1. free-for-all with low competitiveness and only heal items;
2. free-for-all with medium competitiveness and only heal items;
3. free-for-all with medium competitiveness, heal items and boxes;
4. free-for-all with high competitiveness, heal items and boxes.
All variants were allocated 10 million environment steps for training. Figures 4, 5, 6 and 7 show the results for the 4 free-for-all variants. In all of them, the reward and the number of heals used steadily increase for the whole training process. This suggests that all the agents have learned the basic strategy of following the safe zone and collecting heals to survive longer. To confirm this, the policies learned by agents in all 4 variants were visually inspected over multiple instances of the environment, giving rise to the following observations:
1. In all variants, the agents learn to follow the safe zone and collect heal items when they are in their line of sight. However, in variant 4 the agents apply this strategy much more inconsistently, often moving around outside the zone, or failing to collect heals in their line of sight. In other variants, where episodes often get to the point where the safe zone disappears, agents have also learned to move frantically to try to find heal items, so that they can survive a little bit longer.

Figure 4: Results for the free-for-all variant 1. The number of kills, while still lower than 1, is significantly higher than in the other free-for-all variants. Its trend is to increase in the early stages of training and stabilize to a very noisy but seemingly constant value.
2. In variants 2 and 3, agents learn to avoid hurting their opponent by not attacking them when they are in range (note that if an agent is in the attack range of another, it is also always in its vision cone). Moreover, they do so in different ways: in variant 3, which also features boxes, agents only use the attack action when a box is in their attack range. Instead, in variant 2, the agents seem to "spam" the attack action when no agent is in range, but almost stop using it when an opponent is in range.
3. In variant 1, where there is no clear incentive for competitiveness, agents don't seek the opponent intentionally, but almost always use the attack action if they find an opponent is in range.
4. In variant 4, the outcome of episodes is very inconsistent, and mostly alternates between three kinds of behaviour: (1) episodes in which one agent kills the other in the initial phases, resulting in poor rewards; (2) episodes in which agents don't interact and die because they are eventually forced outside the safe zone, obtaining high reward; (3) episodes in which agents die because of inconsistency in following the safe zone.
5. In all variants, agents sometimes struggle with motion control, occasionally wandering outside the safe zone for a small period of time, or missing a heal item because their speed was too high.
### Teams
In the second group of experiments, agents were divided into two teams with shared reward and episode termination conditions. All variants feature 4 agents, so that each team was composed of 2 agents. The variants are as follows:
1. 2 vs 2 with low competitiveness, heal items and boxes;
2. 2 vs 2 with medium competitiveness, heals only;
Figure 5: Results for the free-for-all variant 2. Here, as agents learn to avoid hurting their opponent, the number of kills almost vanishes in the last stages of training. Note that the reward scheme results in the returns only differing by 1, so that the two return lines seem to coincide in the plot.
3. 2 vs 2 with medium competitiveness, heal items and boxes;
4. 2 vs 2 with high competitiveness, heal items and boxes.
As for the free-for-all variants, the training lasted until 10 million environment steps were reached. Results are shown in figures 8, 9, 10 and 11. The reward trend for all these variants, besides variant 4, is an initial increase until around 4 million environment steps. Afterwards, the rewards of both teams decrease significantly. The number of heals used follows the reward trend for all four variants. In variants 1 and 4, the number of boxes placed reaches values slightly greater than 1, even though it decreases back to smaller values in variant 1. The trend of kills per episode is heavily dependent on the variant, but doesn't reach values close to 1 for any variant. For the first 3 variants, the visual inspection of the policies has been performed both with the parameter values at 10 million steps and with earlier parameter values at 4 million steps. These are the resulting observations:
* At 10 million steps, in the first 3 variants agents have learned to inconsistently follow the safe zone and collect and use heal items. The inconsistency is mainly due to the agents moving around, often spreading out in different corners of the map. The agents don't intentionally seek opponents, but almost always attack them when they are in range. It is also often the case that one agent on each team dies fairly early in the episode by standing outside the safe zone, while its teammate manages to survive much longer. Note that in this case, the death of the first teammate does not directly affect the episode return for the team.
* At 4 million steps, the agents of the first 3 variants follow the safe zone much more consistently than at 10 million steps. Moreover, while it is not clear whether this is intentional, agents seem to find themselves more grouped towards the center of the safe zone. It is also less frequent that one of the teammates of each team dies in the first steps of the episode by standing outside the safe zone. As for the policies at 10 million steps, there is no intentional seeking for opponents, although agents almost always use the attack action on opponents (and possibly boxes) in range.
Figure 6: Results for the free-for-all variant 3. The number of kills increases very slowly, as does its variance; however, its value still remains very close to 0. The number of boxes placed remains close to 0 for the whole training process. Note that the reward scheme results in the returns only differing by 1, so that the two return lines seem to coincide in the plot.
* In variant 4, the agents inconsistently follow the safe zone and collect heal items. As in other variants, they attack opponents and boxes when in range, but don't actively seek opponents to kill. Also in this case, the inconsistency comes from agents occasionally wandering around the map, stepping outside the safe zone.
\begin{table}
\begin{tabular}{|c|c|}
\hline
**Variant** & **Seconds per step (\(\mu\pm\sigma\))** \\
\hline
Free-for-all variant 1 & \(7.59\cdot 10^{-4}\pm 1.1\cdot 10^{-4}\) \\
Free-for-all variant 2 & \(7.58\cdot 10^{-4}\pm 1.2\cdot 10^{-4}\) \\
Free-for-all variant 3 & \(9.86\cdot 10^{-4}\pm 6.28\cdot 10^{-5}\) \\
Free-for-all variant 4 & \(1.04\cdot 10^{-3}\pm 2.97\cdot 10^{-5}\) \\
2 vs 2 variant 1 & \(1.47\cdot 10^{-3}\pm 1.48\cdot 10^{-4}\) \\
2 vs 2 variant 2 & \(1.21\cdot 10^{-3}\pm 1.30\cdot 10^{-4}\) \\
2 vs 2 variant 3 & \(1.40\cdot 10^{-3}\pm 7.25\cdot 10^{-5}\) \\
2 vs 2 variant 4 & \(1.42\cdot 10^{-3}\pm 6.08\cdot 10^{-5}\) \\
\hline
\end{tabular}
\end{table}
Table 2: Computational performance tests on the environment step function. Each result reports the mean and standard deviation of the time taken by the env.step call, over the first 100 steps of an episode. These are usually the most expensive steps, since all agents and objects are still in the world, so the estimates reported are pessimistic. The dependence on the objects present in the world is also confirmed by the fact that configurations featuring fewer agents or no boxes obtain faster times. Moreover, configurations with the same number of objects show almost perfectly overlapping results. The tests were run on a MacBook Pro '15 laptop, with an Intel Mobile Core i7 "Broadwell" (I7-5557U) CPU.
Figure 7: Results for the free–for–all variant 4. The number of kills slowly increases, still remaining close to 0, but with a more consistent trend than variant 2. The number of boxes placed also remains close to 0, and the number of heals is significantly lower than other variants.
## 5 Discussion
In all experiments, agents learned the basics of the environment: staying inside the safe zone, and collecting and using heal items. However, there is high variability in how consistently the agents follow this strategy, depending on the variant. The first 3 free-for-all variants are the only ones in which agents reliably follow this strategy. The causes seem to depend on the variant.
For the 2 vs 2 variants in which the return decreases in the last stages of training, one possible cause is some form of catastrophic forgetting. However, observations of the learned policies seem to suggest this is not the case. In fact, the unreliability of the policy in the later stages of training seems to be related to the agents' tendency to spread more around the map, even if this sometimes means ignoring the safe zone. This may be because agents didn't learn to avoid hurting each other, as in the free-for-all variants 2 and 3, and they instead have learned that standing near other agents is dangerous. In fact, theoretically, since the damage done by an agent's attack is much more powerful than the damage done by the safe zone, it is more important in the short term to avoid other agents than to stay inside the zone. Another reason, besides chance, that may have prevented agents from learning to avoid attacking opponents is the teaming mode. For team-based variants, since the reward is shared between teammates, the death of one teammate does not directly affect the reward, and thus there is more tolerance for sloppy teammates. This is the opposite of what happens, for example, in Baker et al., in which the "hide and seek" reward scheme directly punishes hiders that act poorly.
The only two emergent behaviours shown by these experiments are thus:
1. "pacifist" agents, in free-for-all variants 2 and 3; and
2. agents spreading around the map to avoid being killed, in most of the other variants.
In both cases, this means that agents learned to actively avoid combat interaction with the other agents. In the second case, this also suggests that training these configurations for more environment steps may not give rise to any other interesting emergent interactions. In the first case, instead, since agents stop attacking but don't actively run away from each other, combat interaction may still arise if trained more on the last phases of the episodes, in which fighting for a spot in the safe zone is crucial.

Figure 8: Results of the 2 vs 2 variant 1. The number of placed boxes and kills peaks at around 3 million steps, while the peak return is observed later, at around 4 million steps. The number of kills, in particular, still takes values close to 0, but significantly higher than variants 2 and 3. The number of heals used initially increases, and then follows a very slowly decreasing trend after around 4 million steps.
The reported results and the above considerations also highlight the following flaws in the environment variants considered:
1. Agents don't ever hold onto heal items, and never give them to another agent; instead, they always consume them as soon as they are collected. This is because there is no clear advantage in not doing so: using a heal at full health has the same effect as using a heal at 1 health.
2. Team rewards are such that a sloppy teammate does not affect the team reward, at least in the early phases of each episode. This is because the only advantage in all teammates surviving is the possibility of teaming up in combat against an opponent, or exchanging heals. Since the latter is unlikely due to flaw 1 and combat interactions are very sparse, this reward scheme produces teams with less consistent agents than the free-for-all configuration.
3. The magnitude of the reward for kills and deaths does not result in high combat interaction, and makes high-competitiveness variants more similar to those with medium competitiveness. In fact, since the episodes in both configurations end when only one agent or team is alive, it is better for agents not to engage in combat early in the episode, because surviving the combat may bring a much higher reward than the kill reward.
4. The dynamic controls for agent motion may be too limiting to allow for precise movements, resulting in less consistent performance overall.
Figure 9: Results of the 2 vs 2 variant 2. Again, the episode return peaks around 4 million steps. Kills have a noisy but higher value from 2 million steps to 4 million steps, and then slowly decrease until training ends. Nonetheless, kills always take values closer to 0 than 1. Heals follow the reward trend. Note that the reward scheme results in the returns only differing by 1, so that the two return lines seem to coincide in the plot.
## 6 Conclusions and Future Work
The two main contributions of this work are:
* developing a modular and efficient framework for 2D multi-agent environments, and implementing concrete, survival-based environments on top of it;
* conducting experiments on the developed variants, obtaining some basic emergent behaviours, and identifying concrete directions for the improvement of the tested environment variants.
Future work should address the flaws listed above before focusing on further experiments. Flaw 1 can be easily solved by introducing a maximum value for the health. Flaw 2 should be addressed by coming up with a different reward scheme for team-based variants, so that sloppy teammates are less tolerated, or by adding mechanics to the environment that promote team play. In this regard, solving flaw 1 may help by giving more value to inventory-based mechanics, such as sharing items or killing opponents to collect the items they were holding. Flaw 3 can be solved either by dismissing kill rewards and focusing more on indirectly competitive variants, or by simply increasing the reward given for kills.
Considering that the training scale at which the above experiments were conducted is significantly smaller than in related work, such as Baker et al., it may also be beneficial for future work to focus on fewer environment variants. This would make it possible to allocate a higher number of training steps to each variant, avoiding the need for more computing resources.
Figure 10: Results of the 2 vs 2 variant 3. The trends are not as clear as in the first 2 variants, but still show the return initially increasing and then degrading in later stages of training. The number of kills is noisy but still remains smaller than in the first 2 variants and close to 0, not showing any clear trend. Note that the reward scheme results in the returns only differing by 1, so that the two return lines seem to coincide in the plot. |
2307.05842 | The Butterfly Effect in Artificial Intelligence Systems: Implications
for AI Bias and Fairness | The Butterfly Effect, a concept originating from chaos theory, underscores
how small changes can have significant and unpredictable impacts on complex
systems. In the context of AI fairness and bias, the Butterfly Effect can stem
from a variety of sources, such as small biases or skewed data inputs during
algorithm development, saddle points in training, or distribution shifts in
data between training and testing phases. These seemingly minor alterations can
lead to unexpected and substantial unfair outcomes, disproportionately
affecting underrepresented individuals or groups and perpetuating pre-existing
inequalities. Moreover, the Butterfly Effect can amplify inherent biases within
data or algorithms, exacerbate feedback loops, and create vulnerabilities for
adversarial attacks. Given the intricate nature of AI systems and their
societal implications, it is crucial to thoroughly examine any changes to
algorithms or input data for potential unintended consequences. In this paper,
we envision both algorithmic and empirical strategies to detect, quantify, and
mitigate the Butterfly Effect in AI systems, emphasizing the importance of
addressing these challenges to promote fairness and ensure responsible AI
development. | Emilio Ferrara | 2023-07-11T23:32:26Z | http://arxiv.org/abs/2307.05842v4 | # The Butterfly Effect in Artificial Intelligence Systems: Implications for AI Bias and Fairness
###### Abstract
The Butterfly Effect, a concept originating from chaos theory, underscores how small changes can have significant and unpredictable impacts on complex systems. In the context of AI fairness and bias, the Butterfly Effect can stem from a variety of sources, such as small biases or skewed data inputs during algorithm development, saddle points in training, or distribution shifts in data between training and testing phases. These seemingly minor alterations can lead to unexpected and substantial unfair outcomes, disproportionately affecting underrepresented individuals or groups and perpetuating pre-existing inequalities. Moreover, the Butterfly Effect can amplify inherent biases within data or algorithms, exacerbate feedback loops, and create vulnerabilities for adversarial attacks. Given the intricate nature of AI systems and their societal implications, it is crucial to thoroughly examine any changes to algorithms or input data for potential unintended consequences. In this paper, we envision both algorithmic and empirical strategies to detect, quantify, and mitigate the Butterfly Effect in AI systems, emphasizing the importance of addressing these challenges to promote fairness and ensure responsible AI development.
Footnote †: journal: Machine Learning with Applications
## 1 Introduction
The Butterfly Effect, a fundamental concept in chaos theory, describes the sensitive dependence on initial conditions in nonlinear, dynamic systems. It was coined by American mathematician and meteorologist Edward Lorenz in the early 1960s (Lorenz, 1963). The Butterfly Effect suggests that small initial changes in complex, dynamic systems can result in significantly different and often unpredictable outcomes over time. The idea is best portrayed by the popular saying that the flap of a butterfly's wings in Brazil could set off a chain of events leading to a tornado in Texas. While working on numerical weather prediction models, Lorenz observed that small changes in initial conditions led to drastically different long-term forecasts. The importance of the Butterfly Effect extends beyond meteorology and has found applications in various scientific disciplines, including physics, engineering, biology, and social sciences. In these disciplines, the Butterfly Effect is used to describe the consequences of small changes or perturbations in complex systems: this effect highlights the non-linear nature of complex systems as well as the importance of understanding the interconnectedness of their various components and the challenges associated with accurate long-term predictions (Strogatz, 1994).
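This sensitivity is easy to reproduce numerically. The sketch below integrates two copies of the Lorenz system whose initial conditions differ by \(10^{-8}\), using simple (and deliberately crude) Euler steps; after a few dozen time units the trajectories are macroscopically different.

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system with the classic parameters."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

# Two trajectories whose initial conditions differ by 1e-8 in x.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])
for t in range(5000):  # 50 time units of integration
    a, b = lorenz_step(a), lorenz_step(b)
print(np.linalg.norm(a - b))  # the separation grows by many orders of magnitude
```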
### Relevance of the Butterfly Effect to AI fairness and bias
In the context of AI fairness and bias, the Butterfly Effect highlights the potential for small biases or skewed data inputs at various stages of algorithm development to result in significant and unexpected unfair outcomes. This phenomenon can manifest in various ways, such as small adjustments to input data, inherent biases within the data or algorithms themselves, shifts in data distributions, adversarial attacks, or feedback loops that amplify existing biases. The interconnected nature of AI systems, combined with their potential impact on society, makes it crucial to examine the Butterfly Effect's role in AI fairness and bias.
### Factors contributing to the Butterfly Effect in AI systems
The relevance of the Butterfly Effect to AI fairness and bias lies in the observation that AI and machine learning (ML) systems are complex and interconnected, with multiple components contributing to their final decision-making process. These components include data, algorithms, and user interactions, among others. As a result, seemingly
small changes or biases in these components can have a significant and potentially unforeseen impact on the fairness and bias of AI systems. Furthermore, seemingly small initial biases can propagate and cause large disparities; conversely, larger initial biases might have less dramatic consequences in real applications. Several factors can contribute to the emergence of the Butterfly Effect on AI fairness and bias, summarized in Table 1.
## 2 Examples of real-world emergence of the Butterfly Effect in AI systems
The Butterfly Effect can manifest in various ways in AI systems, leading to unintended consequences and exacerbating fairness and bias issues. Here, we discuss three real-world examples of the Butterfly Effect in AI systems.
### Facial recognition technology
Facial recognition algorithms have become increasingly prevalent in various applications, from social media platforms to law enforcement. However, these algorithms can exhibit significant performance disparities across different demographic groups due to imbalanced training datasets. Small biases in the demographic representation of these datasets can lead to large differences in the algorithm's accuracy for different groups, resulting in biased outcomes that disproportionately affect underrepresented populations.
Buolamwini and Gebru (2018) found that commercial facial recognition systems had higher error rates for darker-skinned and female subjects compared to lighter-skinned and male subjects. These disparities in performance can be attributed to the underrepresentation of specific demographic groups in the training data, leading to a Butterfly Effect in the fairness and bias of facial recognition systems.
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|p{142.3pt}|}
\hline
**Contributing Factor** & **Description** & **References** \\
\hline
High-dimensional input space & ML algorithms often operate on high-dimensional input data, which means they rely on a multitude of features to make decisions. Small perturbations in the input data, such as the removal or addition of features, can lead to vastly different model behavior and predictions. This sensitivity to input data can manifest as a Butterfly Effect, where slight adjustments result in significant and unintended consequences. & Barocas \& Selbst, 2016 \\
\hline
Nonlinearity and complexity of ML models & Many ML models, such as deep neural networks, are highly nonlinear and complex. This nonlinearity can make it challenging to predict how changes in input data or model parameters will affect the model's predictions. Consequently, biases or errors introduced during the training process may propagate and amplify, leading to biased and unfair outcomes. & Goodfellow et al., 2016 \\
\hline
Feedback loops and reinforcement of biases & ML systems can inadvertently create feedback loops that perpetuate and amplify biases. For example, if a biased ML system is used to generate new data, this new data may also be biased, reinforcing the system's existing biases over time. These feedback loops can create a Butterfly Effect, where small initial biases lead to increasingly biased outcomes as the system iterates. & Ensign et al., 2018 \\
\hline
Compounding effects of multiple components & AI systems often comprise multiple components, each potentially introducing its own biases. These biases can interact and compound in unpredictable ways, leading to a Butterfly Effect where the overall system exhibits greater bias than any single component. & Friedler et al., 2019 \\
\hline
Local minima and distribution shifts & Saddle points in the loss landscape can stall optimization, causing algorithms to converge to suboptimal solutions that exacerbate fairness and bias issues. Distribution shifts, when test data differs from training data, can lead to poor model performance, unforeseen biases, and unfair outcomes. Both factors can trigger the Butterfly Effect on AI fairness and bias, amplifying minor discrepancies and yielding significant, unpredictable consequences. & Slowik \& Bottou, 2021; Jordan, 2023; Rezaei et al., 2021 \\
\hline
Adversarial attacks & By exploiting small perturbations or vulnerabilities in AI models, adversarial attacks can intentionally introduce subtle changes to input data or manipulate the model's decision boundaries, causing the model to produce significantly different and biased outcomes. Small alterations introduced by adversarial attacks can lead to substantial and unpredictable consequences in AI fairness and bias, thereby manifesting the Butterfly Effect in these systems. & Nanda et al., 2021 \\
\hline
\end{tabular}
\end{table}
Table 1: Factors contributing to the Butterfly Effect in AI systems
### Healthcare algorithms
AI and ML models are increasingly being used to support decision-making in healthcare, such as identifying high-risk patients and guiding treatment decisions. However, biases in historical data and model assumptions can lead to biased predictions, disproportionately impacting minority populations.
Obermeyer et al. (2019) found that a widely-used commercial algorithm for predicting healthcare needs exhibited significant racial bias. The algorithm assigned lower risk scores to Black patients than White patients with similar health conditions, leading to disparities in access to care management programs. The study revealed that the algorithm relied on healthcare costs as a proxy for health needs, which inadvertently introduced bias due to racial differences in healthcare utilization. The small initial bias in the model's assumptions led to a Butterfly Effect, resulting in large disparities in the allocation of healthcare resources.
### Hiring algorithms
AI-based recruiting tools are increasingly being employed to streamline the hiring process and identify qualified candidates. However, these tools can perpetuate and amplify existing biases in the hiring process, leading to unfair outcomes and exacerbating societal inequalities (Raghavan et al. 2020).
In 2018, it was reported that an AI recruiting tool showed gender bias, favoring male candidates over female candidates for technical roles (Dastin, 2018). The bias emerged due to the training data, which consisted primarily of resumes submitted to the company over a ten-year period and reflected a male-dominated applicant pool. Additionally, the algorithm might have penalized resumes containing words associated with women, such as "women's" in phrases like "women's chess club." The biases present in the training data and the use of gender-biased proxies led to a Butterfly Effect, resulting in a biased AI system that perpetuated gender inequality in hiring.
### Large Language Models
Large language models, such as GPT-4 or Bard, could be significantly impacted by the Butterfly Effect. The training process for these models involves learning from vast amounts of text data, making them susceptible to minor changes in input data or algorithmic processes. A seemingly insignificant alteration in the training data, such as a slightly skewed representation of a particular demographic or viewpoint, can lead to substantial and unexpected biases in the model's output. Additionally, adversarial attacks and distributional shifts between training and test data could also lead to unforeseen outcomes. Hence, the Butterfly Effect could manifest in large language models, resulting in outputs that propagate and amplify pre-existing biases or inaccuracies - see Weidinger et al. (2021); Ferrara (2023).
## 3 Manifestations of the Butterfly Effect in AI systems
We explore the various manifestations of the Butterfly Effect on AI fairness and bias. These manifestations include small adjustments in input data, inherent biases within data or algorithms, feedback loops that amplify biases, and adversarial attacks exploiting vulnerabilities. By understanding these different ways in which the Butterfly Effect can impact AI systems, we can better identify potential sources of unfairness and bias and develop appropriate strategies to mitigate their effects, ensuring that AI systems produce fair and unbiased outcomes. Figure 1 summarizes the causes and manifestations of the Butterfly Effect in AI systems presented in detail next.
Figure 1: Root causes and associated manifestations of the Butterfly Effect in AI systems
### Small adjustments in input data
Small adjustments in input data can significantly impact the fairness and bias of AI systems, as these systems rely on vast amounts of data to make decisions. These adjustments can manifest in various ways, including changes in data sampling, demographic makeup, and feature selection, among others. This sensitivity to input data can lead to the Butterfly Effect, where minor alterations in input data result in significant and unintended consequences in the fairness and bias of AI systems.
#### 3.1.1 Data sampling
The manner in which data is collected and sampled can introduce biases in AI systems. If the data collection process is not carefully designed, it can lead to underrepresentation or overrepresentation of certain groups, which can in turn affect the fairness and bias of the AI system (Shankar et al., 2020). For example, non-representative sampling from a population can result in skewed datasets, causing AI systems to perform poorly for underrepresented groups.
#### 3.1.2 Demographic makeup
AI systems may exhibit biases when demographic makeup within the training dataset is imbalanced. For instance, if a particular demographic group is underrepresented in the training data, the AI system may not generalize well to that group, leading to biased and unfair outcomes (Buolamwini and Gebru, 2018). In facial recognition technology, the underrepresentation of certain demographic groups in training datasets can lead to disparities in the algorithm's accuracy for those groups. Ensuring that training datasets are representative of diverse populations is crucial to mitigate the Butterfly Effect and ensure the fairness of AI systems.
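The following is a purely synthetic sketch of this effect: a classifier is trained on data dominated by one group and its accuracy is then measured separately per group. The data-generating process and all numbers are illustrative assumptions, not taken from any cited study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group has its own feature distribution for the same labels.
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 1.0 + shift, scale=1.0, size=(n, 2))
    return X, y

# Group A dominates the training set; group B is underrepresented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Held-out accuracy per group: the boundary fits A, so B fares worse.
for name, (X, y) in {"A": make_group(1000, 0.0),
                     "B": make_group(1000, 1.5)}.items():
    print(name, clf.score(X, y))
```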
#### 3.1.3 Feature selection and engineering
Feature selection and engineering play a crucial role in shaping the behavior of AI systems. The choice of features used as inputs can significantly affect the fairness and bias of AI models (Chouldechova, 2017). For example, using features that are proxies for protected attributes, such as race or gender, can introduce bias into AI systems, even if the protected attributes themselves are not explicitly used. Additionally, the omission of important features that capture relevant information about the population may result in biased models that do not adequately account for differences between groups.
In conclusion, small adjustments in input data, such as data sampling, demographic makeup, and feature selection, can have a profound impact on the fairness and bias of AI systems. To mitigate the Butterfly Effect and ensure that AI systems promote fairness and equity, it is essential to carefully curate and preprocess input data to minimize biases and accurately represent diverse populations.
### Inherent biases within data or algorithms
We delve into the inherent biases within data or algorithms that can cause the Butterfly Effect on AI fairness and bias. These biases can originate from various sources, such as biased data collection processes or unintentional biases embedded in algorithm design.
Inherent biases within data or algorithms can lead to the Butterfly Effect on AI fairness and bias. Data biases can arise from historical discrimination, measurement errors, or other systemic issues affecting the data-generating process, while algorithmic biases can emerge from model assumptions, optimization techniques, or other design choices. These biases can propagate and compound throughout the system, leading to significant and unintended consequences.
#### 3.2.1 Biases in data
Data biases can emerge from various sources, such as historical discrimination or measurement errors. For instance, historical biases present in training data can lead AI systems to perpetuate or exacerbate existing inequalities. An example of this phenomenon is seen in the COMPAS recidivism risk assessment tool, which was found to have disparate impacts on different racial groups due to biases present in the training data (Angwin et al., 2016). Another source of data bias is measurement error, which occurs when variables in the dataset do not accurately represent the underlying constructs they are meant to capture. Measurement errors can introduce biases in AI systems, leading to unfair decision-making (Dressel and Farid, 2018).
#### 3.2.2 Algorithmic biases
Algorithmic biases can arise from various aspects of the AI system, such as model assumptions, optimization techniques, or other design choices. Model assumptions, like the choice of a linear model or the assumption of independence between features, can introduce biases if they do not accurately reflect the underlying data-generating process (Berk et al., 2018). Optimization techniques, such as regularization or the choice of a loss function, can also lead to biases if they prioritize certain objectives over others (Kearns et al., 2018). Lastly, other design choices, like the selection of hyperparameters or the choice of an ensemble method, can introduce biases into AI systems, which can propagate and compound over time (Grgic-Hlaca et al., 2018).
### Feedback loops that amplify biases
Feedback loops can amplify biases in AI systems, leading to the Butterfly Effect compromising fairness and bias. Feedback loops occur when the output of an AI system influences its future inputs, reinforcing and magnifying biases over time. This can result in a self-perpetuating cycle of unfair outcomes that disproportionately impact certain groups.
#### 3.3.1 Reinforcing feedback loops
Reinforcing feedback loops can occur when an AI system's biased predictions lead to actions that further perpetuate the initial biases. For example, predictive policing algorithms that rely on historical crime data can create a feedback loop by directing law enforcement resources to areas with higher reported crime rates (Lum and Isaac, 2016). If certain groups or neighborhoods are disproportionately targeted due to historical biases in the data, the increased police presence can lead to more arrests and crime reports, which in turn reinforce the initial biases in the AI system.
#### 3.3.2 Feedback loops in recommendation systems
Recommendation systems are another example where feedback loops can amplify biases. These systems often rely on user data to provide personalized recommendations, which can create a filter bubble that reinforces users' pre-existing preferences and biases (Nguyen et al., 2014). In turn, this can lead to biased content exposure and a lack of diversity in the information that users are exposed to, perpetuating existing social biases and contributing to polarization (Pariser, 2011).
#### 3.3.3 Algorithmic confounding
Algorithmic confounding is a phenomenon where a feedback loop between an AI system's predictions and the ground truth it aims to predict leads to biased and unfair outcomes. This can occur when the AI system's biased predictions influence the data used to evaluate its performance, making it difficult to disentangle the true effect of the AI system from the biases present in the data. In such cases, biased algorithms may appear to perform well due to the confounding effect, reinforcing the initial biases and leading to a self-perpetuating cycle of unfair outcomes.
To address feedback loops and mitigate their amplifying effect on biases, it is essential to carefully consider the potential consequences of AI systems' predictions on their future inputs and design mechanisms for monitoring and correcting biases as they emerge over time.
### Adversarial attacks exploiting vulnerabilities
Adversarial attacks can exploit vulnerabilities in AI systems, leading to the Butterfly Effect that compromises fairness and bias. These attacks involve the intentional manipulation of input data or model parameters to cause an AI system to produce biased, unfair, or otherwise undesirable outcomes. By exploiting the sensitivity of AI systems to small changes, adversarial attacks can have significant and unpredictable consequences on fairness and bias.
#### 3.4.1 Adversarial examples
Adversarial examples are carefully crafted input data designed to cause an AI system to produce incorrect or biased predictions (Szegedy et al., 2013). These examples can be generated by adding small, imperceptible perturbations to the input data, which can result in vastly different predictions due to the Butterfly Effect. Adversarial examples can be particularly problematic for fairness and bias, as they can be used to target specific demographic groups or individuals, leading to discriminatory outcomes (Sharif et al., 2016).
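To make the mechanism concrete, the sketch below shows a one-step gradient-sign perturbation of the kind described above. It is a minimal illustration of the general idea rather than the method of any cited work; the model, loss function, and `epsilon` value are placeholders.

```python
import torch

def adversarial_perturb(model, x, y, loss_fn, epsilon=0.01):
    """One-step gradient-sign perturbation (illustrative; epsilon is a placeholder)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Move each input element slightly in the direction that increases the loss;
    # the perturbation magnitude is bounded by epsilon, so it stays imperceptible.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```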
#### 3.4.2 Model inversion and membership inference attacks
Model inversion and membership inference attacks can exploit the vulnerabilities of AI systems to reveal sensitive information about the training data or individuals within the dataset (Fredrikson et al., 2015; Shokri et al., 2017). These attacks can lead to the Butterfly Effect compromising fairness and bias by revealing disparities in the demographic makeup of the training data or exposing biases in the AI system's predictions. Furthermore, the knowledge gained from these attacks can be used to craft more sophisticated adversarial examples, further exacerbating the impact of the Butterfly Effect on fairness and bias.
#### 3.4.3 Poisoning attacks
Poisoning attacks involve the manipulation of the training data or model parameters to introduce or amplify biases in an AI system (Biggio et al., 2012). By injecting carefully crafted examples into the training data or modifying the model parameters, adversaries can exploit the Butterfly Effect to cause an AI system to produce biased, unfair, or otherwise undesirable outcomes. Poisoning attacks can be particularly challenging to detect and mitigate, as the perturbations introduced by the attacker can be subtle and difficult to distinguish from natural variations in the data.
To defend against adversarial attacks and mitigate their impact on fairness and bias, it is essential to develop robust AI systems that can withstand small perturbations in the input data or model parameters (Madry et al., 2017). Additionally, regular monitoring and evaluation of AI systems for fairness and bias, as well as the implementation of privacy-preserving techniques, can help prevent adversaries from exploiting the Butterfly Effect to compromise the fairness and integrity of AI systems.
### Strategies to Mitigate the Butterfly Effect on AI fairness and bias
Next, we discuss various strategies to mitigate the Butterfly Effect on AI fairness and bias. These strategies encompass diverse aspects of AI system development, ranging from data collection and preprocessing to algorithmic fairness, evaluation and monitoring, and adversarial robustness. By employing these strategies, researchers and practitioners can work towards addressing the potential unintended consequences arising from small changes in input data or algorithmic design and ensure that AI systems are more transparent, accountable, and fair. Figure 2 summarizes the mitigation strategies presented in detail next.
#### 3.5.1 Data Collection and Preprocessing
Creating balanced and representative datasets is crucial for mitigating the Butterfly Effect on AI fairness and bias. Several techniques can be employed to ensure that datasets are balanced and accurately represent the population of interest.
Figure 2: Strategies to Mitigate the Butterfly Effect on AI fairness and bias
1. **Oversampling minority classes**: Oversampling involves creating copies of instances from minority classes to balance the class distribution. One well-known technique is the Synthetic Minority Over-sampling Technique (SMOTE), which generates synthetic instances of minority classes by interpolating between existing instances (Chawla et al., 2002). SMOTE can help alleviate the problem of overfitting associated with simple oversampling and lead to better performance in terms of fairness and generalization (see the sketch after this list).
2. **Undersampling majority classes**: Undersampling involves removing instances from majority classes to balance the class distribution. One effective undersampling technique is Tomek Links, which removes majority class instances that are close to the decision boundary (Kubat and Matwin, 1997). By removing these instances, the decision boundary becomes less sensitive to small changes in the data, mitigating the Butterfly Effect.
3. **Synthetic data generation**: Synthetic data generation can be used to create new, artificial instances for under-represented classes, ensuring that the dataset is representative of the population of interest. Techniques such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) can generate high-quality synthetic data that closely resembles the real-world distribution of the data, reducing the impact of small changes in the data on AI fairness and bias (Frid-Adar et al., 2018).
4. **Stratified sampling**: Stratified sampling is a method of sampling that involves dividing the population into homogenous groups, or strata, and sampling instances from each stratum in proportion to the size of the stratum. This technique can help ensure that the dataset is representative of the population of interest and reduce the sensitivity of AI systems to small changes in the data.
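As a minimal sketch of items 1 and 4, the snippet below oversamples a skewed toy dataset with SMOTE and keeps class proportions with stratified sampling when splitting. The synthetic dataset and random seeds are illustrative, and the `imbalanced-learn` package is assumed to be installed.

```python
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Toy dataset with a 9:1 class imbalance (illustrative only).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# Stratified sampling (item 4): each split preserves the class proportions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# SMOTE (item 1): synthesize minority-class instances by interpolation.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
print(Counter(y_train), "->", Counter(y_res))
```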
#### 3.5.2 Algorithmic Fairness
Algorithmic fairness is a critical aspect of machine learning that focuses on ensuring equitable treatment across different groups, mitigating biases, and reducing the sensitivity of models to minor changes in input data.
1. **Fairness-aware machine learning**: Fairness-aware machine learning aims to incorporate fairness constraints during the training process to minimize disparate treatment and disparate impact on different groups. Zafar et al. (2017) propose a convex optimization formulation for learning a classifier that satisfies various notions of fairness while maintaining high accuracy. The method minimizes the difference in correlations between the classifier's predictions and the sensitive attribute (e.g., race, gender) across different groups. By incorporating fairness constraints during the training process, this approach helps to mitigate the Butterfly Effect by reducing the sensitivity of the model to small changes in the input data.
2. **Post-processing methods for achieving fairness**: Post-processing techniques focus on adjusting the output of a trained model to ensure fairness. Hardt et al. (2016) propose a method that learns a transformation of the classifier's predictions to satisfy the equalized odds criterion (equal true positive and false positive rates across groups). The method requires no retraining of the original classifier and guarantees the best achievable trade-off between accuracy and fairness. By adjusting the model's output, post-processing methods can help to mitigate the Butterfly Effect by compensating for biases that may have been introduced during the training process.
3. **Fairness through awareness**: Dwork et al. (2012) introduce the concept of fairness through awareness, which requires that any two individuals who are similar with respect to a specific task should be treated similarly by the algorithm. They propose a Lipschitz condition on the classifier's behavior, ensuring that the model is less sensitive to small changes in the input data. Fairness through awareness can help to mitigate the Butterfly Effect by constraining the model's behavior to be consistent and less influenced by small perturbations in the data.
4. **Regularization for fairness**: Regularization techniques can be used to encourage fairness in AI models. Zhao et al. (2019) propose a method that incorporates a fairness-aware regularization term in the objective function during training. This term penalizes the difference between the predicted outcomes for different groups, encouraging the model to learn more equitable representations of the data (a minimal sketch follows this list). Regularization for fairness can help to mitigate the Butterfly Effect by discouraging the model from relying on small differences in the input data that may lead to unfair outcomes.
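The snippet below is a minimal sketch of item 4: a penalty on the gap between the mean predicted scores of two groups, added to the usual task loss. It follows the general idea rather than any specific published formulation; the 0/1 group encoding and the weight `lam` are assumptions.

```python
import torch

def group_gap_penalty(scores, group):
    # `scores`: model outputs in [0, 1]; `group`: 0/1 tensor for a sensitive attribute.
    # Penalizes the absolute difference in mean predicted score between the groups.
    return (scores[group == 1].mean() - scores[group == 0].mean()).abs()

# Inside a training step (task_loss from the usual objective, lam a tuning weight):
# loss = task_loss + lam * group_gap_penalty(torch.sigmoid(logits), group)
```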
#### 3.5.3 Evaluation and Monitoring
We emphasize the importance of assessing, understanding, and continuously overseeing the fairness and behavior of AI systems to detect and address potential biases and unintended consequences due to the Butterfly Effect.
1. **Fairness-aware performance metrics**: Evaluating the fairness of AI models requires metrics that capture the disparate impact on different groups. Verma and Rubin (2018) discuss a set of fairness-aware performance metrics, such as demographic parity, equalized odds, and equal opportunity. These metrics provide a quantitative measure of the difference in outcomes for different groups, helping to identify potential bias and unfairness in AI models (see the sketch after this list). By using fairness-aware performance metrics, practitioners can monitor the impact of small changes in the data or model and identify instances where the Butterfly Effect leads to unintended consequences.
2. **Auditing tools for AI systems**: Auditing tools can help to systematically analyze the behavior of AI systems to identify potential biases, fairness violations, and other issues. Grgic-Hlaca et al. (2018) present a method for auditing black-box models that involves perturbing the input data to explore the model's sensitivity to different features, specifically focusing on protected attributes. By systematically analyzing the model's behavior, auditing tools can help to identify potential Butterfly Effects and inform subsequent mitigation strategies.
3. **Model interpretability for fairness**: Model interpretability techniques can provide insights into the decision-making process of AI models, allowing for better scrutiny of potential bias and unfairness. Ribeiro et al. (2016) propose Local Interpretable Model-agnostic Explanations (LIME), a method for explaining the predictions of any classifier by approximating it locally with an interpretable model. By understanding the underlying factors that influence a model's decisions, practitioners can detect potential Butterfly Effects and address them through appropriate interventions.
4. **Continual monitoring and feedback**: Continual monitoring and feedback involve tracking the performance and fairness of AI systems over time, as well as collecting feedback from users to identify potential issues. Mitchell et al. (2018) discuss the importance of actively soliciting user feedback to identify biases, unfairness, and other issues that may not be apparent from traditional evaluation metrics. Continual monitoring and feedback can help to uncover Butterfly Effects that emerge over time or as a result of changing data distributions and user contexts.
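As a minimal sketch of the metrics in item 1, the functions below compute demographic-parity and equalized-odds gaps from binary predictions; the NumPy-array interface is an assumption for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    # Gap in positive-prediction rates between two groups.
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equalized_odds_gap(y_true, y_pred, group):
    # Largest gap in true-positive and false-positive rates across groups.
    gaps = []
    for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (group == 1)].mean()
                        - y_pred[mask & (group == 0)].mean()))
    return max(gaps)
```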
#### 3.5.4 Adversarial Robustness
We delve into the challenges and solutions associated with ensuring AI models remain fair and unbiased when faced with adversarial attacks.
1. **Robustness bias**: Nanda et al. (2021) suggest that relying solely on traditional notions of fairness based on a model's outputs may not be enough when models are susceptible to adversarial attacks. To measure robustness bias, they propose two methods and perform an empirical investigation on state-of-the-art deep neural networks using commonly employed real-world fairness datasets. Their findings demonstrate that subgroups categorized by sensitive attributes like race and gender are less robust and therefore more susceptible to adversarial attacks. It is thus important to consider robustness bias when evaluating real-world systems that rely on deep neural networks to make decisions.
2. **Techniques for improving AI model robustness**: To mitigate the Butterfly Effect on AI fairness and bias caused by adversarial attacks, it is essential to develop models that are robust against such perturbations. Madry et al. (2017) propose a training framework based on adversarial training, which aims to minimize the worst-case loss over a set of allowed perturbations. By training the model to be robust against adversarial examples, this approach helps to improve the model's robustness against small changes in the input data, reducing the risk of unintended consequences due to the Butterfly Effect.
3. **Defense strategies against adversarial attacks**: Defending against adversarial attacks is crucial for ensuring the robustness of AI models and mitigating the Butterfly Effect. Tramer et al. (2017) propose a defense strategy called ensemble adversarial training, which augments the training data with adversarial examples generated by an ensemble of models. This approach helps to increase the diversity of the adversarial examples used during training, making the model more robust against a broader range of attacks. By defending against adversarial attacks, this approach helps to reduce the impact of the Butterfly Effect on AI fairness and bias.
4. **Certified robustness**: Certified robustness aims to provide guarantees on the model's behavior under adversarial perturbations, ensuring that small changes in the input data do not lead to significant changes in the model's output. Cohen et al. (2019) introduce randomized smoothing, a technique that provides provable robustness guarantees for classifiers against l2-norm bounded adversarial perturbations. By providing certified robustness, this approach helps to mitigate the Butterfly Effect by ensuring that the model's behavior is stable under small perturbations in the input data.
5. **Adversarial detection**: Detecting adversarial examples is an essential step in defending against adversarial attacks and mitigating the Butterfly Effect on AI fairness and bias. Metzen et al. (2017) propose a method for
detecting adversarial examples by training a separate neural network to distinguish between clean and adversarial inputs. By detecting adversarial examples, this approach helps to prevent unintended consequences due to the Butterfly Effect, ensuring that the model's behavior remains fair and unbiased.
## 4 Conclusions
Throughout this paper, we have explored the Butterfly Effect's role in AI fairness and bias, a phenomenon rooted in chaos theory where small changes can lead to significant and unpredictable effects on complex systems. The manifestations of the Butterfly Effect in AI systems can arise from small adjustments in input data, inherent biases within data or algorithms, feedback loops that amplify biases, and adversarial attacks exploiting vulnerabilities.
Given the pervasive nature of AI systems and their increasing impact on various aspects of society, understanding the Butterfly Effect is crucial for ensuring fairness and minimizing unintended consequences. We have outlined a set of mitigation strategies that encompass data collection and preprocessing, algorithmic fairness, adversarial robustness, and continuous evaluation and monitoring.
Implementing these strategies can help researchers and practitioners develop AI systems that are more transparent, accountable, and fair, ultimately promoting equitable outcomes and fostering trust in these systems. By rigorously scrutinizing the Butterfly Effect's potential role in AI systems and proactively working to mitigate the negative consequences, we can better ensure that AI technologies serve the greater good and contribute positively to societal progress.
|
2310.17889 | Towards optimal multimode fiber imaging by leveraging input polarization
and deep learning | Deep learning techniques provide a plausible route towards achieving
practical imaging through multimode fibers. The results produced by these
methods are often influenced by physical factors like temperature, fiber
length, external perturbations, and polarization state of the input light.
Literature focuses on these different elements impacting deep-learning-enabled
multimode imaging, yet the effects of input polarization remain under-explored.
Here, we show experimentally that the state of polarization of light, being
injected at multimode fiber input, affects the fidelity of reconstructed images
from speckle patterns. Certain polarization states produce high-quality images
at fiber output, while some yield degraded results. We have designed a
conditional generative adversarial network~(CGAN) for image regeneration at
various degrees of input light polarization. At a particular polarization state
and with a thinner core multimode fiber, our network can reconstruct images
with an average structural similarity index (SSIM) exceeding 0.9. Hence, in the
case of multimode fibers that are held fixed, optimal imaging can be achieved
by leveraging deep learning models with the input light polarization state,
where the fidelity of images is maximum. We also show that the model can be
trained to image adequately for all input light polarization states when the
fiber has bends or twists. We anticipate that our work will be a stepping stone
toward developing high-resolution and less invasive multimode fiber endoscopes. | Jawaria Maqbool, Syed Talal Hassan, M. Imran Cheema | 2023-10-27T04:39:23Z | http://arxiv.org/abs/2310.17889v2 | # Towards optimal multimode fiber imaging by leveraging input polarization and conditional generative adversarial networks
###### Abstract
Deep learning techniques provide a plausible route towards achieving practical imaging through multimode fibers. However, the results produced by these methods are often influenced by physical factors like temperature, fiber length, external perturbations, and polarization state of the input light. The impact of other factors, except input light polarization, has been discussed in the literature for imaging applications. The input polarization has been considered by researchers while looking at the characterization and control of polarization in multimode fibers. Here, we show experimentally that the state of polarization of light, being injected at multimode fiber input, affects the fidelity of reconstructed images from speckle patterns. Certain polarization states produce high-quality images at fiber output, while some yield degraded results. We have designed a conditional generative adversarial network (CGAN) for image regeneration at various degrees of input light polarization. We demonstrate that in the case of multimode fibers that are held fixed, optimal imaging can be achieved by leveraging our CGAN model with the input light polarization state, where the fidelity of images is maximum. Our work exhibits high average structural similarity index values exceeding 0.9, surpassing the previously reported value of 0.8772. We also show that the model can be generalized to image adequately for all input light polarization states when the fiber has bends or twists. We anticipate our work will be a stepping stone toward developing high-resolution and less invasive multimode fiber endoscopes.
## 1 Introduction
Multimode fibers (MMFs) can lead to practical endoscopes because they are thinner and less invasive than single-mode fiber bundles [1, 2, 3]. The presence of numerous spatial modes in MMFs can be harnessed for transmitting images. However, light waves propagating through different fiber modes interfere with each other to form speckles or random patterns at the
fiber's distal end. Hence, its properties resemble scattering or disordered media like fog, diffusers, and biological tissues that scramble the information to produce a speckle phenomenon. Extraction of data from speckle patterns is a challenging task. Three main strategies are generally employed for image reconstruction from speckles: optical phase conjugation [4; 5], computation of the transmission matrix [6; 7; 8; 9], and deep learning [10]. Phase conjugation incorporates a complex interferometric method for measuring phase, and precise alignment is required between the camera and spatial light modulator. The transmission matrix (TM), on the other hand, aptly describes the relationship between an MMF input and output. It captures information about light absorption, reflection, and transmission through the medium. The TM measurement requires both amplitude and phase information. Accurate phase computation needs a stable reference arm and a nontrivial interference setup. Moreover, phase values are very sensitive to external perturbations. Hence, one TM can only be used for the transmission state in which it is calculated [6]. Recent research indicates that the challenges mentioned above can be addressed by applying deep learning techniques, leading to a more effective imaging process using MMFs [11; 12].
Previous deep learning works have shown various ways to improve MMF imaging in terms of accuracy, generalizability, and data requirements [13; 14; 12; 15; 16; 17; 18]. However, the effect of input polarization state changes on the reconstruction of images from speckles of multimode fibers has yet to be discussed thoroughly. Prior research has examined the characterization, statistics, and control of the polarization of light in multimode fibers. Due to random mode interference in multimode fibers, polarization mixing also occurs, which results in depolarized or partially polarized output [19]. On the other hand, it has been shown in [20] that the field distribution of some modes does not change during propagation through the fiber. Moreover, complete control of output polarization can be achieved using the eigenvectors and eigenvalues of the multimode fiber TM with orthogonal polarizations as a basis [21]. Considering these previous works, we hypothesize that input polarization can affect the reconstruction of images at the multimode fiber output and should be utilized to improve the MMF imaging process.
Here, we devise an experimental and computational way to quantify the input polarization impact on multimode fiber imaging. We acquire output data of speckles for nine input polarization states at multiple MMF positions. We reconstruct original images from the acquired datasets using our designed conditional generative adversarial network (CGAN). Our CGAN model is fast (training time: 1 hour, inference time: 5.4 ms), stable, and yields accurate reconstructions. By varying the input polarizations and the fiber positions, we show that our system can produce average structural similarity index (SSIM) values above 0.9, which is higher than the previous value of 0.8772 [14].
We find that our model trained for one polarization state at a particular fiber position gives poor reconstruction results for another polarization state. To improve the generalizability of our deep learning model to reconstruct images for all polarization states at a specific fiber position, we merge an equal percentage of data from all nine polarization datasets to form one combined dataset. The model is trained on this dataset and tested on unseen data of each polarization state. This procedure is carried out separately for two fiber positions. Furthermore, we integrate subsets of eighteen datasets for both fiber positions. After training on this superset, our CGAN model can accurately reconstruct images for unseen data of
all polarization states of the two fiber positions under consideration. Hence, our work highlights that the input light polarization state affects the accuracy of reconstructed images from speckles at the multimode fiber output, and it can be harnessed in two ways: 1) for a fixed MMF orientation, the input polarization can be set to the state where we get optimal imaging results; 2) in scenarios where the fiber position can change, we must train our model on data measured while constantly changing the fiber position and input light polarization state. In this way, we can get satisfactory reconstruction results for any input polarization state.
We now describe the rest of the paper. Section 2 details the experimental setup for data collection, followed by Section 3, in which we describe the data acquisition procedure. Section 4 is dedicated to our deep learning framework, where we introduce its architecture, training processes, and integration with the data gathered in the previous sections. Section 5 presents our methodology for evaluating the input polarization impact on the reconstructed images' quality and offers insights into the system's sensitivity to polarization variations. Section 6 explains our model's generalization ability to diverse input polarizations. Finally, Section 7 summarizes our findings and highlights potential avenues for future research.
## 2 Experimental setup
The experimental setup schematic is illustrated in Fig. 1. We utilize a 633 nm continuous-wave laser diode (Eagleyard GC-02940) operated via Thorlabs CLD1015 controller. After reflection by mirrors, the laser light is collimated through a telescopic system comprising two lenses with focal lengths of 500 mm and 100 mm. A polarizer is placed after the telescopic system to achieve horizontal polarization for optimal phase modulation with the HOLOYE Pluto 2.0 spatial light modulator (SLM). The polarized laser beam is then directed onto a 50/50 beam splitter (BS). Half of the beam is transmitted towards the SLM, while a beam blocker blocks the remaining half. Once reflected by the SLM, the phase-modulated light passes through the BS and is imaged by lens 3 onto collimator 2, which in turn focuses the image of the phase-modulated light onto the input of a multimode fiber. The multimode fiber has core and cladding diameters of 50 \(\mu\)m and 125 \(\mu\)m, respectively, with a length of 1 m and a numerical aperture (NA) of 0.22. Before the fiber input, a half-wave plate (HWP) and a quarter-wave plate (QWP) are positioned to attain any desired state of polarization (SOP). The multimode fiber converts all the information the laser light carries into a speckle pattern. The speckle pattern emerging from collimator 3 is imaged by lens 3 onto a Thorlabs DCC1545M CMOS camera with a resolution of 1280\(\times\)1024 pixels.
## 3 Data acquisition for different polarization states
Initially, we place the multimode fiber in position 1, as shown in Fig. 1, and its orientation remains fixed for all measurement sets. Fixing the position is essential as speckle patterns change with a change in the orientation of the fiber [15]. A state of polarization (SOP) at the fiber input is set with HWP and QWP while observing SOP on Thorlabs's PAX1000IR1/M polarimeter. We choose nine different SOPs comprising linear, circular, and elliptical polarizations, as shown in Fig. 2. The Stokes parameters indicated in Fig. 2 are measured using
the polarimeter. For each input SOP, a computer sends the Modified National Institute of Standards and Technology (MNIST) data of 60,000 handwritten digits on SLM. The images are of size 28\(\times\)28 pixels and are up-sampled to 64\(\times\)64 pixels before being sent on SLM. The light reflected from SLM now contains images of handwritten digits. After passing through MMF, this light produces speckle patterns recorded by the computer connected to the camera. The speckle patterns saved on the system are cropped to dimensions of \(256\times 256\) pixels. The process of speckle data collection for 60,000 images takes approximately 24 hours. We perform this procedure for nine input polarization states, resulting in the formation of nine different datasets. To further gauge the input polarization effect on the accuracy of imaging through MMF, we change the fiber position and repeat the methodology of acquiring nine datasets for nine different polarization states.
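A minimal sketch of the per-image preprocessing described above is given below. The paper does not name an image library, so OpenCV is an assumption, as is the centre location of the 256\(\times\)256 crop window.

```python
import cv2  # assumption: OpenCV for resizing/cropping; the text does not specify a library

def prepare_pair(digit_28x28, camera_frame, crop=256, slm_size=64):
    # Up-sample the 28x28 MNIST digit to 64x64 before display on the SLM.
    digit = cv2.resize(digit_28x28, (slm_size, slm_size), interpolation=cv2.INTER_LINEAR)
    # Crop the recorded 1280x1024 speckle frame to 256x256 (centre crop assumed).
    h, w = camera_frame.shape[:2]
    y0, x0 = (h - crop) // 2, (w - crop) // 2
    speckle = camera_frame[y0:y0 + crop, x0:x0 + crop]
    return digit, speckle
```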
## 4 Deep learning framework for image reconstruction
Figure 1: Experimental schematic illustrating our multimode fiber (MMF) imaging process using a combination of input polarization and deep learning techniques. The illustration depicts only one example of MMF positioning. For other fiber positions, curling or bending the fiber is done arbitrarily. C:Collimator, M:Mirror, L:Lens, P:Linear polarizer, BS:Beam splitter, SLM:Spatial light modulator, HWP:Half wave plate, QWP:Quarter wave plate, and MMF:Multimode fiber.
After the formation of data sets at various distinct input polarization states, the next step is reconstructing original images from speckle patterns. For this, we design a pix2pix model based on CGAN, as shown in detail in Fig. 3. The generator is a U-Net-type architecture with an encoder, decoder, and skip connections. We first down-sample the speckle patterns to size 64\(\times\)64\(\times\)1 and apply them as the input to the generator. The generator is enabled with robust feature extraction capabilities due to several convolution and deconvolution layers. In addition, the skip connections allow weight sharing and preserve feature information across different network layers. The output has the same resolution as the input. The discriminator is composed of five convolution layers and one flattened layer. The generator's output, concatenated with the true label, is employed as input to the discriminator, which works on classifying a patch in an image as real or fake.
Figure 2: The nine polarization states used in this work are depicted on the Poincaré sphere along with the three Stokes parameters (S\({}_{1}\),S\({}_{2}\),S\({}_{3}\)) [22] that are measured experimentally.
Figure 3: The structure of our designed conditional generative adversarial network (CGAN).
CGAN has been used previously for reconstructing images from speckle patterns produced by multimode fibers [17, 18]. The highest average SSIM reported previously is 0.8772 [14]. In contrast to previous works, we use binary cross entropy (BCE) loss for the discriminator and an amalgam of mean squared error (MSE) and mean absolute error (MAE) for the generator. The MSE loss function minimizes the difference between real and generated data. It also overcomes the problem of vanishing gradients, resulting in stable training and high-fidelity results [23]. MAE loss aids in the regeneration of low-frequency details. The hybrid loss function for a CGAN is the weighted sum of the generative and discriminative losses (Eq.(1)). We define the discriminator and generator loss for our designed model in Eq.(2) and Eq.(3), respectively:
\[\mathcal{L}_{CGAN}=\mathcal{L}_{Gen}+\mathcal{L}_{Disc}, \tag{1}\]

\[\mathcal{L}_{Disc}=\lambda_{1}\,l_{1}(D(y,x),1)+\lambda_{1}\,l_{1}(D(G(x),x),0), \tag{2}\]

\[\mathcal{L}_{Gen}=l_{2}(D(G(x),x),1)+\lambda_{2}\,l_{3}(G(x),y), \tag{3}\]
where \(G()\) and \(D()\) are the generator and discriminator functions, respectively. The speckle pattern inputs are represented by \(x\) while the true labels are denoted by \(y\). We use \(l_{1}\) for BCE, \(l_{2}\) for MSE, and \(l_{3}\) for MAE. To optimize the model's performance, we incorporate weighting factors, \(\lambda_{1}\) and \(\lambda_{2}\), set at values of 100 and 0.5, respectively, to effectively balance the BCE and MAE losses. Out of 60000 pairs of speckle-MNIST digits in all polarization datasets, we reserve 5000 pairs for testing. For the remaining 55000 pairs, 85% are kept for training, and the remaining 15% are used for validation. For each dataset, the CGAN model takes 1 hour to train for 80 epochs, and the inference time for each reconstructed image is 4.6 ms. We realize the data collection using a Python 3.10.12 environment. Furthermore, we utilize the PyTorch framework for building, training, and testing the model. The whole mechanism of deep learning is accelerated by the NVIDIA Tesla V100 Tensor Core GPU.
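A minimal PyTorch sketch of the losses in Eqs. (1)-(3) is given below. Here \(G\) and \(D\) are stand-ins for the generator and discriminator, \(D\) is assumed to output probabilities, and only the weighting stated above comes from the text; everything else is illustrative.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, G, x, y, lam1=100.0):
    # Eq. (2): BCE on real pairs (target 1) and generated pairs (target 0).
    real = D(y, x)
    fake = D(G(x).detach(), x)
    return lam1 * (F.binary_cross_entropy(real, torch.ones_like(real)) +
                   F.binary_cross_entropy(fake, torch.zeros_like(fake)))

def generator_loss(D, G, x, y, lam2=0.5):
    # Eq. (3): MSE adversarial term (l2) plus weighted MAE reconstruction term (l3).
    fake = G(x)
    pred = D(fake, x)
    return F.mse_loss(pred, torch.ones_like(pred)) + lam2 * F.l1_loss(fake, y)
```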
We train and evaluate our CGAN model for all compiled polarization data sets at fiber positions 1 and 2. We use SSIM and peak signal-to-noise ratio (PSNR) as evaluation metrics for our restored images. SSIM compares the similarity between reconstructed digits and true labels based on their luminance, contrast, and structure. Its value varies between 0 and 1. SSIM value around zero means no similarity between two images, while a value closer to 1 denotes that the images are almost identical. Its expression is given by:
\[SSIM=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{(\mu_{x}^{2}+\mu_{y}^ {2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})}, \tag{4}\]
where \(\mu_{x}\) and \(\mu_{y}\) refer to the mean value over a window in images \(x\) and \(y\), respectively. \(\sigma_{x}\) and \(\sigma_{y}\) are standard deviations over a window of \(x\) and \(y\). \(\sigma_{xy}\) is the covariance over a window between image \(x\) and image \(y\) while \(C_{1}\) and \(C_{2}\) are constants.
Another metric that we have used is PSNR, which is the ratio between the maximum pixel value of the ground truth image (\(I_{max}\)) and the root mean squared error (RMSE) and is given by:
\[PSNR=20\log_{10}\frac{I_{max}}{RMSE}. \tag{5}\]
RMSE is determined between the pixel values of the original and the predicted images. The higher the value of PSNR, the better the quality of reconstructed images. Some of the reconstruction results from our designed CGAN are given in Fig. 4 and Fig. 5 for fiber positions 1 and 2, respectively. For brevity, the regenerated images for each fiber position are displayed for only two polarization states: one where SSIM and PSNR attain their respective maximum values and the other where they reach their minimum values. This choice is made to show a clear difference in image fidelity between these polarization states. As can be observed in Fig. 4, at polarization P9 (elliptical), the images are closer to the true labels (have high SSIM and PSNR) compared to polarization P4 (45-degree). P4 has relatively poor image regeneration results, especially for digits 2 and 3. The same can be observed in Fig. 5, where the average PSNR and SSIM attain their highest values at P1 (nearly horizontal) and lowest at P7 (elliptical).
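The evaluation itself can be reproduced with standard implementations of Eqs. (4) and (5); the sketch below uses scikit-image and assumes images normalized to [0, 1].

```python
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def score_pair(truth, pred):
    # Both images are float arrays in [0, 1]; data_range fixes I_max in Eq. (5).
    ssim = structural_similarity(truth, pred, data_range=1.0)
    psnr = peak_signal_noise_ratio(truth, pred, data_range=1.0)
    return ssim, psnr
```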
## 5 Assessing the input polarization effect
Figure 4: Representative image reconstruction results at two different polarization states for position 1 of MMF. At P9 (elliptical), the average SSIM and PSNR are maximum, while these metrics have the lowest average values at P4 (45-degree). Please see Fig. 2 for P9 and P4 Stokes parameters.
Figure 5: Representative reconstruction results at two different polarization states for position 2 of MMF. At P1 (nearly horizontal), the average SSIM and PSNR are maximum, while these metrics have the lowest average values at P7 (elliptical). Please see Fig. 2 for P1 and P7 Stokes parameters.
We record eighteen different data sets for two distinct positions of the multimode fiber at nine varying input polarization states shown in Fig. 2. We then train our designed CGAN for these data sets, followed by the model evaluation for 5000 unseen test images. The obtained average SSIM and PSNR for every data set are shown in Figs. 6 and 7. The varying magnitudes of these bar graphs illustrate that SSIM and PSNR change with input polarization states. For position 1, the maximum average SSIM of 0.9010 and PSNR of 22.89 are attained at elliptical polarization. At the same time, the minimum SSIM of 0.8430 and PSNR of 20.182 are obtained for linearly polarized light at 45 degrees. The percentage difference between the smallest and largest SSIM is 6.65%, while for PSNR, this variation is 12.5%. For position 2, the highest average SSIM of 0.9046 and PSNR of 23.202 are achieved for nearly horizontally polarized light. The lowest SSIM of 0.8225 and PSNR of 19.458 are obtained when the input light is vertically polarized. In this case, the percentage difference between the two extremities is 9.5% for SSIM and 17.55% for PSNR. We find that the deviation between evaluation metrics' values for some polarization states is more significant than others. It can also be inferred from the plots that the effect of different polarization states changes with a change in the fiber position. For example, at P1 and position 2, SSIM of 0.9046 and PSNR of 23.202 are high, but for position 1 and the same polarization state, these metrics drop to 0.8632 and 21.302, respectively. This is because the modal distribution changes with the bending or twisting of the fiber.
We repeat the data collection, model training, and testing procedure twice at each of the nine polarization states and for individual fiber positions. This is done to ensure the capture of the persistent impact of different polarization states on the fidelity of reconstructed images. When evaluated across various input polarization states, we observe that the percentage differences in SSIM and PSNR values for reconstructed images exhibit consistent results with only 1-2% marginal fluctuations. This means that if SSIM and PSNR are minimal at P4 (45-degree linear) compared to other polarization states, they will remain minimal, no matter how many times we repeat this process while keeping the position fixed.
Figure 6: The average SSIM and PSNR variations of 5000 unseen test images for individual polarization states when the fiber is fixed in position 1.
Physically, the relationship of image reconstruction with input polarization states can be elaborated in the following way. When light with a specific polarization state is launched into any fiber mode, it spreads to other modes. Due to modal coupling, polarization scrambling also occurs, resulting in different polarization states at the inputs and outputs of all modes. Moreover, higher-order modes suffer from higher attenuation than lower-order modes. For an arbitrarily polarized (\(p\)) input \(|\phi\rangle\), the output field is \(|\psi\rangle=t_{p}|\phi\rangle\), where \(t_{p}\) is the transmission matrix for \(p\)-polarized input. The total intensity of this polarization state is \(\langle\psi|\psi\rangle=\langle\phi|t_{p}^{\dagger}t_{p}|\phi\rangle\). The transmission range achieved in this state is defined by the eigenvalues of \(t_{p}^{\dagger}t_{p}\). The maximum energy that can be maintained in the same state of polarization is given by the largest eigenvalue, while the maximum energy retained in the orthogonal SOP is defined by the smallest eigenvalue [21]. The larger eigenvalues and their associated eigenvectors get their contribution from lower-order modes, leading to maximum transmission. The eigenvectors corresponding to smaller eigenvalues are influenced by higher-order modes, resulting in reduced transmission. Also, input wavefronts change due to data sent on the SLM. For some states of input polarization, the eigenvectors of most of the wavefronts from the SLM coincide with greater eigenvalues of \(t_{p}^{\dagger}t_{p}\), contributing to the maximum transmission of these wavefronts. This eventually improves the fidelity of reconstructed images, as most of the input information is retained while propagating through the fiber. On the contrary, for certain input SOPs, eigenvectors of input wavefronts correspond to smaller eigenvalues, causing the attenuation of input information. SSIM and PSNR will be low in these cases. Also, due to variations in mode and polarization coupling, as well as changes in the transmission matrix and its eigenvalues with respect to the fiber's position, the influence of input polarization differs between the two fiber positions.
## 6 Generalization for varying input polarization states
The input polarization effect can be harnessed in two ways: (a) for endoscopic applications where the multimode fiber is short and is not bent or twisted while imaging [24], the input polarization can be set to the state that gives optimal imaging results; (b) in the case of long endoscopes inserted deep into the body, a dynamically perturbed multimode fiber should also be trained or calibrated for a diverse range of input polarization states. This approach ensures consistently satisfactory imaging results regardless of the input polarization state.
Figure 7: The average SSIM and PSNR variations of 5000 unseen test images for individual polarization states when the fiber is fixed in position 2.
To start with the generalization mechanism, we first use the weights of one input polarization state for reconstructing the test images of another input polarization dataset. Not surprisingly, we get poor results, as the average SSIM and PSNR remain below 0.2 and 8, respectively. We then combine 15% of the training data of each of the nine input polarization states to form one dataset of 74250 speckle-label pairs for each of fiber positions 1 and 2. We train our designed CGAN model on this collective set of images and test it on 5000 unseen images of each polarization state and fiber position. The average SSIM and PSNR of the nine polarization states tested on the weights of the combined datasets are plotted in Fig. 8 and Fig. 9 for positions 1 and 2, respectively. The SSIM and PSNR values are reasonable in this case. SSIM is not less than 0.7, and PSNR is greater than 17 for position 1. Likewise, for the position 2 combined data, SSIM and PSNR remain above 0.8 and 18, respectively.
Figure 8: The average SSIM and PSNR variation of 5000 unseen test images for individual polarization states when the fiber is fixed in position 1. First, the model is trained on combined data that contains 15% images from every polarization state dataset and then is tested for all polarization states.
Figure 9: The average SSIM and PSNR variation of 5000 unseen test images for individual polarization states when the fiber is fixed in position 2. First, the model is trained on combined data that contains 15% images from every polarization state dataset and then is tested for all polarization states.
As a final step towards the generalization process, where an MMF can image well for any input polarization state (P1-P9) and position (1 or 2), we pick 10% training data from the 18 datasets and integrate them to form one superset of 133650 images. After training on this set, we test our designed CGAN model on unseen images of all polarization states at both fiber positions. In this case, the average SSIM and PSNR are shown in Fig. 10. It is noticeable from this bar graph that SSIM does not drop below 0.75 while PSNR remains greater than 17. These reasonable metric values signify successful image reconstruction for any input polarization state. This process indicates that the CGAN model should be trained on a dataset obtained while dynamically changing the input polarization state and fiber position for better generalization. Table 1 encapsulates the mean and standard deviation of the evaluation metrics when training is done separately on each polarization dataset and for combinations of different polarization and position datasets. As can be seen, the mean values are higher after training on individual polarization datasets. However, as discussed previously, the model trained for one polarization does not reconstruct well for another polarization state. The mean metric values are relatively low for combinations of datasets, but generalizability is improved. The small standard deviation values in these cases show that the model can image well for various polarization states.
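A minimal sketch of the dataset-merging step is shown below: a fixed fraction of each per-polarization (and per-position) dataset is sampled and concatenated into one training set. The PyTorch dataset interface is assumed; fractions of 0.15 or 0.10 correspond to the combined sets described above.

```python
import random
from torch.utils.data import ConcatDataset, Subset

def merge_datasets(datasets, fraction=0.10, seed=0):
    # Sample `fraction` of each dataset (one per polarization state and fiber
    # position) and merge the subsets into a single training set.
    rng = random.Random(seed)
    parts = []
    for ds in datasets:
        idx = rng.sample(range(len(ds)), int(fraction * len(ds)))
        parts.append(Subset(ds, idx))
    return ConcatDataset(parts)
```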
## 7 Conclusion
We demonstrate experimentally the influence of input light polarization on the accuracy of image reconstructions from speckle patterns at the multimode fiber output. Specifically, we have established a clear correlation between the input polarization states and the variation in the average structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) across a set of 5000 unseen images. Furthermore, we have exhibited this polarization impact for two distinct multimode fiber positions. The high SSIM values exceeding 0.9 achieved at both fiber positions surpass the previously reported value of 0.8772. We conclude that we should use the polarization state where SSIM is maximum toward achieving optimal MMF imaging.
Figure 10: The average SSIM and PSNR variations of 5000 unseen test images for individual polarization states for positions 1 and 2. First, the model is trained on combined data that contains 10% images from each polarization state dataset of both positions and then is tested for all polarization states.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline
**Fiber Position** & **No. of training samples** & **Mean SSIM for all input polarization states** & **SSIM standard deviation for all input polarization states** & **Mean PSNR for all input polarization states** & **PSNR standard deviation for all input polarization states** \\ \hline
**Position 1 (separate dataset for each polarization state)** & 55000 & 0.8750 & 0.0169 & 21.640 & 0.7684 \\ \hline
**Position 2 (separate dataset for each polarization state)** & 55000 & 0.8771 & 0.023 & 21.865 & 1.067 \\ \hline
**Position 1 (combined dataset for all polarization states)** & 74250 & 0.8215 & 0.022 & 19.376 & 0.8475 \\ \hline
**Position 2 (combined dataset for all polarization states)** & 74250 & 0.8255 & 0.0105 & 19.554 & 0.3437 \\ \hline
**Position 1 \& 2 (combined dataset for all polarization states)** & 133650 & 0.8122 & 0.0181 & 19.134 & 0.6138 \\ \hline
\end{tabular}
\end{table}
Table 1: Number of training samples and mean metric values for individual polarization and fiber position datasets and combinations of these datasets
Moreover, we have generalized our model to the cases where the input polarization and the position of the fiber are changing. By training on combined data from all polarization states and fiber positions, we show that imaging through MMF can be done satisfactorily for any polarization state and fiber position. Our work can be extended to explore the influence of input polarization on multimode fiber-optic communication systems. Furthermore, this research is transferable to imaging applications through challenging mediums such as fog and biological tissues. We believe that our work can significantly contribute to developing compact and high-resolution endoscopes that do not require traditional lenses.
|
2305.13859 | Generative Retrieval via Term Set Generation | Recently, generative retrieval emerges as a promising alternative to
traditional retrieval paradigms. It assigns each document a unique identifier,
known as DocID, and employs a generative model to directly generate the
relevant DocID for the input query. A common choice for DocID is one or several
natural language sequences, e.g. the title or n-grams, so that the pre-trained
knowledge of the generative model can be utilized. However, a sequence is
generated token by token, where only the most likely candidates are kept and
the rest are pruned at each decoding step, thus, retrieval fails if any token
within the relevant DocID is falsely pruned. What's worse, during decoding, the
model can only perceive preceding tokens in DocID while being blind to
subsequent ones, hence is prone to make such errors. To address this problem,
we present a novel framework for generative retrieval, dubbed Term-Set
Generation (TSGen). Instead of sequences, we use a set of terms as DocID, which
are automatically selected to concisely summarize the document's semantics and
distinguish it from others. On top of the term-set DocID, we propose a
permutation-invariant decoding algorithm, with which the term set can be
generated in any permutation yet will always lead to the corresponding
document. Remarkably, TSGen perceives all valid terms rather than only the
preceding ones at each decoding step. Given the constant decoding space, it can
make more reliable decisions due to the broader perspective. TSGen is also
resilient to errors: the relevant DocID will not be pruned as long as the
decoded term belongs to it. Lastly, we design an iterative optimization
procedure to incentivize the model to generate the relevant term set in its
favorable permutation. We conduct extensive experiments on popular benchmarks,
which validate the effectiveness, the generalizability, the scalability, and
the efficiency of TSGen. | Peitian Zhang, Zheng Liu, Yujia Zhou, Zhicheng Dou, Fangchao Liu, Zhao Cao | 2023-05-23T09:30:36Z | http://arxiv.org/abs/2305.13859v3 | # Term-Sets Can Be Strong Document Identifiers For Auto-Regressive Search Engines
###### Abstract
Auto-regressive search engines emerge as a promising paradigm for next-gen information retrieval systems. These methods work with Seq2Seq models, where each query can be directly mapped to the identifier of its relevant document. As such, they are praised for merits like being end-to-end differentiable. However, auto-regressive search engines also confront challenges in retrieval quality, given the requirement for the exact generation of the document identifier. That's to say, the targeted document will be missed from the retrieval result if a false prediction about its identifier is made in any step of the generation process. In this work, we propose a novel framework, namely **AutoTSG** (**Auto**-regressive Search Engine with **T**erm-**S**et **G**eneration), which is featured by 1) the **unordered term-based** document identifier and 2) the **set-oriented** generation pipeline. With AutoTSG, any permutation of the term-set identifier will lead to the retrieval of the corresponding document, thus largely relaxing the requirement of exact generation. Besides, the Seq2Seq model is enabled to flexibly explore the optimal permutation of the document identifier for the presented query, which may further contribute to the retrieval quality. AutoTSG is empirically evaluated with Natural Questions and MS MARCO, where notable improvements can be achieved against the existing auto-regressive search engines.
## 1 Introduction
Search engines, standing as the most representative form of information retrieval, are fundamentally important to real-world applications like web search, question answering, advertising, and recommendation (Karpukhin et al., 2020; Lewis et al., 2021). Nowadays, they are also regarded as a critical tool for the augmentation of large language models (LLMs), where external information can be introduced to facilitate faithful and knowledge-grounded generation (Komeili et al., 2021; Nakano et al., 2022; Wang et al., 2023). A typical search engine calls for the utilization of two basic modules: representation and indexing. For example, a sparse retrieval system uses lexicon-based representations and an inverted index, while a dense retrieval system is based on latent embeddings and an ANN index (Robertson and Zaragoza, 2009; Malkov and Yashunin, 2018).
Recently, a new type of method, the auto-regressive search engines, e.g., GENRE (Cao et al., 2021), DSI (Tay et al., 2022), emerge as a promising direction for next-gen information retrieval (Metzler et al., 2021). Briefly speaking, the auto-regressive search engine assigns each document a sequential ID, called the document identifier; e.g., n-grams within the document (Bevilacqua et al., 2022), or semantic IDs acquired by hierarchical clustering (Tay et al., 2022). Next, it learns to predict the document identifier for an input query with a Seq2Seq model. Compared with traditional retrieval methods, the auto-regressive search engine is praised for being end-to-end differentiable: instead of optimizing each module individually, the entire retrieval pipeline can be optimized by Seq2Seq learning and does not need a separate index (Metzler et al., 2021; Tay et al., 2022).
Despite the preliminary progress achieved by recent works [14, 22, 25, 26], we argue that auto-regressive search is much more challenging than typical Seq2Seq problems. Particularly, auto-regressive search engines require the exact generation of the identifier for the targeted document. If an incorrect prediction is made at any step of the generation process, the model will falsely produce the identifier of a different document, which causes the targeted document to be missing from the final retrieval result (_a.k.a._ **false pruning**). Furthermore, considering that the sequence length of the identifier must be large enough to guarantee the discrimination of all documents, the generation process has to go through a large number of decoding steps. If we regard the generation process as sequential decision making, the probability of false pruning will gradually accumulate step by step and finally result in bad retrieval quality. A problem derived from false pruning is that the **permutation of the document identifier** becomes critical. While retrieving, the targeted document will be falsely pruned if the prefix of its predefined identifier is bad, i.e., relatively hard to generate conditioned on the query. However, it can be successfully retrieved as long as its prefix is sufficiently good. We introduce the following concrete example to better illustrate these points.
**Example 1**: _We use a sample query from Natural Questions, "Who cooks for the president of the United States", for discussion. We have three candidate documents from Wikipedia: D1, D2, and D3. D3 is the target as it contains the correct answer. Each document is identified by keywords from its title and first paragraph. All document identifiers are organized by a prefix tree (trie), as shown in Figure 1 (A)._
_We apply the Seq2Seq model from GENRE [25] for our example. As we may observe, D3 is **falsely pruned** in the first step, as the generation likelihood \(P(\textit{cristeta}|\textbf{Q})\) is lower than those of the other candidates, "white" and "executive". We may also derive two interesting observations from this example. Firstly, if the identifier of D3 is re-ordered as "executive, chef, cristeta, comerford", it achieves a much higher generation likelihood of -12.8, making D3 successfully retrieved (greater than -16.5 from D1 and -31.0 from D2). This reflects the **importance of the identifier's permutation**. Secondly, although the document identifier is problematic for the presented query, it can be favorable to other queries, like "who is cristeta comerford?". In other words, there is probably **no universally favorable permutation** of the identifier for a document._
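To make the false-pruning arithmetic concrete, the toy snippet below reproduces the logic of Example 1: a sequence identifier's log-likelihood is the sum of its stepwise log-probabilities, so a single hard-to-generate leading term dooms the whole identifier under beam search. All numbers are invented for illustration and only mimic the behaviour shown in Figure 1.

```python
# Invented stepwise log-likelihoods log P(term | prefix, Q) for the query of
# Example 1; they are not taken from a real model.
STEP_LOGP = {
    ("cristeta", "comerford", "executive", "chef"): [-20.0, -1.0, -1.0, -0.8],
    ("executive", "chef", "cristeta", "comerford"): [-1.5, -0.8, -10.0, -0.5],
}

def identifier_logp(identifier):
    """Sequence log-likelihood = sum of the stepwise log-probabilities."""
    return sum(STEP_LOGP[identifier])

original = ("cristeta", "comerford", "executive", "chef")
reordered = ("executive", "chef", "cristeta", "comerford")

# The very low first step of the original permutation prunes D3 from the beam
# immediately; the reordered identifier keeps it alive at every step.
print(identifier_logp(original))   # -22.8
print(identifier_logp(reordered))  # -12.8 (cf. the value quoted in Example 1)
```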
**Our Method**. We propose a novel framework, namely **AutoTSG**, to overcome the above challenges in auto-regressive search engines. The proposed framework is highlighted by two featured designs. First of all, the document identifier is no longer one (or a few) predefined sequence, but a set of unordered terms from the document, known as the **unordered term-based identifier**. Any permutation of the term-set will be a valid identification for the corresponding document; that is, the
Figure 1: (A) The targeted document (D3) is falsely pruned given the predefined sequential identifier. (B) The targeted document (D3) is retrieved via the highlighted permutation on top of AutoTSG.
targeted document can be retrieved if any permutation of its identifier is generated by the Seq2Seq model. Thus, the requirement of exact generation is largely relaxed. Secondly, given the change of document identifier, the Seq2Seq model is switched to perform the **set-oriented generation**: it aims to generate the included terms of the document identifier, rather than exactly predict a required sequence. With such flexibility, the Seq2Seq model may explore the "favorable permutation" of the document identifier given different queries. The model is easier to train and therefore contributes to a better retrieval quality from the generation process.
Back to our example (Figure 1 B), the terms _white_, _house_, _executive_, _chef_, etc., are selected as the document identifier of D3. Therefore, all permutations, like "_white_, _house_,..., _executive_, _chef_", "_white, house_,..., _cristeta_, _comerford_", etc., will be valid identifications of D3. Given the query "_Q: who cooks for the president of the United States_", the Seq2Seq model explores the entire term space (\(\bigcup_{\text{Terms}}\)), where it figures out "_executive_" to be the most probable (with the highest generation likelihood) and valid (belonging to a valid document) term to decode. In the second step, it further explores the term space; this time, it selects "_chef_" given its high likelihood and validity. Note that although combinations like "_executive, director_" and "_executive, manager_" may also give a large enough likelihood, they will be deemed invalid since they do not belong to any existing document identifier. The Seq2Seq model keeps on exploring; with the permutation "_executive_, _chef_, _white_, _house_,..." generated (other terms are omitted due to limited space), document D3 is successfully retrieved for the query.
While the framework is upgraded in terms of document identifier and generation pipeline, it still needs to overcome several challenges in order to achieve competitive retrieval performance: how to select appropriate terms for a document identifier, how to explore the optimal permutation of the document identifier while ensuring its validity, and how to learn the Seq2Seq model effectively to perform the exploration task. In our work, we develop the following techniques to address these challenges. (1) The matching-oriented term selection for constructing document identifiers, which determines a concise and discriminative set of terms for each document based on their importance to query-document matching. (2) The constrained greedy search, which explores the optimal identifier permutation while ensuring its validity. (3) The likelihood-adapted Seq2Seq learning: as there is no predefined permutation of the document identifier, the Seq2Seq learning is performed with iteratively updated objectives determined by the concrete query and model snapshot.
In summary, the main technical contributions of this paper are highlighted by the following points.
* We propose a novel framework AutoTSG for auto-regressive search engines. The proposed method is featured by its unordered term-based document identifier and the set-oriented generation pipeline. With both designs, the requirement for exact generation of the identifier is relaxed, and the Seq2Seq model is enabled to explore its favorable identifier permutation.
* We devise three technical components which jointly contribute to AutoTSG's retrieval performance: 1) the matching-oriented term selection, 2) the constrained greedy search for document identifier's generation, and 3) the likelihood-adapted Seq2Seq learning.
* We conduct comprehensive empirical analyses on top of popular evaluation benchmarks: Natural Questions and MS MARCO. Experimental results verify the effectiveness of AutoTSG, as notable improvements in retrieval quality can be achieved over the existing auto-regressive search engines under a variety of experimental settings.
## 2 Related Work
Document retrieval has been extensively studied for a long time. Conventional methods resort to lexical representations and inverted indexes, where query-document relationships can be estimated by relevance functions, like BM25 (Robertson and Zaragoza, 2009). With the development of pre-trained language models (Devlin et al., 2019), dense retrieval becomes another popular option (Karpukhin et al., 2020; Xiong et al., 2021; Izacard et al., 2021), where the relevance is measured by embedding similarity. Apart from these well-established methods, the auto-regressive search engines emerge as a promising direction (Metzler et al., 2021; Tay et al., 2022; Cao et al., 2021). These methods treat document retrieval as a Seq2Seq problem, where the document identifier can be directly generated for the query. The document identifier is one of the most decisive factors for the corresponding methods (Tay et al., 2022; Bevilacqua et al., 2022): the Seq2Seq model must generate the exact same identifier for the targeted document, and the ranking of the document is determined by the generation likelihood of its identifier. Based on different formations, the current works can be roughly partitioned
into three groups: 1) the semantic ID based methods (Tay et al., 2022; Mehta et al., 2022), 2) the atomic ID based methods (Tay et al., 2022; Zhou et al., 2022), and 3) the explicit term based methods (Cao et al., 2021; De Cao et al., 2022; Bevilacqua et al., 2022). By comparison, the last category is more compatible with pre-trained language models, as the explicit terms are directly perceptible; our proposed framework adopts this form of identifier as well. As discussed, the existing works call for the exact generation of the document identifier, which is an overly demanding requirement. It is a major cause of the false pruning of the relevant document, which severely restricts the retrieval quality. In light of such a deficiency, our work reformulates the document identifier based on unordered terms; together with the set-oriented generation pipeline, it achieves substantial improvements in retrieval quality.
## 3 Methodology
An auto-regressive search engine usually consists of two basic components (Tay et al., 2022; Bevilacqua et al., 2022). One is a document identifier schema: a unique identifier set \(\mathcal{I}(D)\) (e.g., a family of sequences) needs to be assigned to each document \(D\). The other is a Seq2Seq model \(\mathbf{\Theta}(\cdot)\). For an input query \(Q\), the Seq2Seq model estimates the relevance between \(Q\) and \(D\) based on the following generation likelihood:
\[\mathrm{Rel}(Q,D)=\mathrm{Agg}\left(\left\{\prod\nolimits_{i=1}^{|I|}\Pr(I_{ i}\mid I_{<i},Q;\mathbf{\Theta}):\ I\in\mathcal{I}(D)\right\}\right), \tag{1}\]
where \(I\) is an element of \(\mathcal{I}(D)\); \(\Pr(I_{i}\mid I_{<i},Q;\mathbf{\Theta})\) indicates the generation probability of the \(i\)-th element \(I_{i}\) given the prefix \(I_{<i}\), the query \(Q\), and the Seq2Seq model \(\mathbf{\Theta}\). The function \(\mathrm{Agg}(\cdot)\) stands for the aggregation of the likelihoods of the sequences within \(\mathcal{I}(D)\). Many of the existing works (Tay et al., 2022; Cao et al., 2021; Wang et al., 2022) make use of one single sequence for document identification; in those cases, \(\mathrm{Agg}(\cdot)\) is simply the identity function \(\mathbb{I}(\cdot)\). In SEAL (Bevilacqua et al., 2022), the whole collection of n-grams from the document is used as the identifier, where an intersective scoring function is introduced to aggregate the generation likelihoods of different n-grams. With the above formulation, document retrieval can be made through a sequence generation workflow: the Seq2Seq model generates the most likely identifiers for the given query via a beam search, then the corresponding documents, ranked by their generation likelihoods, are returned as the retrieval result.
Although AutoTSG also relies on a Seq2Seq model for document retrieval as existing methods, it is fundamentally different in terms of document identification. Particularly, it uses a set of \(N\) unordered terms to form the document identifier: \(\mathcal{T}(D)=\{t_{1},\ldots,t_{N}\}\). With the assumption that \(\mathcal{T}(D)\) is unique within the corpus, any permutation of \(\mathcal{T}(D)\) is unique as well. Then, we define that \(D\) is retrieved if one permutation of \(\mathcal{T}(D)\) is generated by the Seq2Seq model; and if multiple permutations are generated for a single document, we take their maximum likelihood: \(\mathrm{Agg}(\cdot)\leftarrow\max(\cdot)\).
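The relevance definition of Eq. (1) can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: `step_prob` is a hypothetical stub standing in for the Seq2Seq model \(\mathbf{\Theta}\), and `agg=max` reproduces AutoTSG's choice of \(\mathrm{Agg}(\cdot)\), while the identity would correspond to a single-sequence identifier.

```python
from typing import Callable, List, Tuple

def sequence_likelihood(identifier: Tuple[str, ...], query: str,
                        step_prob: Callable) -> float:
    """Product of Pr(I_i | I_<i, Q) along one identifier sequence."""
    p = 1.0
    for i, term in enumerate(identifier):
        p *= step_prob(term, identifier[:i], query)
    return p

def relevance(query: str, identifiers: List[Tuple[str, ...]],
              step_prob: Callable, agg=max) -> float:
    """Eq. (1): Rel(Q, D) = Agg over the likelihoods of all I in I(D)."""
    return agg(sequence_likelihood(I, query, step_prob) for I in identifiers)

# Toy stand-in for the Seq2Seq model: query terms are easier to generate.
def toy_step_prob(term, prefix, query):
    return 0.6 if term in query.split() else 0.1

perms = [("executive", "chef"), ("chef", "executive")]
print(relevance("who is the executive chef", perms, toy_step_prob))  # 0.36
```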
In the remaining part of this section, we will introduce corresponding components of AutoTSG: (1) the document identifier schema: how to decide the terms in the document identifier in the pre-processing stage (Section 3.1) and how to generate it in the prediction stage (Section 3.2). (2) the Seq2Seq generation model: how to train the document identifier generation model (Section 3.3).
### Matching-oriented Term Selection For Document Identifier
The selection of terms in a document identifier is performed based on the following principles. Firstly, the number of terms \(N\) should be sufficiently large that all documents within the corpus can be uniquely identified, i.e., no collision of identifiers between two different documents. Secondly, the term selection needs to be concise as well. As mentioned, longer sequences are more prone to false prediction. Thirdly, the selected terms must sufficiently capture the semantic information within the document; by doing so, the query-document relevance can be precisely reflected by the generation likelihood. With the above principles, we introduce the following mechanism for term selection, where representative terms are selected depending on their importance to the query-document matching.
Each document \(D\) is partitioned into a list of terms in the first place: \([t_{1}^{D},\ldots,t_{L}^{D}]\). Then, the term importance is acquired through the estimation pipeline in Eq. (2).
\[\mathcal{M}([t_{1}^{D},\ldots,t_{L}^{D}])\xrightarrow{\ \mathrm{encode}\ }[\mathbf{h}_{1}^{D},\ldots,\mathbf{h}_{L}^{D}]\xrightarrow{\ \sigma(\cdot\,W)\ }[w_{1}^{D},\ldots,w_{L}^{D}]. \tag{2}\]
The latent representation is further mapped into real-valued importance \(w_{i}^{D}\) via linear transformation \(W\in\mathbb{R}^{d\times 1}\) and ReLU activation \(\sigma(\cdot)\). Following the existing practice on semantic matching (Mallia et al., 2021; Gao et al., 2021; Lin and Ma, 2021), the selection modules, i.e., \(\mathcal{M}\) and \(W\), are learned to optimize the semantic matching between query and document. Particularly, given the annotations \(\mathcal{A}=\{\langle Q,D^{+},\{D_{i}^{-}\}_{i=1}^{M}\rangle\}\) where \(D^{+}\) is the relevant document to \(Q\), and \(\{D_{i}^{-}\}_{i=1}^{M}\) are \(M\) irrelevant documents to \(Q\), the following InfoNCE loss is optimized for estimating term importance:
\[\min\left(-\log\frac{\exp(\sum_{t_{i}^{Q}=t_{j}^{D^{+}}}w_{i}^{Q}w_{j}^{D^{+}}/\tau)}{\exp(\sum_{t_{i}^{Q}=t_{j}^{D^{+}}}w_{i}^{Q}w_{j}^{D^{+}}/\tau)+\sum_{m=1}^{M}\exp(\sum_{t_{i}^{Q}=t_{j}^{D_{m}^{-}}}w_{i}^{Q}w_{j}^{D_{m}^{-}}/\tau)}\right). \tag{3}\]
Here, \(\tau\) is the temperature; "\(t_{i}^{Q}=t_{j}^{D}\)" indicates the constraint which requires \(t_{i}^{Q}\) and \(t_{j}^{D}\) to be the same term. By minimizing the above loss, large importance scores are learned for the terms which bridge the matching between a query and its relevant document. We select the top-\(N\) terms as the identifier: \(\mathcal{T}(D)\leftarrow\{t_{i}^{D}:w_{i}^{D}\in\text{top-}N\left(\{w_{i}^{D}\}_{i=1}^{L}\right)\}\). The same number of terms is selected for every document, and we choose the smallest \(N\) that still ensures discrimination; e.g., for a moderate-scale corpus like NQ320k, \(N=12\) is already enough to have all documents discriminated.
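The selection mechanics of Eqs. (2)-(3) can be sketched as follows. This is a simplified NumPy illustration under loud assumptions: the encoder \(\mathcal{M}\) is replaced by random term embeddings, training is omitted, the temperature and all term weights are invented, and only the importance head, the InfoNCE loss, and the top-\(N\) selection are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def term_importance(term_embs: np.ndarray, W: np.ndarray) -> np.ndarray:
    """w_i^D = ReLU(h_i^D W): map encoder outputs to scalar importances."""
    return np.maximum(term_embs @ W, 0.0).ravel()

def infonce_loss(q_terms, q_w, docs, tau=0.1):
    """Eq. (3). The (Q, D) match score sums w_i^Q * w_j^D over shared terms;
    docs[0] is the positive document, the remaining entries are negatives."""
    def score(d_terms, d_w):
        return sum(q_w[i] * d_w[j]
                   for i, tq in enumerate(q_terms)
                   for j, td in enumerate(d_terms) if tq == td) / tau
    s = np.array([score(t, w) for t, w in docs])
    return -s[0] + np.log(np.exp(s).sum())

def select_identifier(terms, weights, N):
    """T(D): keep the top-N terms by learned importance."""
    return {terms[i] for i in np.argsort(weights)[::-1][:N]}

# Random embeddings stand in for the (trained) encoder M.
terms = ["white", "house", "executive", "chef", "cristeta", "comerford"]
embs, W = rng.normal(size=(6, 8)), rng.normal(size=(8, 1))
w = term_importance(embs, W)
print(select_identifier(terms, w, N=3))

# Loss for one toy annotation <Q, D+, {D-}> with invented query weights.
q_terms, q_w = ["executive", "chef"], np.array([1.0, 0.8])
neg = (["press", "secretary"], np.array([0.5, 0.5]))
print(infonce_loss(q_terms, q_w, [(terms, w), neg]))
```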
### Constrained Greedy Search
Given the unordered term-based identifier, the relevance between query and document could be measured in the following naive way: first, the generation likelihood is enumerated for all possible permutations of the document identifier (\(N!\) of them); then, the highest value is used as the measurement of relevance. Since the naive method is intractable, we need to design a mechanism with which the language model may generate plausible document identifiers and their near-optimal permutations for the given query, at an acceptable generation efficiency. The search mechanism needs to satisfy two properties: **optimality** and **validity**. First of all, it is expected to produce the document identifier of the highest generation likelihood. Knowing that the optimal solution is intractable, we resort to a greedy algorithm for its approximation. Particularly, we enforce the following local optimality during stepwise term selection. At the \(i\)-th decoding step, given the collection of previously generated terms \(\{I_{<i}^{*}\}_{K}\) (\(K\): the beam size), the decoding result of the current step (\(\{I_{\leq i}^{*}\}_{K}\)) is determined by the following condition:
\[\{I_{\leq i}^{*}\}_{K}\leftarrow\underset{I_{\leq i}}{\mathrm{arg\,top}\text{-}K}\left(\left\{\prod_{j=1,\ldots,i}\Pr(I_{j}\mid I_{<j},Q;\boldsymbol{\Theta})\right\}\right). \tag{4}\]
In other words, we greedily select the terms which give rise to the top-\(K\) generation likelihood until the current step. Apart from the optimality, the generated term set must also correspond to valid document identifiers. To guarantee the validity, for each prefix \(I_{<i}\in\{I_{<i}^{*}\}_{K}\), we regularize the selection of \(I_{i}\) with the following set-difference based constraint:
\[1.\ I_{i}\notin\{I_{1},\ldots,I_{i-1}\}\wedge 2.\ \exists D:I_{i}\in \mathcal{T}(D)/\{I_{1},\ldots,I_{i-1}\}. \tag{5}\]
The first condition prevents the selection of a repetitive term given the current prefix \(I_{<i}\), while the second condition ensures that the newly selected term together with its prefix, i.e., \(\{I_{1},\ldots,I_{i-1}\}\cup\{I_{i}\}\), always constitutes a subset of a valid document identifier.
Since it is time-consuming to verify the constraint case-by-case, we implement the following data structure for efficient generation. We maintain an inverted index during generation, pointing from each prefix \(I_{<i}\) to the documents whose identifiers constitute supersets of \(\{I_{1},\ldots,I_{i-1}\}\). The union is computed over all such identifiers, \(\boldsymbol{X}=\bigcup\{\mathcal{T}(D^{\prime}):\ \{I_{1},\ldots,I_{i-1}\}\subseteq\mathcal{T}(D^{\prime})\}\), and the difference set \(\boldsymbol{X}/\{I_{1},\ldots,I_{i-1}\}\) becomes the feasible scope for next-step decoding. Note that at the beginning of decoding, all terms in all document identifiers are valid. With the selection of \(I_{i}\), the inverted index is updated accordingly, with the invalid documents pruned from the entry of \(I_{<i}\). As most of the documents are pruned for one specific prefix within very few steps, this data structure helps to achieve a high running efficiency for the constrained greedy search.
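The following sketch illustrates the constrained greedy search on a toy corpus. It is a simplified re-implementation under stated assumptions: `logp` is a stand-in for the Seq2Seq model, the per-beam sets of live documents play the role of the inverted index described above, and all documents, terms, and scores are invented.

```python
import heapq

def constrained_search(identifiers, logp, K=2, N=3):
    """Beam search under Eq. (4) with the validity constraint of Eq. (5).
    `identifiers` maps doc -> frozenset of identifier terms; each beam keeps
    the set of still-valid documents, mimicking the inverted index."""
    beams = [((), 0.0, set(identifiers))]
    for _ in range(N):
        cands = []
        for prefix, lp, docs in beams:
            used = set(prefix)
            # X / {I_1, ..., I_{i-1}}: feasible next terms for this prefix.
            feasible = set().union(*(identifiers[d] for d in docs)) - used
            for t in feasible:
                live = {d for d in docs if t in identifiers[d]}
                cands.append((prefix + (t,), lp + logp(t, prefix), live))
        beams = heapq.nlargest(K, cands, key=lambda b: b[1])
    return beams

IDS = {"D1": frozenset({"white", "house", "butler"}),
       "D2": frozenset({"white", "house", "press"}),
       "D3": frozenset({"executive", "chef", "white"})}
toy = {"executive": -0.5, "chef": -0.6, "white": -1.0, "house": -1.2,
       "butler": -3.0, "press": -3.5}
for prefix, lp, docs in constrained_search(IDS, lambda t, p: toy[t]):
    print(prefix, round(lp, 2), sorted(docs))
```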
### Likelihood-Adapted Sequence-to-Sequence Learning
Unlike the existing works where ground-truth sequences are predefined, the document identifier becomes a term-set in AutoTSG. Since one document is retrieved if any permutation of its identifier
is generated, it is straightforward to sample randomly from the \(N!\) permutations, so that the Seq2Seq learning can be conducted. However, the sampled sequence will probably be inconsistent with the decoding order of the constrained greedy search (unfavorable to recall), nor is it likely to be the permutation with the highest generation likelihood of the document identifier (unfavorable to the final ranking).
To facilitate the recall of relevant documents from the generation process and have them better ranked in the final retrieval result, we expect the Seq2Seq model to learn from favorable permutations of the document identifiers. Therefore, we propose a new training workflow named likelihood-adapted Seq2Seq learning. The proposed method adopts an iterative pipeline: in each iteration, it samples a favorable permutation of the document identifier as the learning objective. Specifically, given the current Seq2Seq model \(\mathbf{\Theta}^{t-1}\), the query, and the previously generated terms \(I_{<i}\), top-K sampling is performed over the difference set of \(\mathcal{T}(D)\) and \(I_{<i}\) according to the following distribution:
\[P(I_{i})\propto\Pr(I_{i}\mid I_{<i};Q;\mathbf{\Theta}^{t-1}),\;I_{i}\in\mathcal{T}(D)/\{I_{1},\ldots,I_{i-1}\}. \tag{6}\]
With the sampling of multiple candidate sequences \(\mathbf{I}\), the one with the highest overall likelihood is selected as the learning objective for the current iteration (\(I^{t}\)):
\[I^{t}\leftarrow\operatorname{argmax}\left(\left\{\prod\nolimits_{i=1, \ldots,N}\Pr(I_{i}\mid I_{<i};Q;\mathbf{\Theta}^{t-1}):\;I\in\mathbf{I}\right\}\right). \tag{7}\]
With this new objective, the Seq2Seq model is updated as \(\mathbf{\Theta}^{t}\) via another round of learning. The above process, i.e., the likelihood-dependent permutation sampling and the Seq2Seq learning, is iteratively conducted until a desirable model is produced.
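A minimal sketch of one round of this permutation sampling (Eqs. 6-7) is given below. Here `cond_logp` is a hypothetical stand-in for the model snapshot \(\mathbf{\Theta}^{t-1}\), and the toy scores are invented so that different permutations receive different total likelihoods; a real implementation would decode from the Seq2Seq model instead.

```python
import math, random

def sample_permutation(terms, cond_logp, rng):
    """Eq. (6): draw the next term from P(I_i) ∝ Pr(I_i | I_<i, Q; Θ^{t-1})
    over the terms of T(D) not used yet."""
    prefix, remaining = (), sorted(terms)
    while remaining:
        weights = [math.exp(cond_logp(t, prefix)) for t in remaining]
        t = rng.choices(remaining, weights=weights, k=1)[0]
        prefix += (t,)
        remaining.remove(t)
    return prefix

def training_target(terms, cond_logp, rng, n_samples=8):
    """Eq. (7): among the sampled permutations, the one with the highest
    overall likelihood becomes the learning objective of this iteration."""
    def total(perm):
        return sum(cond_logp(t, perm[:i]) for i, t in enumerate(perm))
    return max((sample_permutation(terms, cond_logp, rng)
                for _ in range(n_samples)), key=total)

def toy_logp(t, prefix):
    base = {"executive": -0.5, "chef": -1.5, "white": -1.2, "house": -1.4}[t]
    if t == "chef" and prefix and prefix[-1] == "executive":
        base += 1.0   # "chef" is much easier right after "executive"
    return base

rng = random.Random(0)
print(training_target(["white", "house", "executive", "chef"], toy_logp, rng))
```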
There are still two remaining issues. One is the initial order of the permutation. Although there are different options, e.g., a purely randomized permutation, or sampling from a pre-trained LM (T5, GPT), we find that ordering the terms by their estimated importance from term selection brings forth the best empirical performance. The other concerns convergence. Although the sampled permutation keeps changing, we track the Seq2Seq model's retrieval accuracy on a held-out validation set. In our experiments, merely two iterations are needed for the accuracy to converge.
## 4 Experiments
The experimental studies are performed to explore the following research questions. _RQ_ 1. AutoTSG's impact on retrieval quality against the existing auto-regressive search engines. _RQ_ 2. The impact from each of the technical designs in AutoTSG. _RQ_ 3. The impact on running efficiency.
### Settings
**Datasets.** We leverage two popular datasets which are widely used in previous evaluations of auto-regressive search engines. One is the NQ320k dataset (Tay et al., 2022; Bevilacqua et al., 2022) curated from Natural Questions (Kwiatkowski et al., 2019), including 320k training queries and 7830 testing queries; each query corresponds to a Wikipedia article containing its answer. The other is the MS300k dataset (Chen et al., 2023; Zhou et al., 2022) curated from MS MARCO (Nguyen et al., 2016), which contains 320k documents, 360k training queries, and 772 testing queries.
**Metrics.** Two evaluation metrics are introduced to measure the retrieval quality at the top-\(K\) cut-off: MRR@K and Recall@K, which focus on the perspective of ranking and recall, respectively.
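For reference, a minimal implementation of both metrics, assuming (as in NQ320k and MS300k) a single relevant document per query, could look as follows; the toy ranks are invented.

```python
def mrr_at_k(ranks, k):
    """ranks: 1-based rank of the relevant document per query (None = missed).
    Reciprocal rank only counts when the document appears within the top-k."""
    return sum(1.0 / r for r in ranks if r is not None and r <= k) / len(ranks)

def recall_at_k(ranks, k):
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

ranks = [1, 3, None, 12, 2]                          # five toy test queries
print(mrr_at_k(ranks, 10), recall_at_k(ranks, 10))   # 0.3667, 0.6
```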
**Implementations.** Some critical facts about the implementation are presented as follows. _Backbone LM_. We leverage T5 (Raffel et al., 2020) as our backbone, consistent with the majority of previous works (Chen et al., 2023; Mehta et al., 2022; Wang et al., 2022). T5 (base) is the default option, yet T5 (large) is also explored. _Term Granularity_. We treat each single word, separated by spaces, as one term. Since a term may contain multiple tokens, we append a "," to the last token, which indicates the termination of the term. (The same treatment can be applied to handle other granularities, e.g., n-grams.) _Data Augmentation_. Following previous works (Wang et al., 2022; Mehta et al., 2022; Zhou et al., 2022), we leverage DocT5 (Cheriton, 2019) to generate pseudo training queries. _Beam Size_. The beam size is 100 throughout the experiments, the same as in previous works. We have uploaded our implementation to an anonymous repository3 for further details.
Footnote 3: [https://github.com/namespace-Pt/Adon/tree/AutoTSG](https://github.com/namespace-Pt/Adon/tree/AutoTSG)
**Baselines.** To analyze the effectiveness of AutoTSG, especially the proposed formulation of document identifier and its generation workflow, we introduce a diverse collection of auto-regressive search
engines with different forms of document identifier. GENRE (Cao et al., 2021): using titles; DSI (Tay et al., 2022): using semantic IDs; SEAL (Bevilacqua et al., 2022): using n-grams and the FM-index; Ultron (Zhou et al., 2022): using titles and URLs; NCI (Wang et al., 2022): enhancing DSI with data augmentation. Given the limitation of space, we omit many of the repetitive comparisons with other conventional retrieval methods, as they have been extensively analyzed in the above works.
### Main Analysis
The overall evaluations on NQ320k and MS300k are shown in Tables 1 and 2, respectively. According to the experimental results, AutoTSG notably improves the retrieval quality over the existing auto-regressive search engines. For example, on NQ320k, it outperforms the strongest baseline by \(+3.9\%\) on MRR@100 and \(+2.4\%\) on Recall@10; on MS300k, it achieves relative improvements of \(+12.3\%\) on MRR@100 and \(+11.6\%\) on Recall@10 over the baseline methods. In our detailed analysis, we demonstrate that the new formulation of the document identifier and the corresponding generation workflow are the main contributors to these advantages. Despite the overall advantages, we may observe that the conventional approach DPR (Karpukhin et al., 2020) leads to the highest Recall@100 on MS300k. In fact, this observation reveals a general challenge for the current auto-regressive search engines: it is easier to achieve high ranking performance (reflected by MRR) thanks to the expressiveness of Seq2Seq models, but comparatively difficult to achieve equally competitive recall. Much of the reason is the aforementioned false-pruning problem: once the document identifier is falsely predicted at any step of the generation process, back-tracking is impossible (thus unfavorable for recall); however, if the document can be returned by generation, it will probably be ranked at a favorable position. With AutoTSG, we make a critical step forward in mitigating the above problem: it relaxes the requirement of exact generation, and enables the Seq2Seq model to explore the optimal permutation of the identifier w.r.t. the given query. Both designs substantially improve the recall, and further expand the advantage on ranking.
Besides the above overall evaluations, we present a more detailed analysis of the auto-regressive search engines in terms of their memorization and generalization capability. Particularly, the existing auto-regressive search engines rely heavily on the presence of training queries (Wang et al., 2022; Zhou et al., 2022; Tay et al., 2022; Mehta et al., 2022): it is desirable to provide each document identifier with sufficient training queries. By learning to generate a document's identifier from training queries, it becomes much easier to exactly generate that identifier for the corresponding testing queries, given that queries associated with the same document are somewhat similar. In other words, the existing auto-regressive models rely more on memorization than on generalization, which is unfavorable when handling a massive or constantly changing corpus. To evaluate AutoTSG's impact on the corresponding capabilities, we design an experiment where the corpus is partitioned into two halves: training queries are preserved for 50% of the documents (Seen), and removed for the other 50% of the documents (Unseen). Given the above setting, the Seq2Seq
| **Method** | **MRR@10** | **MRR@100** | **Recall@1** | **Recall@10** | **Recall@100** |
| --- | --- | --- | --- | --- | --- |
| BM25† | – | 0.211 | 0.151 | 0.325 | 0.505 |
| DPR† | – | 0.366 | 0.287 | 0.534 | 0.732 |
| GENRE | 0.653 | 0.656 | 0.591 | 0.756 | 0.814 |
| DSI | 0.594 | 0.598 | 0.533 | 0.715 | 0.816 |
| SEAL† | – | 0.655 | 0.570 | 0.800 | 0.914 |
| Ultron | 0.726 | 0.729 | 0.654 | 0.854 | 0.911 |
| NCI† | – | 0.731 | 0.659 | 0.852 | 0.924 |
| **AutoTSG** | **0.757** | **0.760** | **0.690** | **0.875** | **0.932** |

Table 1: Overall evaluations on NQ320k. † denotes results copied from (Wang et al., 2022).
| **Method** | **MRR@10** | **MRR@100** | **Recall@1** | **Recall@10** | **Recall@100** |
| --- | --- | --- | --- | --- | --- |
| BM25 | 0.313 | 0.325 | 0.196 | 0.591 | 0.861 |
| DPR | 0.424 | 0.433 | 0.271 | 0.764 | **0.948** |
| GENRE | 0.361 | 0.368 | 0.266 | 0.579 | 0.751 |
| DSI | 0.339 | 0.346 | 0.257 | 0.538 | 0.692 |
| SEAL | 0.393 | 0.402 | 0.259 | 0.686 | 0.879 |
| Ultron | 0.432 | 0.437 | 0.304 | 0.676 | 0.794 |
| NCI | 0.408 | 0.417 | 0.301 | 0.643 | 0.851 |
| **AutoTSG** | **0.484** | **0.491** | **0.359** | **0.766** | 0.907 |

Table 2: Overall evaluations on MS300k.
model is prevented from memorizing the document identifiers of the unseen half during the training stage. According to the experimental results in Tables 3 and 4, AutoTSG marginally outperforms the baselines on the "seen" half; nevertheless, its advantage is significantly magnified on the "unseen" half. As discussed, AutoTSG largely relaxes the requirement of exact generation, making it less restricted by memorization; together with the flexibility to explore the optimal identifier permutation, it becomes more generalizable when dealing with unseen documents.
### Ablation Studies
The ablation studies are performed for each influential factor on the NQ320k dataset, as shown in Table 5.
**Identifiers.** We compare the proposed formulation of the document identifier, based on unordered terms (term-set), with the conventional sequence-based formulation, in which the terms are ordered as a sequence by their estimated importance (empirically more competitive than other sequence orders). Retrieval quality is notably improved on both recall and ranking metrics with the proposed formulation. As discussed, the generation task is largely relaxed with the term-set: there is no longer a requirement to follow an exact sequence order; instead, any permutation of the identifier may lead to the retrieval of the corresponding document, and the Seq2Seq model may flexibly explore the favorable permutation depending on the presented query.
**Term Selection.** We compare three alternative term selection methods. Random: purely randomized selection (from the document); Title: terms within the title; Matching Oriented: the default option used by AutoTSG. We may derive the following observations from the experiment. Firstly, there are large differences among the selection methods, which verifies the importance of term selection. Secondly, although directly using the title is a strong baseline (and a common practice in many works (Cao et al., 2021; De Cao et al., 2022; Zhou et al., 2022)), the matching-oriented approach is more effective: by estimating a term's importance based on its utility to query-document matching, the selected terms not only facilitate the identifier's generation (given their higher relevance to the potential queries), but also better reflect the relationship between query and document.
**Learning.** We compare our proposed likelihood-adapted sequence-to-sequence learning with its non-adaptive variant, in which the document identifier's permutation is fixed to its initialization. Note that the constrained greedy search is still applied in the testing stage for the non-adaptive baseline, even though it relies on a fixed permutation in the training stage. It can be observed that our proposed learning method indeed contributes to the retrieval quality. This advantage is easy to comprehend, considering that the training objective (the permutation of the document identifier) is iteratively adapted to stay consistent with the plausible permutations of the testing stage.
**Initialization.** We evaluate three alternative initialization approaches for the Seq2Seq learning: 1) Random: the selected terms are randomly permuted; 2) Likelihood: the selected terms are permuted based on the generation likelihood of a pre-trained T5; 3) Importance: the selected terms are permuted by their estimated importance (default option). We observe that the initialization is another critical factor for the Seq2Seq learning: the importance-based method is notably stronger than the other two baselines. This is probably because the importance-based initialization
| **Method** | **Seen (50%): MRR@10** | **Seen (50%): Recall@10** | **Unseen (50%): MRR@10** | **Unseen (50%): Recall@10** | **Seen+Unseen: MRR@10** | **Seen+Unseen: Recall@10** |
| --- | --- | --- | --- | --- | --- | --- |
| GENRE | 0.361 | 0.579 | 0.150 | 0.312 | 0.196 | 0.411 |
| DSI | 0.339 | 0.538 | 0.030 | 0.075 | 0.171 | 0.298 |
| Ultron | 0.432 | 0.676 | 0.197 | 0.246 | 0.313 | 0.492 |
| NCI | 0.408 | 0.643 | 0.034 | 0.082 | 0.260 | 0.412 |
| **AutoTSG** | **0.484** | **0.766** | **0.390** | **0.588** | **0.391** | **0.642** |

Table 4: Analysis of retrieval quality w.r.t. **seen** and **unseen** documents on MS300k.
| **Method** | **Seen (50%): MRR@10** | **Seen (50%): Recall@10** | **Unseen (50%): MRR@10** | **Unseen (50%): Recall@10** | **Seen+Unseen: MRR@10** | **Seen+Unseen: Recall@10** |
| --- | --- | --- | --- | --- | --- | --- |
| GENRE | 0.763 | 0.869 | 0.138 | 0.187 | 0.448 | 0.558 |
| DSI | 0.713 | 0.802 | 0.011 | 0.040 | 0.360 | 0.428 |
| Ultron | 0.782 | 0.891 | 0.300 | 0.383 | 0.471 | 0.570 |
| NCI | 0.751 | 0.842 | 0.050 | 0.159 | 0.393 | 0.459 |
| **AutoTSG** | **0.809** | **0.900** | **0.466** | **0.654** | **0.552** | **0.700** |

Table 3: Analysis of retrieval quality w.r.t. **seen** and **unseen** documents on NQ320k.
presents "a more plausible permutation" of document identifier, that is, easier to be generated and better reflect the query-document relationship. Besides, considering the iterative workflow of the learning process, the initialization will not only determine the current training objective, but also largely influence the final permutation where the Seq2Seq model will converge.
**Query Generation.** Query generation is a widely used data augmentation strategy to enhance auto-regressive search engines (Wang et al., 2022; Zhou et al., 2022; Mehta et al., 2022). It is also found helpful in AutoTSG (using the off-the-shelf DocT5 (Cheriton, 2019)), and its impact is explored in the ablation studies. As expected, the retrieval quality is substantially improved by query generation. Note that the relative improvement of AutoTSG comes mainly from the proposed formulation of the document identifier and its generation workflow, rather than the extra data augmentation: when query generation is disabled, AutoTSG maintains its advantage over the baselines.
**Model Scaling.** Scaling up the backbone Seq2Seq model is another common approach to enhancing auto-regressive search engines. In our experiment, empirical improvements are also observed when we switch to a T5-large backbone; meanwhile, AutoTSG maintains its advantage when the other baselines are scaled up as well.
**Efficiency.** The running efficiency is evaluated in Table 6. Particularly, we measure the memory consumption for hosting the entire corpus, as well as the time cost (query latency) with different beam sizes. We observe that most of the approaches incur very similar memory and time costs given their similar workflows. The exception is SEAL (Bevilacqua et al., 2022), where the use of the FM-index results in much higher memory consumption and running time.
## 5 Conclusion
In this work, we propose a novel framework for auto-regressive search engines. The new framework features two designs: 1) the unordered term-based document identifier, and 2) the set-oriented generation pipeline. With both features, the challenge of generating the document identifier is significantly relaxed, as the Seq2Seq model may flexibly explore the favorable permutation of the document identifier. To support high-quality document retrieval, we devise three key techniques for the proposed framework: the matching-oriented term selection, the constrained greedy search for the document identifier and its optimal permutation, and the likelihood-adapted Seq2Seq learning. With comprehensive experiments, we empirically verify the following technical contributions: 1) the proposed framework achieves substantial improvements over the existing auto-regressive search engines, especially in terms of generalizability, where superior retrieval quality is achieved; 2) all of the proposed technical designs bring notable positive impacts on the retrieval quality; and 3) the improvements are achieved with very little extra cost in running efficiency.
| **Method** | **Memory (MB)** | **Query Latency (s), bs = 10** | **Query Latency (s), bs = 100** |
| --- | --- | --- | --- |
| GENRE | 27 | 0.05 | 0.57 |
| DSI | 12 | 0.03 | 0.21 |
| SEAL | 210 | 0.32 | 3.14 |
| Ultron | 27 | 0.05 | 0.57 |
| NCI | 12 | 0.03 | 0.21 |
| **AutoTSG** | 35 | 0.06 | 0.69 |

Table 6: Efficiency analysis on NQ320k.
| **Factor** | **Setting** | **MRR@10** | **MRR@100** | **Recall@1** | **Recall@10** | **Recall@100** |
| --- | --- | --- | --- | --- | --- | --- |
| Identifier | Sequence | 0.733 | 0.736 | 0.668 | 0.848 | 0.904 |
| Identifier | Term Set* | **0.757** | **0.760** | **0.690** | **0.875** | **0.932** |
| Select | Random | 0.628 | 0.631 | 0.568 | 0.739 | 0.811 |
| Select | Title | 0.743 | 0.745 | 0.677 | 0.856 | 0.915 |
| Select | Matching Oriented* | **0.757** | **0.760** | **0.690** | **0.875** | **0.932** |
| Learning | Non-adaptive | 0.743 | 0.745 | 0.671 | 0.865 | 0.927 |
| Learning | Likelihood Adapted* | **0.757** | **0.760** | **0.690** | **0.875** | **0.932** |
| Initialize | Random | 0.723 | 0.727 | 0.652 | 0.854 | 0.925 |
| Initialize | Likelihood | 0.715 | 0.718 | 0.643 | 0.844 | 0.916 |
| Initialize | Importance* | **0.757** | **0.760** | **0.690** | **0.875** | **0.932** |
| Q-Gen | Ultron w.o. QG | 0.670 | 0.672 | 0.605 | 0.779 | 0.845 |
| Q-Gen | NCI w.o. QG | – | 0.679 | 0.602 | 0.802 | 0.909 |
| Q-Gen | AutoTSG w.o. QG | 0.707 | 0.710 | 0.635 | 0.836 | 0.916 |
| Q-Gen | AutoTSG* | **0.757** | **0.760** | **0.690** | **0.875** | **0.932** |
| Scale | DSI large | 0.613 | 0.620 | 0.553 | 0.733 | 0.835 |
| Scale | SEAL large | – | 0.677 | 0.599 | 0.812 | 0.909 |
| Scale | NCI large | – | 0.734 | 0.662 | 0.853 | 0.925 |
| Scale | AutoTSG large | **0.766** | **0.768** | **0.697** | **0.882** | **0.938** |

Table 5: Ablation studies on NQ320k. The default settings of AutoTSG are marked with *.
2303.11318 | Collisional evolution of dust and water ice in protoplanetary discs
during and after an accretion outburst | Most protoplanetary discs are thought to undergo violent and frequent
accretion outbursts, during which the accretion rate and central luminosity are
elevated for several decades. This temporarily increases the disc temperature,
leading to the sublimation of ice species as snowlines move outwards. In this
paper, we investigate how an FUor-type accretion outburst alters the growth and
appearance of dust aggregates at different locations in protoplanetary discs.
We develop a model based on the Monte Carlo approach to simulate locally the
coagulation and fragmentation of icy dust particles and investigate different
designs for their structure and response to sublimation. Our main finding is
that the evolution of dust grains located between the quiescent and outburst
water snowlines is driven by significant changes in composition and porosity.
The time required for the dust population to recover from the outburst and
return to a coagulation/fragmentation equilibrium depends on the complex
interplay of coagulation physics and outburst properties, and can take up to
4500 yr at 5 au. Pebble-sized particles, the building blocks of planetesimals,
are either deprecated in water ice or completely destroyed, respectively
resulting in drier planetesimals or halting their formation altogether. When
accretion outbursts are frequent events, the dust can be far from collisional
equilibrium for a significant fraction of time, offering opportunities to track
past outbursts in discs at millimetre wavelengths. Our results highlight the
importance of including accretion outbursts in models of dust coagulation and
planet formation. | Adrien Houge, Sebastiaan Krijt | 2023-03-20T17:56:44Z | http://arxiv.org/abs/2303.11318v1 | Collisional evolution of dust and water ice in protoplanetary discs during and after an accretion outburst
###### Abstract
Most protoplanetary discs are thought to undergo violent and frequent accretion outbursts, during which the accretion rate and central luminosity are elevated for several decades. This temporarily increases the disc temperature, leading to the sublimation of ice species as snowlines move outwards. In this paper, we investigate how an FUor-type accretion outburst alters the growth and appearance of dust aggregates at different locations in protoplanetary discs. We develop a model based on the Monte Carlo approach to simulate locally the coagulation and fragmentation of icy dust particles and investigate different designs for their structure and response to sublimation. Our main finding is that the evolution of dust grains located between the quiescent and outburst water snowlines is driven by significant changes in composition and porosity. The time required for the dust population to recover from the outburst and return to a coagulation/fragmentation equilibrium depends on the complex interplay of coagulation physics and outburst properties, and can take up to 4500 yr at 5 au. Pebble-sized particles, the building blocks of planetesimals, are either deprecated in water ice or completely destroyed, respectively resulting in drier planetesimals or halting their formation altogether. When accretion outbursts are frequent events, the dust can be far from collisional equilibrium for a significant fraction of time, offering opportunities to track past outbursts in discs at millimetre wavelengths. Our results highlight the importance of including accretion outbursts in models of dust coagulation and planet formation.
keywords: planets and satellites: composition - planets and satellites: formation - stars: protostars - protoplanetary discs - methods: numerical
## 1 Introduction
Planets form in discs of dust and gas around young stars. The first step of their formation occurs through the coagulation of the initial reservoir of sub-\(\mu\)m-sized dust grains, aggregating into \(\sim\)cm-sized pebbles through sticky collisions at low velocities (Weidenschilling & Cuzzi, 1993). It is followed by the streaming instability, which achieves the formation of km-sized planetesimals from dense clumps of pebbles. After that, gravity becomes the main driver of interactions to complete the formation of planets. The complication arises from the high sensitivity of the streaming instability to grain size, as it requires sufficiently large grains to be triggered (Bai & Stone, 2010; Drazkowska & Dullemond, 2014; Li & Youdin, 2021). Dust coagulation is thus a crucial step in the formation of planets, as its efficiency controls the occurrence of the following steps. Moreover, dust properties (e.g. composition, structure) are inherited by the planetesimals, so that understanding dust evolution allows us to constrain the properties expected in larger objects (Jansson & Johansen, 2014). Beyond the impact on planets, dust also has a key importance for the structure and evolution of protoplanetary discs, by dominating the absorption and scattering opacity in most regions (Beckwith et al., 1990, 1999; Bouwman et al., 2000), transporting volatiles both radially and vertically (Cuzzi & Zahnle, 2004; Ciesla & Cuzzi, 2006; Oberg & Bergin, 2016; Krijt et al., 2018), and providing surface area to promote chemical reactions (e.g. Kress & Tielens, 2001; Ruaud & Gorti, 2019).
However, modelling the growth and evolution of dust is a challenging task, as the efficiency of coagulation is related to the complex coupling of transport processes, disc conditions, and the micro-physical properties of the dust grains. One notable example concerns the presence of water ice. In fact, it has been shown that ice-covered dust grains are characterised by a higher resistance towards fragmentation than bare rocks (Supulver et al., 1997; Dominik & Tielens, 1997; Wada et al., 2013; Gundlach & Blum, 2014), allowing for the formation of larger pebbles in regions where water ice is stable, i.e. outside the water snowline (Birnstiel et al., 2010; Banzatti et al., 2015). As a consequence, the efficiency of coagulation and the overall dust distribution vary dramatically across the snowline, which may offer a sweet spot for planetesimal formation (e.g. Okuzumi et al., 2012; Drazkowska & Alibert, 2017). Dust evolution models furthermore predict a sharp change in the dust thermal emission at millimetre wavelengths as (1) grain sizes increase beyond the snowline and (2) reduced radial drift in the inner regions leads to a pile-up of small solids and an increase in the optical depth (Banzatti et al., 2015).
The position of the water snowline therefore plays a crucial role
in dust evolution. However, protoplanetary discs can experience frequent accretion outbursts throughout their evolution, during which the accretion rate of the central protostar increases by \(\sim\)2 orders of magnitude and remains high for several decades (Audard et al., 2014). Such events dramatically increase the temperature of the surrounding disc, pushing snowlines outward, and leading to the sublimation of ices on scales \(\geq\)10 au. The impacts of outbursts on gas chemistry have been thoroughly studied, especially to investigate whether some chemical tracers could be used to probe the occurrence of past outbursts in discs, so as to better constrain their causes and properties by enlarging our statistical sample of these events (e.g. Molyarova et al., 2018; Wiebe et al., 2019).
Outbursts and the ensuing sublimation of water ice in particular are expected to alter the dust size distribution. Cieza et al. (2016), building on the work of Banzatti et al. (2015), used ALMA observations of dust emission (radial profiles of the optical depth and spectral index) to argue that the water snowline in the outbursting system of V883 Ori was located at 42 au. Depending on the duration of the outburst, however, it is not necessarily clear whether the situation is directly analogous to the models of Banzatti et al. (2015) in which the snowline is static. First, the dust distribution needs time to respond to the new collisional equilibrium, and the dust pile-up is built up only after several radial drift timescales (Schoonenberg and Ormel, 2017). Indeed, using a simplified monodisperse grain model, Schoonenberg et al. (2017) showed that the features observed by Cieza et al. (2016) could be reproduced if ice-rich aggregates disintegrated following the outburst, and the remaining silicate grains were allowed to re-coagulate to sizes of approximately 300 \(\mu\)m.
The properties of the dust size distribution during and following the outburst thus depend sensitively on the mechanical response of the aggregates to losing their water, and on the details of the re-coagulation process. In this study, we investigate these processes in detail by performing local dust coagulation calculations at several specific locations of a disc undergoing a step change in temperature following an outburst. The aim is to quantify how the full dust size distribution responds to water ice leaving (when the outburst starts) and returning (soon after the end of the outburst) while at the same time undergoing collisional evolution. We also investigate the impact of different assumptions regarding the aggregate structure (e.g. compact vs. porous growth) and response to ice sublimation on the resulting dust size distribution and its (integrated) optical properties (e.g. mm spectral index). Similarly to what is done with gas tracers, we investigate whether the alteration of dust properties may offer an opportunity to track past outbursts in discs.
This paper is organized as follows. In Sect. 2, we present the disc and outburst model used in this study. Dust properties, growth, and dynamics are then described in Sect. 3. The collision model and Monte Carlo numerical approach to dust coagulation is presented in Sect. 4. Results of the coagulation simulations are presented in Sect. 5 along with their conversion into meaningful observational signatures in Sect. 6. The results are discussed in Sect. 7 followed by our conclusions in Sect. 8. Throughout this manuscript, dust of any size will be referred to as aggregates, solids, or particles. We will use 'dust grains' when specifically targeting small objects in the lower-end of the size distribution (i.e. \(<10\mu\)m), 'pebbles' for the upper-end (i.e. \(>1\)mm), and 'population' to describe the entire distribution.
## 2 Disc Model
The gas surface density profile is based on a tapered power-law (Lynden-Bell and Pringle, 1974; Hartmann et al., 1998)
\[\Sigma_{\rm g}(r)=\Sigma_{\rm c}\left(\frac{r}{r_{\rm c}}\right)^{-\gamma} \exp\left[-\left(\frac{r}{r_{\rm c}}\right)^{2-\gamma}\right], \tag{1}\]
where \(\Sigma_{\rm c}\) is the surface density normalization which is calculated from the total disc mass with
\[\Sigma_{\rm c}=\frac{M_{\rm disc}(2-\gamma)}{2\pi r_{\rm c}^{2}}. \tag{2}\]
The radial profile of the surface density is thus parameterised by three quantities, set to the following characteristic values: \(r_{\rm c}=100\) au, \(\gamma=1\), and \(M_{\rm disc}=0.01\)\(M_{\rm s}\) with \(M_{\rm s}=1\)\(M_{\odot}\). The dust-to-gas ratio is set to \(\delta_{\rm d2g}=0.01\) and is assumed constant throughout the disc.
We assume the disc vertical structure to be in hydrostatic equilibrium, so that the vertical profile of the gas density is expressed as
\[\rho_{\rm g}(z)=\frac{\Sigma_{\rm g}}{\sqrt{2\pi}h_{\rm g}}\exp\left\{-\frac {z^{2}}{2h_{\rm g}^{2}}\right\}, \tag{3}\]
where \(h_{\rm g}=c_{\rm s}/\Omega\) is the gas pressure scale height, \(c_{\rm s}=\sqrt{k_{\rm B}T/m_{\rm g}}\) is the sound-speed, \(k_{\rm B}\) is the Boltzmann constant, \(m_{\rm g}=2.34\) amu is the mean molecular mass, and \(\Omega=\sqrt{GM_{\rm s}/r^{3}}\) is the Keplerian frequency.
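For reference, the quiescent gas structure of Eqs. (1)-(3) can be evaluated with a few lines of Python. The sketch below uses cgs units and the parameter values quoted above; it reproduces the \(\Sigma_{\rm g}\) and midplane \(\rho_{\rm g}\) values listed in Table 1 for location B.

```python
import numpy as np

# cgs constants and the disc parameters quoted in Sect. 2.
G, kB, mH = 6.674e-8, 1.381e-16, 1.673e-24
Msun, au = 1.989e33, 1.496e13
Mstar, Mdisc, rc, gamma = Msun, 0.01 * Msun, 100 * au, 1.0
mg = 2.34 * mH

def sigma_gas(r):
    """Tapered power-law surface density, Eqs. (1)-(2)."""
    Sigma_c = Mdisc * (2 - gamma) / (2 * np.pi * rc**2)
    return Sigma_c * (r / rc)**-gamma * np.exp(-(r / rc)**(2 - gamma))

def rho_gas(r, z, T):
    """Hydrostatic vertical structure, Eq. (3)."""
    cs = np.sqrt(kB * T / mg)
    hg = cs / np.sqrt(G * Mstar / r**3)
    return sigma_gas(r) / (np.sqrt(2 * np.pi) * hg) * np.exp(-z**2 / (2 * hg**2))

r = 5 * au
print(sigma_gas(r))            # ~26.9 g cm^-2    (location B, Table 1)
print(rho_gas(r, 0.0, 70.13))  # ~3.8e-12 g cm^-3 (quiescent midplane)
```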
The temperature of the disc midplane \(T_{\rm m}(r)\), where our coagulation simulations take place, is connected to the amount of energy absorbed by the disc atmosphere from the central luminosity source (star and accretion region) and re-emitted downward. Neglecting viscous heating for simplicity, it is expressed as (Chiang and Goldreich, 1997)
\[T_{\rm m}^{4}(r)=\frac{\phi(r)}{8\pi\sigma_{\rm SB}r^{2}}(L_{*}+L_{\rm acc}), \tag{4}\]
where
\[\phi(r)\simeq\frac{0.4R_{*}}{r}\,+\,r\,\frac{{\rm d}\left(h_{\rm p}/r\right)}{ {\rm d}r}, \tag{5}\]
represents the disc opening angle, related to the scale height of the visible photosphere
\[h_{\rm p}=h_{0}\left(\frac{r}{r_{0}}\right)^{\Psi}. \tag{6}\]
We set the stellar radius to \(R_{*}=2.5\)\(R_{\odot}\), the disc flaring index to \(\Psi=1.26\), and the scale height to \(h_{0}=34.2\) au at \(r_{0}=100\) au (Benisty et al., 2022; Lagage et al., 2006).
For our young solar-mass star, we set the stellar luminosity to \(L_{*}=0.9\)\(L_{\odot}\) and the contribution of the quiescent accretion region to \(L_{\rm acc}=0.3\)\(L_{\odot}\) (Molyarova et al., 2018). The former is assumed constant while the latter varies during episodic outbursts. With our stellar parameters, this accretion luminosity corresponds to an accretion rate of \(\dot{M}=(2/3)(L_{\rm acc}R_{*}/GM_{*})\approx 10^{-8}\)\(M_{\odot}/{\rm yr}\), which is consistent with observed values (Audard et al., 2014). Note that we assume the gas and dust temperatures to be equal, which is a valid assumption in the dense midplane region.
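A short numerical check of Eqs. (4)-(6) is given below. Note that for the power-law photosphere of Eq. (6), the gradient term in Eq. (5) reduces analytically to \((\Psi-1)h_{\rm p}/r\); with the stated parameters, the sketch recovers the quiescent and outburst midplane temperatures of Table 1 at 5 au.

```python
import numpy as np

# cgs constants and the stellar/disc parameters of Sect. 2.
sigma_SB, Lsun, Rsun, au = 5.670e-5, 3.828e33, 6.957e10, 1.496e13
Rstar, Lstar = 2.5 * Rsun, 0.9 * Lsun
h0, r0, Psi = 34.2 * au, 100 * au, 1.26

def midplane_T(r, L_acc):
    """Irradiated midplane temperature, Eqs. (4)-(6); for h_p = h0 (r/r0)^Psi,
    the gradient term of Eq. (5) is r d(h_p/r)/dr = (Psi - 1) h_p / r."""
    hp = h0 * (r / r0)**Psi
    phi = 0.4 * Rstar / r + (Psi - 1) * hp / r
    return (phi * (Lstar + L_acc) / (8 * np.pi * sigma_SB * r**2))**0.25

print(midplane_T(5 * au, 0.3 * Lsun))  # ~70 K  (quiescent, Table 1)
print(midplane_T(5 * au, 100 * Lsun))  # ~212 K (outburst,  Table 1)
```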
### Outburst event
We introduce an FUor-type accretion outburst in our quiescent system at \(t_{\rm obt}^{\rm start}=10^{4}\) yr, in agreement with current knowledge of such
outburst rates (e.g. Scholz et al., 2013). For our purposes, we mimic a single outburst by raising the accretion luminosity to \(L_{\rm acc}=100~{}L_{\odot}\) for a duration \(\tau_{\rm obt}=100~{}\rm{yr}\). The temperature of the disc midplane increases according to Eq. 4 (see Fig. 1), and we assume it adapts instantaneously, as the heating timescale of the disc is short compared to the outburst duration (Johnstone et al., 2013; Vorobyov et al., 2014). We further assume the gaseous environment instantaneously finds a new hydrostatic equilibrium in the vertical direction when the temperature is modified, which leads to a slightly lower midplane gas density during the outburst. This assumption is valid given that the thermal timescale of the gas in our disc model is shorter than the dynamical timescale and the outburst duration; using Equation (7) from Ueda et al. (2021), we find \(\sim 5~{}\rm{yr}\) at \(5~{}\rm{au}\). The solid density follows the same behaviour to maintain the dust-to-gas ratio at its fixed value. For simplicity, we do not consider potential increases of the disc opening angle due to flaring effects.
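In practice the outburst enters the model as a step function in the accretion luminosity, with the midplane temperature following instantaneously through Eq. (4); a minimal sketch:

```python
def L_acc(t_yr, t_start=1.0e4, duration=100.0, L_quiet=0.3, L_burst=100.0):
    """Accretion luminosity in L_sun: a single FUor-type step outburst that
    starts at t_start (yr) and lasts `duration` (yr). The midplane
    temperature is assumed to track it instantaneously via Eq. (4), since
    the disc thermal timescale (~5 yr at 5 au) is short."""
    return L_burst if t_start <= t_yr < t_start + duration else L_quiet

# T_m scales as (L_* + L_acc)^(1/4): at 5 au the midplane jumps by a factor
# (100.9 / 1.2)^(1/4) ~ 3.0, i.e. from ~70 K to ~212 K (Table 1).
for t in (9999.0, 10000.0, 10099.0, 10100.0):
    print(t, L_acc(t))
```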
### Water content
By raising the temperature, the outburst drives the sublimation of ices over extended regions of the disc, pushing outward the snowlines of various molecular species. In particular, water is an important compound in terms of abundance (Lodders, 2003), and its presence or absence in ice phase has a dramatic impact on dust growth as it influences the stickiness of aggregates (Supulver et al., 1997; Dominik & Tielens, 1997; Wada et al., 2013; Gundlach & Blum, 2014), allowing for the formation of larger pebbles in regions where water ice is stable (Birnstiel et al., 2010).
Upon ice sublimation, it is still unclear how the structure of aggregates is impacted, as laboratory experiments found that it can lead either to complete disruption (Aumatell & Wurm, 2011) or to survival (Spadaccia et al., 2022). To cover the range of possibilities, we adopt two designs for the response to sublimation when the outburst starts: the "resilient" design, where aggregates survive and are only affected by the loss of their ice mass, and the "many-seeds" design, where all aggregates disrupt to monomer size, as we consider water ice to 'glue' refractory grains together. The many-seeds model was also used by Schoonenberg & Ormel (2017) in the context of pebbles drifting inward and crossing the snowline.
The water snowline is located where the sublimation and condensation rates of H\({}_{2}\)O molecules have similar absolute values. They are given respectively by (e.g Supulver & Lin, 2000)
\[F_{\rm sub}=-\sqrt{\frac{m_{\rm H_{2}O}}{2\pi k_{\rm B}T}}P_{\rm sat}, \tag{7}\]
\[F_{\rm con}=\sqrt{\frac{m_{\rm H_{2}O}}{2\pi k_{\rm B}T}}P_{\rm H_{2}O}, \tag{8}\]
with \(P_{\rm H_{2}O}\) the water vapour pressure expressed with the ideal gas law as
\[P_{\rm H_{2}O}=\frac{k_{\rm B}T}{m_{\rm H_{2}O}}\rho_{\rm H_{2}O}, \tag{9}\]
and \(P_{\rm sat}\) the saturated vapour pressure for water on a flat surface (Supulver & Lin, 2000) given by
\[\begin{split} P_{\rm sat}&=\frac{k_{\rm B}T}{m_{ \rm H_{2}O}}\rho_{\rm sat},\\ &=1.013\times 10^{6}~{}\exp\left\{15.6-\frac{5940K}{T}\right\}\rm{dyn }/\rm{cm}^{2},\end{split} \tag{10}\]
where \(m_{\rm H_{2}O}\) is the mass of an H\({}_{2}\)O molecule, and \(\rho_{\rm H_{2}O}=\delta_{\rm w2g}\rho_{\rm g}\), assuming a water abundance of \(\delta_{\rm w2g}=0.01\). With this assumption, the dust-to-ice ratio equals unity in the outer disc.
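Combining Eqs. (3), (4), (9), and (10), the snowline sits where \(\rho_{\rm H_{2}O}=\rho_{\rm sat}\) (since \(F_{\rm con}=|F_{\rm sub}|\) reduces to \(P_{\rm H_{2}O}=P_{\rm sat}\)). The self-contained sketch below locates that radius by bisection for the quiescent and outburst luminosities; it is a simplified reconstruction of the calculation and recovers radii close to the values quoted next.

```python
import numpy as np

# cgs constants and the quiescent disc parameters of Sect. 2.
G, kB, mH, sigma_SB = 6.674e-8, 1.381e-16, 1.673e-24, 5.670e-5
Msun, Lsun, Rsun, au = 1.989e33, 3.828e33, 6.957e10, 1.496e13
Mstar, Mdisc, rc, gamma, mg = Msun, 0.01 * Msun, 100 * au, 1.0, 2.34 * mH
Rstar, Lstar, h0, r0, Psi = 2.5 * Rsun, 0.9 * Lsun, 34.2 * au, 100 * au, 1.26
m_h2o, d_w2g = 18 * mH, 0.01

def T_mid(r, L_acc):
    hp = h0 * (r / r0)**Psi
    phi = 0.4 * Rstar / r + (Psi - 1) * hp / r
    return (phi * (Lstar + L_acc) / (8 * np.pi * sigma_SB * r**2))**0.25

def rho_mid(r, T):
    Sigma = Mdisc * (2 - gamma) / (2 * np.pi * rc**2) \
            * (r / rc)**-gamma * np.exp(-(r / rc)**(2 - gamma))
    hg = np.sqrt(kB * T / mg) / np.sqrt(G * Mstar / r**3)
    return Sigma / (np.sqrt(2 * np.pi) * hg)

def excess(r, L_acc):
    """> 0 where condensation beats sublimation: both fluxes share the
    kinetic prefactor, so P_H2O > P_sat reduces to rho_H2O > rho_sat."""
    T = T_mid(r, L_acc)
    rho_sat = 1.013e6 * np.exp(15.6 - 5940.0 / T) * m_h2o / (kB * T)
    return d_w2g * rho_mid(r, T) - rho_sat

def snowline(L_acc, lo=0.1 * au, hi=50 * au):
    for _ in range(60):          # bisection on the sign change of `excess`
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid, L_acc) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(snowline(0.3 * Lsun) / au)   # close to r_SL^qui ~ 0.8 au
print(snowline(100 * Lsun) / au)   # close to r_SL^obt ~ 13 au
```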
In our disc model, the quiescent water snowline is located at \(r_{\rm SL}^{\rm qui}=0.8\) au from the central protostar, corresponding to a temperature of \(167~{}\rm{K}\) (see Fig. 1). It is pushed out to \(r_{\rm SL}^{\rm obt}=13\) au during the accretion outburst, corresponding to a lower sublimation temperature of \(122~{}\rm{K}\), as the gas density also decreases with the distance to the star (equation 3). The snowline during the outburst will be further referred to as the excited snowline. As illustrated in Fig. 2, we can now divide the disc into three zones: A) inside the quiescent snowline, water is in the vapour phase at all times; B) in between the quiescent and excited snowlines, the phase of water molecules varies with the outburst; and C) outside the excited snowline, water always remains in an ice state. We will perform local dust coagulation simulations in the midplane within zones A and B, respectively at \(0.5\) and \(5\) au, hereafter referred to as locations A and B. The local disc conditions can be found in Table 1. Our approach being local and restricted to the midplane, we do not include the potential transport of material due to vertical settling and radial drift (see Sect. 7.4).
Figure 1: Midplane temperature and gas surface density radial profile. The vertical dotted lines denote the position of the quiescent and excited water snowlines.
| Local parameter | **A** | **B** |
| --- | --- | --- |
| Heliocentric distance \(r\) (au) | 0.5 | 5 |
| Gas surface density \(\Sigma_{\rm g}\) (g cm\({}^{-2}\)) | 281.48 | 26.9 |
| **Quiescent phase** | | |
| \(\rho_{\rm g}\) (g cm\({}^{-3}\)) | \(7.4\times 10^{-10}\) | \(3.8\times 10^{-12}\) |
| \(T_{\rm m}\) (K) | 207.06 | 70.13 |
| \(\eta\) | 0.0006 | 0.0017 |
| **Outburst phase** | | |
| \(\rho_{\rm g}\) (g cm\({}^{-3}\)) | \(4.2\times 10^{-10}\) | \(2.2\times 10^{-12}\) |
| \(T_{\rm m}\) (K) | 627.01 | 212.35 |
| \(\eta\) | 0.0017 | 0.0058 |

Table 1: Local disc parameters at locations A and B during the quiescent and outburst phases.
## 3 Dust Models
### Monomers
In protoplanetary discs, solids initially consist of sub-\(\mu\)m-sized dust grains, referred to as monomers, whose motion is well coupled to the surrounding gas. In this work, we assume initially two distinct monomer populations based on their position in the quiescent disc. Inside the quiescent water snowline, monomers are chosen to be identical \(a_{0}=0.1\,\mu\)m compact spheres made of a rocky mix of silicates, troilite, and refractory organics (see Table 2). The bulk density of the mixture is \(\rho_{\rm s,<SL}=2.11\) g cm\({}^{-3}\). Outside the quiescent water snowline, H\({}_{2}\)O molecules are accreted onto dust grains. In that case, we assume water is homogeneously mixed with the rocky mix such that the water mass fraction is \(f_{\rm w}=m_{\rm w}/(m_{\rm w}+m_{\rm r})=0.5\) (Lodders, 2003). The bulk density is then \(\rho_{\rm s,>SL}=1.28\) g cm\({}^{-3}\).
### Dust dynamics
Due to the interaction with the surrounding gaseous environment, dust grains acquire non-zero relative velocities, leading to their coagulation into larger and larger aggregates. The aerodynamic behaviour of such embedded solids is quantified with the Stokes number \({\rm St}=\Omega t_{\rm s}\), where \(t_{\rm s}\) is the stopping time. For the Epstein and Stokes drag regime, it is expressed as (Okuzumi et al., 2012)
\[t_{\rm s}=\left\{\begin{array}{ll}t_{\rm s}^{\rm(Ep)}\equiv \frac{3m}{4\rho_{\rm g}v_{\rm th}A},&\quad a<\frac{9}{4}\lambda_{\rm mfp},\\ \\ t_{\rm s}^{\rm(St)}\equiv\frac{4a}{9\lambda_{\rm mfp}}t_{\rm s}^{\rm(Ep)},& \quad a\geq\frac{9}{4}\lambda_{\rm mfp},\end{array}\right. \tag{11}\]
where \(v_{\rm th}=\sqrt{8/\pi}c_{\rm s}\) is the thermal velocity, \(\lambda_{\rm mfp}=m_{\rm g}/(\sigma_{\rm mol}\rho_{\rm g})\) is the mean free path of gas particles, \(\sigma_{\rm mol}=2\times 10^{-15}\) cm\({}^{2}\) is the collision cross-section of gas molecules, and \(A\) is the projected surface area of the aggregate.
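A minimal sketch of equation (11) in Python (cgs units) is given below. The mean gas-molecule mass, taken here for \(\mu=2.34\), is our assumption, as the text does not quote \(m_{\rm g}\); the helper names are likewise ours.

```python
import numpy as np

SIGMA_MOL = 2e-15   # collision cross-section of gas molecules [cm^2]
M_GAS = 2.34 * 1.6726e-24   # mean gas-molecule mass, assuming mu = 2.34 [g]

def stopping_time(a, m, A, rho_g, c_s):
    """Stopping time [s] in the Epstein or Stokes drag regime (equation 11).
    a: size [cm], m: mass [g], A: projected surface area [cm^2]."""
    v_th = np.sqrt(8.0 / np.pi) * c_s            # thermal velocity
    lam_mfp = M_GAS / (SIGMA_MOL * rho_g)        # gas mean free path
    t_ep = 3.0 * m / (4.0 * rho_g * v_th * A)    # Epstein regime
    if a < 9.0 / 4.0 * lam_mfp:
        return t_ep
    return 4.0 * a / (9.0 * lam_mfp) * t_ep      # Stokes regime

def stokes_number(t_s, omega):
    """St = Omega * t_s for orbital frequency Omega [s^-1]."""
    return omega * t_s
```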
We consider in our simulations the typical sources of relative velocities, namely Brownian motion, turbulence (based on equation 16 of Ormel & Cuzzi, 2007), and the radial and azimuthal drifts (see Sect. 3.1 of Birnstiel et al., 2016). The turbulent motion is parametrized using the \(\alpha\)-turbulence model of Shakura & Sunyaev (1973), where we assume a constant turbulence strength \(\alpha=10^{-3}\) (Rosotti, 2023). As we restrict our study to the midplane, the velocity arising from vertical settling is zero. Drifting motions depend on \(\eta\), the dimensionless radial pressure gradient, expressed as
\[2\eta=-\left(\frac{c_{\rm s}}{v_{\rm K}}\right)^{2}\frac{\partial\ln\left( \rho_{\rm g}c_{\rm s}^{2}\right)}{\partial\ln r}. \tag{12}\]
Its local value during the quiescent and outburst phase is given in Table 1. As previously stated, we neglect the potential transport of material due to radial drift, but we do consider its impact as a relative velocity source.
### Aggregation
The local dust coagulation process depends sensitively on the structure of the growing grains for a variety of reasons. For example, substantial porosity impacts the aggregate mass-size relation (Blum et al., 2000), affecting its collisional cross section, its aerodynamical behaviour (Eq. 11), and its ability to dissipate energy during collisions (Blum & Wurm, 2008). Furthermore, the appearance of the aggregate (i.e. its opacity at different wavelengths) is a sensitive function of porosity (Kataoka et al., 2014). Models of dust coagulation in planet-forming environments are somewhat split, with traditional approaches assuming compact, spherical particles at all times, while models that include porosity evolution have reported internal grain densities as low as \(10^{-5}\) g cm\({}^{-3}\) (Okuzumi et al., 2012). To explore the possible range of outcomes we will contrast two different cases: compact coagulation (Sect. 3.3.1) and porous growth (Sect. 3.3.2).
#### 3.3.1 Compact growth
The compact model assumes aggregates keep a compact homogeneous spherical shape throughout their growth. In that case, an aggregate's size \(a\) and mass \(m\) are connected through \(m=(4/3)\pi a^{3}\rho_{\rm s}\), where the bulk density remains equal to that of their constituent material. As demonstrated by laboratory experiments (Blum & Wurm, 2008; Güttler et al., 2010), there exists a multitude of collision outcomes
\begin{table}
\begin{tabular}{l c c c} \hline \hline Material & Density [g cm\({}^{-3}\)] & \(\bar{f}_{\rm s,<SL}\) & \(\bar{f}_{\rm s,>SL}\) \\ \hline Silicates & 3.30 & 0.411 & 0.206 \\ Troilite & 4.83 & 0.093 & 0.046 \\ Refractory organics & 1.50 & 0.496 & 0.248 \\ Water ice & 0.92 & 0 & 0.5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Monomer material density and relative mass abundance \(\bar{f}_{\rm s}\) for the two populations (Birnstiel et al., 2018; Lodders, 2003)
Figure 2: Cartoon representing the thermal structure and dust grains of a protoplanetary disc undergoing an accretion outburst. Three zones are specified. Zone A: the temperature is too high for water to be stable in the ice phase, so it remains in the gas phase at all times. Zone B: water is initially deposited onto refractory cores, enhancing their stickiness and favouring growth. It sublimates during the outburst, and re-condenses after the event proportionally to the total dust surface area. Depending on the model, dust grains may or may not survive the sublimation of water. Zone C: even during the outburst, the temperature is low enough for water to remain in the ice phase.
depending on the colliders' composition, relative velocity, and mass ratio. In this work, we will only consider perfect sticking (leading to growth) and fragmentation (leading to mass loss), as there is still a large parameter space to be explored concerning other outcomes (e.g. bouncing, erosion). Fragmentation occurs if the relative velocity is above the fragmentation limit \(v_{\rm f}\) and if the mass ratio \(R_{\rm m}\) of the colliders exceeds \(R_{\rm m,crit}=0.01\) (Güttler et al., 2010; Seizinger et al., 2013). We set the fragmentation velocity to \(1\;{\rm m\;s^{-1}}\) for bare rocky material and \(10\;{\rm m\;s^{-1}}\) for pure water ice, in agreement with laboratory experiments finding enhanced stickiness for water-rich solids (e.g. Supulver et al., 1997; Gundlach & Blum, 2014). As in our case aggregates are rather homogeneously mixed in ice and rock, we express their fragmentation velocity as a linear interpolation between the pure rock and pure ice cases (Lorek et al., 2016)
\[v_{\rm f}=f_{\rm w}v_{\rm f}^{\rm H_{2}O}+(1-f_{\rm w})v_{\rm f}^{\rm rock}. \tag{13}\]
We note that these values are now under debate in the light of recent experiments on the resistance of water ice grains at low temperatures (Gundlach et al., 2018; Musiolik & Wurm, 2019).
#### 3.3.2 Porous growth
In the porous model, we include the evolution of the dust aggregate's structure (i.e. porosity), as in reality aggregates can develop a significant fractal shape which alters the mass-size relation following \(m\propto a^{D_{\rm f}}\), \(D_{\rm f}\) being the fractal dimension, and leads to a much smaller internal density (e.g. Donn, 1990; Blum et al., 2000; Weidling et al., 2009). On the microscopic level, aggregates are considered to be built up of monomers whose properties and bonds dictate the mechanical behaviour of the aggregate as a whole. Because contact between microscopic spheres only involves a small surface layer of relative thickness \(\delta\approx 10^{-2}\) (e.g. Chokshi et al., 1993; Krijt et al., 2013), only a small fraction of water ice is needed to alter the surface properties from bare rock to those of pure water. Similarly to Krijt et al. (2016), we define that mass fraction threshold to be \(f^{*}=m_{\rm w}/m_{\rm r}=0.1\).
We use the porosity model from Okuzumi et al. (2012) to calculate the new aggregate volume after every sticking collision, considering the creation of new voids and the potential collisional compression. The efficiency of collisional compression is controlled by how the impact energy \(E_{\rm imp}\) compares to the rolling energy \(E_{\rm roll}\), which quantifies the ability of monomers in contact to roll over each other (Dominik & Tielens, 1997). Using \(f^{*}\) to characterise the surface properties of monomers, the rolling energy is given by\({}^{2}\) (Heim et al., 1999; Gundlach et al., 2011; Krijt et al., 2014)
Footnote 2: We choose to use the rolling energy of SiO\({}_{2}\) to represent our rocky composition.
\[E_{\rm roll}=\left\{\begin{array}{l}E_{\rm roll}^{\rm H_{2}O}=1.4\times 10^{-7}\;{\rm erg}\;(a_{0}/\mu{\rm m})^{5/3},\quad f^{*}>0.1,\\ \\ E_{\rm roll}^{\rm rock}=2.3\times 10^{-8}\;{\rm erg}\;(a_{0}/\mu{\rm m})^{5/3},\quad f^{*}<0.1.\end{array}\right. \tag{14}\]
At small sizes, when the relative velocity is governed by Brownian motion and \(E_{\rm imp}<E_{\rm roll}\), there is no dissipation of energy through restructuring, which results in gentle hit-and-stick collisions and a low fractal dimension \(D_{\rm f}\simeq 2\) (e.g. Kempf et al., 1999). With increasing mass and impact energy, collisional compression occurs, which increases the average density of dust aggregates. When the aggregates become so large that their motion decouples from the gas flow, the efficiency of collisional compression stalls, allowing the formation of highly porous aggregates with \(\rho_{\rm int}\approx 10^{-5}\;{\rm g\;cm^{-3}}\) (Okuzumi et al., 2012). However, Kataoka et al. (2013) argued that static compression by gas ram pressure and self-gravity would prevent the formation of such massive and highly porous solids. We thus added their prescription to our coagulation model (see Sect. 4.3.1).
Similarly to the compact case, colliders with a mass ratio \(R_{\rm m}\geq 0.01\) fragment if their relative velocity is above the fragmentation limit \(v_{\rm f}\). For porous aggregates, numerical simulations of individual collisions show that \(v_{\rm f}\) depends on the monomer properties and can be as high as \(80\;{\rm m\;s^{-1}}\) for \(0.1\;\mu{\rm m}\) pure water ice monomers (Wada et al., 2013). For such high \(v_{\rm f}\), the fragmentation threshold is never reached and direct growth to planetesimals may be possible in some regions of discs (Okuzumi et al., 2012). Here we use a slightly more conservative version of the results from Wada et al. (2013):
\[v_{\rm f}\simeq\left\{\begin{array}{l}30\left(\frac{a_{0}}{0.1\mu{\rm m}} \right)^{-5/6}\;{\rm m\;s^{-1}},\quad f^{*}>0.1,\\ \\ 3\left(\frac{a_{0}}{0.1\mu{\rm m}}\right)^{-5/6}\;{\rm m\;s^{-1}},\quad f^{*}< 0.1.\end{array}\right. \tag{15}\]
Although higher than the \(v_{\rm f}\) for the compact case, these values still result in fragmentation-limited growth outside the snowline.
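To summarise the fragmentation and rolling-energy prescriptions of equations (13)-(15), a minimal sketch (with our own hypothetical function names and default values in m s\({}^{-1}\) and erg) could read:

```python
def v_frag_compact(f_w, v_rock=1.0, v_ice=10.0):
    """Fragmentation velocity [m/s] of a compact ice/rock aggregate,
    linearly interpolated in water mass fraction f_w (equation 13)."""
    return f_w * v_ice + (1.0 - f_w) * v_rock

def v_frag_porous(f_star, a0_um=0.1):
    """Fragmentation velocity [m/s] of a porous aggregate (equation 15),
    set by the monomer surface properties through f* = m_w / m_r."""
    scale = (a0_um / 0.1) ** (-5.0 / 6.0)
    return (30.0 if f_star > 0.1 else 3.0) * scale

def e_roll(f_star, a0_um=0.1):
    """Rolling energy [erg] of monomers in contact (equation 14)."""
    prefac = 1.4e-7 if f_star > 0.1 else 2.3e-8
    return prefac * a0_um ** (5.0 / 3.0)
```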
Another aspect to outline concerning porous growth is that the relation \(A=\pi a^{2}\) for the surface area can break down for fractal aggregates (Okuzumi et al., 2009; Tazaki, 2021); we therefore adopt the corrected definition formulated in Equation (47) of Okuzumi et al. (2009).
## 4 Numerical method
### Superparticle approach
We simulate the coagulation and fragmentation of a population of dust grains using the Monte Carlo superparticle approach of Zsom & Dullemond (2008). It follows the evolution of \(n=10^{4}\) superparticles, each representing a large swarm of physical particles with identical properties. The total rock mass of each swarm, \(M_{\rm swm}\), is fixed, so that if the rocky content of a superparticle \(i\) changes, the number of physical particles it represents is modified following \(N_{i}=M_{\rm swm}/m_{\rm r}\). The particles are distributed evenly in a fixed volume \(V\). The water content is treated separately from the swarm bookkeeping, to ensure the conservation of the number of particles when only the water mass changes, i.e. upon sublimation and condensation. This leads to small statistical fluctuations in the total water mass, which we discuss in Sect. 7.5. The particle properties we follow are: the mass of rock (\(m_{\rm r}\)) and of water ice (\(m_{\rm w}\)), the size \(a\), and, in the porous case, the porosity through the internal density \(\rho_{\rm int}\).
The coagulation code follows four key steps. First, we calculate the collision rates between every pair of particles. The collision rate of a superparticle \(i\) with a physical particle represented by the superparticle \(j\) is given by \(C_{ij}=N_{j}\Delta v_{ij}\sigma_{ij}/V\), where \(\sigma_{ij}=\pi(a_{i}+a_{j})^{2}\) is the collisional cross-section, and \(\Delta v_{ij}\) is the relative velocity calculated from the motion processes mentioned in Sect. 3.2.
Then, two random numbers determine which superparticle \(i\) will collide with which physical particle of the swarm \(j\), such that pairs with large collision rates are more likely to be drawn. In addition, using the total collision rate
\[C_{\rm tot}=\sum_{i,j}C_{ij}, \tag{16}\]
and a random number \(\mathcal{R}\) drawn from a uniform distribution between 0 and 1, we determine the time-step to that next collision as
\[\delta t_{\rm col}=-\frac{\ln\mathcal{R}}{C_{\rm tot}}. \tag{17}\]
After that, the collision outcome (sticking or fragmentation) is determined based on the colliders' mass ratio \(R_{\rm m}\) and the relative velocity \(\Delta v_{ij}\) as compared to the fragmentation threshold \(v_{\rm f}\) (see Sect. 3.3). We employ the collision model from Birnstiel et al. (2011) to account for the intermediate regime between sticking and fragmentation, using a width \(\delta v_{\rm f}=v_{\rm f}/5\), as experimental results did not reveal a sharp transition (Blum & Münch, 1993). Finally, we update the properties of the superparticle \(i\) in agreement with the collision outcome, while those of the physical particle \(j\) it collides with are left unchanged (Zsom & Dullemond, 2008).
These four steps constitute an individual collision cycle, during which the global time of the simulation is incremented by \(\delta t_{\rm col}\). The coagulation code repeats the cycle until it reaches \(t_{\rm end}=10^{5}\) yr, a time threshold fixed by the user. Because the particles' properties are constantly monitored, we can analyse their evolution and distribution amongst the population at any chosen time.
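For illustration, the sketch below implements the first two steps of the cycle (rate computation, pair selection, and the time-step draw of equation 17) in Python. It is a minimal sketch under our own naming (`mc_cycle`, a precomputed relative-velocity matrix `dv`), not the actual code of Zsom & Dullemond (2008); the outcome determination and property update, as well as the grouping correction of Sect. 4.1.1, are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng()

def mc_cycle(N, a, dv, V, t):
    """One Monte Carlo collision cycle (steps one and two).
    N[i]: physical particles per swarm, a[i]: sizes [cm],
    dv[i, j]: relative velocities [cm/s], V: volume [cm^3].
    Returns the colliding pair (i, j) and the advanced time."""
    sigma = np.pi * (a[:, None] + a[None, :]) ** 2   # cross-sections sigma_ij
    C = N[None, :] * dv * sigma / V                  # collision rates C_ij
    C_tot = C.sum()                                  # equation (16)
    # draw the colliding pair, weighted by the individual rates
    flat = rng.choice(C.size, p=(C / C_tot).ravel())
    i, j = np.unravel_index(flat, C.shape)
    # time to the next collision, equation (17)
    dt = -np.log(rng.uniform()) / C_tot
    return i, j, t + dt
```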
#### 4.1.1 Grouping method
The size distribution can be broad, especially when the fragmentation barrier is reached and collisions create a second generation of dust grains. As a consequence, collisions may involve a particle \(j\) considerably less massive than its partner \(i\), resulting in minuscule changes for \(i\). For numerical optimization purposes, we instead form a group of \(j\)-particles of total mass \(f_{\rm c}m_{i}\), and modify the corresponding collision rate to \(\bar{C}_{ij}=C_{ij}f_{\rm c}^{-1}m_{j}/m_{i}\), where we set the grouping limit to \(f_{\rm c}=0.01\) (e.g. Zsom & Dullemond, 2008; Krijt & Ciesla, 2016). In this way, the superparticle \(i\) is less likely to encounter the group of \(j\)-particles, but when it does, it collides with all the particles of that group at once.
### Collisional evolution
When a collision takes place, the superparticle properties are modified depending on the selected collision outcome (sticking or fragmentation). In this section, we detail how each collision outcome modifies particle properties in both aggregation models. We refer the reader to Fig. 3 for a cartoon summarising our dust model.
#### 4.2.1 Sticking
When colliding partners stick, the mass of the superparticle is updated to the sum of the colliders' masses, \(m\leftarrow m_{i}+m_{j}\). If they have different compositions, the water mass fraction \(f_{\rm w}\) and material density \(\rho_{\rm s}\) are updated accordingly. In the compact case, we then determine the new size directly from \(a=(3m/4\pi\rho_{\rm s})^{1/3}\). In the porous model, the size is replaced by the notion of characteristic size, \(a_{\rm c}\), which is defined using the gyration radius of the aggregate (Mukai et al., 1992). This radius can be used to define an aggregate's volume via \(V=(4/3)\pi a_{\rm c}^{3}\). The new volume is computed after each collision using Eq. (15) from Okuzumi et al. (2012), which accounts for the creation of new voids as well as possible collisional compression. The new mass and volume yield an internal density \(\rho_{\rm int}=m/V\), from which we assess the porosity of aggregates. Note that if the colliding aggregates have different surface properties, the rolling energy is taken as the mass-weighted average.
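A minimal sketch of the compact sticking update might look as follows; the volume-additive mixing rule used for \(\rho_{\rm s}\) is our assumption, although it reproduces the bulk densities quoted in Sect. 3.1 (\(f_{\rm w}=0.5\) gives 1.28 g cm\({}^{-3}\)):

```python
import numpy as np

def stick_compact(m_i, m_j, fw_i, fw_j, rho_ice=0.92, rho_rock=2.11):
    """Merge two compact colliders (Sect. 4.2.1): total mass, updated
    water fraction, material density, and new radius [cgs units]."""
    m = m_i + m_j
    f_w = (fw_i * m_i + fw_j * m_j) / m
    # material density of an ice/rock mix, assuming volumes add
    rho_s = 1.0 / (f_w / rho_ice + (1.0 - f_w) / rho_rock)
    a = (3.0 * m / (4.0 * np.pi * rho_s)) ** (1.0 / 3.0)
    return m, f_w, rho_s, a
```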
#### 4.2.2 Catastrophic fragmentation
While each aggregate's fragmentation velocity is set by its composition, the effective fragmentation limit of a colliding pair is obtained from their mass-weighted average. At higher velocities, and provided the collider mass ratio \(R_{\rm m}\geq 0.01\) (see Sect. 3.3), catastrophic fragmentation occurs and the mass of both colliders is redistributed over fragments. The fragment distribution follows a power-law function
\[n_{\rm f}(m)=\begin{cases}C_{\rm f}m^{-\xi}&\text{for $m_{0}\leq m<m_{\rm f,max}$,}\\ 0&\text{else,}\end{cases} \tag{18}\]
where \(m_{0}\) is the monomer mass, \(m_{\rm f,max}\) is the mass of the largest fragment, taken to be that of the larger collider, and \(C_{\rm f}\) is a constant equal to the sum of both collider masses. The power-law index of the distribution is set to \(\xi=\frac{11}{6}\), similarly to Birnstiel et al. (2011), such that the surface area is dominated by the smaller fragments, while the larger ones dominate the total mass. A random number is used to draw a single fragment mass from the distribution.
In the compact case, we then compute the selected fragment's size using the sphere relation, as in the sticking prescription. For porous aggregates, we assume that the fragment internal density follows the historical evolution of its predecessor, i.e. it remains constant unless the resulting volume is larger than what would be expected in the hit-and-stick regime, \(V_{\rm h\&s}\), in which case the volume is set to this value and the internal density is adjusted.
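The power-law of equation (18) can be sampled in closed form by inverse-transform sampling, as the sketch below illustrates. We assume here that the draw is weighted by the number distribution \(n_{\rm f}(m)\); the text does not specify whether the draw is by number or by mass, and a mass-weighted draw would simply replace \(\xi\) by \(\xi-1\).

```python
import numpy as np

def draw_fragment_mass(m0, m_max, xi=11.0 / 6.0, rng=None):
    """Draw one fragment mass from n_f(m) ~ m^-xi on [m0, m_max)
    (equation 18) by inverting the cumulative distribution."""
    rng = rng or np.random.default_rng()
    u = rng.uniform()
    p = 1.0 - xi                       # = -5/6 for xi = 11/6
    return (m0**p + u * (m_max**p - m0**p)) ** (1.0 / p)
```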
### Non-collisional evolution
In this section, we detail how we treat changes in aggregates properties due to processes unrelated to their collisional evolution. Again, we refer the reader to Fig. 3 for a cartoon summarising our dust model.
#### 4.3.1 Gas and self-gravity compression
As mentioned in Sect. 3.3.2, the internal density of porous aggregates can be increased due to static compression by gas ram pressure and self-gravity. We implement this effect in our coagulation simulation following Kataoka et al. (2013), by calculating the compressive strength of aggregates whenever their properties are modified due to collisions or the outburst (changing disc conditions, sublimation, condensation), and compressing them if they cannot withstand the aforementioned external pressures.
#### 4.3.2 Sublimation and condensation
We now detail how we treat the modification of aggregate properties upon water sublimation (for both the resilient and many-seeds models) and re-condensation. We begin with the case of compact aggregates. Upon sublimation, in the many-seeds model, aggregates instantaneously disintegrate into rocky monomers (see Table 2). In the resilient model, they lose half of their mass, as initially \(f_{\rm w}=0.5\), and we assume the rocky left-overs remain compact spheres. The variations in material density and size are determined according to the mass loss. Upon re-condensation at the end of the outburst, water is distributed between aggregates of different sizes proportionally to their surface area (see equation 8), updating the properties of all particles under the assumption that they remain spherical, homogeneous ice/rock mixtures. We assume the total condensing water mass to be equal to what sublimated at \(t_{\rm orb}^{\rm start}\), neglecting potential losses through e.g. diffusion/advection or gas-phase chemical reactions.
Unlike sublimation, the freeze-out of molecules on grain surfaces can take a considerable amount of time, especially in the outer disc (e.g. \(10^{3}-10^{4}\) yr for CO molecules, see Vorobyov et al., 2013). It is quantified by the freeze-out timescale \(\tau_{\rm f}\), which depends on the gas density and size distribution. We evaluate \(\tau_{\rm f}\) of H\({}_{2}\)O molecules on compact and porous aggregates as
\[\tau_{\rm f}=\left(\frac{v_{\rm th}}{V}\sum_{i}\frac{M_{i}}{m_{i}}A_{i}\right)^ {-1}, \tag{19}\]
where \(M_{i}\) is the total mass of particles at a given mass \(m_{i}\), and \(A_{i}\) is their surface area. This expression falls back to the timescale given by Eq. (26) in Krijt et al. (2016) for the compact case. In our simulations, the freeze-out timescale is typically below 10 yr, so we assume re-condensation to be instantaneous at \(t_{\rm orb}^{\rm end}\). We note that if condensation onto small grains is inefficient, for example because of grain curvature (Sirono, 2011), or if the slow cooling rate leads to preferential condensation on a favourable grain size (Hubbard, 2016), ices will not accrete freely on the entire distribution and may boost ice formation in a specific size range. Such effects are not included in this work.
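Equation (19) translates directly into a one-line helper; the sketch below is a straightforward transcription with our own function name, taking arrays over the mass bins:

```python
import numpy as np

def freezeout_timescale(M, m, A, v_th, V):
    """Freeze-out timescale of H2O on the population (equation 19).
    M[i]: total mass in bin i [g], m[i]: particle mass [g],
    A[i]: surface area [cm^2], v_th: H2O thermal velocity, V: volume."""
    return 1.0 / (v_th / V * np.sum(M / m * A))
```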
In the porous case, it is more complex to capture the structural impact of sublimation and condensation, as it may lead to aggregates constituted of monomers with heterogeneous properties, for which the mechanical and collisional properties are not well known. We therefore make the following assumptions: first, we only consider the impact of water on the monomer surface properties (i.e. their stickiness and rolling energy), and neglect the influence of water ice on the monomers' mass and size. Second, we add the mass of condensed water when calculating macroscopic aggregate properties such as the total mass, mean density, and Stokes number, while using the rocky component to define the total volume and size (see bottom row of Fig. 3).
## 5 Results
In this section we describe the results for the different locations and dust models. To facilitate discussions, we will refer to each model following the notation introduced in Table 3.
### Compact growth
#### 5.1.1 Location A
We begin with the analysis of A-comp, simulating the compact growth of dry particles in location A. When studying dust coagulation, it is common to display the growth of dust aggregates using the size distribution in terms of \(m\,a\,n(a)\), which highlights how the mass is distributed within the population when using logarithmic size bins. Such mass distributions are shown in Fig. 4 at different key times, along with the results from the other compact models, B-comp-resi and B-comp-m.s. The distribution of the water mass fraction is also plotted in the bottom panels. For each model, we performed three runs with different random seeds to reduce the statistical noise arising from the Monte Carlo approach. The standard deviation is shown as shaded areas on the size distributions.
Location A lies in the very inner disc, where the density is high and aggregates collide often, resulting in a short coagulation timescale and rapid growth. In fact, aggregates grow close to mm sizes within 50 yr, in agreement with other coagulation simulations in the literature (e.g. Brauer et al., 2008). The size distribution at 100 yr is characteristic of a coagulation/fragmentation equilibrium, also referred to as collisional equilibrium in this manuscript, where the fragmentation of the large pebbles balances the growth of the fragments. At this stage, the population is in collisional equilibrium with most of its mass in the upper end of the distribution, close to the maximum size \(a_{\rm max}\), whose exact position is determined by disc and dust properties (Birnstiel et al., 2011). During the outburst, we observe a decrease of the maximum size by a factor \(\approx\)3, before it grows back to the pre-outburst state after the event. In fact, \(a_{\rm max}\) is inversely proportional to the temperature \(T\) (Birnstiel et al., 2011): an increased temperature results in higher relative velocities, which forces the population to find a new collisional equilibrium corresponding to a scaled-down version of the pre-outburst distribution (see also Fig. 6 from Birnstiel et al., 2011). The water ice content (bottom-left panel in Fig. 4) remains zero, as location A is inside the water snowline at all times.
A similar situation arises in Zone C (see Fig. 2), where water always remains in the ice phase. We performed simulations in this region at 25 au; however, due to the low surface density and increased coagulation timescale, the outburst was found to be too short to lead to any noticeable changes in the size distribution. We conclude that the only way for an accretion outburst to effectively alter dust aggregates in the outer disc, where \(t_{\rm coag}\gg\tau_{\rm orb}\), is by inducing a compositional change, itself leading to an instantaneous modification of dust structure and properties.
#### 5.1.2 Location B: Resilient model
In location B (middle and right columns of Fig. 4), the growth is slower and it takes \(\approx\)3000 yr for B-comp-resi and B-comp-m.s to reach the fragmentation-limited distribution. Pebbles are more than an order of magnitude larger than in A-comp, which is notably explained by the higher resistance to fragmentation of ice-rich aggregates (\(a_{\rm max}\propto v_{\rm f}^{2}\), see Birnstiel et al., 2009).
At \(t_{\rm orb}^{\rm start}\), aggregates in the resilient model survive water ice sublimation, but still lose 50% of their mass. The fragmentation velocity decreases to the bare rock value, causing the largest surviving pebbles (those close to \(a_{\rm max}^{\rm ice}\)) to fragment upon their next collision, efficiently redistributing mass to smaller grains and raising the tail of the size distribution. At \(t_{\rm orb}^{\rm end}\), the temperature decreases again and water re-condenses. Even though pebbles dominate the total mass of the population, it is the small dust grains that dominate the total surface area (e.g. Stammler et al., 2017). As a result, the relative gain in water content is larger for smaller particles, which creates a compositional variation across the size distribution, highly diverging from the constant \(f_{\rm w}=0.5\) before the outburst. Pebbles slowly regain their water content through collisional mixing with water-rich grains, but, in the meantime, they keep fragmenting efficiently due to their lowered resistance. The water ice distribution has still not fully returned to pre-outburst conditions even after 1000 yr (dark-blue dots in Fig. 4).
\begin{table}
\begin{tabular}{l c c c} \hline Model & Location & Aggregation & Sublimation \\ \hline A-comp & A & Compact & \(\times\) \\ A-por & A & Porous & \(\times\) \\ B-comp-resi & B & Compact & Resilient \\ B-comp-m.s & B & Compact & Many-seeds \\ B-por-resi & B & Porous & Resilient \\ B-por-m.s & B & Porous & Many-seeds \\ \hline \end{tabular}
\end{table}
Table 3: Model notations depending on the disc location, aggregation, and response to sublimation.
#### 5.1.3 Location B: Many-seeds model
In the many-seeds model, the quiescent growth phase is identical to the one in the resilient case, but icy aggregates are assumed to disintegrate as water ice leaves upon heating, effectively resetting the size distribution at \(t_{\rm orb}^{\rm start}\). The size distribution during the outburst is therefore markedly different: after only 50 yr, dust grains are still in the early-growth phase (red curve in Fig. 4). The distribution being narrower, we observe, upon re-condensation, a smaller spread in the water fraction than in the resilient model. A thousand years after the outburst, the population is still growing and has not reached its collisional equilibrium yet, but the water fraction has bounced back to the initial state.
### Porous growth
We now discuss the growth of dust aggregates in the porous aggregation model. Aggregate porosity is assessed through the internal density, \(\rho_{\rm int}\), shown in Fig. 5 (bottom panels) at different times for the three models featuring porous growth. As in Fig. 4, we also show the size distribution (top panels) and water mass fraction (middle panels).
#### 5.2.1 Location A
Similarly to A-comp, aggregates in A-por evolve rapidly and reach the coagulation/fragmentation equilibrium within 50 yr during the quiescent disc phase. Note that the distribution reaches a maximum size about two orders of magnitude larger than in the compact case. Several factors contribute to this difference: as stated in Eq. (15), porous aggregates have a higher resistance to fragmentation, but what mostly drives their larger maximum size is their modified aerodynamical behaviour and relative velocities. This can be seen in Fig. A1, where identical relative velocities are reached by porous aggregates at much larger sizes (see also Fig. 2 from Krijt et al., 2015). The internal density plot displays the different regions introduced in Sect. 3.3.2, with the hit-and-stick regime at small sizes followed by an almost constant phase characteristic of the balance between compression mechanisms and the creation of new voids (Okuzumi et al., 2012). The first generation of aggregates (light-grey dots after 10 yr in Fig. 5) is more porous; it is then compressed at larger sizes by gas ram pressure before fragmenting into fragments of equal or higher density (see the fragmentation prescription in Sect. 4.2.2), which explains why such porous aggregates do not reappear at later stages. Fragmentation also prevents the formation of even larger solids, which would become denser due to self-gravity compression (Kataoka et al., 2013).
As in the compact case, the outburst leads to a temporary decrease in the maximum size. The amplitude of the variation is identical,
Figure 3: Cartoon summarising how compact and porous dust aggregates evolve in our simulations, including: coagulation, fragmentation, compression, sublimation, and condensation. In the compact model, monomers grow into sphere homogeneously mixed in ice and rock. Aggregates remain spherical throughout their evolution, even after sublimation and condensation. In the porous case, a few assumptions are made to avoid dealing with multiple monomer properties within a single aggregate. Water ice influences the aggregate surface properties, internal density, mass and inertia, but has no impact on its size and volume (except when disintegrating in the many-seeds model).
as the changing disc conditions vary independently of the aggregation model. We also see a slight increase in the aggregates' internal density, related to enhanced compression by collisional restructuring and gas ram pressure. After the outburst, the system quickly returns to the pre-outburst state.
#### 5.2.2 Location B: Resilient model
Porous aggregates in location B grow until establishing their collisional equilibrium, once again at a larger maximum size than in location A due to the presence of water ice. The internal density follows the aforementioned porous evolution, although the lower gas ram pressure and larger rolling energy of icy monomers lead to more porous aggregates.
At \(t_{\rm orb}^{\rm start}\), resilient aggregates survive the sublimation of water and lose 50% of their mass. The impact on the internal density depends on their size. Aggregates below \(\approx\)10 cm retain their size, and the loss of mass then results in a decrease in internal density. For larger aggregates, however, the story is more complex. Here, the lowered rolling energy \(E_{\rm roll}^{\rm rock}\) and increased gas ram pressure lead to additional compression, and the internal density actually increases. These denser aggregates then fragment and generate grains of equal or higher density (see Sect. 4.2.2), which ends up creating a broader density distribution (red dots in the bottom panel of Fig. 5).
The re-condensation of water follows the same trend as in B-comp-resi, with a few differences arising from the impact of porosity on the aggregates' surface area. The central slope appears broader (light-blue dots in the central panel of Fig. 5), following the similar trend in the internal density distribution. We also notice that grains in the hit-and-stick regime take up an even larger fraction of water ice, all ending with similarly high ice contents. The reason is that in this hit-and-stick phase, the fractal dimension is \(D_{\rm f}\approx 2\), leading to \(m\propto a^{2}\) and a surface area per unit mass that is similar for each aggregate. They thus receive an amount of water leading to a similar fraction \(f_{\rm w}\) as the others in the hit-and-stick regime. We see that water re-condensation gives rise to intermediate-sized aggregates filled with water ice and displaying large internal densities. After 1000 yr, the population has still not reached the pre-outburst state.
#### 5.2.3 Location B: Many-seeds model
In B-por-m.s, all aggregates are still in the hit-and-stick regime when the outburst ends, leading to an even narrower water mass distribution than in B-comp-m.s. Despite the dramatic size alteration, aggregates following the many-seeds response are characterised by a narrower water distribution than resilient ones in both aggregation models. This may differ for longer outbursts, where dust aggregates would have more time to grow before re-condensation occurs, especially if they reach a different stage of their porous evolution. The system then keeps growing; 1000 yr after the outburst it is still in the hit-and-stick regime, while having almost fully recovered the initial water distribution at \(f_{\rm w}=0.5\).
Figure 4: Size distribution function and water fraction of compact aggregates at different times: in the initial quiescent phase (grey shades), during the accretion outburst (red), and after (blue shades). \(t_{\rm orb}^{\rm start}-\epsilon\) represents the state of the system right before sublimation and \(t_{\rm orb}^{\rm end}+\epsilon\) right after re-condensation. The left panels represent the solid population in location A, while the middle and right panels represent location B for the resilient and many-seeds models, respectively. The area under the size distribution is normalised to 1 by the total solid mass, being the total rock mass \(M_{\rm tot}\), or \(2M_{\rm tot}\) if water ice is present. The shaded areas show the statistical uncertainties, larger for small grains due to the low resolution of the superparticle approach in this regime. Each data point in the lower panels represents the properties of a superparticle, itself representing \(N_{i}\) physical particles.
### Mass-weighted size and Stokes number
In this section, we investigate the temporal evolution of the mass-weighted average size \(\langle a\rangle_{\rm m}\) and Stokes number \(\langle{\rm St}\rangle_{\rm m}\) (calculated from Eq. 11), shown in Fig. 6. Because in the coagulation/fragmentation equilibrium most of the mass is located close to the maximum size (see Fig. 4 and Fig. 5), these quantities are a good indicator of the properties of the largest aggregates. They are also helpful in determining when the population enters a collisional equilibrium, as a constant size distribution leads to constant \(\langle a\rangle_{\rm m}\) and \(\langle{\rm St}\rangle_{\rm m}\). In the bottom panels of Fig. 6, horizontal grey lines indicate estimates of \(\langle a\rangle_{\rm m}\) in the collisional equilibrium expected in the quiescent and outburst phases of the different models. These are found by solving for the size at which equal-sized aggregates collide at \(v_{\rm f}-\delta v_{\rm f}\) (see Appendix A), where collisions begin to lead to mass loss.
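In practice, a mass-weighted average over the superparticle swarms can be computed as sketched below; the helper name is ours, and the choice of weighting mass (rock mass versus total solid mass) is our assumption:

```python
import numpy as np

def mass_weighted_average(x, N, m):
    """Mass-weighted average <x>_m of a property x (e.g. size or Stokes
    number) over swarms holding N[i] physical particles of mass m[i]."""
    w = N * m                        # total mass carried by each swarm
    return np.sum(w * x) / np.sum(w)
```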
During the quiescent phase, \(\langle a\rangle_{\rm m}\) increases with time for each model until reaching the plateau characteristic of its respective coagulation/fragmentation equilibrium. Porous aggregates grow more rapidly but to greater sizes, so that the time it takes to reach the equilibrium is similar to the compact case. At \(t_{\rm orb}^{\rm start}\), high collision rates in location A lead to a rapid adjustment, and the new scaled-down equilibrium is reached within a few years. After \(t_{\rm orb}^{\rm end}\), the population recovers the pre-outburst distribution, also within a few years, in both aggregation models. \(\langle a\rangle_{\rm m}\) thus closely matches the theoretical prediction before, during, and after the outburst, meaning that the population is in collisional equilibrium at almost all times.
The picture is more complex in location B, where the lower surface density leads to smaller collision rates. The compact (middle panel) and porous (right panel) populations are far from reaching the new equilibrium within the outburst duration. In the resilient model, we thus find pebbles in our outbursting disc that are too large for their rocky composition. Interestingly, the lowest \(\langle a\rangle_{\rm m}\) is reached a few hundred years after the event, when the largest pebbles are still depleted in water ice and fragmenting efficiently (see Fig. 4 and Fig. 5). After the water ice is redistributed through collisional mixing, \(\langle a\rangle_{\rm m}\) returns to its pre-outburst value and the collisional equilibrium is re-established.
In the many-seeds model, aggregates disintegrate into monomers and, just as in the resilient case, they do not have time to reach the new equilibrium within the outburst duration; here, however, aggregates lie below the theoretical value. After the outburst, they take longer to re-establish the collisional equilibrium than in the resilient models. We conclude that in our simulations in location B, the collisional equilibrium distribution is never appropriate to describe the dust population during the outburst, and it remains so after the outburst for a duration that depends on the model (up to 4500 yr at 5 au in B-por-m.s).
The Stokes number \(\langle{\rm St}\rangle_{\rm m}\) is shown in the top panels and displays a similar evolution to \(\langle a\rangle_{\rm m}\). We include a horizontal line at \({\rm St}=0.01\), which indicates the minimum value necessary to trigger the streaming instability at dust-to-gas ratios close to \(10^{-2}\) (Li & Youdin, 2021).
Figure 5: Size distribution function, water fraction, and density evolution of porous aggregates at different times. Same as Fig. 4.
In location A, we remain below that limit at all times. In location B, the efficient fragmentation phase of pebbles in B-comp-resi also causes the Stokes number to drop below 0.01 for a duration of almost 1000 yr. In the many-seeds models, it takes significantly longer to re-grow pebbles above \({\rm St}=0.01\): about 1500 and 3000 yr for B-comp-m.s and B-por-m.s, respectively.
We performed additional coagulation simulations with different values of the turbulence strength, \(\alpha=10^{-4}\) and \(10^{-2}\). Lower turbulence pushes all models further from reaching the outburst collisional equilibrium within \(\tau_{\rm orb}\): aggregates in the resilient model fragment less often due to lower turbulent velocities and collision rates, while in the many-seeds model grains have to re-grow to larger sizes (\(a_{\rm max}\propto\alpha^{-1}\), see Birnstiel et al., 2009) under similarly lowered collision rates. For higher turbulence strength, the reverse occurs, bringing all models closer to reaching the new equilibrium within the outburst duration. In such a case, the post-outburst phase could be similar for both sublimation models (see also the fast and intermediate adjustments in Fig. 11).
## 6 Observational Signatures
In this section, we investigate how the alteration of dust properties affects their observational signatures during and after the accretion outburst. We convert the results of our coagulation simulations into absorption opacities using the DSHARP-OPAC package from Birnstiel et al. (2018). Given the dust composition we adopted (Table 2), the optical constants are taken from Warren & Brandt (2008) for water ice, Draine (2003) for astronomical silicates, and Henning & Stognienko (1996) for troilite and refractory organics. We compute the mixed dielectric function using the Bruggeman effective medium theory, which is applicable when the different materials are homogeneously mixed with no dominant medium. Note that directly after the outburst, this may not be the case for highly water-rich grains, but the simulations show that these small particles are rapidly mixed with the rest of the population. In the porous aggregation model, we additionally use the Maxwell-Garnett rule to account for the voids arising from the porous structure (Voshchinnikov et al., 2005; Kataoka et al., 2014). Opacities of individual aggregates are computed using Mie theory, considering their unique size, composition, and porosity, and combined into the total absorption opacity \(\kappa_{\nu}^{\rm abs,tot}\) by summing over the distributions returned by the coagulation calculations3. Note that for computational optimization, we do not calculate the opacity of each individual superparticle, but rather group particles with similar mass, size, and composition. Fig. 8 displays the total absorption opacity of the population at different key times of the simulation. Being sensitive to the entire dust distribution, the absorption opacity could be affected by the lower resolution of the superparticle approach towards small grains, which displays significant statistical noise after re-condensation in the resilient models (see Fig. 4 and Fig. 5). For B-comp-resi and B-por-resi (central panels), we thus performed opacity computations for three independent runs and include the statistical uncertainties as shaded areas. They are barely noticeable, and hence have a negligible impact on our results.
Footnote 3: Following Equation (6) from Birnstiel et al. (2018), the denominator equals the total solid mass of the population, being the total rock mass \(M_{\rm tot}\), or \(2M_{\rm tot}\) if water ice is present.
We can also compute \(\beta\), the spectral index of the dust opacity \(\kappa_{\nu}\propto\nu^{\beta}\), which is widely used in the literature to trace the properties of millimetre-sized particles in protoplanetary discs (Beckwith et al., 1990). In this paper, \(\beta\) is computed using 1.3 and 3 mm, corresponding respectively to Band 6 and 3 of The Atacama Large Millimeter/submillimeter Array (ALMA), as it is the most powerful tool to study protoplanetary discs and probe particle properties near the disc midplane (Andrews, 2020). The spectral index is then given by
\[\beta=\frac{\log(\kappa_{1.3{\rm mm}}/\kappa_{3{\rm mm}})}{\log(\nu_{1.3{\rm mm}}/\nu_{3{\rm mm}})}. \tag{20}\]
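A minimal sketch of equation (20), with the ALMA Band 6 and 3 frequencies evaluated from the wavelengths (the function name is ours), reads:

```python
import numpy as np

C_LIGHT = 2.99792458e10  # speed of light [cm/s]

def beta_mm(kappa_1p3mm, kappa_3mm):
    """Opacity spectral index between 1.3 mm and 3 mm (equation 20),
    for an opacity law kappa_nu ~ nu^beta."""
    nu_1p3 = C_LIGHT / 0.13   # frequency at 1.3 mm [Hz]
    nu_3 = C_LIGHT / 0.3      # frequency at 3 mm [Hz]
    return np.log(kappa_1p3mm / kappa_3mm) / np.log(nu_1p3 / nu_3)
```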
To illustrate how dust coagulation can impact the opacity index, we first show \(\beta\) as a function of the maximum particle size in Fig. 7, calculated using a simplified power-law distribution with a cut-off at \(a_{\rm max}\) and a slope \(q=3.5\). This plot offers a comprehensive overview of the significant impact that particle size, composition, and porosity have on the opacity index. Since the outburst affects each of these properties (see Fig. 4 and Fig. 5), it results in temporal variations of the opacity index, which we show in Fig. 9, computed from the results of our coagulation simulations.
We begin our analysis with location A. Early on, \(a\ll 1\) mm and \(\beta\) is constant, close to 1.7, as for dust grains in the ISM (Finkbeiner et al., 1999). It then diverges depending on the aggregation model. For compact growth, \(\beta\) peaks when the size distribution approaches \(a\approx\lambda/2\pi\) and resonances amplify the opacity (Fig. 7). The collisional equilibrium distribution being close to the resonance, \(\beta\) remains relatively large. The outburst leads to a strong variation in \(\beta\) due to the fragmentation of aggregates in the resonant size range (see Fig. 4). This can also be seen in the total absorption opacity, which is higher in the outburst phase, except above \(5\times 10^{-2}\) cm due to the redistribution of mass into smaller fragments. For porous aggregates, the opacity is sensitive to the mass-to-area ratio (Kataoka et al., 2014), which mostly varies when the population reaches efficient compression mechanisms early in its evolution (see light-grey dots in Fig. 5). At millimetre wavelengths, the resonant amplification of the opacity is damped (Fig. 7), and \(\beta\) remains mostly constant throughout the quiescent and outbursting phases.
In location B, for compact growth, we also observe a constant \(\beta\) followed by a resonant amplification. Then, \(\beta\) decreases as the population grows above millimetre sizes to settle at the coagulation/fragmentation equilibrium. When the outburst starts, the evolution diverges depending on the sublimation model. In the many-seeds case (B-comp-m.s), aggregates fall apart and \(\beta\approx 1.7\). Aggregates recover with time and the resonant amplification is observed again after 1200 yr, with \(\beta\approx 2.9\). In Fig. 8, we can see the water features reappearing strongly after the outburst due to the large amount of small icy particles. They dampen with collisional mixing before the spectrum recovers its pre-outburst form. For resilient aggregates (B-comp-resi), \(\beta\) decreases sharply when water ice sublimates. It increases during the outburst due to the enhanced fragmentation of dry aggregates above millimetre sizes. Fig. 8 also shows this behaviour with an overall increase in the total absorption opacity, along with the disappearance of water features. After the outburst, water features reappear and \(\beta\) peaks at about 2.3 a few hundred years after the event, when pebbles stop fragmenting. As in the pre-outburst phase, \(\beta\) slightly re-increases when the collisional equilibrium is found, although this takes longer than in the many-seeds case. The recovery appears longer in Fig. 9 than in Fig. 6, as \(\beta\) depends on the properties of the entire distribution, which take more time to recover than \(\langle a\rangle_{\rm m}\) alone.
Concerning the porous model, the quiescent phase behaves similarly to location A (A-por), with the resonant amplification being damped. However, aggregates in B reach larger sizes (\(>10^{2}\) cm), where \(\beta\) starts to decrease (Fig. 7). For B-por-m.s, the evolution
of \(\beta\) is reset. After 1000 yr, the population is still in the hit-and-stick regime with a constant mass-to-area ratio, which explains why the post-outburst opacity spectra are identical. For B-por-resi, \(\beta\) does not exhibit major variations, except for a slight increase during the efficient fragmentation phase of the outburst. For the four models in location B, we show in Fig. 10 the temporal evolution of the optical depth at 1.3 mm, calculated as \(\tau_{\rm 1.3~{}mm}=\kappa_{\rm 1.3~{}mm}^{\rm abs,tot}\Sigma_{\rm d}(5~{\rm au})\), i.e. assuming our midplane simulations represent the entire disc column well. Note that the solid surface density is doubled when water is in ice form, as the dust-to-ice ratio is unity (Sect. 2.2). We see that the optical depth remains below unity throughout the simulations, meaning that the emission is optically thin and \(\beta\) effectively connects to the dust size distribution and properties (Testi et al., 2014).
In the end, we see that the outburst induces a wide range of observable signatures, highly dependent on the size distribution, aggregation model, and response to sublimation. The recovery of compact aggregates in zone B leads to particularly strong variations in \(\beta\), even long after the outburst ended. Porous aggregates, however, lack strong variations at millimetre wavelengths due to the absence of resonant amplification. We discuss these features in Sect. 7.2. Note that in our simulations the signals appear in the aftermath of the outburst, but for a longer outburst or a shorter coagulation timescale, they may even emerge during the event.
## 7 Discussion
### Outburst and post-outburst adjustments
As we saw in Sect. 5, accretion outbursts modify the disc temperature and the properties of dust particles, and a certain time span is needed for the population to respond to these changes, and to recover to
Figure 6: Mass-weighted average size \(\langle a\rangle_{\rm m}\) and Stokes number \(\langle{\rm St}\rangle_{\rm m}\) vs. time. The vertical lines denote respectively \(t_{\rm orb}^{\rm start}\) and \(t_{\rm orb}^{\rm end}\). The horizontal grey lines show: the typical value needed to trigger the streaming instability following Li & Youdin (2021) (top), and the theoretical position of the collisional equilibrium for each model in the quiescent and outburst phases (bottom). The logarithmic scale is reset at \(t_{\rm orb}^{\rm start}\) to better discern the variations caused by the event. The population being well resolved at large sizes (see Fig. 4 and Fig. 5), we do not include the statistical noise calculated from independent runs.
Figure 7: \(\beta\) as a function of maximum particle size, assuming a power-law size distribution \(n(a)\propto a^{-3.5}\) from \(10^{-4}\) cm to \(a_{\rm max}\) (more details in Sect. 3 of Birnstiel et al., 2018). The two compositions correspond to the mixtures in Table 2. For the porous case, we use \(\rho_{\rm int}=10^{-2}\) g cm\({}^{-3}\), similar to what we obtain in our simulations (Fig. 5).
the initial (quiescent) equilibrium after the outburst has passed. In this section, we investigate the adjustment timescale and compare it to the outburst duration and rate. We focus the discussion on zone B only, as zone A is characterised by a relatively fast adjustment (Fig. 6), thanks to high collision rates in the inner disc, and by smaller size variations (Sect. 5.1.1).
Depending on the coagulation physics and outburst properties, we summarise in Fig. 11 three adjustment cases. I) In the fast adjustment scenario, the dust population in zone B adapts rapidly to the scaled-down bare-rock distribution during the outburst, and recovers likewise after the event. Solids thus spend most of their evolution in the corresponding collisional equilibrium. This situation may arise, for example, due to an intrinsically high solid density in the disc, strong turbulence (see Sect. 5.3), or a weaker outburst amplitude (e.g.
Figure 8: Absorption opacity of the grain distribution at different key times for the different locations and models, following the same colour code as Fig. 4 and Fig. 5. \(t_{\rm orb}^{\rm start}-\epsilon\) represents the state of the population right before sublimation and \(t_{\rm orb}^{\rm end}+\epsilon\) directly after re-condensation. The left panels represent the population in location A, while the middle and right panels represent location B for the resilient and many-seeds models, respectively. The wavelengths used to compute \(\beta\) are denoted with vertical lines. We included the statistical uncertainties on the middle panels similarly to Fig. 4 and Fig. 5, as the absorption opacity is sensitive to the entire size distribution, which was partly unresolved at small sizes for these two models after the outburst.
Figure 9: \(\beta\) vs. time for the different locations and models. The vertical lines denote respectively \(t_{\rm orb}^{\rm start}\) and \(t_{\rm orb}^{\rm end}\). The logarithmic scale is reset at \(t_{\rm orb}^{\rm start}\) to better discern the features of the outburst, similarly to Fig. 6. The population being well resolved at millimetre wavelengths (see Fig. 8), we do not include the statistical noise of \(\beta\) calculated from three independent runs.
EXor-type accretion events) keeping the excited snowline relatively close to the host star.
II) In the intermediate adjustment scenario, the dust population has time to find the new collisional equilibrium during the outburst, but still spends a large fraction of the event out of equilibrium. The dust content of zone B recovers to the quiescent equilibrium before the occurrence of the next outburst. A complete recovery is even more likely in older Class II discs, for which the time span between outbursts is longer (\(\Delta t_{\rm orb}\approx 10^{5}\) yr, Contreras Peña et al., 2019).
In these first two cases, the difference between the resilient and many-seeds models is only visible during the outburst, for the temporal fraction where the largest aggregates are respectively above or below the bare-rock maximum size \(a_{\rm max}^{\rm rock}\). As a consequence, unlike in our simulations, most features in \(\beta\) would only be visible during the outburst phase, and re-condensation would behave similarly in both situations, given that they end the outburst with similar size distributions. These cases also illustrate the model of Schoonenberg et al. (2017), where the re-coagulation of silicates after they fell apart (many-seeds model) created a visible structure at 42 au in the outbursting disc V883 Ori.
III) Finally, in the slow adjustment scenario, the dust population throughout zone B is out of equilibrium for the entirety of the outburst duration and for a significant fraction of the quiescent phase before the occurrence of the next outburst. Part of the disc may not even recover at all, leading to an unrecovered annulus within which the dust population is perpetually out of equilibrium. Our local simulations in location B at 5 au fall between cases II and III: the dust population does not adjust to \(a_{\rm max}^{\rm rock}\) within the outburst duration, but does recover on a timescale shorter than \(10^{4}\) yr, the potential time to the next event assuming a constant rate.
If the resilient model applies, an intermediate or slow adjustment may provide an explanation for the recent observations by Liu et al. (2021) of dry millimetre-sized pebbles inside the excited water snowline of the outbursting disc FU Ori. They explain their presence by invoking a higher resistance of bare rocks to fragmentation than previously thought; instead, we suggest that these large dry pebbles may simply not have had enough time to experience enough fragmenting collisions (Fig. 6). It is interesting to note how different models (resilient and many-seeds) can provide satisfying explanations in two different discs (resp. FU Ori and V883 Ori). Looking at a larger sample of discs, outbursting systems could provide laboratories for further exploring the behaviour of dust aggregates upon sublimation, which is a key aspect in planetesimal formation scenarios at the water snowline (e.g. Schoonenberg & Ormel, 2017).
Accretion outbursts being probably frequent and widespread in most forming systems (Dunham & Vorobyov, 2012; Audard et al., 2014), understanding the recovery process of dust grains after an outburst is of crucial importance to interpret observed discs, and to further constrain outburst properties. We define the recovery timescale \(t_{\rm rec}\) as the time needed for \(\langle a\rangle_{\rm m}\) to grow from its value when the outburst ends back to the quiescent collisional equilibrium. At 5 au, in the resilient model, Fig. 6 shows it takes approximately \(t_{\rm rec}\approx 1000\) yr for B-comp-resi and 2000 yr for B-por-resi to recover. In the many-seeds model, it takes 2000 yr for B-comp-m.s and 4500 yr for B-por-m.s, longer than in the resilient case. We note that these values are sensitive to our disc and outburst models. As previously mentioned, a longer outburst would give many-seeds aggregates more time to grow, hence lowering \(t_{\rm rec}\), unlike the resilient case, where a longer fragmentation phase would decrease the maximum size and thus increase \(t_{\rm rec}\).
Based on our definition, we can express the recovery timescale\({}^{4}\) as (Birnstiel et al., 2011)
Footnote 4: This expression assumes that the dominant source for relative velocity is turbulence, and considers also the reduced scale-height of dust grains with \(St>\alpha\) resulting from vertical settling. In that specific case, the final growth timescale does not depend on the turbulence \(\alpha\). While our simulations do not include vertical settling (Sect.3.2), we opt for using this expression for the growth timescale to translate our results to the outer disc, where settling may be more pervasive.
\[t_{\rm rec}\approx\frac{1}{\delta_{\rm d2g}\Omega}\ln\left(\frac{\langle a \rangle_{\rm m}(t_{\rm orb}^{\rm start})}{\langle a\rangle_{\rm m}(t_{\rm orb }^{\rm end})}\right), \tag{21}\]
which we generalise to any heliocentric distance \(r\) using our results at 5 au as a point of reference,
\[t_{\rm rec}(r)\approx t_{\rm rec}(5\mathrm{au})\frac{\delta_{\rm d2g}(5\mathrm{ au})}{\delta_{\rm d2g}(r)}\left(\frac{r}{5\mathrm{au}}\right)^{3/2}\frac{\ln \left(\frac{\langle a\rangle_{\rm m}(t_{\rm orb}^{\rm start},r)}{\langle a \rangle_{\rm m}(t_{\rm orb}^{\rm end},r)}\right)}{\ln\left(\frac{\langle a \rangle_{\rm m}(t_{\rm orb}^{\rm start},5\mathrm{au})}{\langle a\rangle_{\rm m }(t_{\rm orb}^{\rm end},5\mathrm{au})}\right)}. \tag{22}\]
Considering the dust-to-gas ratio to be constant throughout the disc, and neglecting the last term, as the slight variation of the size ratio within the natural logarithm would only impact \(t_{\rm rec}\) by a factor of a few, we find
\[t_{\rm rec}(r)\approx t_{\rm rec}(5\mathrm{au})\left(\frac{r}{5\mathrm{au}} \right)^{3/2}. \tag{23}\]
Assuming outbursts occur regularly every \(\Delta t_{\rm orb}\), we can estimate the fraction of time during which the dust population is out of local coagulation/fragmentation equilibrium as \(t_{\rm rec}(r)/\Delta t_{\rm orb}\). Using \(\Delta t_{\rm orb}=10^{4}\) yr, this fraction reaches 45% for B-por-m.s.
We can also find the position of the critical radius \(r_{\rm crit}\), defined as the heliocentric distance outside which \(t_{\rm rec}(r)>\Delta t_{\rm orb}\). If the outburst is sufficiently strong to push the water snowline outside the critical radius, an unrecovered annulus of width \(r_{\rm SL}^{\rm orb}-r_{\rm crit}\) is formed, within which the dust distribution never reaches the coagulation/fragmentation equilibrium (see Fig. 11). The presence and width of the annulus are thus determined by the independent combination of outburst properties
Figure 10: Optical depth at 1.3 mm vs. time for the different models in location B. The vertical lines denote \(t_{\rm orb}^{\rm start}\) and \(t_{\rm orb}^{\rm end}\), respectively. The logarithmic scale is reset at \(t_{\rm orb}^{\rm start}\) (see also Fig. 6). Since the population is well resolved at millimetre wavelengths (see Fig. 8), we do not include the statistical noise from independent runs.
and coagulation physics. We find \(r_{\rm crit}=23.2\) au and \(r_{\rm crit}=14.6\) au for B-comp-resi and B-por-resi respectively, and \(r_{\rm crit}=14.6\) au and \(r_{\rm crit}=8.5\) au for B-comp-m.s and B-por-m.s. These values are confirmed by the results of additional coagulation simulations not shown here. For our moderate-amplitude outburst, only aggregates obeying the many-seeds porous model would have an unrecovered annulus, located between \(r_{\rm crit}=8.5\) au and \(r_{\rm SL}^{\rm orb}=13\) au. Other systems have been observed undergoing much stronger outbursts, like V883 Ori, where HCO\(^{+}\) observations suggest that the excited water snowline is located as far out as \(\approx 100\) au (Leemker et al., 2021). In such a case, a considerable portion of the disc would not be expected to recover between repeated accretion outbursts.
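To make the scaling concrete, the following Python sketch inverts Eq. 23 to recover the critical radii quoted above; it assumes only the recovery timescales at 5 au reported in this section and \(\Delta t_{\rm orb}=10^{4}\) yr, so the disc model enters solely through \(t_{\rm rec}(5\,{\rm au})\).

```python
# Sketch: invert Eq. 23 to get the critical radius r_crit, defined by
# t_rec(r_crit) = Delta_t_orb, i.e. r_crit = 5 au * (Delta_t_orb / t_rec(5 au))^(2/3).
t_rec_5au = {          # recovery timescales at 5 au in yr (Sect. 7.1)
    "B-comp-resi": 1000.0,
    "B-por-resi": 2000.0,
    "B-comp-m.s": 2000.0,
    "B-por-m.s": 4500.0,
}
dt_orb = 1.0e4         # assumed interval between successive outbursts (yr)

for model, t5 in t_rec_5au.items():
    r_crit = 5.0 * (dt_orb / t5) ** (2.0 / 3.0)   # au, from Eq. 23
    print(f"{model}: r_crit = {r_crit:.1f} au, "
          f"t_rec/dt_orb = {t5 / dt_orb:.0%} at 5 au")
```

With these inputs the sketch returns 23.2, 14.6, 14.6 and 8.5 au, matching the values quoted above.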
### Dust emission as a past outburst tracer
As mentioned in Sect. 1, one of our objectives is to investigate whether the alteration of dust properties leaves a durable observational signature on discs. This would allow us to trace past outbursts, and thus build a better statistical estimate of the sources undergoing such events, to better understand their cause, strength and frequency. If most discs undergo repeated accretion outbursts during their lifetime, as suggested by the episodic accretion scenario (Dunham and Vorobyov, 2012; Audard et al., 2014), then understanding such signatures would be of even greater importance for any protoplanetary disc observation. In this section, we focus on the evolution of \(\beta\), the spectral index of the dust opacity at millimetre wavelengths (see Sect. 6), as it is a quantity often accessible from ALMA observations.
We saw in Fig. 9 that the growth of compact aggregates is punctuated by a resonant amplification of \(\beta\) around millimetre sizes. As the coagulation timescale increases with heliocentric distance, the observation of a non-outbursting disc at a time \(t\) should display a resonant peak at a specific radius \(r\), propagating outward with time. Since the resonance is damped for porous aggregates, Kataoka et al. (2014) predicted that the observation of such a peak could indicate the presence of compact growth in discs. In an outbursting system, however, an additional resonant signal could be present between the quiescent and excited snowline positions, due to the alteration of compact grains (Fig. 9). This secondary signal would be visible during the outburst (for fast and intermediate adjustments, Fig. 11) or after it (slow adjustment). Given the relatively short duration of outbursts compared to the coagulation timescale, it would most likely
Figure 11: Schematic illustrating the adjustment of the dust size distribution in the radial direction during and after the outburst, for the resilient and many-seeds models. (1) Fast adjustment: the size distribution adjusts rapidly to the outburst conditions and recovers rapidly afterwards, so that dust grains are in collisional equilibrium for most of the outburst duration and quiescent phase. (2) Intermediate adjustment: the outer part of the disc responds less rapidly, hence it is not in equilibrium during most of the outburst duration and quiescent phase. The dust population entirely recovers before the next event. (3) Slow adjustment: all solids between the quiescent and excited snowline positions fail to reach collisional equilibrium during the outburst. Parts of the outer disc do not recover before the next event, leading to an unrecovered annulus whose width depends on the coagulation physics and outburst properties.
be present in the aftermath of the event, during the recovery phase. The secondary signal would always be visible in zone B of a disc hosting an unrecovered annulus (Fig. 11).
We therefore speculate that the observation of two resonant peaks at two radii of a protoplanetary disc could trace the occurrence of a past outburst event, in addition to tracking the compact structure of dust grains. The shape and height of the secondary peak could help predict whether the compact aggregates follow the resilient or many-seeds model, thus using outbursting objects as laboratories to infer how dust responds to sublimation. From predictions of the position of the quiescent snowline, the time elapsed since the outburst could be constrained, along with a lower estimate of the excited snowline position and the strength of the accretion outburst.
However, it is important to note a few factors that may complicate this picture. First, even fairly moderate porosities are already enough to affect the appearance of the resonance peak. As shown for example in Fig. 3 of Miotello et al. (2022), values corresponding to \(\rho_{\rm int}\approx 10^{-1}\ {\rm g\ cm^{-3}}\) are sufficient to dampen the resonance peak. Second, even in the compact scenario several disc parameters may severely influence the particle size distribution and modify the temporal evolution of \(\beta\) (Birnstiel et al., 2011). The turbulence, notably, can have a great impact, as higher turbulence leads to larger relative velocities and a lower maximum size in the coagulation/fragmentation equilibrium (Birnstiel et al., 2011). We present in Fig. 12 the temporal evolution of \(\beta\) for the model B-comp-resi using different values of the turbulence parameter \(\alpha\). For weaker turbulence (\(\alpha=10^{-4}\)), the size distribution reaches larger \(a_{\rm max}\), leading to smaller \(\beta\) (Fig. 7). The post-outburst variations are weaker, and span a longer duration (\(\approx 5000\) yr). For stronger turbulence (\(\alpha=10^{-2}\)), \(a_{\rm max}\) is located in the resonance size range, leading to a higher opacity index. It peaks at \(\beta\approx 3.5\) when the composition changes, as the opacity index of the rocky mixture is higher (Fig. 7). Similarly to A-comp, fragmentation leads to a mass loss of mm-sized aggregates and a sharp decrease in \(\beta\), unlike the \(\alpha=10^{-3}\) and \(\alpha=10^{-4}\) cases. The post-outburst recovery is rapid, within \(200\) yr. We note that with settling effects included, the dust scale-height would also vary with the turbulence, and the recovery timescale may differ from the results of our local model. Under the assumption of Eq. 21, the recovery timescale would be independent of the turbulence strength.
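As an illustration of this turbulence dependence, the sketch below evaluates the standard fragmentation-limited maximum size of Birnstiel et al. (2012), \(a_{\rm frag}\propto\Sigma_{\rm g}v_{\rm frag}^{2}/(\rho_{\rm int}\,\alpha\,c_{\rm s}^{2})\). All disc values are illustrative placeholders, not the parameters of our simulations; only the \(1/\alpha\) scaling matters here.

```python
import math

# Sketch: fragmentation-limited maximum size (Birnstiel et al. 2012),
# a_frag ~ (2 / 3pi) * Sigma_g / (rho_int * alpha) * (v_frag / c_s)^2,
# illustrating why stronger turbulence (larger alpha) lowers a_max.
# All values below are illustrative placeholders in cgs units.
sigma_g = 150.0    # gas surface density [g cm^-2] (assumed)
rho_int = 1.6      # internal density of compact grains [g cm^-3] (assumed)
c_s = 6.0e4        # sound speed [cm s^-1] (assumed)
v_frag = 1.0e2     # fragmentation velocity of rocky grains [cm s^-1] (assumed)

for alpha in (1e-4, 1e-3, 1e-2):
    a_frag = ((2.0 / (3.0 * math.pi)) * sigma_g / (rho_int * alpha)
              * (v_frag / c_s) ** 2)
    print(f"alpha = {alpha:.0e}: a_frag ~ {a_frag:.3f} cm")
```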
disc lifetime, representing respectively 20% and 45% for the compact and porous growth at 5 au. In the case of a slow adjustment (see Fig. 11), the formation of planetesimals in the unrecovered annulus between \(r_{\rm crit}\) and \(r_{\rm SL}^{\rm orb}\) may be completely inhibited for the entirety of the disc lifetime.
### Limitation of the local approach
The clear limitation of our Monte Carlo coagulation code lies in its local approach. Even though we consider the radial drift for the calculation of the relative velocity, we cannot take into account that solids may be removed from their environment due to efficient drift, notably around \(St\approx 1\). In particular, if \(t_{\rm drift}<t_{\rm grow}\) for sizes below the fragmentation limit, then the growth would be halted by the radial drift, i.e. the radial drift barrier.
We estimate the growth timescale at a given size as \(t_{\rm grow}=a/(da/dt)\) and the corresponding drift timescale as the orbital radius divided by the radial drift velocity. In the inner disc, the condition \(t_{\rm drift}>t_{\rm grow}\) is largely satisfied for all sizes. In location B, calculations show that \(t_{\rm drift}>10^{5}\) yr, i.e. the length of the simulation, for almost all sizes. Since the drift timescale is typically larger than 1) the time needed for the population to reach its coagulation/fragmentation equilibrium, 2) the outburst duration, and 3) the recovery timescale, we do not expect radial drift to significantly alter our findings. However, for the very largest particles close to the fragmentation barrier - with sizes \(>0.2\) cm and \(>1400\) cm for compact and porous aggregates respectively - the drift timescale can be as short as \(t_{\rm drift}\lesssim 10^{4}\) yr. While this is still long compared to the outburst duration and recovery time, considerable radial drift may take place between outbursts. Including the effects of radial transport will be the focus of future work.
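A minimal order-of-magnitude sketch of this comparison, assuming a Keplerian disc around a solar-mass star, the drift law \(v_{\rm drift}\approx 2\eta v_{\rm K}St/(1+St^{2})\) (Nakagawa et al. 1986), and round placeholder values for the pressure-gradient parameter \(\eta\) and the dust-to-gas ratio \(\epsilon\). The timescales quoted in the text come from the full disc model, not from these round numbers.

```python
import math

# Sketch: drift vs. growth timescales at 5 au around a 1 M_sun star.
# t_grow ~ 1 / (eps * Omega), as in the prefactor of Eq. 21; eta and eps
# are assumed round numbers for illustration only.
AU, YR = 1.496e13, 3.156e7           # cm, s
r = 5.0 * AU
v_K = 2.98e6 / math.sqrt(5.0)        # Keplerian speed [cm s^-1]
Omega = v_K / r                      # orbital frequency [s^-1]
eta, eps = 2.5e-3, 1.0e-2            # pressure-gradient parameter, dust-to-gas

t_grow = 1.0 / (eps * Omega) / YR
print(f"t_grow ~ {t_grow:.0f} yr")
for St in (1e-3, 1e-2, 1e-1, 1.0):
    v_drift = 2.0 * eta * v_K * St / (1.0 + St**2)
    print(f"St = {St:.0e}: t_drift ~ {r / v_drift / YR:.1e} yr")
```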
### Conservation of the total water mass
Since we are considering a closed volume of dust and gas in a disc, the total mass of rock and water (in ice or vapour) should be conserved in time. The total rock mass conservation is ensured by our definition of the swarms, as the mass of each swarm \(M_{\rm swm}\) is kept constant through the growth by adjusting the number of particles \(N_{i}\) in that swarm. However, because the superparticle approach only updates the \(i\)-th particle involved in a collision, fluctuations in the total water mass may appear if the colliding pair (\(i\), \(j\)) has dissimilar water fractions, which happens for example in location B after the outburst. After the water content is re-distributed through the population by collisional mixing, the fluctuations disappear and the total water mass stabilises. The simulations used in this paper, with \(n=10^{4}\) superparticles, displayed fluctuations of the total water mass averaging below 1% overall. For B-por-resi, which displayed the broadest water distribution (Fig. 5), the fluctuations for an individual run can be as high as 5.7%, but averaging over the three independent runs leads to 0.3%. With such values, the variation of the total water mass does not have a noticeable impact on our results, but we note that using the Monte Carlo approach originally proposed by Ormel et al. (2007) could remove such statistical fluctuations, as the properties of both colliding particles are updated.
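A toy sketch of this bookkeeping, assuming a single sticking collision between two swarms with different water fractions; it is meant only to illustrate why updating the \(i\)-th particle alone lets the total water mass fluctuate, not to reproduce our coagulation code.

```python
# Toy sketch: superparticle bookkeeping with fixed swarm mass M_swm and
# N_i = M_swm / m_i. Only swarm i is updated in a collision, so the total
# water mass (sum over swarms of M_swm * f_w) can fluctuate when the pair
# has dissimilar water fractions f_w.
M_swm = 1.0
swarms = [{"m": 1.0, "f_w": 0.5},    # ice-rich swarm
          {"m": 1.0, "f_w": 0.0}]    # dry swarm

def total_water(swarms):
    # N_i = M_swm / m_i, so each swarm contributes M_swm * f_w.
    return sum(M_swm * s["f_w"] for s in swarms)

print("water mass before:", total_water(swarms))    # 0.5

# Sticking collision (i=0, j=1): only particle i is updated.
i, j = 0, 1
m_new = swarms[i]["m"] + swarms[j]["m"]
f_w_new = (swarms[i]["m"] * swarms[i]["f_w"]
           + swarms[j]["m"] * swarms[j]["f_w"]) / m_new
swarms[i] = {"m": m_new, "f_w": f_w_new}             # swarm j is untouched

print("water mass after: ", total_water(swarms))     # 0.25, a fluctuation
```

Repeated collisions redistribute the water and such fluctuations average out, which is the behaviour described above.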
### Other ice species
Throughout this manuscript, we focused only on the impact of water ice. We note that including other volatile species and snowlines (e.g. CO, CO\({}_{2}\)) could be an interesting direction for future work. However, there is still a large parameter space to explore with laboratory experiments concerning the impact of multiple ice species on the collisional properties of dust grains, and whether the many-seeds response can be extended to the sublimation of other abundant ices. We also note that dust in high-temperature environments (\(T>1200\) K) is thought to become stickier (Pillich et al., 2021). Accretion outbursts could provide the necessary temperature to boost growth in a more extended fraction of the inner disc. Whether that may offer a pathway for the formation of terrestrial planets could be a key aspect to explore.
Figure 13: Mass-weighted average of the water fraction \(f_{\rm w}\) for different Stokes number bins of compact (left) and porous (right) aggregates in the resilient model. The many-seeds model is not represented as it displays negligible variation of the water fraction. The logarithmic scale is reset at \(t_{\rm orb}^{\rm start}\) (see also Fig. 6). The statistical uncertainties are represented as shaded areas and are obtained similarly to Fig. 4 and Fig. 5.
## 8 Summary and outlook
We have developed a local coagulation model based on the superparticle approach (Zsom & Dullemond, 2008) to simulate dust growth/fragmentation in a disc undergoing an FUor-type accretion outburst (Sect. 4). We followed the evolution of grain properties, and considered multiple structural designs for the aggregation and response to sublimation (summarised in Fig. 3). We applied our model at two disc locations to explore the impacts of the outburst with and without compositional changes (Sect. 2). Coagulation results (Sect. 5) were then converted into absorption opacity to investigate whether the alteration of dust properties has implications for the observation of protoplanetary discs (Sect. 6). Our main findings are summarised as follows:
1. The accretion outburst affects the size distribution in the entire disc and for all dust models (e.g. Sect. 5.1 and Sect. 5.2). The most dramatic size alteration occurs in zone B, and when particles fall apart upon water sublimation (i.e. the many-seeds model). If aggregates survive sublimation (i.e. the resilient model), the size reduction is driven by fragmentation and depends on the time required by pebbles to recover their initial water content. In zones A and C, the size alteration is smaller as the change in temperature is not accompanied by modifications of dust properties (see Sect. 5.1.1).
2. Only solids in zone A adjust to the new collisional equilibrium before the end of the outburst (Sect. 5.3). In zone B, the size distribution takes longer to adjust and its peak is not well characterised by a theoretical fragmentation limit (Fig. 6). In the many-seeds model, aggregates are generally smaller than the theoretical maximum size \(a_{\rm max}^{\rm rock}\). In the resilient scenario, aggregates will instead be too large for their ice-free composition (see slow adjustment in Fig. 11). The latter may offer an explanation for the observation of large dry pebbles in FU Ori by Liu et al. (2021).
3. Re-condensation leads to a heterogeneous distribution of water, preferentially depositing ice on the small grains dominating the total surface area (Sect. 5.1 and Sect. 5.2), which diverges strongly from the constant \(f_{\rm w}\) expected in non-outbursting systems. The water fraction and internal density distributions are most affected if aggregates have a broad size range at the time of re-condensation. This is the case in the resilient model, and could be the case in the many-seeds model if e.g. the outburst is longer (\(\tau_{\rm orb}>100\) yr) or the turbulence stronger (\(\alpha>10^{-3}\)) than in our simulations (Sect. 5.3). The time needed to recover the pre-outburst water distribution depends on the efficiency of collisional mixing between pebbles and dust grains, reaching more than \(1000\) yr in the porous resilient model at 5 au (Fig. 13).
4. After the accretion outburst, the population returns to the initial equilibrium on a timescale that depends on the outburst duration, coagulation physics, aggregation model, and the response to sublimation (Sect. 7.1). In our simulations at 5 au, it takes up to \(4500\) yr for porous many-seeds aggregates (Fig. 6). Depending on how the recovery timescale compares to the outburst rate, there may be portions of the disc where solids never reach coagulation/fragmentation equilibrium (i.e. unrecovered annulus, see Fig. 11).
5. The changes in size distribution and ice content together result in a complex response of the absorption opacity (Fig. 8), also visible at millimetre wavelengths through the opacity index \(\beta\) (Fig. 9). Dust emission behaves quite differently depending on whether aggregates have a compact or a porous structure (Kataoka et al., 2014). At millimetre wavelengths, emission is optically thin at 5 au (Fig. 10).
6. If dust particles are compact, the opacity index \(\beta\) would be a good indicator of their alteration by the outburst. In our simulations, the recovery of aggregates leads to a sharp increase of \(\beta\) after the event, reaching \(\beta\approx 2.9\) after \(1200\) yr in the many-seeds case. This observational feature may provide a way to track past accretion outbursts in protoplanetary discs, and improve our statistical sample of such events. In addition, the distinct profiles associated with the resilient and many-seeds models could allow us to determine how aggregates actually respond upon sublimation, making outbursting objects important laboratories for exploring the structure of dust particles (Sect. 7.2).
7. The formation of planetesimals is impacted by the outburst (Sect. 7.3). In the resilient case, efficient fragmentation leads to a mass loss of large pebbles, which can lower the chance of triggering planetesimal formation through the streaming instability. If planetesimals do form, their properties will be set by the altered properties of pebbles, i.e. ice-free during the outburst, and ice-poor after it, for a duration dependent on the efficiency of collisional mixing with ice-rich grains. It would additionally lead to a radial composition gradient in their structure (Fig. 13). In the many-seeds case, their formation through the streaming instability is inhibited for the time required to re-grow large pebbles (up to \(4500\) yr in the porous model).
In summary, our simulations have demonstrated how FUor-type accretion outbursts can alter the collisional evolution of dust and ice in protoplanetary disc midplanes, leading to changes in e.g., the ice distribution and maximum size that persist long after the outburst has faded. Given that most systems are thought to experience such frequent outbursts during their evolution, as suggested in the episodic accretion scenario (Dunham & Vorobyov, 2012; Audard et al., 2014), we stress that considering their impact on dust evolution is a key aspect to further understand the structure of protoplanetary discs and the process of planet formation. Investigating further the recovery front in the radial profile of the disc and comparing with observations will be the focus of follow-up works.
## Acknowledgements
We are grateful to the anonymous reviewer for their thorough and insightful comments which helped improve the manuscript. We thank Enrique Macias for useful discussions regarding interpretations of ALMA observations and the opacity index, and David J. Simon for helpful comments on the design of the schematics present in the manuscript. This project has made use of the package DSHARP-OPAC (Birnstiel et al., 2018), along with the following Python packages: Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), and pandas (McKinney et al., 2010).
## Data availability
The particle properties generated with the coagulation simulations used in this paper will be shared upon reasonable request. The DSHARP-OPAC6 package developed by Birnstiel et al. (2018) is publicly available.
Footnote 6: [https://github.com/birnstiel/dsharp_opac](https://github.com/birnstiel/dsharp_opac)
|
2303.06102 | Bootstrapping Dynamic Distance Oracles | Designing approximate all-pairs distance oracles in the fully dynamic setting
is one of the central problems in dynamic graph algorithms. Despite extensive
research on this topic, the first result breaking the $O(\sqrt{n})$ barrier on
the update time for any non-trivial approximation was introduced only recently
by Forster, Goranci and Henzinger [SODA'21] who achieved $m^{1/\rho+o(1)}$
amortized update time with a $O(\log n)^{3\rho-2}$ factor in the approximation
ratio, for any parameter $\rho \geq 1$.
In this paper, we give the first constant-stretch fully dynamic distance
oracle with a small polynomial update and query time. Prior work required
either at least a poly-logarithmic approximation or much larger update time.
Our result gives a more fine-grained trade-off between stretch and update time,
for instance we can achieve constant stretch of $O(\frac{1}{\rho^2})^{4/\rho}$
in amortized update time $\tilde{O}(n^{\rho})$, and query time
$\tilde{O}(n^{\rho/8})$ for a constant parameter $\rho <1$. Our algorithm is
randomized and assumes an oblivious adversary.
A core technical idea underlying our construction is to design a black-box
reduction from decremental approximate hub-labeling schemes to fully dynamic
distance oracles, which may be of independent interest. We then apply this
reduction repeatedly to an existing decremental algorithm to bootstrap our
fully dynamic solution. | Sebastian Forster, Gramoz Goranci, Yasamin Nazari, Antonis Skarlatos | 2023-03-10T17:36:36Z | http://arxiv.org/abs/2303.06102v1 | # Bootstrapping Dynamic Distance Oracles
###### Abstract
Designing approximate all-pairs distance oracles in the fully dynamic setting is one of the central problems in dynamic graph algorithms. Despite extensive research on this topic, the first result breaking the \(O(\sqrt{n})\) barrier on the update time for any non-trivial approximation was introduced only recently by Forster, Goranci and Henzinger [SODA'21] who achieved \(m^{1/\rho+o(1)}\) amortized update time with a \(O(\log n)^{3\rho-2}\) factor in the approximation ratio, for any parameter \(\rho\geq 1\).
In this paper, we give the first _constant-stretch_ fully dynamic distance oracle with a small polynomial update and query time. Prior work required either at least a poly-logarithmic approximation or much larger update time. Our result gives a more fine-grained trade-off between stretch and update time, for instance we can achieve constant stretch of \(O(\frac{1}{\rho^{2}})^{4/\rho}\) in amortized update time \(\tilde{O}(n^{\rho})\), and query time \(\tilde{O}(n^{\rho/8})\) for a constant parameter \(\rho<1\). Our algorithm is randomized and assumes an oblivious adversary.
A core technical idea underlying our construction is to design a black-box reduction from decremental approximate hub-labeling schemes to fully dynamic distance oracles, which may be of independent interest. We then apply this reduction repeatedly to an existing decremental algorithm to bootstrap our fully dynamic solution.
## 1 Introduction
The All-Pairs Shortest Paths (APSP) problem is one of the cornerstone graph problems in combinatorial optimization. It has a wide range of applications, for instance in route planning, navigation systems, and routing in networks, and it has been extensively studied from both practical and theoretical perspectives. In theoretical computer science, this problem enjoys much popularity due to its historic contributions to the development of fundamental algorithmic tools and definitions as well as being used as a subroutine for solving other problems.
The APSP problem has also been studied extensively in _dynamic_ settings. Here, the underlying graph undergoes edge insertions and deletions (referred to as edge _updates_), and the goal is to quickly report an approximation to the shortest paths between _any_ source-target vertex pair. The dynamic setting is perhaps even more realistic for some of the applications of the APSP problem, e.g., in navigation systems, link statistics of road networks are prone to changes because of evolving traffic conditions. A naive (but rather expensive) solution to handle the updates is achieved by running an exact static algorithm after each update. However, at an intuitive level, one would
expect to somehow exploit the fact that a single update affects only a small part of the network, and thus come up with much faster update times.
Much of the research literature in dynamic APSP has focused on the _partially_ dynamic setting. In contrast to the _fully_ dynamic counterpart, this weaker model restricts the types of updates to edge insertions or deletions only. Some reasons for studying partially dynamic algorithms include their application as a subroutine in speeding up static algorithms (e.g., flow problems [13]), or their utilization as a stepping stone for designing fully dynamic algorithms, something that we will also exploit in this work. The popularity of the partially dynamic setting can also be attributed to the fact that dealing with only one type of update usually leads to better algorithmic guarantees. In fact, the fully dynamic APSP problem admits strong conditional lower bounds in the _low approximation_ regime: under plausible hardness assumptions, Abboud and Vassilevska Williams [1], and later Henzinger, Krinninger, Nanongkai, and Saranurak [14], showed that there are no dynamic APSP algorithms achieving a \((3-\epsilon)\) approximation with sublinear query time and the update time being a small polynomial.
From an upper bounds perspective, there are only two works that achieve sublinear update time for fully dynamic APSP. Abraham, Chechik, and Talwar [1] showed that there is an algorithm that achieves constant approximation and sublinear update time. However, their algorithm cannot break the \(O(\sqrt{n})\) barrier on the update time. Forster, Goranci, and Henzinger [11] gave different trade-offs between approximation and update time. In particular, in \(n^{o(1)}\) amortized update time and polylogarithmic query time they achieve \(n^{o(1)}\) approximation. These two works suffer from either a large approximation guarantee or update time, leaving open the following key question:
_Is there a fully dynamic APSP algorithm that achieves constant approximation with a very small polynomial update time?_
### Our result
In this paper, we answer the question of achieving constant approximation with a very small polynomial update time for the fully dynamic APSP in the affirmative, also known as the _fully dynamic distance oracle_ problem. More generally, we obtain a trade-off between approximation, update time, and query time as follows:
**Corollary 1.1**.: _Given a weighted undirected graph \(G=(V,E)\) and a constant parameter \(0<\rho<1\), there is a randomized, fully dynamic distance oracle with constant stretch \((\frac{256}{\rho^{2}})^{4/\rho}\) that w.h.p. achieves \(\tilde{O}(n^{\rho})\) amortized update time and \(\tilde{O}(n^{\rho/8})\) query time. These guarantees work against an oblivious adversary._
In addition to the constant stretch regime, we obtain several interesting tradeoffs, as shown in Theorem 3.5. For example, our algorithm achieves \(O(\log\log n)\) stretch with a much faster query time of \(n^{o(1)}\) and very small polynomial update time (see Corollary 3.6).
Our result brings the algorithmic guarantees on fully dynamic distance oracles closer to the recent conditional hardness result by Abboud, Bringmann, Khoury, and Zamir [1] (and the subsequent refinement in [1]), who showed that there is no fully dynamic algorithm that simultaneously achieves constant approximation and \(n^{o(1)}\) update and query time. We also remark that our results are consistent with their lower bound since if we insist on constant approximation, the above trade-off shows that the update time cannot be made as efficient as \(n^{o(1)}\).
On the technical side, our result follows the widespread "high-level" approach of extending decremental algorithms to the fully dynamic setting (see e.g. [11, 12, 13, 14, 15, 16, 17, 18]) and is inspired by recent developments in the dynamic distance oracle literature that rely on vertex sparsification [13, 14, 15]. Specifically, we design a reduction that turns a decremental hub-labeling scheme with some specific properties into a fully dynamic distance oracle, which may be of independent interest. Our key observation is that an existing state-of-the-art decremental distance oracle that works against an oblivious adversary can serve as such a hub-labeling scheme. The fully dynamic distance oracle is then obtained by repeatedly applying the reduction whilst carefully tuning various parameters across levels in the hierarchy.
More generally, our reduction does not make any assumptions on the adversary and is based on properties that are quite natural. At a high level, we consider decremental approximate hub-labeling schemes with the following properties. (1) For every vertex \(v\in V\), maintain a set \(S(v)\), called a _hub set_, that has bounded size. (2) For every vertex \(v\in V\), maintain distance estimates \(\delta(v,u)\) for each \(u\in S(v)\), with bounded _recourse_, which is defined as the number of times such distance estimates are affected during the execution of the algorithm. (3) Return the final estimate between a pair of vertices \(s,t\in V\) by minimizing estimates over elements in \(S(s)\cap S(t)\).
Many known distance oracles (e.g. variants of the well-known distance oracle of [13]) have a query mechanism that satisfies the first and third properties, while efficient dynamic distance oracles are often based on bounded recourse structures satisfying the second property.
Hence we hope that this reduction can be further utilized in the future by characterizing deterministic decremental distance oracles or the ones with different stretch/time tradeoffs as such hub-labeling schemes. Similar reductions have been previously proposed in [1] and then refined in [13] in slightly different contexts. In this work, in addition to refining this approach for obtaining a constant stretch distance oracle, we aim to keep the reduction as modular as possible to facilitate potential future applications.
### Related Work
In the following, we give an overview of existing works on fully dynamic all-pairs distance oracles by dividing them into several categories based on their stretch guarantee. Unless noted otherwise, all algorithms cited in the following are randomized and have amortized update time. We report running time bounds for constant accuracy parameter \(\epsilon\) and assume that we are dealing with graphs with positive integer edge weights that are polynomial in the number of vertices. We would also like to point out that all "combinatorial" algorithms discussed in the following (i.e., algorithms that do not rely on "algebraic" techniques like dynamic matrix inverse) are internally employing decremental algorithms. Decremental algorithms have also been studied on their own with various tradeoffs [13, 1, 14, 15, 16, 17], and competitive deterministic algorithms have been devised, e.g., [14, 1, 15].
Exact. After earlier attempts on the problem [16, 15], Demetrescu and Italiano [15] presented their seminal work on exact distance maintenance achieving \(\tilde{O}(n^{2})\) update time (with log-factor improvements by Thorup [18]) and constant query time for weighted directed graphs.
Subsequently, researchers have developed algorithms with subcubic worst-case update time and constant query time [18, 1] with some of them being deterministic [19, 10]. Note that one can construct a simple update sequence for which any fully dynamic algorithm
maintaining the distance matrix or the shortest path matrix explicitly needs to perform \(\Omega(n^{2})\) changes to this matrix per update.
Algorithms breaking the \(n^{2}\) barrier at the cost of large query time have been obtained in unweighted directed graphs by Roditty and Zwick [14] (update time \(\tilde{O}(mn^{2}/t^{2})\) and query time \(O(t)\) for any \(\sqrt{n}\leq t\leq n^{3/4}\)), Sankowski [13] (worst-case update time \(O(n^{1.897})\) and query time \(O(n^{1.265})\)), and van den Brand, Nanongkai, and Saranurak [15] (worst-case update time \(O(n^{1.724})\) and query time \(O(n^{1.724})\)). The latter two approaches are algebraic and their running time bounds depend on the matrix multiplication coefficient \(\omega\).
(\(1+\epsilon\))-approximation. In addition to exact algorithms, combinatorial and algebraic algorithms have also been developed for the low-stretch regime of \((1+\epsilon)\)-approximation. In particular, Roditty and Zwick obtained the following trade-off with a combinatorial algorithm: update time \(\tilde{O}(mn/t)\) and query time \(\tilde{O}(t)\) for any \(\delta>0\) and \(t\leq m^{1/2-\delta}\). Subsequently, for \(t\leq\sqrt{n}\), a deterministic variant was developed [13] and it was generalized to weighted, directed graphs [1]. Furthermore, by a standard reduction (see e.g. [1]) using a decremental approximate single-source shortest paths algorithm [13, 1], one obtains a combinatorial, deterministic algorithm with update time \(O(nm^{1+o(1)}/t)\) and query time \(t\) for any \(t\leq n\), for the fully dynamic all-pairs problem in weighted undirected graphs. Conditional lower bounds [11, 12, 13] suggest that the update and the query time cannot both be small polynomials in \(n\). For example, no algorithm can maintain a \((5/3-\epsilon)\)-approximation with update time \(O(m^{1/2-\delta})\) and query time \(O(m^{1-\delta})\) for any \(\delta>0\), unless the OMv conjecture fails [13].
Algebraic approaches can achieve subquadratic update time and sublinear query time, namely worst-case update time \(O(n^{1.863})\) and query time \(O(n^{0.666})\) in weighted directed graphs [1], or worst-case update time \(O(n^{1.788})\) and query time \(O(n^{0.45})\) in unweighted undirected graphs [1]. As the conditional lower bound by Abboud and Vassilevska Williams [12] shows, algebraic approaches seem to be necessary in this regime: unless one is able to multiply two \(n\times n\) Boolean matrices in \(O(n^{3-\delta})\) time for some constant \(\delta>0\), no fully dynamic algorithm for \(st\) reachability in directed graphs can have \(O(n^{2-\delta^{\prime}})\) update and query time and \(O(n^{3-\delta^{\prime}})\) preprocessing time (for some constant \(\delta^{\prime}>0\)).
(\(2+\epsilon\))-approximation. Apart from earlier work [14], the only relevant algorithm in the \((2+\epsilon)\)-approximation regime is by Bernstein [1] and achieves update time \(m^{1+o(1)}\) and query time \(O(\log\log\log n)\) in weighted undirected graphs. It can be made deterministic using the deterministic approximate single-source shortest path algorithm by Bernstein, Probst Gutenberg, and Saranurak [1]. The only conditional lower bound in this regime that we are aware of states that no algorithm can maintain a \((3-\epsilon)\)-approximation with update time \(O(n^{1/2-\delta})\) and query time \(O(n^{1-\delta})\) for any \(\delta>0\), unless the OMv conjecture fails [13].
Larger approximation. In the regime of stretch at least \(3\), the following trade-offs between stretch and update time have been developed: Abraham, Chechik, and Talwar [1] designed an algorithm for unweighted undirected graphs with stretch \(2^{O(\rho k)}\), update time \(\tilde{O}(m^{1/2}n^{1/k})\), and query time \(O(k^{2}\rho^{2})\), where \(k\geq 1\) is a freely chosen parameter and \(\rho=1+\lceil\log n^{1-1/k}/\log(m/n^{1-1/k})\rceil\). Forster, Goranci, and Henzinger [1] designed an algorithm for weighted undirected graphs with stretch \(O(\log n)^{3k-2}\), update time \(m^{1/k+o(1)}\cdot O(\log n)^{4k-2}\), and query time \(O(k(\log n)^{2})\), where \(k\geq 2\) is an arbitrary integer parameter. Finally, note that any algorithm whose update time
depends on the sparsity of the graph (possibly also a static one) can be run on a spanner of the input graph maintained by a fully dynamic spanner algorithm [1]. These upper bounds are complemented by the following conditional lower bound: for any integer constant \(k\geq 2\), there is no dynamic approximate distance oracle with stretch \(2k-1\), update time \(O(m^{u})\) and query time \(O(m^{q})\) with \(ku+(k+1)q<1\), unless the 3-SUM conjecture fails [1].
## 2 Preliminaries
We consider weighted undirected graphs \(G=(V,E,w)\) with positive integer edge weights. We denote by \(n=|V|\) the number of vertices, by \(m=|E|\) the number of edges, and by \(W\) the maximum weight of an edge. We denote by \(\operatorname{dist}_{G}(u,v)\) the length of a shortest path from \(u\) to \(v\) in \(G\).
In dynamic graph algorithms, the graph is subject to updates and the algorithm has to process these updates by spending as little time as possible. In this paper, we consider updates that insert a single edge to the graph or delete a single edge from the graph. Moreover, observe that an update that changes the weight of an edge can be simulated by two updates, where the first update deletes the corresponding edge and the second update re-inserts the edge with the new weight. Let \(G^{(0)}\) be the initial graph, and \(G^{(t)}\) be the graph at time \(t\) which is the time after \(t\) updates have been performed to the graph.
In this paper we are interested in designing _fully dynamic_ algorithms which can process edge insertions and edge deletions, and thus, weight changes as well. A _decremental_ algorithm can process only edge deletions and weight increases. We assume that the updates to the graph are performed by an _oblivious adversary_ who fixes the sequence of updates before the algorithm starts. Namely, the adversary cannot adapt the updates based on the choices of the algorithm during the execution. We say that an algorithm has _amortized update time_\(u(n,m)\) if its total time spent for processing any sequence of \(\ell\) updates is bounded by \(\ell\cdot u(n,m)\), when it starts from an empty graph with \(n\) vertices and during all the updates has at most \(m\) edges (the time needed to initialize the algorithm on the empty graph before the first update is also included).
In our analysis we use \(\tilde{O}(1)\) to hide factors polylogarithmic in \(nW\). Namely, we write \(\tilde{O}(1)^{d}\) to represent the term \(O(\log^{cd}nW)\), for a constant \(c\) and a parameter \(d\).
## 3 Fully Dynamic Distance Oracle
The technical details of our distance oracle are divided into three parts. First, in Section 3.1, we give the definition of a hub-labeling scheme together with other useful definitions. Afterwards, we provide a reduction that extends a decremental approximate hub-labeling scheme with these properties to a fully dynamic distance oracle. Then, in Section 3.2, we explain how an existing decremental algorithm gives us an approximate hub-labeling scheme that we can use in this reduction, and finally in Section 3.3 we put everything together by applying our reduction repeatedly, in order to get a family of fully dynamic distance oracles.
### Reduction from a decremental hub-labeling scheme to fully dynamic distance oracle
We start by defining approximate hub-labeling schemes, and then explain how they are used in our reduction. Hub-labeling schemes were formally defined by [1] (and were previously
introduced under the name 2-hop cover1 in [10]). We are slightly modifying the definition for our purpose, for instance by considering an approximate variant.
Footnote 1: The concept of 2-hop cover or hub labeling should not be confused with the (related) concept of a hopset that we will later see in Section 3.2.
**Definition 3.1** (Approximate Hub-Labeling Scheme).: _Given a graph \(G=(V,E)\), a hub-labeling scheme \(\mathcal{L}\) of stretch \(\alpha\) consists of_
1. _for every vertex_ \(v\in V\)_, a hub set_ \(S(v)\subseteq V\) _and_
2. _for every pair of vertices_ \(u,v\in V\)_, a distance estimate_ \(\delta(v,u)\) _such that_ \(\operatorname{dist}_{G}(v,u)\leq\delta(v,u)<\infty\) _if_ \(u\in S(v)\) _and_ \(\delta(v,u)=\infty\) _otherwise._
_and for every pair of vertices \(s\) and \(t\) guarantees that_
\[\delta_{\mathcal{L}}(s,t)\coloneqq\min_{v\in S(s)\cap S(t)}(\delta(s,v)+ \delta(t,v))\leq\alpha\cdot\operatorname{dist}_{G}(s,t)\.\]
The _distance label_ of a vertex \(v\) consists of the hub set \(S(v)\) and the corresponding distance estimates \(\delta(v,u)\), for all \(u\in S(v)\).
Note that the definition implies \(\delta_{\mathcal{L}}(s,t)\geq\operatorname{dist}_{G}(s,t)\) for every pair of vertices \(s\) and \(t\). Furthermore, a hub-labeling scheme of stretch \(\alpha\) directly implements a distance oracle of stretch \(\alpha\) with query time \(O(\max_{v\in V}|S(v)|)\) that consists of the collection of distance labels for all vertices \(v\in V\). We also remark that the entries of value \(\infty\) in the distance estimate \(\delta(\cdot,\cdot)\) do not need to be stored explicitly if the hub sets are stored explicitly and that the distance estimate \(\delta(\cdot,\cdot)\) is not necessarily symmetric.
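For concreteness, a minimal sketch of the query of Definition 3.1, assuming the distance labels are stored as per-vertex dictionaries (the names are ours, not part of any prior implementation):

```python
# Sketch of the query of Definition 3.1: delta[v] maps each hub u in S(v)
# to the stored estimate; missing entries stand for delta(v, u) = infinity.
def hub_query(delta, s, t):
    common = delta[s].keys() & delta[t].keys()       # S(s) ∩ S(t)
    return min((delta[s][v] + delta[t][v] for v in common),
               default=float("inf"))

# Toy labels: the hub v certifies an s-t estimate of 2 + 3 = 5.
delta = {"s": {"s": 0, "v": 2}, "t": {"t": 0, "v": 3}}
print(hub_query(delta, "s", "t"))                    # 5
```

With hashing, the intersection is computed in time linear in the label sizes, matching the \(O(\max_{v\in V}|S(v)|)\) query time stated above.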
In the following we consider _decremental_ algorithms for maintaining approximate hub-labeling schemes, that is, _decremental approximate hub-labeling schemes_ which process each edge deletion in the graph by first updating their internal data structures and then outputting the changes made to the hub sets and the distance estimates \(\delta(\cdot,\cdot)\). Namely for a vertex \(v\in V\), vertices may leave or join \(S(v)\), or the distance estimates of vertices belonging to \(S(v)\) may change, since the decremental algorithm has to update this information for maintaining correctness at query time.
Denote by \(S^{(t)}(v)\) the hub set of a vertex \(v\in V\), after \(t\) updates have been processed by the decremental approximate hub-labeling scheme (we may omit the superscript \(t\) whenever time is fixed), where \(t\geq 1\) is an integer parameter. Then for a pair of vertices \(u,v\in V\), the distance estimate \(\delta(v,u)\) after \(t\) updates is defined based on Definition 3.1 and \(S^{(t)}(v)\). Namely, if \(u\) is inside the hub set of \(v\) after \(t\) updates (i.e., \(u\in S^{(t)}(v)\)) then \(\operatorname{dist}_{G^{(t)}}(v,u)\leq\delta(v,u)<\infty\), otherwise \(\delta(v,u)=\infty\).
After \(t\) edge deletions processed by the decremental approximate hub-labeling scheme, there are three possible changes to the distance estimates \(\delta(v,\cdot)\) corresponding to a vertex \(v\in V\). (1) The distance estimate \(\delta(v,u)\) changes for a vertex \(u\in S^{(t-1)}(v)\cap S^{(t)}(v)\) that remains inside the hub set of \(v\). (2) The distance estimate \(\delta(v,u)\) becomes \(\infty\) because a vertex \(u\in S^{(t-1)}(v)\setminus S^{(t)}(v)\) leaves the hub set of \(v\). (3) The distance estimate \(\delta(v,u)\) receives a finite value because a vertex \(u\in S^{(t)}(v)\setminus S^{(t-1)}(v)\) enters the hub set of \(v\). Let \(\chi^{(t)}(v)\) be the number of all these changes to \(\delta(v,\cdot)\) corresponding to \(v\) at time \(t\), and \(X(v)=\sum_{t}\chi^{(t)}(v)\) be the total number of such changes to \(\delta(v,\cdot)\) corresponding to \(v\) over the course of the algorithm.
In the following lemma, we present a reduction from a decremental approximate hub-labeling scheme to a fully dynamic distance oracle.
**Lemma 3.2**.: _Consider a decremental hub-labeling scheme \(\mathcal{A}\) of stretch \(\alpha\) with total update time \(T_{\mathcal{A}}(n,m,W)\) and query time \(Q_{\mathcal{A}}(n,m,W)\), with the following properties:_
1. \(\forall v\in V\) _and_ \(\forall t:|S^{(t)}(v)|\leq\gamma\)_. In other words, the size of the hub set of any vertex is bounded by_ \(\gamma\) _at any moment of the algorithm._
2. \(\forall v\in V:X(v)\leq\zeta\)_. In other words, for every vertex_ \(v\in V\) _the total number of changes to_ \(\delta(v,\cdot)\) _is at most_ \(\zeta\) _over the course of the algorithm. Moreover the algorithm detects and reports these changes explicitly._
_Then given \(\mathcal{A}\) and a fully dynamic distance oracle \(\mathcal{B}\) of stretch \(\beta\) with amortized update time \(t_{\mathcal{B}}(n,m,W)\) and query time \(Q_{\mathcal{B}}(n,m,W)\), for any integer \(\ell\geq 1\), there is a fully dynamic distance oracle \(\mathcal{C}\) of stretch \(\alpha\beta\) with amortized update time \(t_{\mathcal{C}}(n,m,W)=T_{\mathcal{A}}(n,m,W)/\ell+t_{\mathcal{B}}(\min(\ell( 2+2\mu),n),\ell(1+2\mu),nW)\cdot(2+4\mu)\) and query time \(Q_{\mathcal{C}}(n,m,W)=Q_{\mathcal{A}}(n,m,W)+\gamma^{2}\cdot Q_{\mathcal{B}} (\min\{\ell(2+2\mu),n\},\ell(1+2\mu),nW)\), where \(\mu=\gamma+\zeta\)._
Proof.: We organize the proof in three parts. The first part gives the reduction from \(\mathcal{A}\) and \(\mathcal{B}\) to \(\mathcal{C}\), and the second and third parts concern the correctness and the running time, respectively.
Reduction. The fully dynamic distance oracle \(\mathcal{C}\) proceeds in phases of length \(\ell\). At the beginning of the first phase (which is also the beginning of the algorithm), \(\mathcal{C}\) initializes the fully dynamic distance oracle \(\mathcal{B}\) on the initially empty graph \(G\) on \(2\ell\) vertices2 and sets an update counter to \(0\). Whenever an update to \(G\) occurs in the first phase, the update is directly processed by \(\mathcal{B}\).3 As soon as the number of updates exceeds \(\ell\), the second phase is started. We define several sets and the graph \(H\) that the fully dynamic distance oracle \(\mathcal{C}\) maintains during each subsequent phase:
Footnote 2: This minor technical detail makes sure that \(\mathcal{B}\) does not have to deal with vertex insertions.
Footnote 3: The special treatment of the first \(\ell\) updates is just a technical necessity for a rigorous amortization argument in the running time analysis.
* Let \(F\) be the set of edges present in \(G\) at the beginning of the phase, \(E\) be the current set of edges in \(G\), and \(D\) be the set of edges deleted from \(G\) during the phase.
* Let \(I=E\setminus(F\setminus D)\) be the set of edges inserted to \(G\) since the beginning of the phase without subsequently having been deleted during the phase, and \(U=\{v\in V\mid\exists e\in I:v\in e\}\) be the set of endpoints of edges in \(I\).
* Let \(H\) be the auxiliary graph that consists of all edges \((u,v)\in I\), together with their hub sets \(S^{(t)}(u)\) and \(S^{(t)}(v)\) after \(t\) edge deletions have been processed by \(\mathcal{A}\). Specifically, \(V(H)=\{v\in V\mid v\in U\text{ or }(\exists u\in U:v\in S^{(t)}(u))\}\) and \(E(H)=\{(u,v)\mid(u,v)\in I\text{ or }(v\in U\text{ and }u\in S^{(t)}(v))\}\). Note that at any fixed moment, the size of \(V(H)\) is at most \(\ell\cdot(2+2\gamma)\) and the size of \(E(H)\) is at most \(\ell\cdot(1+2\gamma)\).
At the beginning of each subsequent phase, \(\mathcal{C}\) stores \(F,E\) and \(H\), and sets an update counter to \(0\). Furthermore, \(\mathcal{C}\) initializes the decremental approximate hub-labeling scheme \(\mathcal{A}\) on the current graph \(G\), and the fully dynamic distance oracle \(\mathcal{B}\) on \(H\) which is initially an empty "sketch" graph on \(\ell\cdot(2+2\mu)\) vertices. The graph \(H\) can be thought of as responsible for maintaining estimates for paths that use inserted edges.
Whenever an update to \(G\) occurs, \(\mathcal{C}\) first checks via the update counter whether the number of updates since the beginning of the phase is more than \(\ell\). If this is the case, then \(\mathcal{C}\) starts a new
phase. Otherwise, after an update the fully dynamic distance oracle \(\mathcal{C}\) does the following. On the insertion of an edge \((u,v)\) to \(G\), \(\mathcal{C}\) adds \((u,v)\) to \(I\), adds \(u\) and \(v\) to \(U\), and adds the edge \((u,v)\) to \(H\), together with the edges \((u,p)\) for every \(p\in S(u)\) and \((v,p)\) for every \(p\in S(v)\). Any time an edge \((u,v)\) is added to \(H\), its weight is set to:
\[w_{H}(u,v)=\min(w_{G}(u,v),\delta(u,v),\delta(v,u)).\]
Whenever the first edge incident to some vertex \(v\) is added to \(H\), the algorithm finds a "fresh" vertex (of degree 0) in \(H\) and henceforth identifies it as \(v\). This is always possible, since by the two properties, the number of such vertices in a phase of length \(\ell\) is at most \(\ell\cdot(2+2\mu)\). On the deletion of an edge \((u,v)\in E\) from \(G\), there are two cases to consider.
1. If the edge \((u,v)\) was not present at the beginning of the current phase, or has been deleted and re-inserted (i.e., \((u,v)\in I\)), \(\mathcal{C}\) removes \((u,v)\) from \(I\), adds \((u,v)\) to \(D\), and updates the set \(U\) and the graph \(H\) accordingly. In particular, if \(u\in U\) and \(v\in S(u)\), or \(v\in U\) and \(u\in S(v)\), \(\mathcal{C}\) updates the weight of the edge \((u,v)\) in \(H\) to \(w_{H}(u,v)=\min(\delta(u,v),\delta(v,u))\) (as \(w_{G}(u,v)=\infty\) after the deletion), otherwise \(\mathcal{C}\) removes \((u,v)\) from \(H\). Also, for all the vertices \(v\) that left \(U\) and all the edges \((v,p)\in E(H)\) such that \(p\in S(v)\), if \(p\in U\) and \(v\in S(p)\), then \(\mathcal{C}\) updates the weight of \((v,p)\) in \(H\) to \(w_{H}(v,p)=\delta(p,v)\) (as \(v\notin U\) after the deletion), and otherwise \(\mathcal{C}\) removes \((v,p)\) from \(H\).
2. If the edge \((u,v)\) was present at the beginning of the current phase and has not been deleted yet (i.e., \((u,v)\in F\setminus D\)), \(\mathcal{C}\) adds \((u,v)\) to \(D\) and the deletion is processed by \(\mathcal{A}\). Whenever \(\mathcal{A}\) changes a distance estimate \(\delta(v,\cdot)\) corresponding to a vertex \(v\in V\) and its hub set \(S(v)\), \(\mathcal{C}\) updates the graph \(H\) accordingly. In particular, there are three possible scenarios at time \(t\) of \(\mathcal{A}\).4 (1) Whenever the value of \(\delta(v,u)\) changes for a vertex \(u\in S^{(t-1)}(v)\cap S^{(t)}(v)\) that remains inside the hub set of \(v\), \(\mathcal{C}\) updates the weight of the edge \((v,u)\) in \(H\) to \(w_{H}(v,u)=\min(w_{G}(v,u),\delta(v,u),\delta(u,v))\). (2) Whenever a vertex \(u\in S^{(t-1)}(v)\setminus S^{(t)}(v)\) leaves the hub set of \(v\), then if \((v,u)\in I\), or \(u\in U\) and \(v\in S(u)\), \(\mathcal{C}\) updates the weight of the edge \((v,u)\) in \(H\) to \(w_{H}(v,u)=\min(w_{G}(v,u),\delta(u,v))\) (as \(\delta(v,u)=\infty\) after the deletion), otherwise \(\mathcal{C}\) removes \((v,u)\) from \(H\). (3) Whenever a vertex \(u\in S^{(t)}(v)\setminus S^{(t-1)}(v)\) enters the hub set of \(v\), then if \((v,u)\in I\), or \(u\in U\) and \(v\in S(u)\), \(\mathcal{C}\) updates the weight of the edge \((v,u)\) in \(H\) to \(w_{H}(v,u)=\min(w_{G}(v,u),\delta(v,u),\delta(u,v))\), otherwise \(\mathcal{C}\) adds the edge \((v,u)\) to \(H\) with weight equal to \(w_{H}(v,u)=\delta(v,u)\). Note that the number of these changes at time \(t\) of \(\mathcal{A}\) is equal to \(\chi^{(t)}(v)\) for a vertex \(v\in V\). Observe also that based on the two properties, the number of vertices that participate in \(H\) during a phase of length \(\ell\) is at most \(\ell\cdot(2+2\mu)\). Thus we can always find a "fresh" vertex (of degree 0) in \(H\). Footnote 4: Note that \(t\) is the number of updates processed only by \(\mathcal{A}\) during the phase.
Finally, all the changes performed to \(H\) are processed by the fully dynamic distance oracle \(\mathcal{B}\) running on \(H\), where edge weight changes are simulated by a deletion followed by a re-insertion.
Now a query for the approximate distance between any pair of vertices \(s\) and \(t\) is answered by returning:
\[\delta_{\mathcal{C}}(s,t)=\min\left(\min_{p\in S(s)\cap V(H),q\in S(t)\cap V( H)}\left(\delta(s,p)+\delta_{\mathcal{B}}(p,q)+\delta(t,q)\right),\delta_{ \mathcal{A}}(s,t)\right).\]
Whenever \(S(s)\cap V(H)=\emptyset\) or \(S(t)\cap V(H)=\emptyset\), we let the inside term \(\min(\cdot)=\infty\).
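A sketch of this query routine, assuming handles `delta_A` and `delta_B` to the query procedures of \(\mathcal{A}\) and \(\mathcal{B}\), a map `S` from vertices to hub sets, the current vertex set `VH` of \(H\), and the stored estimates `delta[v][p]` for \(p\in S(v)\) (all names are ours):

```python
# Sketch of the query of oracle C, combining the estimates of A with the
# oracle B running on the sketch graph H. S[s] and VH are Python sets, so
# "&" computes the intersections S(s) ∩ V(H) and S(t) ∩ V(H).
def query_C(s, t, S, VH, delta, delta_A, delta_B):
    best = delta_A(s, t)                 # covers paths inside F \ D
    for p in S[s] & VH:
        for q in S[t] & VH:
            best = min(best, delta[s][p] + delta_B(p, q) + delta[t][q])
    return best
```

If either intersection is empty, the loops contribute nothing and the routine falls back to \(\delta_{\mathcal{A}}(s,t)\), as specified above; iterating over at most \(\gamma^{2}\) hub pairs matches the query time bound of Lemma 3.2.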
Correctness. To prove the correctness of this algorithm, we need to show that \(\mathrm{dist}_{G}(s,t)\leq\delta_{\mathcal{C}}(s,t)\leq\alpha\beta\cdot\mathrm{dist}_{G}(s,t)\). The lower bound \(\mathrm{dist}_{G}(s,t)\leq\delta_{\mathcal{C}}(s,t)\) is immediate, since for each approximate distance returned by \(\mathcal{C}\), the corresponding path uses edges from \(G\) or distance estimates from the decremental approximate hub-labeling scheme, which are never an underestimation of the real distance. To prove the upper bound, consider a shortest path \(\pi\) from \(s\) to \(t\) in \(G\), and let \(G_{\mathcal{A}}\) be the graph maintained by \(\mathcal{A}\) (i.e., the edge set of \(G_{\mathcal{A}}\) is \(E(G_{\mathcal{A}})=F\setminus D\)). If the path \(\pi\) contains only edges from the set \(F\setminus D\), then \(\delta_{\mathcal{C}}(s,t)\leq\delta_{\mathcal{A}}(s,t)\leq\alpha\cdot\mathrm{dist}_{G_{\mathcal{A}}}(s,t)=\alpha\cdot\mathrm{dist}_{G}(s,t)\), and the claim follows. Otherwise, let \((u_{1},v_{1}),\ldots,(u_{j},v_{j})\in I\) denote the edges of \(\pi\) that have been inserted since the beginning of the current phase in order of appearance on \(\pi\). Furthermore, let \(p_{0}\in S(s)\cap S(u_{1})\) be the vertex that "certifies" \(\delta_{\mathcal{A}}(s,u_{1})\), that is, \(\delta_{\mathcal{A}}(s,u_{1})=\delta(s,p_{0})+\delta(u_{1},p_{0})\). Similarly, let \(p_{j}\in S(v_{j})\cap S(t)\) be the vertex that "certifies" \(\delta_{\mathcal{A}}(v_{j},t)\), and for every \(1\leq i\leq j-1\), let \(p_{i}\in S(v_{i})\cap S(u_{i+1})\) be the vertex that "certifies" \(\delta_{\mathcal{A}}(v_{i},u_{i+1})\). These vertices must exist by the definition of an approximate hub-labeling scheme. Furthermore, by the construction of \(H\), the edges \((p_{0},u_{1})\) and \((v_{j},p_{j})\) have been inserted to \(H\), because \(u_{1}\in U\) and \(p_{0}\in S(u_{1})\), and \(v_{j}\in U\) and \(p_{j}\in S(v_{j})\) respectively. Hence, the vertices \(p_{0}\) and \(p_{j}\) belong to \(V(H)\), and the sum \(\delta(s,p_{0})+\delta_{\mathcal{B}}(p_{0},p_{j})+\delta(t,p_{j})\) participates in the inside term \(\min(\cdot)\). Therefore, to analyze the claimed upper bound on the stretch, we proceed as follows:
\[\begin{split}\delta_{\mathcal{C}}(s,t)&\leq\delta(s,p_{0})+\delta_{\mathcal{B}}(p_{0},p_{j})+\delta(t,p_{j})\\&\leq\delta(s,p_{0})+\beta\cdot\operatorname{dist}_{H}(p_{0},p_{j})+\delta(t,p_{j})\\&\leq\delta(s,p_{0})+\beta\cdot\operatorname{dist}_{H}(p_{0},u_{1})+\sum_{1\leq i\leq j-1}\beta\cdot(\operatorname{dist}_{H}(u_{i},v_{i})+\operatorname{dist}_{H}(v_{i},p_{i})+\operatorname{dist}_{H}(p_{i},u_{i+1}))\\&\qquad+\beta\cdot(\operatorname{dist}_{H}(u_{j},v_{j})+\operatorname{dist}_{H}(v_{j},p_{j}))+\delta(t,p_{j})\\&\leq\delta(s,p_{0})+\beta\cdot w_{H}(p_{0},u_{1})+\sum_{1\leq i\leq j-1}\beta\cdot(w_{H}(u_{i},v_{i})+w_{H}(v_{i},p_{i})+w_{H}(p_{i},u_{i+1}))\\&\qquad+\beta\cdot(w_{H}(u_{j},v_{j})+w_{H}(v_{j},p_{j}))+\delta(t,p_{j}).\end{split}\]
By the construction of \(H\), the edges \((u_{i},v_{i})\) of \(\pi\) and the corresponding edges \((p_{i-1},u_{i})\) and \((v_{i},p_{i})\) have been inserted to \(H\), because \((u_{i},v_{i})\in I\), \(u_{i}\in U\) and \(p_{i-1}\in S(u_{i})\), and \(v_{i}\in U\) and \(p_{i}\in S(v_{i})\) respectively. Hence by the definition of \(w_{H}(\cdot)\), we can replace \(w_{H}(u_{i},v_{i})\) with \(w_{G}(u_{i},v_{i})\), \(w_{H}(p_{i-1},u_{i})\) with \(\delta(u_{i},p_{i-1})\) and \(w_{H}(v_{i},p_{i})\) with \(\delta(v_{i},p_{i})\). As a result, we have that:
\[\begin{split}\delta_{\mathcal{C}}(s,t)&\leq\delta(s,p_{0})+\beta\cdot\delta(u_{1},p_{0})+\sum_{1\leq i\leq j-1}\beta\cdot(w_{G}(u_{i},v_{i})+\delta(v_{i},p_{i})+\delta(u_{i+1},p_{i}))\\&\qquad+\beta\cdot(w_{G}(u_{j},v_{j})+\delta(v_{j},p_{j}))+\delta(t,p_{j})\\&=\delta(s,p_{0})+\beta\cdot\delta(u_{1},p_{0})+\sum_{1\leq i\leq j-1}\beta\cdot(\operatorname{dist}_{G}(u_{i},v_{i})+\delta(v_{i},p_{i})+\delta(u_{i+1},p_{i}))\\&\qquad+\beta\cdot(\operatorname{dist}_{G}(u_{j},v_{j})+\delta(v_{j},p_{j}))+\delta(t,p_{j})\qquad(\text{$\pi$ is a shortest path})\\&\leq\beta\cdot(\delta(s,p_{0})+\delta(u_{1},p_{0}))+\sum_{1\leq i\leq j-1}\beta\cdot(\operatorname{dist}_{G}(u_{i},v_{i})+\delta(v_{i},p_{i})+\delta(u_{i+1},p_{i}))\\&\qquad+\beta\cdot(\operatorname{dist}_{G}(u_{j},v_{j})+\delta(v_{j},p_{j})+\delta(t,p_{j})).\end{split}\]
Since \(\mathcal{A}\) is a hub-labeling scheme of stretch \(\alpha\), each certifying sum satisfies \(\delta(s,p_{0})+\delta(u_{1},p_{0})=\delta_{\mathcal{A}}(s,u_{1})\leq\alpha\cdot\operatorname{dist}_{G_{\mathcal{A}}}(s,u_{1})\), and analogously for the pairs \((v_{i},u_{i+1})\) and \((v_{j},t)\). Moreover, the subpaths of \(\pi\) between consecutive inserted edges consist only of edges in \(F\setminus D\), so their lengths in \(G_{\mathcal{A}}\) coincide with their lengths in \(G\). Therefore,
\[\delta_{\mathcal{C}}(s,t)\leq\alpha\beta\cdot\Big(\operatorname{dist}_{G}(s,u_{1})+\sum_{1\leq i\leq j-1}\big(\operatorname{dist}_{G}(u_{i},v_{i})+\operatorname{dist}_{G}(v_{i},u_{i+1})\big)+\operatorname{dist}_{G}(u_{j},v_{j})+\operatorname{dist}_{G}(v_{j},t)\Big)=\alpha\beta\cdot\operatorname{dist}_{G}(s,t),\]
where the final equality holds because the segments partition the shortest path \(\pi\).
of Lemma 3.2. Moreover, when the edge \(e\) is deleted from \(G\), at most \(1+2\gamma\) updates can occur to \(H\). Therefore, the total number of updates to \(H\) that correspond to an inserted edge in \(G\), is at most \(2+4\gamma+4\zeta=2+4\mu\) per phase. Since there can be at most \(\ell\) inserted edges per phase, the total number of updates to \(H\) during a phase is at most \(\ell(2+4\mu)\). This implies that the total time for processing all updates is \(T_{\mathcal{A}}(n,m,W)+t_{\mathcal{B}}(\min(\ell(2+2\mu),n),\ell(1+2\mu),nW) \cdot\ell(2+4\mu)\), which (when amortized over the \(\ell\) updates of the previous phase) amounts to an amortized update time of:
\[T_{C}(n,m,W)=\frac{T_{\mathcal{A}}(n,m,W)}{\ell}+t_{\mathcal{B}}(\min(\ell(2+2 \mu),n),\ell(1+2\mu),nW)\cdot(2+4\mu)\]
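To make the mechanics of the reduction concrete, here is a minimal Python sketch of the phase structure. It is illustrative only: exact Dijkstra distances on the phase-start graph stand in for the hub-label estimates of \(\mathcal{A}\), and the small Dijkstra run at query time stands in for the fully dynamic oracle \(\mathcal{B}\) on the auxiliary graph \(H\); all class and function names are our own.

```python
import heapq

def dijkstra(adj, src):
    """Shortest paths from src on an adjacency dict {u: {v: weight}}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

class PhasedOracle:
    """Toy fully dynamic oracle with the phase structure of the reduction:
    deletions go to the phase-start ('decremental') graph, insertions are
    buffered in I, and a query runs on a small graph H spanned by s, t and
    the endpoints of I, with phase-start distances as edge weights."""

    def __init__(self, adj, ell):
        self.adj = {u: dict(nb) for u, nb in adj.items()}  # current graph
        self.ell, self.updates = ell, 0
        self._start_phase()

    def _start_phase(self):
        self.inserted = []                                  # buffer I
        self.base = {u: dict(nb) for u, nb in self.adj.items()}

    def insert(self, u, v, w):
        self.adj.setdefault(u, {})[v] = w
        self.adj.setdefault(v, {})[u] = w
        self.inserted.append((u, v, w))
        self._tick()

    def delete(self, u, v):
        self.adj.get(u, {}).pop(v, None); self.adj.get(v, {}).pop(u, None)
        self.base.get(u, {}).pop(v, None); self.base.get(v, {}).pop(u, None)
        self._tick()

    def _tick(self):
        self.updates += 1
        if self.updates % self.ell == 0:   # end of phase: rebuild A, empty I
            self._start_phase()

    def query(self, s, t):
        nodes = {s, t} | {x for (u, v, _) in self.inserted for x in (u, v)}
        est = {x: dijkstra(self.base, x) for x in nodes}    # 'hub' estimates
        H = {x: {} for x in nodes}
        for x in nodes:                                     # estimate edges
            for y in nodes:
                if x != y and y in est[x]:
                    H[x][y] = est[x][y]
        for u, v, w in self.inserted:                       # inserted edges
            H[u][v] = min(H[u].get(v, float("inf")), w)
            H[v][u] = H[u][v]
        return dijkstra(H, s).get(t, float("inf"))
```

In the actual reduction, of course, \(H\) is maintained incrementally and contains only hubs of the endpoints of inserted edges, which is what yields the stated amortized bound.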
### Decremental approximate hub-labeling scheme
In this section, we argue that an existing decremental distance oracle from [11] also provides an approximate hub-labeling scheme whose properties make the reduction of Lemma 3.2 quite efficient. This decremental algorithm is based on the well-known static Thorup-Zwick (TZ) distance oracle [10].
Thorup-Zwick distance oracle.Given a graph \(G=(V,E)\), the construction starts by defining a non-increasing sequence of sets \(V=A_{0}\supseteq A_{1}\supseteq\cdots\supseteq A_{k}=\emptyset\), where for each \(1\leq i<k\), the set \(A_{i}\) is obtained by subsampling each element of \(A_{i-1}\) independently with probability \(n^{-1/k}\).
For every vertex \(v\in V\) and \(1\leq i<k\), let \(\delta(v,A_{i})=\min_{u\in A_{i}}\operatorname{dist}_{G}(v,u)\) be the minimum distance from \(v\) to a vertex in \(A_{i}\). As \(A_{k}=\emptyset\), we let \(\delta(v,A_{k})=\infty\). Moreover, let \(p_{i}(v)\in A_{i}\) be a vertex in \(A_{i}\) closest to \(v\), that is, \(\operatorname{dist}_{G}(v,p_{i}(v))=\delta(v,A_{i})\). Then, the bunch \(B(v)\subseteq V\) of each \(v\in V\) is defined as:
\[B(v)=\bigcup_{i=0}^{k-1}B_{i}(v)\,\ \text{where}\ \ B_{i}(v)=\{u\in A_{i} \setminus A_{i+1}:\operatorname{dist}_{G}(v,u)<\operatorname{dist}_{G}(v,A_{ i+1})\}\]
The cluster of a vertex \(u\in A_{i}\setminus A_{i+1}\) is defined as \(C(u)=\{v\in V:\operatorname{dist}_{G}(v,u)<\operatorname{dist}_{G}(v,A_{i+1})\}\). Observe that \(u\in B(v)\) if and only if \(v\in C(u)\), for any \(u,v\in V\).
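The construction lends itself to a compact illustration. The following Python sketch (our own naming) computes the levels \(A_{i}\) and the bunches for a small weighted graph given as an adjacency dictionary; it uses exact Dijkstra distances throughout, so it reflects only the static definitions above, not the efficient book-keeping of the actual oracle.

```python
import heapq, random

def dijkstra(adj, sources):
    """Multi-source shortest paths on an adjacency dict {u: {v: weight}}."""
    dist = {s: 0 for s in sources}
    pq = [(0, s) for s in sources]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def tz_levels_and_bunches(adj, k):
    """Levels A_0 >= A_1 >= ... >= A_k and bunches B(v) of the TZ construction."""
    V = list(adj)
    n = len(V)
    A = [V]
    for _ in range(1, k):               # subsample with probability n^{-1/k}
        A.append([v for v in A[-1] if random.random() < n ** (-1.0 / k)])
    A.append([])                        # A_k is the empty set
    # delta(v, A_i) for every v, via one multi-source Dijkstra per level.
    d_level = [dijkstra(adj, A[i]) for i in range(k + 1)]
    d_from = {v: dijkstra(adj, [v]) for v in V}   # exact distances (toy only)
    B = {v: {} for v in V}
    for i in range(k):
        for u in set(A[i]) - set(A[i + 1]):
            for v in V:
                d_uv = d_from[v].get(u, float("inf"))
                # u belongs to B(v) iff u is closer to v than all of A_{i+1}.
                if d_uv < d_level[i + 1].get(v, float("inf")):
                    B[v][u] = d_uv      # stored estimate delta(v, u)
    return A, B
```

A query for \((u,v)\) in the full TZ oracle then walks up the levels, alternating pivots of \(u\) and \(v\) until a common hub is found, which gives the \(2k-1\) stretch.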
As noted in [10], this construction is a hub-labeling scheme of stretch \(2k-1\) (see Definition 3.1), where the hub set \(S(v)\) of a vertex \(v\in V\) is \(S(v)=B(v)\cup(\bigcup_{i=0}^{k-1}\{p_{i}(v)\})\). In other words, bunches and pivots of all the \(k\) levels form a hub set for \(v\). To obtain the distance estimates \(\delta(v,\cdot)\) for all \(v\in V\) as in Definition 3.1, we need the associated distances \(\delta(v,u)=\operatorname{dist}_{G}(v,u)\) for all \(u\in S(v)\). It can be shown, with a simple modification of the stretch argument (e.g. see [13]), that it is enough to use only the bunches as the hub sets, and that explicit access to pivots is not necessary. Hence, to simplify the presentation in this section, we assume that the hub sets coincide with the bunches. As shown in [10], the size of the bunch of any vertex is w.h.p. bounded by \(\tilde{O}(n^{1/k})\). Recall that the maximum hub set size is one of the parameters governing the efficiency of our reduction.
In the following, we review the decremental algorithm of [10] which maintains TZ distance oracles for \(d\)-bounded distances, and the decremental algorithm of [11] which has good properties for the reduction of Lemma 3.2.
Decremental algorithm for approximate TZ distance oracle.We use the decremental algorithm by [11] that satisfies the properties of Lemma 3.2, for sufficiently small \(\gamma\) and \(\zeta\). The properties that we need are implicit in their analysis. We rephrase their guarantees and give a high-level proof sketch for completeness, but we refer the reader to [11] for further details.
The algorithm of [11] utilizes a decremental version of the TZ distance oracles for \(d\)-bounded distances by [10] on a sequence of graphs. A crucial property of [10] algorithm can be summarized in the following lemma.
**Lemma 3.3** (Implicit in [10]).: _For every vertex \(v\in V\) and \(0\leq i<k\), there is a decremental algorithm that maintains the bunches and the estimates \(\delta\) up to a distance bound \(d\). Over the sequence of updates, the expected number of times \(\delta(v,u)\) changes for all vertices \(u\in A_{i}\setminus A_{i+1}\) such that \(v\in C(u)\) and \(\operatorname{dist}_{G}(v,u)\leq d\) is \(\tilde{O}(dn^{1/k})\). Equivalently, w.h.p the number of times \(B(v)\) or a corresponding distance estimate \(\delta(v,u)\) for \(u\in B(v)\) changes over all updates is bounded by \(\tilde{O}(dn^{1/k})\)._
This lemma allows us to bound the number of changes in the bunches (as required by Lemma 3.2) for pairs of vertices that are within bounded distances up to \(d\). In terms of \(X(v)\) as defined in Section 3.1 this implies \(X(v)\leq\tilde{O}(dn^{1/k})\).
This algorithm is not efficient when distances are not bounded. In order to eliminate this dependence on \(d\), in [11] they use decremental hopsets with a hopbound \(\beta\) that, informally speaking, allow them to do the following. Instead of working on the original graph, they maintain the decremental distance oracle of [10] on a sequence of scaled graphs up to depth \(\beta\) in time \(\tilde{O}(\beta mn^{1/k})\). A \((\beta,1+\epsilon)\)-hopset \(H^{\prime}\) for \(G=(V,E)\) is a set of weighted edges such that for all \(u,v\in V\) we have \(\operatorname{dist}_{G}(u,v)\leq\operatorname{dist}_{G\cup H^{\prime}}^{(\beta)}(u,v)\leq(1+\epsilon)\operatorname{dist}_{G}(u,v)\), where \(\operatorname{dist}_{G\cup H^{\prime}}^{(\beta)}(u,v)\) refers to a shortest path that uses at most \(\beta\) hops. A decremental hopset has the additional property that the edge weights are non-decreasing. By maintaining a hopset with hopbound \(\beta=\mathit{polylog}\ (n)\) they can maintain TZ distance oracles with the following guarantees:
**Lemma 3.4** (Implicit in [11]).: _Given a weighted undirected graph \(G=(V,E)\) and \(k>1,0<\epsilon<1\), there is a decremental hub-labeling scheme of stretch \((2k-1)(1+\epsilon)\) and w.h.p. the total update time is \(\tilde{O}(mn^{1/k})\cdot O(\log nW/\epsilon)^{2k+1}\). Moreover w.h.p. we have the following two properties:_
1. \(\forall v\in V\) _and_ \(\forall t:|S^{(t)}(v)|\leq\tilde{O}(n^{1/k})\)_. In other words, the size of the bunch of any vertex is bounded by_ \(\tilde{O}(n^{1/k})\) _at any moment of the algorithm._
2. \(\forall v\in V:X(v)\leq\tilde{O}(n^{1/k})\cdot O(\log nW/\epsilon)^{2k+1}\)_. In other words, for every vertex_ \(v\in V\) _the total number of changes to_ \(\delta(v,\cdot)\) _is at most_ \(\tilde{O}(n^{1/k})\cdot O(\log nW/\epsilon)^{2k+1}\) _over the course of the algorithm. Moreover the algorithm detects and reports these changes explicitly._
Proof sketch.: In [11] approximate TZ bunches and clusters (and hence hub sets) are maintained, roughly as follows: a decremental \((\beta,1+\epsilon)\)-hopset \(H^{\prime}\) with hopbound \(\beta=O(\log nW/\epsilon)^{2k+1}\) and size \(O(n^{1+1/k})\) can be maintained in \(\tilde{O}(\beta mn^{1/k})\) total update time. Then \(H^{\prime}\) is used to maintain a \((2k-1)(1+\epsilon)\) distance oracle by running the Roditty-Zwick [10] algorithm on a sequence of \(O(\log nW)\) scaled graphs \(G_{1},\ldots,G_{\log nW}\) up to depth \(\beta\). Roughly speaking, for a fixed error parameter \(\epsilon_{0}\) and hopbound parameter \(\beta\), this scaling approach (originally proposed by [12]) maintains a graph \(G_{r}\) for each distance interval \([2^{r},2^{r+1}]\), in which the edge weights are rounded such that \(\beta\)-_hop-bounded_ distances of length \(\in[2^{r},2^{r+1}]\) in the original graph \(G\) can be \((1+\epsilon_{0})\)-approximated by \(O(\beta/\epsilon_{0})\)-_bounded depth_ distances on the scaled graph \(G_{r}\). Hence, by first adding the hopset edges \(H^{\prime}\) and then applying the scaling to \(G\cup H^{\prime}\), it is enough to consider \(\beta\)-bounded distances on each
scaled graph. The final bunch of each vertex is the union of bunches over all the graphs \(G_{i}\), for \(1\leq i\leq\log nW\). Since the size of the bunches on each scaled graph at any time is also \(\tilde{O}(n^{1/k})\), the first property holds.
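A schematic Python sketch of the rounding on a single scale \(r\) (the rounding unit is chosen for illustration; the actual scheme of [12] also truncates weights and incorporates the hopset edges):

```python
import math

def scaled_graph(edges, r, beta, eps0):
    """Round edge weights for the distance scale [2**r, 2**(r+1)].

    Weights are expressed in units of rho = eps0 * 2**r / beta: a path with at
    most beta hops then accumulates additive rounding error <= beta * rho =
    eps0 * 2**r, i.e. a (1 + O(eps0)) factor on distances of length ~ 2**r,
    while such distances now span only O(beta / eps0) units, so a
    bounded-depth exploration suffices on the scaled graph.
    """
    rho = eps0 * (2 ** r) / beta
    return [(u, v, math.ceil(w / rho)) for (u, v, w) in edges]
```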
Also, from Lemma 3.3, by setting \(d=O(\beta/\epsilon_{0})=\operatorname{polylog}(n)\) we have:
* The total number of times the bunch \(B(v)\) changes for each vertex \(v\) on _each scaled graph_ is w.h.p. \(\tilde{O}(\beta n^{1/k})\). Hence the second property holds: since \(\beta=O(\log(nW)/\epsilon)^{2k+1}\), over the union of the \(O(\log nW)\) scaled graphs \(v\) changes its bunch at most \(\tilde{O}(\beta n^{1/k})\) times in total.
* The update time is \(\tilde{O}(mn^{1/k}\beta)\). Analyzing the correctness of their hierarchical decremental hopsets requires handling some technicalities that we do not get into here, as we can use their stretch analysis as a black box.
Overall, the distance oracle based on the maintained _approximate_ bunches leads to \((2k-1)(1+\epsilon)\)-approximate distances. As in the static case, the stretch guarantee of [14] carries over to hub-labeling schemes that use the approximate bunches as hub sets. In this case, in addition to the \((2k-1)\) factor, there is an additional \((1+\epsilon)\) factor, since the scaling and the use of hopsets effectively give us approximate bunches.
### Fully dynamic distance oracle
In this section we explain how we obtain our final fully dynamic distance oracle by using the decremental algorithm of Section 3.2 in our reduction of Lemma 3.2.
**Theorem 3.5**.: _For any integer parameters \(i\geq 0,k>1\), there is a fully dynamic distance oracle \(\mathcal{B}_{i}\) with stretch \((4k)^{i}\) and w.h.p. the amortized update time is \(t_{\mathcal{B}_{i}}(n,m,W)=\tilde{O}(1)^{ki}\cdot m^{3/(3i+1)}\cdot n^{4i/k}\) and the query time \(Q_{\mathcal{B}_{i}}(n,m,W)=\tilde{O}(1)^{i}\cdot n^{2i/k}\)._
Proof.: The proof is by induction on \(i\). For the base case \(i=0\), let \(\mathcal{B}_{0}\) be the trivial fully dynamic distance oracle that achieves stretch \(1\), amortized update time \(t_{\mathcal{B}_{0}}(n,m,W)=O(n^{3})\), and query time \(Q_{\mathcal{B}_{0}}(n,m,W)=O(1)\), by recomputing all-pairs shortest paths from scratch after each update (e.g., with the Floyd-Warshall algorithm).
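For concreteness, a minimal Python sketch of \(\mathcal{B}_{0}\) (illustrative naming; the recomputation is worst-case, and hence also amortized, \(O(n^{3})\) per update):

```python
class OracleB0:
    """Trivial stretch-1 fully dynamic oracle: Floyd-Warshall after each update."""

    def __init__(self, n):
        self.n = n
        self.w = [[float("inf")] * n for _ in range(n)]
        for i in range(n):
            self.w[i][i] = 0.0
        self._recompute()

    def _recompute(self):                       # O(n^3) Floyd-Warshall
        d = [row[:] for row in self.w]
        for k in range(self.n):
            for i in range(self.n):
                dik = d[i][k]
                for j in range(self.n):
                    if dik + d[k][j] < d[i][j]:
                        d[i][j] = dik + d[k][j]
        self.dist = d

    def update(self, u, v, w=float("inf")):
        """Insert (u, v) with weight w; delete it by passing w = inf."""
        self.w[u][v] = self.w[v][u] = w
        self._recompute()                       # update time O(n^3)

    def query(self, u, v):                      # query time O(1)
        return self.dist[u][v]
```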
For the induction step, let \(\mathcal{A}\) denote the decremental approximate hub-labeling scheme from Lemma 3.4 with stretch \(\alpha=4k\) and w.h.p. total update time \(T_{\mathcal{A}}(n,m,W)=\tilde{O}(1)^{k}\cdot mn^{1/k}\) and query time \(Q_{\mathcal{A}}(n,m,W)=\tilde{O}(1)\cdot n^{1/k}\), where \(\epsilon\) is set to any value strictly smaller than \(\frac{1}{2}\). By the inductive hypothesis, \(\mathcal{B}_{i}\) (with \(i\geq 0\)) is a fully dynamic distance oracle of stretch \(\beta_{i}=(4k)^{i}\) with amortized update time \(\tilde{O}(1)^{ki}\cdot m^{3/(3i+1)}\cdot n^{4i/k}\) and query time \(\tilde{O}(1)^{i}\cdot n^{2i/k}\). Based on Lemma 3.4, the decremental approximate hub-labeling scheme \(\mathcal{A}\) satisfies the properties of Lemma 3.2 with \(\gamma=\tilde{O}(1)\cdot n^{1/k}\) and \(\zeta=\tilde{O}(1)^{k}\cdot n^{1/k}\). Then, applying Lemma 3.2 to \(\mathcal{A}\) and \(\mathcal{B}_{i}\) with \(\ell=m^{(3i+1)/(3i+4)}\), the resulting fully dynamic distance oracle \(\mathcal{B}_{i+1}\) has stretch \((4k)^{i+1}\), and amortized update time:
\[t_{\mathcal{B}_{i+1}}(n,m,W)=\frac{T_{\mathcal{A}}(n,m,W)}{\ell}+t_{\mathcal{ B}_{i}}(n,\ell(1+2\mu),nW)\cdot(2+4\mu)\]
The first term is equal to:
\[\frac{T_{\mathcal{A}}(n,m,W)}{\ell}=\frac{\tilde{O}(1)^{k}\cdot mn^{1/k}}{m^{(3i+1)/(3i+4)}}=\tilde{O}(1)^{k}\cdot m^{\frac{3}{3i+4}}\cdot n^{1/k}=\tilde{O}(1)^{k}\cdot m^{\frac{3}{3(i+1)+1}}\cdot n^{1/k}\,,\]
which matches the first summand of the claimed bound for \(\mathcal{B}_{i+1}\). Substituting \(\mu=\gamma+\zeta=\tilde{O}(1)^{k}\cdot n^{1/k}\) and the inductive bound on \(t_{\mathcal{B}_{i}}\) into the second term bounds it by \(\tilde{O}(1)^{k(i+1)}\cdot m^{3/(3(i+1)+1)}\cdot n^{4(i+1)/k}\) as well; the bound on the query time follows similarly, which completes the induction.
2310.09309 | Quantum Field Theory of neutrino mixing in spacetimes with torsion | In the framework of quantum field theory, we analyze the neutrino
oscillations in the presence of a torsion background. We consider the
Einstein-Cartan theory and we study the cases of constant torsion and of
linearly time-dependent torsion. We derive new neutrino oscillation formulae
which depend on the spin orientation. Indeed, the energy splitting
induced by the torsion influences oscillation amplitudes and frequencies. This
effect is maximal for values of torsion of the same order as the neutrino
masses and for very low momenta, and disappears for large values of torsion.
Moreover, neutrino oscillation is inhibited for torsion intensities
much larger than the neutrino masses and momentum. The modifications induced by
torsion on the $CP$-asymmetry are also presented. Future experiments, such
as PTOLEMY, could provide insights into the effect shown here. | Antonio Capolupo, Giuseppe De Maria, Simone Monda, Aniello Quaranta, Raoul Serao | 2023-10-12T09:02:05Z | http://arxiv.org/abs/2310.09309v2 | # Quantum Field Theory of neutrino mixing in spacetimes with torsion
###### Abstract
In the framework of quantum field theory, we analyze neutrino oscillations in the presence of a torsion background. By considering the Einstein-Cartan theory and the case of constant torsion, we derive new neutrino oscillation formulae which depend on the spin orientation. Indeed, the energy splitting induced by the torsion influences the oscillation amplitudes and frequencies. This effect is maximal for values of torsion of the same order as the neutrino masses and for very low momenta, and disappears for large values of torsion. Moreover, neutrino oscillation is inhibited for torsion intensities much larger than the neutrino masses and momentum. The modifications induced by torsion on the \(CP\)-asymmetry are also presented. Future experiments, such as PTOLEMY, could provide insights into the effect shown here.
## I Introduction
Theories of gravity beyond General Relativity (GR) have a long and complex history [1]. Stimulated by the need to address the shortcomings of GR, to provide an explanation for the dark components of the universe, and possibly to set a viable framework for the quantization of gravity, there is by now a plethora of such theories. Some, like the early attempt to incorporate Mach's principle by Brans and Dicke [2], involve additional fields other than the metric [3; 4]. Other theories generalize the Einstein-Hilbert action, possibly including higher-order curvature invariants [5]. Quite a natural generalization of GR emerges when one considers a non-symmetric connection, allowing for the possibility of torsion [6; 7]. Gravitational theories including torsion might be able to account for dark matter and dark energy [8]. Torsion couples naturally to the spin density of matter, inducing a spin-dependent splitting of the energy levels [9] and spin oscillations [10].
Neutrinos, on the other hand, have a prominent role in cosmology and astrophysics. Their comparatively small interaction rates and the abundance in which they are produced make neutrinos a precious source of information on the cosmos. They are possibly linked to the original baryon asymmetry [11], to dark matter [12] and dark energy [13]. Neutrinos also pose several challenges to the standard model of particles, and many aspects of neutrino physics, including the basic mechanism behind flavor oscillations [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31], the origin of their mass and their fundamental nature [32; 33], are yet to be clarified.
In this paper we analyze the propagation of neutrinos on a torsion background and study its impact on the flavor oscillations. Neutrino oscillations in the presence of torsion have been studied in the quantum mechanical framework [34; 35]. Here we approach the subject from the point of view of quantum field theory and quantize the neutrino fields on a torsion background. We focus on the simplest generalization of GR including torsion, the Einstein-Cartan theory. We consider the case of constant torsion and we assume that spacetime curvature is absent. We show that the energy splitting induced by the torsion term leads to spin-dependent neutrino oscillation formulae. Indeed, the spin orientation affects the frequencies, as expected also in the QM framework, and the oscillation amplitudes, which in QFT are governed by the Bogoliubov coefficients. This last effect is a pure consequence of the non-trivial condensate structure induced by neutrino mixing in QFT.
The spin dependence of the oscillation formulae is maximal for intensities of torsion comparable to the neutrino masses and momentum. On the other hand, much larger values of torsion lead to flavor oscillations which are identical for the two spins, since they become essentially independent of the spin. Another effect is that a torsion large enough can effectively inhibit the flavor oscillations, since in this case the energy differences due to the various masses become irrelevant with respect to the common torsional energy term. The presence of torsion is more relevant for neutrino oscillations in non-relativistic regimes, for which the QFT effects are also more pronounced. Some phenomenological consequences of the theoretical results presented here could then be provided, in the future, by experiments that analyze non-relativistic neutrinos, such as PTOLEMY [31; 36]. We additionally discuss the modifications induced by torsion on the \(CP\)-asymmetry, which is a byproduct of the Dirac \(CP\)-violating phase in the mixing matrix. We show that the \(CP\) asymmetry also depends on the spin orientations in the presence of the torsion background.
The paper is structured as follows. In section II we introduce the concept of spacetime torsion and we quantize a Dirac field on a torsion background. In section III, we analyze the three flavor neutrino mixing in the presence of constant torsion and in
section IV, we derive new oscillation formulae and new expressions of \(CP\) violation depending on the orientation of the spin. The last section is devoted to the conclusions, while in the appendix we report the analysis of currents and charges for flavor mixing in the presence of torsion.
## II Spacetime torsion and Dirac field quantization
Here, we briefly recall the notion of spacetime torsion and then we quantize the Dirac field minimally coupled to the torsion.
### Spacetime Torsion
In general relativity the requirements of metricity of the covariant derivative and of symmetry uniquely determine the connection coefficients (Christoffel symbols) in terms of the metric as follows:
\[\Gamma^{\rho}_{\mu\nu}=\frac{1}{2}g^{\rho\sigma}\left(\partial_{\mu}g_{\sigma \nu}+\partial_{\nu}g_{\sigma\mu}-\partial_{\sigma}g_{\mu\nu}\right)=\Gamma^{ \rho}_{\nu\mu}\;.\]
A more general theory, based on the so-called Einstein-Cartan (or Riemann-Cartan) geometry, is obtained if the assumption of symmetry is relaxed, keeping only metricity. In this case, the connection coefficients acquire an antisymmetric part given by
\[\tilde{\Gamma}^{\rho}_{\mu\nu}-\tilde{\Gamma}^{\rho}_{\nu\mu}=T^{\rho}_{\mu \nu}\;\;\;\;\;\;;\;\;\;\;\;\tilde{\Gamma}^{\rho}_{\mu\nu}=\Gamma^{\rho}_{\mu \nu}+K^{\rho}_{\mu\nu}\;, \tag{1}\]
where the tensors \(T^{\rho}_{\mu\nu}\) and \(K_{\rho\mu\nu}=\frac{1}{2}\left(T_{\rho\mu\nu}+T_{\mu\nu\rho}-T_{\nu\rho\mu}\right)\) are respectively known as torsion and contorsion. It is also convenient to introduce [7] the trace vector \(V_{\mu}=T^{\rho}_{\mu\rho}\), the axial vector \(T^{\mu}=\epsilon^{\alpha\beta\gamma\mu}T_{\alpha\beta\gamma}\) and the tensor \(q^{\rho}_{\mu\nu}\), in terms of which the torsion is expressed as
\[T_{\rho\mu\nu}=\frac{1}{3}\left(V_{\mu}g_{\rho\nu}-V_{\nu}g_{\rho\mu}\right)- \frac{1}{6}\epsilon_{\rho\mu\nu\sigma}T^{\sigma}+q_{\rho\mu\nu}\;,\]
and the scalar curvature reads as
\[\tilde{R}=R-2\nabla_{\mu}V^{\mu}-\frac{4}{3}V_{\mu}V^{\mu}+\frac{1}{2}q_{\rho \mu\nu}q^{\rho\mu\nu}+\frac{1}{24}T_{\mu}T^{\mu}\;.\]
Here \(R\) is the general relativistic Ricci scalar, given in terms of the metric. Notice that the covariant derivatives in this context are the usual ones involving only the Christoffel symbols. The vacuum action for Einstein-Cartan is given by the natural generalization of the Einstein-Hilbert action. It is written as
\[S_{EC}=-\frac{1}{\kappa^{2}}\int d^{4}x\sqrt{-g}\tilde{R}\;, \tag{2}\]
with \(\kappa=\frac{8\pi G}{c^{4}}\). The torsion-related terms in Eq. (2) form a total derivative, and therefore do not contribute to the field equations. As a consequence, the vacuum theory is equivalent to general relativity. On the other hand, the situation changes in the presence of matter, where a coupling of the form
\[S_{Tm}=\int d^{4}x\sqrt{-g}K^{\rho}_{\mu\nu}\Sigma^{\mu\nu}_{\rho} \tag{3}\]
appears. The spin tensor, here denoted with \(\Sigma^{\mu\nu}_{\rho}\), is constructed out of matter fields. We point out that, the field equations obtained by varying the total action with respect to contorsion simply lead to the algebraic constraint \(K_{\rho\mu\nu}\propto\Sigma_{\rho\mu\nu}\), expressing the proportionality of torsion and spin angular momentum. In the following we will be interested in Dirac spinors minimally coupled to torsion. The spin covariant derivatives, in presence of torsion, get modified as follows [9]
\[\tilde{D}_{\mu}\psi=D_{\mu}\psi+\frac{1}{4}K_{AB\mu}\left[\gamma^{A},\gamma^{ B}\right]\psi \tag{4}\]
where \(D_{\mu}\) is the general relativistic spin covariant derivative and the Lorentz indices on the contorsion tensor result from contraction with the tetrads \(K_{AB\mu}=e^{\rho}_{A}e^{\sigma}_{B}K_{\rho\sigma\mu}\). Then, the spinor action is simply given by
\[\tilde{S}_{D}=S_{D}+S_{TD}=\int d^{4}x\sqrt{-g}\left[\frac{i}{2}\left(\bar{\psi}\gamma^{\mu}D_{\mu}\psi-D_{\mu}\bar{\psi}\gamma^{\mu}\psi\right)-m\bar{\psi}\psi\right]+3\int d^{4}x\sqrt{-g}T_{\mu}S^{\mu} \tag{5}\]
where \(S_{D}\) is the Dirac action in general relativity and \(S_{TD}=3\int d^{4}x\sqrt{-g}T_{\mu}S^{\mu}\) is the action term due to the Dirac - torsion coupling. Moreover, \(S^{\mu}=\frac{1}{2}\bar{\psi}\gamma^{\mu}\gamma^{5}\psi\) is the Dirac spin vector. We remark that in all the above expressions the spacetime dependence of the curved gamma matrices is kept implicit \(\gamma^{\mu}=\gamma^{\mu}(x)=e^{\mu}_{A}(x)\gamma^{A}\).
### Dirac field quantization on torsional background
From now on we shall assume that some astrophysical source other than the Dirac field itself generates a background torsion. As far as minimally coupled Dirac fields are concerned, the information about torsion is stored in the axial vector field \(T^{\mu}(x)\). Since we are specifically interested in the effects of torsion on Dirac fields, we will assume that spacetime curvature is absent (although the most general case can be treated in a similar fashion, see e.g. [37; 38; 39; 40; 41; 12]), so that the covariant derivatives in (5) are replaced with standard derivatives and the gamma matrices reduce to the flat ones. Under these assumptions the Dirac equation becomes
\[i\gamma^{\mu}\partial_{\mu}\psi\ =\ m\psi-\frac{3}{2}T_{\rho}\gamma^{\rho} \gamma^{5}\psi\;. \tag{6}\]
Canonical quantization proceeds as in flat spacetime, and the Dirac field may be expanded on any complete set of solutions of Eq. (6). We shall see that the expansion closely resembles that of flat spacetime when a constant torsion background is considered. It is important to remark that the lepton charge \(Q=\int d^{3}x\bar{\psi}\gamma^{0}\psi\) is conserved as a consequence of the \(U(1)\) gauge invariance of the action (5).
In this paper we deal with the simplest possible torsion background. We consider a constant axial torsion directed along the third spatial axis. The study of other torsion backgrounds will be carried out in future works. The Dirac equation for constant torsion reads
\[i\gamma^{\mu}\partial_{\mu}\psi\ =\ m\psi-\frac{3}{2}T_{3}\gamma^{3}\gamma^{5}\psi\;, \tag{7}\]
and is solved [9] in momentum space by the spinors
\[u_{\vec{k}}^{\uparrow}=N^{+}\left(\begin{array}{c}1\\ 0\\ \frac{k_{3}}{E_{\vec{k}}^{+}+\widetilde{m}^{+}}\\ \frac{k_{1}+ik_{2}}{E_{\vec{k}}^{+}+\widetilde{m}^{+}}\end{array}\right)\qquad u_{\vec{k}}^{\downarrow}=N^{-}\left(\begin{array}{c}0\\ 1\\ \frac{k_{1}-ik_{2}}{E_{\vec{k}}^{-}+\widetilde{m}^{-}}\\ -\frac{k_{3}}{E_{\vec{k}}^{-}+\widetilde{m}^{-}}\end{array}\right)\qquad v_{\vec{k}}^{\uparrow}=N^{+}\left(\begin{array}{c}\frac{k_{3}}{E_{\vec{k}}^{+}+\widetilde{m}^{+}}\\ \frac{k_{1}+ik_{2}}{E_{\vec{k}}^{+}+\widetilde{m}^{+}}\\ 1\\ 0\end{array}\right)\qquad v_{\vec{k}}^{\downarrow}=N^{-}\left(\begin{array}{c}\frac{k_{1}-ik_{2}}{E_{\vec{k}}^{-}+\widetilde{m}^{-}}\\ -\frac{k_{3}}{E_{\vec{k}}^{-}+\widetilde{m}^{-}}\\ 0\\ 1\end{array}\right)\;. \tag{8}\]
These solutions are formally the same as in flat space, except for a spin-dependent mass term \(\widetilde{m}^{\pm}=m\pm\frac{3}{2}T^{3}\). The torsion has indeed the effect of lifting the degeneracy in energy between the two spin orientations, \(E_{\vec{k}}^{\pm}=\sqrt{\vec{k}^{2}+\widetilde{m}^{\pm 2}}\). By fixing the normalization to \(u_{\vec{k}}^{r\dagger}u_{\vec{k}}^{r}=1=v_{\vec{k}}^{r\dagger}v_{\vec{k}}^{r}\), the factors are determined as \(N^{\pm}=\sqrt{\frac{E^{\pm}+\widetilde{m}^{\pm}}{2E^{\pm}}}\). Setting \(u_{\vec{k}}^{r}(t)=e^{-iE^{r}t}u_{\vec{k}}^{r}\) and \(v_{\vec{k}}^{r}(t)=e^{iE^{r}t}v_{\vec{k}}^{r}\), the Dirac field is expanded as
\[\psi(\vec{x},t)=\sum_{r}\int\frac{d^{3}k}{(2\pi)^{\frac{3}{2}}}\left(u_{\vec{k }}^{r}(t)\alpha_{\vec{k}}^{r}+v_{-\vec{k}}^{r}(t)\beta_{-\vec{k}}^{r\dagger} \right)e^{i\vec{k}\cdot\vec{x}} \tag{9}\]
with the coefficients obeying the canonical anticommutation relations. Since the solutions of Eq. (7) are similar to those obtained in flat spacetime, to derive the neutrino oscillation formulas in the presence of torsion we can follow a procedure analogous to the one presented in ref.[18], where the oscillation formulas for neutrinos in quantum field theory in flat space were derived.
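As a numerical cross-check of the expansion, the following Python sketch (natural units; the parameter values are illustrative, not fits) builds the spin-up and spin-down \(u\) spinors of Eq. (8) and verifies their normalization and the torsion-induced energy splitting:

```python
import numpy as np

def torsion_spinors(k, m, T3):
    """Spin-up/down u spinors of Eq. (8) for momentum k = (k1, k2, k3)."""
    k1, k2, k3 = k
    m_up, m_dn = m + 1.5 * T3, m - 1.5 * T3            # tilde-m^(+/-)
    kk = k1**2 + k2**2 + k3**2
    E_up, E_dn = np.sqrt(kk + m_up**2), np.sqrt(kk + m_dn**2)
    N_up = np.sqrt((E_up + m_up) / (2 * E_up))
    N_dn = np.sqrt((E_dn + m_dn) / (2 * E_dn))
    u_up = N_up * np.array([1, 0, k3 / (E_up + m_up),
                            (k1 + 1j * k2) / (E_up + m_up)])
    u_dn = N_dn * np.array([0, 1, (k1 - 1j * k2) / (E_dn + m_dn),
                            -k3 / (E_dn + m_dn)])
    return (E_up, u_up), (E_dn, u_dn)

# Illustrative values (eV): m ~ T3 ~ |k|, where the splitting is largest.
(E_up, u_up), (E_dn, u_dn) = torsion_spinors((0.0, 0.0, 2e-4), 1e-4, 1e-4)
assert np.isclose(np.vdot(u_up, u_up).real, 1.0)       # u normalization
assert np.isclose(np.vdot(u_dn, u_dn).real, 1.0)
print(E_up, E_dn)                                      # degeneracy lifted
```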
## III Three flavor mixing with torsion
We now proceed to implement three-flavor mixing. The neutrino fields with definite masses \(\Psi_{m}^{T}\equiv(\nu_{1},\nu_{2},\nu_{3})\) satisfy the equation
\[i\gamma^{\mu}\partial_{\mu}\Psi_{m}-M_{d}\Psi_{m}=-\frac{3}{2}T^{3}\gamma_{3} \gamma^{5}\Psi_{m}\;, \tag{10}\]
with \(M_{d}\equiv\mathrm{diag}(m_{1},m_{2},m_{3})\). The fields with definite masses shall be expanded as in Eq. (9), except for acquiring an additional label \(j=1,2,3\) distinguishing the mass (\(u_{\vec{k},j}^{r},\alpha_{\vec{k},j}^{r},...\)). The flavor fields are obtained by performing the appropriate \(SU(3)\) rotation on the mass triplet. Choosing the CKM parametrization of the PMNS matrix, the triplet of flavor fields \(\psi_{f}^{T}=(\nu_{e},\nu_{\mu},\nu_{\tau})\) is given by
\[\Psi_{f}(x)=\left(\begin{array}{ccc}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i \delta}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i \delta}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{ i\delta}&c_{23}c_{13}\\ \end{array}\right)\Psi_{m}(x)\;.\]
Here the shorthand notation \(c_{ij}=\cos\theta_{ij},s_{ij}=\sin\theta_{ij}\) is used and \(\delta\) is the Dirac \(CP\)-violating phase. As shown in ref.[18] the rotation to flavor fields can be recast in terms of the mixing generator \(I_{\theta}\) as \(\nu_{\sigma}^{\alpha}=I_{\theta}^{-1}(t)\nu_{i}^{\alpha}(x)I_{\theta}(t)\,,\) where \((\sigma,i)=(e,1),(\mu,2),(\tau,3)\), and \(I_{\theta}(t)=I_{23}(t)I_{13}(t)I_{12}(t)\,,\) with
\[I_{12}(t) \equiv\exp\left[\theta_{12}\int d^{3}\mathbf{x}\left(\nu_{1}^{\dagger }(x)\nu_{2}(x)-\nu_{2}^{\dagger}(x)\nu_{1}(x)\right)\right]\,,\] \[I_{23}(t) \equiv\exp\left[\theta_{23}\int d^{3}\mathbf{x}\left(\nu_{2}^{ \dagger}(x)\nu_{3}(x)-\nu_{3}^{\dagger}(x)\nu_{2}(x)\right)\right]\,,\] \[I_{13}(t) \equiv\exp\left[\theta_{13}\int d^{3}\mathbf{x}\left(\nu_{1}^{\dagger }(x)\nu_{3}(x)e^{-i\delta}-\nu_{3}^{\dagger}(x)\nu_{1}(x)e^{i\delta}\right) \right]\,.\]
We note that the generator \(I_{\theta}^{-1}(t)\) introduced here is formally identical to the generator \(G_{\theta}^{-1}(t)\) presented in ref.[18], where the mixing of three families of neutrinos in flat space-time was studied. The difference consists in the fact that, while \(G_{\theta}^{-1}(t)\) of ref.[18] is expressed in terms of the Dirac fields in flat space-time, \(I_{\theta}^{-1}(t)\) contains Dirac fields expanded as in Eq. (9). As we will see below, this leads to new Bogoliubov coefficients which, in the present case, depend on the spin. At the operational level, \(I_{\theta}^{-1}(t)\) shares the same properties as \(G_{\theta}^{-1}(t)\). However, it is essential to underline that, despite the formal analogy, the result obtained here presents completely new behaviors, since the new neutrino oscillation formulas, which will be derived in the following, have amplitudes and frequencies depending on the spin orientation. This effect, due to the torsion, can in principle affect neutrinos produced in the nuclei of spiral galaxies or near rotating black holes.
In the following, adopting the procedure used in ref.[18], and taking into account the presence of torsion, we show the intermediate steps to derive the new oscillation formulae. We start by recalling some properties of the mixing generator \(I_{\theta}^{-1}(t)\) shared with \(G_{\theta}^{-1}(t)\) and derive the expressions for the flavor annihilators. \(I_{\theta}^{-1}(t)\) is a map between the Hilbert space of free fields \(\mathcal{H}_{1,2,3}\) and that of interacting fields \(\mathcal{H}_{e,\mu,\tau}\): \(I_{\theta}^{-1}(t)\,:\,\mathcal{H}_{1,2,3}\rightarrow\mathcal{H}_{e,\mu,\tau}\,.\) At finite volume, the vacuum \(\left|0\right\rangle_{1,2,3}\), relative to the space \(\mathcal{H}_{1,2,3}\), is connected to the vacuum \(\left|0\right\rangle_{e,\mu,\tau}\), relative to the space \(\mathcal{H}_{e,\mu,\tau}\), in the following way: \(\left|0(t)\right\rangle_{e,\mu,\tau}=I_{\theta}^{-1}(t)\left|0\right\rangle_{1,2,3}\,,\) where \(\left|0\right\rangle_{e,\mu,\tau}\) is called the flavor vacuum. The action of the mixing generator defines the plane wave expansion of the flavor fields
\[\nu_{\sigma}(x)=\sum_{r}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{\frac{3}{2}}}\left[u_{\vec{k},i}^{r}\alpha_{\vec{k},\nu_{\sigma}}^{r}(t)+v_{-\vec{k},i}^{r}\beta_{-\vec{k},\nu_{\sigma}}^{r\dagger}(t)\right]\exp\{i\vec{k}\cdot\vec{x}\}\,,\qquad(\sigma,i)=(e,1),(\mu,2),(\tau,3)\]
where the flavor annihilators are given by \(\alpha_{\vec{k},\nu_{\sigma}}^{r}(t)\equiv I_{\theta}^{-1}(t)\alpha_{\vec{k},i }^{r}I_{\theta}(t)\,,\quad\beta_{-\vec{k},\nu_{\sigma}}^{r\dagger}(t)\equiv I _{\theta}^{-1}(t)\beta_{-\vec{k},i}^{r\dagger}(t)I_{\theta}(t)\,.\) By definition they annihilate the flavor vacuum \(\alpha_{\vec{k},\nu_{\sigma}}^{r}\left|0\right\rangle_{f}=0=\beta_{-\vec{k}, \nu_{\sigma}}^{r}\left|0\right\rangle_{f}\) and, being the above transformations canonical, they satisfy the equal time canonical anticommutation relations. The explicit relations linking the flavor creation and destruction operators with the mass creation and destruction operators are given below, where it is assumed, without loss of generality \(\vec{k}=(0,0,\left|\vec{k}\right|)\):
\[\alpha_{\vec{k},\nu_{e}}^{r} =c_{12}c_{13}\alpha_{\vec{k},1}^{r}(t)+s_{12}c_{13}\left(\left|\Xi_{12;\vec{k}}^{rr}\right|\alpha_{\vec{k},2}^{r}(t)+\varepsilon^{r}\left|\chi_{12;\vec{k}}^{rr}\right|\beta_{-\vec{k},2}^{r\dagger}(t)\right)\] \[+e^{-i\delta}s_{13}\left(\left|\Xi_{13;\vec{k}}^{rr}\right|\alpha_{\vec{k},3}^{r}(t)+\varepsilon^{r}\left|\chi_{13;\vec{k}}^{rr}\right|\beta_{-\vec{k},3}^{r\dagger}(t)\right)\,,\] \[\alpha_{\vec{k},\nu_{\mu}}^{r} =\left(c_{12}c_{23}-e^{i\delta}s_{12}s_{23}s_{13}\right)\alpha_{\vec{k},2}^{r}(t)-\left(s_{12}c_{23}+e^{i\delta}c_{12}s_{23}s_{13}\right)\times\] \[\times\left(\left|\Xi_{12;\vec{k}}^{rr}\right|\alpha_{\vec{k},1}^{r}(t)-\varepsilon^{r}\left|\chi_{12;\vec{k}}^{rr}\right|\beta_{-\vec{k},1}^{r\dagger}(t)\right)+s_{23}c_{13}\left(\left|\Xi_{23;\vec{k}}^{rr}\right|\alpha_{\vec{k},3}^{r}(t)+\varepsilon^{r}\left|\chi_{23;\vec{k}}^{rr}\right|\beta_{-\vec{k},3}^{r\dagger}(t)\right)\,,\] \[\alpha_{\vec{k},\nu_{\tau}}^{r} =c_{23}c_{13}\alpha_{\vec{k},3}^{r}(t)-\left(c_{12}s_{23}+e^{i\delta}s_{12}c_{23}s_{13}\right)\left(\left|\Xi_{23;\vec{k}}^{rr}\right|\alpha_{\vec{k},2}^{r}(t)-\varepsilon^{r}\left|\chi_{23;\vec{k}}^{rr}\right|\beta_{-\vec{k},2}^{r\dagger}(t)\right)+\] \[+\left(s_{12}s_{23}-e^{i\delta}c_{12}c_{23}s_{13}\right)\left(\left|\Xi_{13;\vec{k}}^{rr}\right|\alpha_{\vec{k},1}^{r}(t)-\varepsilon^{r}\left|\chi_{13;\vec{k}}^{rr}\right|\beta_{-\vec{k},1}^{r\dagger}(t)\right)\,,\] \[\beta_{-\vec{k},\nu_{e}}^{r} =c_{12}c_{13}\beta_{-\vec{k},1}^{r}(t)+s_{12}c_{13}\left(\left|\Xi_{12;\vec{k}}^{rr}\right|\beta_{-\vec{k},2}^{r}(t)-\varepsilon^{r}\left|\chi_{12;\vec{k}}^{rr}\right|\alpha_{\vec{k},2}^{r\dagger}(t)\right)+\] \[+e^{i\delta}s_{13}\left(\left|\Xi_{13;\vec{k}}^{rr}\right|\beta_{-\vec{k},3}^{r}(t)-\varepsilon^{r}\left|\chi_{13;\vec{k}}^{rr}\right|\alpha_{\vec{k},3}^{r\dagger}(t)\right)\,,\] \[\beta_{-\vec{k},\nu_{\mu}}^{r} =\left(c_{12}c_{23}-e^{-i\delta}s_{12}s_{23}s_{13}\right)\beta_{-\vec{k},2}^{r}(t)-\left(s_{12}c_{23}+e^{-i\delta}c_{12}s_{23}s_{13}\right)\times\] \[\times\left(\left|\Xi_{12;\vec{k}}^{rr}\right|\beta_{-\vec{k},1}^{r}(t)+\varepsilon^{r}\left|\chi_{12;\vec{k}}^{rr}\right|\alpha_{\vec{k},1}^{r\dagger}(t)\right)+s_{23}c_{13}\left(\left|\Xi_{23;\vec{k}}^{rr}\right|\beta_{-\vec{k},3}^{r}(t)-\varepsilon^{r}\left|\chi_{23;\vec{k}}^{rr}\right|\alpha_{\vec{k},3}^{r\dagger}(t)\right)\,,\] \[\beta_{-\vec{k},\nu_{\tau}}^{r} =c_{23}c_{13}\beta_{-\vec{k},3}^{r}(t)-\left(c_{12}s_{23}+e^{-i\delta}s_{12}c_{23}s_{13}\right)\left(\left|\Xi_{23;\vec{k}}^{rr}\right|\beta_{-\vec{k},2}^{r}(t)+\varepsilon^{r}\left|\chi_{23;\vec{k}}^{rr}\right|\alpha_{\vec{k},2}^{r\dagger}(t)\right)+\] \[+\left(s_{12}s_{23}-e^{-i\delta}c_{12}c_{23}s_{13}\right)\left(\left|\Xi_{13;\vec{k}}^{rr}\right|\beta_{-\vec{k},1}^{r}(t)+\varepsilon^{r}\left|\chi_{13;\vec{k}}^{rr}\right|\alpha_{\vec{k},1}^{r\dagger}(t)\right)\,.\]
The Bogoliubov coefficients \(\Xi_{ij;\vec{k}}^{rs}\) and \(\chi_{ij;\vec{k}}^{rs}\) are given by the inner products of spinors with distinct masses. For their moduli we have
\[\left|\Xi_{ij;\vec{k}}^{rs}\right|\equiv u_{\vec{k},i}^{r\dagger}u_{\vec{k},j}^{s}=v_{-\vec{k},i}^{r\dagger}v_{-\vec{k},j}^{s}\,,\qquad\left|\chi_{ij;\vec{k}}^{rs}\right|\equiv\varepsilon^{r}u_{\vec{k},i}^{r\dagger}v_{-\vec{k},j}^{s}=-\varepsilon^{r}u_{\vec{k},j}^{r\dagger}v_{-\vec{k},i}^{s}\,.\]
Observe that the Bogoliubov coefficients vanish for \(r\neq s\) in the chosen frame \(\vec{k}=(0,0,\left|\vec{k}\right|)\). Inserting the spinors explicitly, we arrive at
\[\Xi_{ij;\vec{k}}^{++}=N_{i}^{+}N_{j}^{+}\left[1+\frac{\vec{k}^{2}}{\left(E_{\vec{k},i}^{+}+\widetilde{m}_{i}^{+}\right)\left(E_{\vec{k},j}^{+}+\widetilde{m}_{j}^{+}\right)}\right]=\cos(\xi_{ij;\vec{k}}^{++})\,,\] \[\chi_{ij;\vec{k}}^{++}=N_{i}^{+}N_{j}^{+}\left[\frac{k_{3}}{E_{\vec{k},j}^{+}+\widetilde{m}_{j}^{+}}-\frac{k_{3}}{E_{\vec{k},i}^{+}+\widetilde{m}_{i}^{+}}\right]=\sin(\xi_{ij;\vec{k}}^{++})\,,\] \[\Xi_{ij;\vec{k}}^{--}=N_{i}^{-}N_{j}^{-}\left[1+\frac{\vec{k}^{2}}{\left(E_{\vec{k},i}^{-}+\widetilde{m}_{i}^{-}\right)\left(E_{\vec{k},j}^{-}+\widetilde{m}_{j}^{-}\right)}\right]=\cos(\xi_{ij;\vec{k}}^{--})\,,\] \[\chi_{ij;\vec{k}}^{--}=N_{i}^{-}N_{j}^{-}\left[\frac{k_{3}}{E_{\vec{k},j}^{-}+\widetilde{m}_{j}^{-}}-\frac{k_{3}}{E_{\vec{k},i}^{-}+\widetilde{m}_{i}^{-}}\right]=\sin(\xi_{ij;\vec{k}}^{--})\]
with the spin-dependent masses and the normalization coefficients given explicitly by \(\widetilde{m}_{i}^{\pm}\equiv m_{i}\pm\frac{3}{2}T^{3}\) and \(N_{i}^{\pm}=\frac{\sqrt{E_{\vec{k},i}^{\pm}+\widetilde{m}_{i}^{\pm}}}{\sqrt{2E_{\vec{k},i}^{\pm}}}\), respectively. The sign factor is defined as \(\varepsilon^{\pm}=\mp 1\). Additionally, \((E_{\vec{k},i}^{\pm})^{2}=\vec{k}^{2}+(\widetilde{m}_{i}^{\pm})^{2}\) and \(\xi_{ij;\vec{k}}^{\pm\pm}=\arctan\left(\frac{\left|\chi_{ij;\vec{k}}^{\pm\pm}\right|}{\left|\Xi_{ij;\vec{k}}^{\pm\pm}\right|}\right)\). Canonicity of the transformations which define the flavor annihilators is ensured by the basic property of the Bogoliubov coefficients
\[\left\{\begin{array}{l}\sum_{r}\left(\left|\Xi_{ij;\vec{k}}^{+r} \right|^{2}+\left|\chi_{ij;\vec{k}}^{+r}\right|^{2}\right)=1\\ \sum_{r}\left(\left|\Xi_{ij;\vec{k}}^{-r}\right|^{2}+\left|\chi_{ij;\vec{k}}^{ -r}\right|^{2}\right)=1\end{array}\right. \tag{11}\]
where \(i,j=1,2,3\) and \(j>i\). The time dependence of the Bogoliubov coefficients is expressed through the following relations:
\[\chi_{ij;\vec{k}}^{rs}(t)=\left|\chi_{ij;\vec{k}}^{rs}\right|e^{i\left(E_{\vec {k},j}^{s}+E_{\vec{k},i}^{r}\right)t}\,,\qquad\Xi_{ij;\vec{k}}^{rs}(t)=\left| \Xi_{ij;\vec{k}}^{rs}\right|e^{i\left(E_{\vec{k},j}^{s}-E_{\vec{k},i}^{r} \right)t}\,.\]
The following identities are also satisfied:
\[\chi_{23;\vec{k}}^{rr}(t)\left(\chi_{13;\vec{k}}^{rr}(t)\right)^{*}+\left(\Xi_{23;\vec{k}}^{rr}(t)\right)^{*}\Xi_{13;\vec{k}}^{rr}(t)=\Xi_{12;\vec{k}}^{rr}(t)\,,\] \[\chi_{23;\vec{k}}^{rr}(t)\left(\Xi_{13;\vec{k}}^{rr}(t)\right)^{*}-\left(\Xi_{23;\vec{k}}^{rr}(t)\right)^{*}\chi_{13;\vec{k}}^{rr}(t)=-\chi_{12;\vec{k}}^{rr}(t)\,,\] \[\Xi_{12;\vec{k}}^{rr}(t)\Xi_{23;\vec{k}}^{rr}(t)-\left(\chi_{12;\vec{k}}^{rr}(t)\right)^{*}\chi_{23;\vec{k}}^{rr}(t)=\Xi_{13;\vec{k}}^{rr}(t)\,,\] \[\Xi_{23;\vec{k}}^{rr}(t)\chi_{12;\vec{k}}^{rr}(t)+\left(\Xi_{12;\vec{k}}^{rr}(t)\right)^{*}\chi_{23;\vec{k}}^{rr}(t)=\chi_{13;\vec{k}}^{rr}(t)\,,\] \[\left(\chi_{12;\vec{k}}^{rr}(t)\right)^{*}\chi_{13;\vec{k}}^{rr}(t)+\left(\Xi_{12;\vec{k}}^{rr}(t)\right)^{*}\Xi_{13;\vec{k}}^{rr}(t)=\Xi_{23;\vec{k}}^{rr}(t)\,,\] \[\chi_{12;\vec{k}}^{rr}(t)\Xi_{13;\vec{k}}^{rr}(t)-\Xi_{12;\vec{k}}^{rr}(t)\chi_{13;\vec{k}}^{rr}(t)=-\chi_{23;\vec{k}}^{rr}(t)\,,\] \[\xi_{13;\vec{k}}^{rr}=\xi_{12;\vec{k}}^{rr}+\xi_{23;\vec{k}}^{rr}\quad,\qquad\xi_{ij;\vec{k}}^{rr}=\arctan\left(\frac{\left|\chi_{ij;\vec{k}}^{rr}\right|}{\left|\Xi_{ij;\vec{k}}^{rr}\right|}\right)\,.\]
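These relations can be verified numerically. A short Python sketch (with illustrative mass, momentum and torsion values) that evaluates \(\Xi^{++}_{ij;\vec{k}}\) and \(\chi^{++}_{ij;\vec{k}}\) and tests Eq. (11) together with \(\xi_{13;\vec{k}}=\xi_{12;\vec{k}}+\xi_{23;\vec{k}}\):

```python
import numpy as np

k3, T3 = 2e-4, 1e-4                 # momentum and torsion (eV, illustrative)
m = [1e-4, 2e-4, 5e-4]              # illustrative mass values (eV)

def bogoliubov(i, j, spin=+1):
    """Xi^{++} and chi^{++} for the pair (m_i, m_j) at spin = +/- 1."""
    mi, mj = m[i] + spin * 1.5 * T3, m[j] + spin * 1.5 * T3  # tilde masses
    Ei, Ej = np.hypot(k3, mi), np.hypot(k3, mj)
    N = np.sqrt((Ei + mi) * (Ej + mj) / (4 * Ei * Ej))
    Xi = N * (1 + k3**2 / ((Ei + mi) * (Ej + mj)))
    chi = N * (k3 / (Ej + mj) - k3 / (Ei + mi))
    return Xi, chi

def xi(i, j):
    Xi, chi = bogoliubov(i, j)
    return np.arctan2(chi, Xi)      # xi_ij = arctan(chi / Xi)

for (i, j) in [(0, 1), (0, 2), (1, 2)]:
    Xi, chi = bogoliubov(i, j)
    assert np.isclose(Xi**2 + chi**2, 1.0)           # canonicity, Eq. (11)
assert np.isclose(xi(0, 2), xi(0, 1) + xi(1, 2))     # xi_13 = xi_12 + xi_23
```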
## IV Neutrino oscillations with background torsion
From an analysis of currents and charges similar to that shown in ref.[18], which we report for the reader's convenience in the Appendix, we define the flavor charges in the presence of torsion in the following way
\[::\,\,Q_{\nu_{\sigma}}\,\,::=\sum_{r}\int d^{3}\mathbf{k}\left(\alpha^{\tau\dagger}_ {\vec{k},\nu_{\sigma}}(t)\alpha^{\tau}_{\vec{k},\nu_{\sigma}}(t)-\beta^{\tau \dagger}_{\vec{k},\nu_{\sigma}}(t)\beta^{\tau}_{\vec{k},\nu_{\sigma}}(t)\right)\,, \qquad\sigma=e,\mu,\tau\]
where \(::\cdots::\) is the normal ordering with respect to the flavor vacuum state \(\left|0\right\rangle_{f}\).
The oscillation formulas are obtained by taking expectation values of the above charges on the (flavor) neutrino state. Consider for example an initial electron neutrino state defined as \(\left|\nu_{e}\right\rangle=\alpha^{\uparrow\dagger}_{\vec{k},\nu_{e}}\left(0\right)\left|0\right\rangle_{f}\). Working in the Heisenberg picture, we obtain the neutrino flavor oscillation formula at a fixed momentum \(\vec{k}\) and spin \((\uparrow)\):
\[\mathcal{Q}^{\uparrow\vec{k}}_{\nu_{\rho}\rightarrow\nu_{\sigma}}(t) \equiv\left\langle\nu^{\uparrow}_{\vec{k},\rho}(t)\right|::\,\,Q_{\nu_{\sigma}}\,\,::\,\left|\nu^{\uparrow}_{\vec{k},\rho}(t)\right\rangle-{}_{f}\left\langle 0\right|::\,\,Q_{\nu_{\sigma}}\,\,::\,\left|0\right\rangle_{f}\] \[=\left|\left\{\alpha^{\uparrow}_{\vec{k},\nu_{\sigma}}(t),\alpha^{\uparrow\dagger}_{\vec{k},\nu_{\rho}}(0)\right\}\right|^{2}+\left|\left\{\beta^{\uparrow\dagger}_{-\vec{k},\nu_{\sigma}}(t),\alpha^{\uparrow\dagger}_{\vec{k},\nu_{\rho}}(0)\right\}\right|^{2}\,.\]
Similarly for the antiparticle:
\[\mathcal{Q}^{\uparrow\vec{k}}_{\overline{\nu}_{\rho}\rightarrow\overline{\nu}_{\sigma}}(t) \equiv\left\langle\overline{\nu}^{\uparrow}_{\vec{k},\rho}(t)\right|::\,\,Q_{\nu_{\sigma}}\,\,::\,\left|\overline{\nu}^{\uparrow}_{\vec{k},\rho}(t)\right\rangle-{}_{f}\left\langle 0\right|::\,\,Q_{\nu_{\sigma}}\,\,::\,\left|0\right\rangle_{f}\] \[=-\left|\left\{\beta^{\uparrow}_{\vec{k},\nu_{\sigma}}(t),\beta^{\uparrow\dagger}_{\vec{k},\nu_{\rho}}(0)\right\}\right|^{2}-\left|\left\{\alpha^{\uparrow\dagger}_{-\vec{k},\nu_{\sigma}}(t),\beta^{\uparrow\dagger}_{\vec{k},\nu_{\rho}}(0)\right\}\right|^{2}\,.\]
By construction these are indeed probabilities, since they are contained in the range \([0,1]\) and they add up to unity when summed over all the final flavors, \(\mathcal{Q}^{\uparrow\vec{k}}_{\nu_{\rho}\rightarrow\nu_{e}}(t)+\mathcal{Q}^{\uparrow\vec{k}}_{\nu_{\rho}\rightarrow\nu_{\mu}}(t)+\mathcal{Q}^{\uparrow\vec{k}}_{\nu_{\rho}\rightarrow\nu_{\tau}}(t)=1\). It is also easy to check that, in the absence of torsion, they reduce to the Pontecorvo formulae in the ultrarelativistic limit \(|\vec{k}|\gg m_{1},m_{2},m_{3}\). Similar relationships are fulfilled in the case of spin down \((\downarrow)\). Defining for convenience \(\Delta^{r}_{ij;\vec{k}}\equiv\frac{E^{r}_{j,\vec{k}}-E^{r}_{i,\vec{k}}}{2}\) and \(\Omega^{r}_{ij;\vec{k}}\equiv\frac{E^{r}_{j,\vec{k}}+E^{r}_{i,\vec{k}}}{2}\), the formulae for neutrino oscillation at fixed spin \((\uparrow)\) and momentum \(\vec{k}\) read:
\[\mathcal{Q}^{\uparrow\vec{k}}_{\nu_{e}\rightarrow\nu_{e}}(t) =1-\sin^{2}(2\theta_{12})\cos^{4}(\theta_{13})\left[\left|\Xi^{++}_{12;\vec{k}}\right|^{2}\sin^{2}\left(\Delta^{+}_{12;\vec{k}}t\right)+\left|\chi^{++}_{12;\vec{k}}\right|^{2}\sin^{2}\left(\Omega^{+}_{12;\vec{k}}t\right)\right]\] \[-\sin^{2}(2\theta_{13})\cos^{2}(\theta_{12})\left[\left|\Xi^{++}_{13;\vec{k}}\right|^{2}\sin^{2}\left(\Delta^{+}_{13;\vec{k}}t\right)+\left|\chi^{++}_{13;\vec{k}}\right|^{2}\sin^{2}\left(\Omega^{+}_{13;\vec{k}}t\right)\right]\] \[-\sin^{2}(2\theta_{13})\sin^{2}(\theta_{12})\left[\left|\Xi^{++}_{23;\vec{k}}\right|^{2}\sin^{2}\left(\Delta^{+}_{23;\vec{k}}t\right)+\left|\chi^{++}_{23;\vec{k}}\right|^{2}\sin^{2}\left(\Omega^{+}_{23;\vec{k}}t\right)\right]\,, \tag{12}\]
\[\mathcal{Q}^{\uparrow\vec{k}}_{\nu_{e}\rightarrow\nu_{\mu}}(t) =2J_{CP}\left[\left|\Xi^{++}_{12;\vec{k}}\right|^{2}\sin\left(2\Delta^{+}_{12;\vec{k}}t\right)-\left|\chi^{++}_{12;\vec{k}}\right|^{2}\sin\left(2\Omega^{+}_{12;\vec{k}}t\right)\right.\] \[+\left(\left|\Xi^{++}_{12;\vec{k}}\right|^{2}-\left|\chi^{++}_{13;\vec{k}}\right|^{2}\right)\sin\left(2\Delta^{+}_{23;\vec{k}}t\right)+\left(\left|\chi^{++}_{12;\vec{k}}\right|^{2}-\left|\chi^{++}_{13;\vec{k}}\right|^{2}\right)\sin\left(2\Omega^{+}_{23;\vec{k}}t\right)\] \[\left.-\left|\Xi^{++}_{13;\vec{k}}\right|^{2}\sin\left(2\Delta^{+}_{13;\vec{k}}t\right)+\left|\chi^{++}_{13;\vec{k}}\right|^{2}\sin\left(2\Omega^{+}_{13;\vec{k}}t\right)\right]\] \[+\cos^{2}(\theta_{13})\sin(\theta_{13})\left[\cos\delta\sin(2\theta_{12})\sin(2\theta_{23})+4\cos^{2}(\theta_{12})\sin\theta_{13}\sin^{2}\theta_{23}\right]\times\] \[\times\left[\left|\Xi^{++}_{13;\vec{k}}\right|^{2}\sin^{2}(\Delta^{+}_{13;\vec{k}}t)+\left|\chi^{++}_{13;\vec{k}}\right|^{2}\sin^{2}(\Omega^{+}_{13;\vec{k}}t)\right]\] \[-\cos^{2}\theta_{13}\sin\theta_{13}\left[\cos\delta\sin(2\theta_{12})\sin(2\theta_{23})-4\sin^{2}\theta_{12}\sin\theta_{13}\sin^{2}\theta_{23}\right]\times\] \[\times\left[\left|\Xi^{++}_{23;\vec{k}}\right|^{2}\sin^{2}(\Delta^{+}_{23;\vec{k}}t)+\left|\chi^{++}_{23;\vec{k}}\right|^{2}\sin^{2}(\Omega^{+}_{23;\vec{k}}t)\right]\] \[+\cos^{2}\theta_{13}\sin(2\theta_{12})\left[\left(\cos^{2}\theta_{23}-\sin^{2}\theta_{23}\sin^{2}\theta_{13}\right)\sin(2\theta_{12})\right.\] \[\left.+\cos\delta\cos(2\theta_{12})\sin\theta_{13}\sin(2\theta_{23})\right]\left[\left|\Xi^{++}_{12;\vec{k}}\right|^{2}\sin^{2}(\Delta^{+}_{12;\vec{k}}t)+\left|\chi^{++}_{12;\vec{k}}\right|^{2}\sin^{2}(\Omega^{+}_{12;\vec{k}}t)\right]\,, \tag{13}\]
\[\mathcal{Q}^{\uparrow\vec{k}}_{\nu_{e}\rightarrow\nu_{\tau}}(t)=1-\mathcal{Q}^{\uparrow\vec{k}}_{\nu_{e}\rightarrow\nu_{e}}(t)-\mathcal{Q}^{\uparrow\vec{k}}_{\nu_{e}\rightarrow\nu_{\mu}}(t)\,, \tag{14}\]
whose \(CP\)-violating part is \(-2J_{CP}\) times the same bracket appearing in Eq. (13), i.e. it has opposite overall sign. The corresponding formulae for spin down \((\downarrow)\) are obtained by replacing all the \(+\) superscripts with \(-\).
When \(T^{3}\gg m_{i},|\vec{k}|\), the energies are dominated by torsion, so that \((E^{\pm}_{\vec{k},i})^{2}=\vec{k}^{2}+(\widetilde{m}^{\pm}_{i})^{2}\simeq(\widetilde{m}^{\pm}_{i})^{2}\simeq\left(\pm\frac{3}{2}T^{3}\right)^{2}\); hence \(E^{+}\simeq E^{-}\), and both the Bogoliubov coefficients \(\Xi^{rr},\chi^{rr}\) and the phase factors \(\Delta^{r},\Omega^{r}\) become essentially independent of the spin. Notice that a torsion large enough can effectively inhibit the flavor oscillations, since when \(T^{3}\gg m_{i}\) the energy differences due to the various masses (e.g. \(m_{1}\) and \(m_{2}\)) become irrelevant with respect to the common torsional energy term.
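The spin dependence can be made quantitative with a small numerical sketch of Eq. (12) (Python; the masses, momentum and torsion values are illustrative, and the mixing angles are only indicative of the standard best-fit values):

```python
import numpy as np

th12, th13 = 0.59, 0.15                 # mixing angles (rad, illustrative)
m = np.array([1e-4, 2e-4, 5e-4])        # neutrino masses (eV, illustrative)
k3, T3 = 2e-4, 2e-4                     # momentum and torsion (eV)

def coeffs(i, j, spin):
    """(|Xi|^2, |chi|^2, Delta, Omega) for the pair (i, j) at spin = +/- 1."""
    mt = m + spin * 1.5 * T3            # spin-dependent masses
    E = np.sqrt(k3**2 + mt**2)
    A, B = E[i] + mt[i], E[j] + mt[j]
    N = np.sqrt(A * B / (4 * E[i] * E[j]))
    Xi, chi = N * (1 + k3**2 / (A * B)), N * (k3 / B - k3 / A)
    return Xi**2, chi**2, (E[j] - E[i]) / 2, (E[j] + E[i]) / 2

def P_ee(t, spin):
    """Survival probability of Eq. (12) at fixed spin (+1 up, -1 down)."""
    weights = {(0, 1): np.sin(2 * th12)**2 * np.cos(th13)**4,
               (0, 2): np.sin(2 * th13)**2 * np.cos(th12)**2,
               (1, 2): np.sin(2 * th13)**2 * np.sin(th12)**2}
    P = 1.0
    for (i, j), w in weights.items():
        Xi2, chi2, D, O = coeffs(i, j, spin)
        P = P - w * (Xi2 * np.sin(D * t)**2 + chi2 * np.sin(O * t)**2)
    return P

t = np.linspace(0.0, 5e5, 6)            # times in eV^-1 (natural units)
print(P_ee(t, +1))                      # spin up
print(P_ee(t, -1))                      # spin down: different amplitudes/frequencies
```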
Notice also that in the computations we neglected spin-flip transitions due to the torsion term. This analysis will be carried out in future work.
### \(CP\) Violation and Flavor Vacuum Condensation
Three flavor mixing involves a \(CP\) violation due to the Dirac phase in the mixing matrix. We now wish to study the impact of torsion on such violation. For fixed spin orientation, say \(\uparrow\), one can define the \(CP\) asymmetry \(\Delta^{\rho\sigma}_{\uparrow;CP}\) as
\[\Delta^{\rho\sigma}_{\uparrow;CP}(t) \equiv\mathcal{Q}^{\uparrow\vec{k}}_{\ \nu_{\rho}\to\nu_{\sigma}}(t)+\mathcal{Q}^{\uparrow\vec{k}}_{\ \overline{\nu}_{\rho}\to\overline{\nu}_{\sigma}}(t)\] \[=\left|\left\{\alpha^{\uparrow}_{\vec{k},\nu_{\sigma}}(t),\alpha^{\uparrow\dagger}_{\vec{k},\nu_{\rho}}(0)\right\}\right|^{2}+\left|\left\{\beta^{\uparrow\dagger}_{-\vec{k},\nu_{\sigma}}(t),\alpha^{\uparrow\dagger}_{\vec{k},\nu_{\rho}}(0)\right\}\right|^{2}\] \[-\left|\left\{\alpha^{\uparrow\dagger}_{-\vec{k},\nu_{\sigma}}(t),\beta^{\uparrow\dagger}_{\vec{k},\nu_{\rho}}(0)\right\}\right|^{2}-\left|\left\{\beta^{\uparrow}_{\vec{k},\nu_{\sigma}}(t),\beta^{\uparrow\dagger}_{\vec{k},\nu_{\rho}}(0)\right\}\right|^{2}\,.\]
Figure 3: Plots of the oscillation formulae in a constant torsion background: in the left-hand panel, \(\mathcal{Q}^{\uparrow\vec{k}}_{\ \nu_{e}\to\nu_{e}}(t)\) (blue) and \(\mathcal{Q}^{\downarrow\vec{k}}_{\ \nu_{e}\to\nu_{e}}(t)\) (red) as functions of time. Torsion was picked to be comparable to the momentum, \(T^{3}=2\times 10^{-4}\ \mathrm{eV}\). In the right panel, detail of the same formulae and comparison with the corresponding ultrarelativistic (Pontecorvo-like) limit (dashed).
Figure 2: Plots of the oscillation formulae in a constant torsion background: in the left-hand panel, \(\mathcal{Q}^{\uparrow\vec{k}}_{\ \nu_{e}\to\nu_{\mu}}(t)\) (blue) and \(\mathcal{Q}^{\downarrow\vec{k}}_{\ \nu_{e}\to\nu_{\mu}}(t)\) (red) as functions of time. Torsion was picked to be comparable to the momentum, \(T^{3}=2\times 10^{-4}\ \mathrm{eV}\). In the right panel, detail of the same formulae and comparison with the corresponding ultrarelativistic (Pontecorvo-like) limit (dashed).
Here \(\rho,\sigma=e,\mu,\tau\) and the second equality provides the explicit expression in terms of the flavor annihilators. Notice that a \(+\) sign appears in front of the probabilities for the antineutrinos, in place of the standard \(-\), because the antineutrino states already carry a negative flavor charge \(Q_{\sigma}\). Recalling that the following relations are satisfied: \(\sum_{\sigma}Q_{\nu_{\sigma}}(t)=Q\,,\,\left\langle\nu_{\rho}\right|Q\left|\nu _{\rho}\right\rangle=1\,,\) and \(\left\langle\overline{\nu}_{\rho}\right|Q\left|\overline{\nu}_{\rho}\right\rangle =-1\,,\) it is easy to show that \(\sum_{\sigma}\Delta_{\uparrow;CP}^{\rho\sigma}=0\,,\) for \(\rho,\sigma=e,\mu,\tau\,.\) For the \(\nu_{e}\rightarrow\nu_{\mu}\) transition, the \(CP\) asymmetry reads explicitly
\[\Delta_{\uparrow;CP}^{e\mu}(t) =4J_{CP}\left[\left|\Xi_{12;\vec{k}}^{++}\right|^{2}\sin\left(2 \Delta_{12;\vec{k}}^{+}t\right)-\left|\chi_{12;\vec{k}}^{++}\right|^{2}\sin \left(2\Omega_{12;\vec{k}}^{+}t\right)+\right.\] \[+\left(\left|\Xi_{12;\vec{k}}^{++}\right|^{2}-\left|\chi_{13; \vec{k}}^{++}\right|^{2}\right)\sin\left(2\Delta_{23;\vec{k}}^{+}t\right)+ \left(\left|\chi_{12;\vec{k}}^{++}\right|^{2}-\left|\chi_{13;\vec{k}}^{++} \right|^{2}\right)\sin\left(2\Omega_{23;\vec{k}}^{+}t\right)+\] \[\left.-\left|\Xi_{13;\vec{k}}^{++}\right|^{2}\sin\left(2\Delta_{ 13;\vec{k}}^{+}t\right)+\left|\chi_{13;\vec{k}}^{++}\right|^{2}\sin\left(2 \Omega_{13;\vec{k}}^{+}t\right)\right]\,.\]
Furthermore, \(\Delta_{r;CP}^{e\mu}(t)=-\Delta_{r;CP}^{\mu e}(t)\) with \(r=\uparrow,\downarrow\). The corresponding asymmetry for the opposite spin orientation \((\downarrow)\) reads
\[\Delta_{\downarrow;CP}^{e\mu}(t)= 4J_{CP}\left[\left|\Xi_{12;\vec{k}}^{--}\right|^{2}\sin\left(2 \Delta_{12;\vec{k}}^{-}t\right)-\left|\chi_{12;\vec{k}}^{--}\right|^{2}\sin \left(2\Omega_{12;\vec{k}}^{-}t\right)+\right.\] \[+\left(\left|\Xi_{12;\vec{k}}^{--}\right|^{2}-\left|\chi_{13;\vec {k}}^{--}\right|^{2}\right)\sin\left(2\Delta_{23;\vec{k}}^{-}t\right)+\left( \left|\chi_{12;\vec{k}}^{--}\right|^{2}-\left|\chi_{13;\vec{k}}^{--}\right|^{2} \right)\sin\left(2\Omega_{23;\vec{k}}^{-}t\right)+\] \[\left.-\left|\Xi_{13;\vec{k}}^{--}\right|^{2}\sin\left(2\Delta_{1 3;\vec{k}}^{-}t\right)+\left|\chi_{13;\vec{k}}^{--}\right|^{2}\sin\left(2 \Omega_{13;\vec{k}}^{-}t\right)\right]\,.\]
Remarkably, the presence of torsion also induces a different \(CP\) asymmetry for the two spin orientations. This is visible in Figure 4, showing \(\Delta_{\uparrow;CP}^{e\mu}(t)\) and \(\Delta_{\downarrow;CP}^{e\mu}(t)\) as functions of time for the same values of the parameters used in the plots of the oscillation formulae.
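A sketch of the corresponding numerical evaluation of \(\Delta_{r;CP}^{e\mu}(t)\) for the two spin orientations (Python; \(J_{CP}\) is the standard Jarlskog combination, and all parameter values are illustrative):

```python
import numpy as np

th12, th13, th23, delta = 0.59, 0.15, 0.84, -1.6   # angles and CP phase (rad)
J_CP = (np.cos(th13) * np.sin(2 * th12) * np.sin(2 * th13)
        * np.sin(2 * th23) * np.sin(delta) / 8)     # Jarlskog combination
m = np.array([1e-4, 2e-4, 5e-4])                    # masses (eV, illustrative)
k3, T3 = 2e-4, 2e-4                                 # momentum and torsion (eV)

def coeffs(i, j, spin):
    """(|Xi|^2, |chi|^2, Delta, Omega) for the pair (i, j) at spin = +/- 1."""
    mt = m + spin * 1.5 * T3
    E = np.sqrt(k3**2 + mt**2)
    A, B = E[i] + mt[i], E[j] + mt[j]
    N = np.sqrt(A * B / (4 * E[i] * E[j]))
    return ((N * (1 + k3**2 / (A * B)))**2, (N * (k3 / B - k3 / A))**2,
            (E[j] - E[i]) / 2, (E[j] + E[i]) / 2)

def Delta_CP_emu(t, spin):
    """CP asymmetry for nu_e -> nu_mu at fixed spin, following the text."""
    Xi12, chi12, D12, O12 = coeffs(0, 1, spin)
    Xi13, chi13, D13, O13 = coeffs(0, 2, spin)
    _, _, D23, O23 = coeffs(1, 2, spin)
    return 4 * J_CP * (Xi12 * np.sin(2 * D12 * t) - chi12 * np.sin(2 * O12 * t)
                       + (Xi12 - chi13) * np.sin(2 * D23 * t)
                       + (chi12 - chi13) * np.sin(2 * O23 * t)
                       - Xi13 * np.sin(2 * D13 * t) + chi13 * np.sin(2 * O13 * t))

t = np.linspace(0.0, 5e5, 6)
print(Delta_CP_emu(t, +1) - Delta_CP_emu(t, -1))    # spin-dependent asymmetry
```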
We conclude the section by making some observations on the condensate structure of the flavor vacuum in the presence of torsion. The flavor vacuum \(\left|0_{f}(t)\right\rangle\) has the structure of a condensate of particle-antiparticle pairs with definite masses. While this is true also in the absence of torsion, the latter breaks the spin symmetry, resulting in a different condensation density for particles of spin up and spin down. Additionally, the three flavor scheme features condensation densities \(\mathcal{N}_{j}\) that depend on the mass index \(j=1,2,3\). This is at odds with the simpler two flavor scheme, which instead features a single condensation density. The number densities can be evaluated by computing the expectation values of the number operators for a fixed mass index on \(\left|0_{f}(t)\right\rangle\). We have
\[\mathcal{N}_{1;\vec{k}}^{\uparrow} =_{f}\left\langle 0(t)\right|N_{\alpha_{1},\vec{k}}^{\uparrow} \left|0(t)\right\rangle_{f}={}_{f}\left\langle 0(t)\right|N_{\beta_{1},\vec{k}}^{ \uparrow}\left|0(t)\right\rangle_{f}\] \[=s_{12}^{2}c_{13}^{2}\left|\chi_{12;\vec{k}}^{++}\right|^{2}+s_{1 3}^{2}\left|\chi_{13;\vec{k}}^{++}\right|^{2}\,, \tag{15}\]
\[\mathcal{N}_{2;\vec{k}}^{\uparrow}=_{f}\left\langle 0(t)\right|N_{\alpha_{2},\vec{k}}^{\uparrow}\left|0(t)\right\rangle_{f}={}_{f}\left\langle 0(t)\right|N_{\beta_{2},\vec{k}}^{\uparrow}\left|0(t)\right\rangle_{f}\] \[=\left|-s_{12}c_{23}+e^{i\delta}c_{12}s_{23}s_{13}\right|^{2}\left|\chi_{12;\vec{k}}^{++}\right|^{2}+s_{23}^{2}c_{13}^{2}\left|\chi_{23;\vec{k}}^{++}\right|^{2}\,, \tag{16}\]
\[\mathcal{N}_{3;\vec{k}}^{\uparrow}=_{f}\left\langle 0(t)\right|N_{ \alpha_{3},\vec{k}}^{\uparrow}\left|0(t)\right\rangle_{f}={}_{f}\left\langle 0 (t)\right|N_{\beta_{3},\vec{k}}^{\uparrow}\left|0(t)\right\rangle_{f}\] \[=\left|-c_{12}s_{23}+e^{i\delta}s_{12}c_{23}s_{13}\right|^{2} \left|\chi_{23;\vec{k}}^{++}\right|^{2}+\left|s_{12}s_{23}+e^{i\delta}c_{12}c _{23}s_{13}\right|^{2}\left|\chi_{13;\vec{k}}^{++}\right|^{2}\, \tag{17}\]
where, by definition, \(N_{\alpha_{j},\vec{k}}^{r}=\alpha_{\vec{k},j}^{r\dagger}\alpha_{\vec{k},j}^{r}\) and \(N_{\beta_{j},\vec{k}}^{r}=\beta_{\vec{k},j}^{r\dagger}\beta_{\vec{k},j}^{r}\). For the opposite spin orientation \((\downarrow)\) one has the explicit results:
\[\mathcal{N}_{1;\vec{k}}^{\downarrow}=_{f}\left\langle 0(t) \right|N_{\alpha_{1},\vec{k}}^{\downarrow}\left|0(t)\right\rangle_{f}={}_{f} \left\langle 0(t)\right|N_{\beta_{1},\vec{k}}^{\downarrow}\left|0(t)\right\rangle_{f}\] \[=s_{12}^{2}c_{13}^{2}\left|\chi_{12;\vec{k}}^{--}\right|^{2}+s_{1 3}^{2}\left|\chi_{13;\vec{k}}^{--}\right|^{2}\,,\]
\[\mathcal{N}_{2;\vec{k}}^{\downarrow}=_{f}\left\langle 0(t)\right|N_{\alpha_{2},\vec{k}}^{\downarrow}\left|0(t)\right\rangle_{f}={}_{f}\left\langle 0(t)\right|N_{\beta_{2},\vec{k}}^{\downarrow}\left|0(t)\right\rangle_{f}\] \[=\left|-s_{12}c_{23}+e^{i\delta}c_{12}s_{23}s_{13}\right|^{2}\left|\chi_{12;\vec{k}}^{--}\right|^{2}+s_{23}^{2}c_{13}^{2}\left|\chi_{23;\vec{k}}^{--}\right|^{2}\,,\]
\[\mathcal{N}_{3;\vec{k}}^{\downarrow}=_{f}\left\langle 0(t) \right|N_{\alpha_{3},\vec{k}}^{\downarrow}\left|0(t)\right\rangle_{f}={}_{f} \left\langle 0(t)\right|N_{\beta_{3},\vec{k}}^{\downarrow}\left|0(t)\right\rangle_{f}\] \[=\left|-c_{12}s_{23}+e^{i\delta}s_{12}c_{23}s_{13}\right|^{2} \left|\chi_{23;\vec{k}}^{--}\right|^{2}+\left|s_{12}s_{23}+e^{i\delta}c_{12}c _{23}s_{13}\right|^{2}\left|\chi_{13;\vec{k}}^{--}\right|^{2}\,.\]
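To make the structure of Eqs. (15)-(17) concrete, the following is a minimal numerical sketch of how the mixing angles weight the Bogoliubov factors \(\left|\chi_{ij}\right|^{2}\) in the condensation densities; the angles, the \(CP\) phase, and the \(\left|\chi_{ij}\right|^{2}\) values below are illustrative placeholders, not the parameter values used in the plots.

```python
import numpy as np

# Illustrative mixing angles and CP phase (placeholders, in radians).
th12, th23, th13, delta = 0.59, 0.84, 0.15, 1.2
s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)

# Placeholder Bogoliubov factors |chi_ij|^2 at a fixed momentum and spin;
# in the text these depend on |k|, the masses, and the torsion term.
chi12_sq, chi13_sq, chi23_sq = 1e-3, 5e-4, 8e-4

# Condensation densities of Eqs. (15)-(17):
N1 = s12**2 * c13**2 * chi12_sq + s13**2 * chi13_sq
N2 = (abs(-s12 * c23 + np.exp(1j * delta) * c12 * s23 * s13)**2 * chi12_sq
      + s23**2 * c13**2 * chi23_sq)
N3 = (abs(-c12 * s23 + np.exp(1j * delta) * s12 * c23 * s13)**2 * chi23_sq
      + abs(s12 * s23 + np.exp(1j * delta) * c12 * c23 * s13)**2 * chi13_sq)

print(N1, N2, N3)  # three distinct densities, one per mass index
```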
We report in Figure 5 the plots of \(\mathcal{N}_{i;\vec{k}}^{\uparrow}\) and \(\mathcal{N}_{i;\vec{k}}^{\downarrow}\), with \(i=1,2,3\), as functions of \(\left|\vec{k}\right|\) for the parameter values used in the plots of the oscillation formulae.
## IV Conclusions
We analyzed the Einstein-Cartan theory and, by studying neutrino propagation on a torsion background in the framework of QFT, we derived new oscillation formulae that depend on the spin orientations of the neutrino fields. Indeed, we have shown that the energy splitting induced by the torsion term affects the oscillation frequencies and the Bogoliubov coefficients, which represent the amplitudes of the oscillation formulae. We considered flat space-time and a constant torsion term.
The spin dependence of the oscillations is maximal for values of torsion comparable to the neutrino momentum and masses, while much larger values of torsion lead to flavor oscillations that are almost independent of the spin. Moreover, a sufficiently large torsion can effectively inhibit the flavor oscillations. Such behaviours also characterize the \(CP\) asymmetry. Torsion effects on neutrino oscillations are relevant in non-relativistic regimes. Therefore, experiments studying neutrinos with very low momenta, such as PTOLEMY, could provide verification of such results in the future.
Figure 5: (Left panel) Plots of \(\mathcal{N}_{i;\vec{k}}^{\uparrow}\) as a function of \(\left|\vec{k}\right|\) (in \(\mathrm{eV}\)): \(N_{1}\) (Blue solid), \(N_{2}\) (Red dashed) and \(N_{3}\) (Orange dotted), for the same values of the parameters used in the plots of the oscillation formulae. (Right panel) Plots of \(\mathcal{N}_{i;\vec{k}}^{\downarrow}\) as a function of \(\left|\vec{k}\right|\) for the same choice of parameters.
## Appendix A: Charges for Three Flavor Mixing with Torsion
Charges are introduced through the symmetries of the Lagrangian in such a way that the \(SU(3)\) algebra is closed. Invariance under a global \(U(1)\) phase transformation of the type \(\Psi_{m}^{\prime}=e^{i\alpha}\Psi_{m}\) introduces, via Noether's theorem, the charge \(Q=\int d^{3}\mathbf{x}\,\overline{\Psi}_{m}(x)\gamma^{0}\Psi_{m}\), which represents the total charge of the system. Considering the transformation of the field \(\Psi_{m}\) under global \(SU(3)\), we obtain Noether charges \(Q_{m,j}\) of the form \(Q_{m,j}(t)\equiv\int d^{3}\mathbf{x}\,J_{m,j}^{0}(x)\,,\) with \(j=1,2,\cdots,8\). Such charges satisfy the \(SU(3)\) algebra: \([Q_{m,j}(t),Q_{m,k}(t)]=if_{jkl}Q_{m,l}(t)\,.\) Note that only the charges \(Q_{m,3}\) and \(Q_{m,8}\) are time-independent. Appropriate combinations of these charges allow the following quantities to be defined: \(Q_{1}\equiv\frac{1}{3}Q+Q_{m,3}+\frac{1}{\sqrt{3}}Q_{m,8}\,,\)\(Q_{2}\equiv\frac{1}{3}Q-Q_{m,3}+\frac{1}{\sqrt{3}}Q_{m,8}\,,\) and \(Q_{3}\equiv\frac{1}{3}Q-\frac{2}{\sqrt{3}}Q_{m,8}\,.\) The normal ordering of the charge operators is:
\[:\,Q_{i}\,:\,\equiv\sum_{r}\int d^{3}\mathbf{k}\,\left(\alpha_{\vec{k},i}^{r\dagger }\alpha_{\vec{k},i}^{r}-\beta_{-\vec{k},i}^{r\dagger}\beta_{-\vec{k},i}^{r} \right)\,,\qquad i=1,2,3\]
where \(:\cdots:\) denotes normal ordering with respect to the vacuum state \(\left|0\right\rangle_{m}\).
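As a quick numerical cross-check of the \(SU(3)\) structure underlying these charges, one can verify that the Gell-Mann generators \(T_{j}=\lambda_{j}/2\) close the algebra \([T_{j},T_{k}]=if_{jkl}T_{l}\), extracting the structure constants from the trace identity \(f_{jkl}=-2i\,\mathrm{Tr}([T_{j},T_{k}]T_{l})\); a minimal sketch:

```python
import numpy as np

# The eight Gell-Mann matrices.
lam = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], complex),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], complex),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], complex),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], complex),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], complex),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], complex),
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], complex) / np.sqrt(3),
]
T = [m / 2 for m in lam]  # generators T_j = lambda_j / 2

# Structure constants f_{jkl} = -2i Tr([T_j, T_k] T_l), using Tr(T_a T_b) = delta_ab / 2.
f = np.zeros((8, 8, 8))
for j in range(8):
    for k in range(8):
        comm = T[j] @ T[k] - T[k] @ T[j]
        for l in range(8):
            f[j, k, l] = (-2j * np.trace(comm @ T[l])).real

# Closure: [T_j, T_k] = i f_{jkl} T_l for all j, k.
for j in range(8):
    for k in range(8):
        comm = T[j] @ T[k] - T[k] @ T[j]
        assert np.allclose(comm, 1j * sum(f[j, k, l] * T[l] for l in range(8)))
print("f_123 =", f[0, 1, 2])  # prints 1.0
```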
Considering the transformation of the field \(\Psi_{f}\) under global \(SU(3)\), we obtain charges \(Q_{f,j}\) of the form \(Q_{f,j}(t)\equiv\int d^{3}\mathbf{x}\,J_{f,j}^{0}(x)\,,\) with \(j=1,2,\cdots,8\). Such charges satisfy the \(SU(3)\) algebra: \([Q_{f,j}(t),Q_{f,k}(t)]=if_{jkl}Q_{f,l}(t)\,.\) Note that the charges \(Q_{f,3}(t)\) and \(Q_{f,8}(t)\) are time-dependent due to the non-zero off-diagonal terms in the mass matrix \(\mathbf{M}\). The current \(J_{f,j}^{\mu}(x)\) is linked to the current \(J_{m,j}^{\mu}(x)\) via the generator of the mixing transformation \(I_{\theta}\) in the following way: \(J_{f,j}^{\mu}(x)=I_{\theta}^{-1}(t)J_{m,j}^{\mu}(x)I_{\theta}(t)\,,\) with \(j=1,2,\cdots,8\). In analogy to the previous case, the flavor charges are introduced as follows: \(Q_{\nu_{e}}(t)\equiv\frac{1}{3}Q+Q_{f,3}(t)+\frac{1}{\sqrt{3}}Q_{f,8}(t)\,,\)\(Q_{\nu_{\mu}}(t)\equiv\frac{1}{3}Q-Q_{f,3}(t)+\frac{1}{\sqrt{3}}Q_{f,8}(t)\,,\) and \(Q_{\nu_{\tau}}(t)\equiv\frac{1}{3}Q-\frac{2}{\sqrt{3}}Q_{f,8}(t)\,,\) with \(Q_{\nu_{e}}(t)+Q_{\nu_{\mu}}(t)+Q_{\nu_{\tau}}(t)=Q\,.\) It is possible to express the normal ordering of these charges in terms of the flavor annihilators:
\[::\,Q_{\nu_{\sigma}}\,::=\sum_{r}\int d^{3}\mathbf{k}\left(\alpha_{\vec{k},\nu_{\sigma}}^{r\dagger}(t)\alpha_{\vec{k},\nu_{\sigma}}^{r}(t)-\beta_{\vec{k},\nu_{\sigma}}^{r\dagger}(t)\beta_{\vec{k},\nu_{\sigma}}^{r}(t)\right)\,,\qquad\sigma=e,\mu,\tau\]
where \(::\cdots::\) denotes normal ordering with respect to the flavor vacuum state \(\left|0\right\rangle_{f}\).
The flavor charges are connected to the Noether charges in the following way: \(::\,\,Q_{\nu_{\sigma}}\left(t\right)::=I_{\theta}^{-1}(t):\,Q_{i}\,:\,I_{\theta}(t)\,\,,\) with \((\sigma,i)=(e,1),(\mu,2),(\tau,3)\).
## Acknowledgements
Partial financial support from MUR and INFN is acknowledged. A.C. also acknowledges the COST Action CA15117 Cosmology and Astrophysics Network for Theoretical Advances and Training Actions (CANTATA).
|
2309.03021 | Cosmological Parameter Forecasts for a CMB-HD Survey | We present forecasts on cosmological parameters for a CMB-HD survey. For a
$\Lambda$CDM + $N_{eff}$ + $\sum m_\nu$ model, we find $\sigma(n_s) = 0.0013$
and $\sigma(N_{eff}) = 0.014$ using CMB and CMB lensing multipoles in the range
of $\ell \in [30, 20000]$, after adding anticipated residual foregrounds,
delensing the acoustic peaks, and adding DESI BAO data. This is about a factor
of two improvement in ability to probe inflation via $n_s$ compared to
precursor CMB surveys. The $N_{eff}$ constraint can rule out light thermal
particles back to the end of inflation with 95% CL; for example, it can rule
out the QCD axion in a model-independent way assuming the Universe's reheating
temperature was high enough that the axion thermalized. We find that delensing
the acoustic peaks and adding DESI BAO tightens parameter constraints. We also
find that baryonic effects can bias parameters if not marginalized over, and
that uncertainties in baryonic effects can increase parameter error bars;
however, the latter can be mitigated by including information about baryonic
effects from kinetic and thermal Sunyaev-Zel'dovich measurements by CMB-HD. The
CMB-HD likelihood and Fisher estimation codes used here are publicly available;
the likelihood is integrated with Cobaya to facilitate parameter forecasting. | Amanda MacInnis, Neelima Sehgal, Miriam Rothermel | 2023-09-06T14:04:02Z | http://arxiv.org/abs/2309.03021v3 | # Cosmological Parameter Forecasts for a CMB-HD Survey
###### Abstract
We present forecasts on cosmological parameters for a CMB-HD survey. For a \(\Lambda\)CDM\(+N_{\rm eff}+\sum m_{\nu}\) model, we find \(\sigma(n_{\rm s})=0.0013\) and \(\sigma(N_{\rm eff})=0.014\) using CMB and CMB lensing multipoles in the range of \(\ell\in[30,20000]\), after adding anticipated residual foregrounds, delensing the acoustic peaks, and adding DESI BAO data. This is about a factor of two improvement in ability to probe inflation via \(n_{\rm s}\) compared to precursor CMB surveys. The \(N_{\rm eff}\) constraint can rule out light thermal particles back to the end of inflation with 95% CL; for example, it can rule out the QCD axion in a model-independent way assuming the Universe's reheating temperature was high enough that the axion thermalized. We find that delensing the acoustic peaks and adding DESI BAO tightens parameter constraints. We also find that baryonic effects can bias parameters if not marginalized over, and that uncertainties in baryonic effects can increase parameter error bars; however, the latter can be mitigated by including information about baryonic effects from kinetic and thermal Sunyaev-Zel'dovich measurements by CMB-HD. The CMB-HD likelihood and Fisher estimation codes used here are publicly available; the likelihood is integrated with Cobaya to facilitate parameter forecasting.
## I Introduction
Measurements of the Cosmic Microwave Background (CMB) have provided precise information on the cosmological parameters of our Universe [1; 2; 3]. Near future CMB experiments such as the Simons Observatory (SO) [4] and CMB-S4 [5] will improve measurements of the CMB temperature and polarization power spectra below \(\ell\sim 5000\) and the CMB lensing power spectrum considerably, yielding improved constraints on \(\Lambda\)CDM parameters, the sum of the neutrino masses (\(\sum m_{\nu}\)), and the effective number of light relativistic species (\(N_{\rm eff}\)), in particular [4; 5]. Beyond that, an even lower noise and higher resolution CMB experiment can improve parameter constraints further by measuring smaller angular scales in both temperature and polarization, measuring the lensing power spectrum with more precision and over smaller scales, and enabling more aggressive foreground removal down to lower flux limits. CMB-HD is a proposed concept for a Stage-V CMB facility [6; 7] that would have three times lower instrument noise and about six times higher resolution than CMB-S4. In this work, we forecast the cosmological parameter constraints that can be achieved by such a facility, folding in anticipated residual foregrounds, delensing, and baryonic effects.
Delensing of the CMB acoustic peaks has been demonstrated in a number of recent CMB data analyses [8; 9; 10]. It can tighten cosmological parameter constraints by sharpening the acoustic peaks, removing uncertainty in the lensing realization, and reducing lensing-induced power spectrum covariances [11; 12]. Baryonic effects such as AGN feedback can move around the matter distribution on moderately small scales, which impacts both the lensed CMB power spectra and the CMB lensing power spectrum; for next-generation CMB experiments, neglecting such effects can bias parameters [13; 14]. The residual foregrounds we include in this work, in both CMB and CMB lensing power spectra, are those that preliminary analyses suggest can be, and need to be, obtained for a CMB-HD facility to achieve its lensing science goals [15]. We emphasize that a full demonstration that such foreground residuals can be achieved using realistic simulations is a subject of ongoing research. We also include baryon acoustic oscillation (BAO) data expected from the Dark Energy Spectroscopic Instrument (DESI) [16], which serves to break a number of parameter degeneracies.
Parameters are forecast with both a Fisher estimation code and a likelihood plus Markov chain; we find that the resulting forecasts from both methods are in agreement. We make publicly available the CMB-HD temperature, polarization, and lensing (\(TT,TE,EE,BB,\kappa\)) signal and noise spectra, joint covariance matrix, binning matrix, Fisher forecasting software1, and likelihood2 in order to enable further forecasting. In addition, the CMB-HD likelihood has been integrated with the latest version of Cobaya3[17] using CAMB4[18; 19].
Footnote 1: [https://github.com/CMB-HD/hdfisher](https://github.com/CMB-HD/hdfisher)
Footnote 2: [https://github.com/CMB-HD/hdlike](https://github.com/CMB-HD/hdlike)
Footnote 3: [https://cobaya.readthedocs.io](https://cobaya.readthedocs.io)
Footnote 4: [https://camb.info/](https://camb.info/)
In Section II, we discuss the experimental configuration employed, our generation of CMB and lensing noise spectra, and our method of delensing. In Section III, we discuss the calculation of the joint covariance matrix. We describe the Fisher estimation and likelihood plus Markov chain procedures for forecasting cosmological parameters in Section IV. In Section V, we present parameter forecasts for a CMB-HD survey and show the impact of foregrounds, delensing, baryonic effects, and BAO. We discuss the implications of these forecasts for discovering new light relic particles and probing inflation in Section VI, and we conclude in Section VII.
## II Generating Signal and Noise Spectra
Below we discuss our method of obtaining parameter forecasts for an ultra low-noise, ultra-high-resolution CMB survey, such as CMB-HD [6; 7]. We focus in particular on the \(\Lambda\)CDM model and minimal extensions. To assess the additional constraining power of a CMB-HD experiment, we also provide comparisons to experiments similar to the Simons Observatory (SO) [4] and CMB-S4 [5]. We assume that the CMB datasets we forecast for will provide \(TT\), \(TE\), \(EE\), and \(BB\) power spectra, as well as the \(\kappa\kappa\) power spectra from CMB lensing. In Section II.1, we detail the experimental configurations assumed, and in Sections II.2 and II.3, we describe the instrument noise and residual foreground models. In Section II.4, we describe the computation of the CMB lensing power spectrum noise, and in Section II.5, we explain the generation of the delensed CMB spectra.
### Experimental Configuration
We create mock signal, noise, and covariance matrices for experiments similar to CMB-HD, CMB-S4, SO, and DESI. Note that we do not aim to reproduce the characteristics of these experiments exactly, but instead aim to give approximate estimates based on the experimental configurations described below and in Table 1.
For the CMB experiments we assume 60% of the sky is observed. We also model only the 90 and 150 GHz channels since they contain the bulk of the CMB signal, and we assume the other frequencies will be utilized primarily for foreground cleaning. We obtain the noise levels for HD-like, S4-like, and SO-like surveys from [6], [5], and [4], respectively, stressing again that we do not aim to present official forecasts for each experiment, but rather to assess their comparative constraining power. Throughout this work, "SO-like" and "S4-like" surveys will refer to experimental configurations similar to the SO goal and CMB-S4 wide-area surveys [4; 5]. We assume SO-like and S4-like experiments will employ 5-meter dishes and a CMB-HD-like survey will use 30-meter dishes, and we determine the resolution of each facility from these specifications.
Regarding the CMB multipole ranges, we assume in our forecasts that a CMB-HD-like survey can measure multipoles in the range of \(\ell\in[1000,\ 20000]\) for \(TT\), \(TE\), \(EE\). We assume CMB-HD will not measure \(\ell<1000\) since these multipoles are a challenge for a 30-meter dish5; however, since both SO and CMB-S4 will measure the \(TT\), \(TE\), and \(EE\) spectra to the sample-variance limit for \(\ell\in[30,1000]\) over the same 60% of the sky as CMB-HD, we extend the CMB-HD multipole range down to \(\ell=30\) for \(TT\), \(TE\), and \(EE\). For the \(BB\) spectrum in the multipole range of \(30\leq\ell<1000\), we use the anticipated \(BB\) noise from an Advanced Simons Observatory (ASO) type survey; for ASO we assume the same resolution and sky area as SO, but 3.5 and 3.8 \(\mu\)K-arcmin white noise in temperature for 90 and 150 GHz, respectively.
Footnote 5: While the baseline CMB-HD design does not assume multipoles below \(\ell=1000\) will be measured by CMB-HD, we note that there are interesting science cases that benefit from CMB-HD measuring the \(BB\) spectrum down to \(\ell=100\), e.g. [20]; such low-\(\ell\) measurements will be strived for if they can be achieved technologically.
Given the resolution of SO and CMB-S4, we limit the \(TE\), \(EE\), and \(BB\) multipole range to \(\ell\leq 5000\). While we do not explicitly include foreground residuals in our SO-like and S4-like forecasts, we only consider multipoles of \(\ell\leq 3000\) for \(TT\), assuming higher \(TT\) multipoles will be foreground dominated. Similarly, in light of foreground contamination and bias, we only consider \(\kappa\kappa\) for \(L\leq 3000\) for SO-like and S4-like experiments. In contrast, we include \(L\in[30,20000]\) for an HD-like experiment, and use the \(\kappa\kappa\) covariance matrix from [15] for \(L\in[4800,20000]\), as we discuss in more detail below. We also include in the HD-like CMB power spectra the foreground residual levels required for CMB-HD to achieve its lensing measurement target, as specified in [15].
| Exp. | \(f_{\text{sky}}\) | Freq. (GHz) | Noise (\(\mu\)K-arcmin) | FWHM (arcmin) | Multipole Range (\(\ell,L\)) |
| --- | --- | --- | --- | --- | --- |
| HD | 0.6 | 90 / 150 | 0.7 / 0.8 | 0.42 / 0.25 | \(TT,TE,EE\): [30, 20000]; \(BB\): [1000, 20000]; \(\kappa\kappa\): [30, 20000] |
| S4 | 0.6 | 90 / 150 | 2.0 / 2.0 | 2.2 / 1.4 | \(TT\): [30, 3000]; \(TE,EE,BB\): [30, 5000]; \(\kappa\kappa\): [30, 3000] |
| SO | 0.6 | 90 / 150 | 5.8 / 6.3 | 2.2 / 1.4 | \(TT\): [30, 3000]; \(TE,EE,BB\): [30, 5000]; \(\kappa\kappa\): [30, 3000] |
Table 1: For each CMB experiment considered, we list the white noise level in temperature and beam full-width at half-maximum (FWHM) for each frequency used in the forecast, the sky fraction (\(f_{\text{sky}}\)), and the multipole ranges used for the power spectra. For the SO-like and S4-like forecasts, we only forecast for the large-aperture telescopes, and use noise levels from [4; 5], respectively. For the CMB-HD-like forecasts, we use noise levels from [7], and we assume CMB-HD only measures \(\ell\geq 1000\). Since SO will measure \(TT\), \(TE\), and \(EE\) to the sample variance limit for \(\ell<1000\) over the CMB-HD sky area, we extend the CMB-HD multipole range down to \(\ell=30\) for these spectra; for the \(BB\) spectrum, we use an ASO-like \(BB\) spectrum and noise in the multipole range of \(30\leq\ell<1000\) (see text for details). We emphasize that we do not aim to present official forecasts for each experiment, but rather to assess the comparative constraining power when varying a number of parameters.
To simulate the mock DESI dataset, we assume the data is taken over 14,000 square degrees (approximately 35% of the sky) for the baseline galaxy survey and the bright galaxy survey, as specified in [16]. Our mock DESI data consists of distance ratio measurements, \(r_{\mathrm{s}}/d_{V}(z)\), obtained from CAMB [18; 19], and a covariance matrix for those measurements, which we calculate. Here \(r_{\mathrm{s}}\) is the comoving sound horizon and \(d_{V}\) is a combined distance measurement, which we define in Section III.2. We calculate the covariance matrix using the forecasted fractional errors on \(H(z)r_{\mathrm{s}}\) and \(r_{\mathrm{s}}/d_{A}(z)\) provided in [16], as we discuss in more detail in Section III.2.
### CMB Instrumental Noise Spectra
A first step towards generating theoretical CMB and CMB lensing power spectra is to compute the expected instrumental noise on the CMB power spectra. In the case of CMB-HD, we add residual extragalactic foregrounds to the \(TT\) noise spectra, as described in Section II.3. These mock CMB power spectra are then used to calculate the expected lensing reconstruction noise (Section II.4), which is then used to calculate the theoretical delensed CMB power spectra (Section II.5). The lensed or delensed CMB spectra and the lensing spectrum, along with the CMB and lensing noise spectra, are then used in the analytic calculation of the covariance matrix (Section III). This covariance matrix is then used to forecast parameter constraints either via a Fisher matrix (Section IV.1) or a likelihood and Markov Chain (Section IV.2).
The power spectra of the CMB temperature and polarization fields and the (projected) lensing potential are determined by the cosmological model that describes our Universe. We use a flat \(\Lambda\)CDM+\(N_{\mathrm{eff}}\)+\(\sum m_{\nu}\) model described by eight cosmological parameters with fiducial values from the _Planck_ baseline cosmological parameter constraints [1], which we list in Table 2. We use CAMB to generate the lensed, unlensed6, and delensed CMB power spectra \(C_{\ell}^{XY}\) for \(XY\in[TT,\ TE,\ EE,\ BB]\) and the lensing power spectrum \(C_{L}^{\mathrm{xx}}\). We use the 2016 version of HMcode [21; 22] to calculate the non-linear matter power spectrum for a cold dark matter model, and increase the accuracy of CAMB beyond its default settings as discussed in detail in Appendix A. We also use CAMB to calculate the theoretical BAO signal, i.e. \(r_{\mathrm{s}}/d_{V}(z)\), using the same fiducial cosmology and the default CAMB accuracy settings.
Footnote 6: These are used in the analytic covariance matrix calculation described in Section III.
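As a rough illustration of this step, the sketch below generates lensed and unlensed CMB spectra and the lensing potential spectrum with the public CAMB Python interface, using the Table 2 fiducial values; the lmax and accuracy settings shown are lightweight placeholders rather than the boosted settings of Appendix A, and halofit_version='mead' is assumed here as the HMcode-style non-linear prescription.

```python
import numpy as np
import camb

# Fiducial parameters from Table 2; lmax reduced here for speed.
pars = camb.set_params(
    H0=67.36, ombh2=0.02237, omch2=0.1200,
    As=np.exp(3.044) * 1e-10, ns=0.9649, tau=0.0544,
    mnu=0.06, nnu=3.046,
    halofit_version='mead',          # HMcode non-linear matter power
    lmax=5000, lens_potential_accuracy=4,
)
results = camb.get_results(pars)

# Lensed and unlensed C_ell in muK^2 (columns: TT, EE, BB, TE).
lensed = results.get_lensed_scalar_cls(CMB_unit='muK', raw_cl=True)
unlensed = results.get_unlensed_scalar_cls(CMB_unit='muK', raw_cl=True)

# Lensing potential spectrum; column 0 holds [L(L+1)]^2 C_L^{phi phi} / (2 pi).
clpp = results.get_lens_potential_cls(lmax=5000)
# Convert to the convergence spectrum via Eq. 8: C^kk = L^2 (L+1)^2 C^{phi phi} / 4.
clkk = clpp[:, 0] * 2.0 * np.pi / 4.0
```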
We generate noise spectra for each CMB experiment listed in Table 1 at 90 and 150 GHz. Each frequency \(f\) has a root-mean-square white noise level \(\Delta_{f}^{TT}\) in temperature and a beam full-width at half-maximum of \(\theta_{f}^{\mathrm{FWHM}}\) given in Table 1. We assume that the noise in polarization is given by \(\Delta_{f}^{EE}=\Delta_{f}^{BB}=\sqrt{2}\Delta_{f}^{TT}\), and set \(\Delta_{f}^{TE}=0\) assuming uncorrelated noise in temperature and polarization maps. For a given frequency, we model the beam-deconvolved noise power spectrum as
\[N_{\ell_{f}}^{XY}=\left(\Delta_{f}^{XY}\right)^{2}\exp\left[\frac{\ell(\ell+1 )\left(\theta_{f}^{\mathrm{FWHM}}\right)^{2}}{8\ln 2}\right], \tag{1}\]
for \(XY\in[TT,\ EE,\ BB]\).
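A minimal sketch of Eq. 1, assuming the white-noise levels are first converted from \(\mu\)K-arcmin to \(\mu\)K-radians:

```python
import numpy as np

def noise_cl(delta_uk_arcmin, fwhm_arcmin, lmax):
    """Beam-deconvolved white-noise spectrum of Eq. 1, in muK^2."""
    arcmin = np.pi / (180.0 * 60.0)        # arcmin -> radians
    delta = delta_uk_arcmin * arcmin       # white-noise level in muK-rad
    theta = fwhm_arcmin * arcmin           # beam FWHM in radians
    ell = np.arange(lmax + 1)
    return delta**2 * np.exp(ell * (ell + 1) * theta**2 / (8.0 * np.log(2.0)))

# CMB-HD-like TT noise at 90 and 150 GHz (Table 1).
N90 = noise_cl(0.7, 0.42, 20000)
N150 = noise_cl(0.8, 0.25, 20000)
```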
To model a CMB-HD-type survey, we add the anticipated residual foreground power spectra (see Section II.3) to the temperature noise spectrum at each frequency, so that \(N_{\ell_{f}}^{TT\mathrm{HD}}=N_{\ell_{f}}^{TT}+C_{\ell_{f}}^{FG}\). Since we assume that CMB-HD will not measure \(\ell<1000\) (and does not need to for \(TT,TE,\ EE\) as discussed in Section II.1), we use an ASO-like \(BB\) noise spectrum for \(\ell\in[30,1000)\). We assume polarized foregrounds can be removed to levels below the instrument noise for all CMB experiments considered here by making use of the additional frequency coverage outside of 90 and 150 GHz that each experiment will have.
We coadd the noise spectra for 90 and 150 GHz for
| Parameter | Fiducial | Step size (%) | Prior |
| --- | --- | --- | --- |
| \(\Omega_{\mathrm{b}}h^{2}\) | 0.02237 | 1 | \([0.005,\ 0.1]\) |
| \(\Omega_{\mathrm{c}}h^{2}\) | 0.1200 | 1 | \([0.001,\ 0.99]\) |
| \(\ln(10^{10}A_{\mathrm{s}})\) | 3.044 | 0.3 | \([2,\ 4]\) |
| \(n_{\mathrm{s}}\) | 0.9649 | 1 | \([0.8,\ 1.2]\) |
| \(\tau\) | 0.0544 | 5 | \(0.054\pm 0.007\) |
| \(100\theta_{\mathrm{MC}}\) | 1.04092 | 1 | \([0.5,\ 10]\) |
| \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] | 67.36 | 1 | \([20,\ 100]\) |
| \(N_{\mathrm{eff}}\) | 3.046 | 5 | \([0.05,\ 10]\) |
| \(\sum m_{\nu}\) [eV] | 0.06 | 10 | \([0,\ 5]\) |
Table 2: Listed in the first two columns are the cosmological parameters varied in the Fisher or MCMC forecasts and their fiducial values based on the _Planck_ baseline cosmological parameter constraints [1]. The third column lists the step size for each parameter, as a percentage of the fiducial value, used when calculating the numerical derivatives of the power spectra with respect to that parameter. The last column lists the priors used for the MCMC analysis; the prior on the reionization optical depth, \(\tau\), is also applied to the Fisher forecasts. Note that we sample over \(100\theta_{\mathrm{MC}}\) and obtain \(H_{0}\) as a derived parameter in the MCMC analysis. All priors are uniform, with the exception of \(\tau\), where we use a Gaussian prior from _Planck_[1].
each experiment using inverse-noise weights, \(W^{XY}_{\ell_{f}}=(N^{XY}_{\ell_{f}})^{-1}\), such that the coadded noise power spectrum is given by
\[N^{XY}_{\ell}=\frac{\sum_{f}\left(W^{XY}_{\ell_{f}}\right)^{2}N^{XY}_{\ell_{f}}} {\left(\sum_{f}W^{XY}_{\ell_{f}}\right)^{2}}. \tag{2}\]
The total CMB power spectra are obtained by adding the coadded noise spectra to the theoretical CMB power spectra, i.e. \(C^{XY,\text{tot}}_{\ell}=C^{XY}_{\ell}+N^{XY}_{\ell}\). For parameter forecasts (Sections IV.1 and IV.2), we use binned signal spectra given by
\[C^{XY}_{\ell_{b}}=\sum_{\ell}M_{b\ell}C^{XY}_{\ell}, \tag{3}\]
for \(XY\in[TT,\,TE,\,EE,\,BB,\,\kappa\kappa]\), where \(M_{b\ell}\) is the binning matrix and \(C^{XY}_{\ell_{b}}\) is the binned spectrum with bin centers \(\ell_{b}\). The binning is chosen to capture the CMB acoustic peaks on scales of \(\ell\lesssim 5000\) and uses a uniform bin width of \(\Delta\ell=300\) on smaller scales. The first four panels of Fig. 1 show the CMB theory, instrumental noise, residual extragalactic foregrounds, and binned forecasted error bars for CMB-HD, plotting the binned diagonals of the covariance matrix (see Section III). In the last panel, we show the lensing theory power spectrum, the lensing reconstruction noise (see Section II.4), and the binned forecasted lensing error bars.
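The coadding of Eq. 2 and the binning of Eq. 3 amount to the following operations; the flat placeholder noise spectra stand in for the \(N^{XY}_{\ell_{f}}\) of Eq. 1, and the uniform bin edges are a simplification of the acoustic-peak binning described above.

```python
import numpy as np

def coadd_noise(noise_list):
    """Inverse-noise coadd across frequencies (Eq. 2); with W = 1/N this
    reduces to N_coadd = 1 / sum_f (1 / N_f)."""
    W = [1.0 / N for N in noise_list]
    return sum(w**2 * N for w, N in zip(W, noise_list)) / sum(W)**2

def binning_matrix(ell, edges):
    """Flat binning matrix M_{b ell} that averages C_ell within each bin (Eq. 3)."""
    M = np.zeros((len(edges) - 1, len(ell)))
    for b, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        in_bin = (ell >= lo) & (ell < hi)
        M[b, in_bin] = 1.0 / in_bin.sum()
    return M

ell = np.arange(30, 20001)
N90 = 0.5 * np.ones(len(ell))    # placeholder spectra; see the Eq. 1 sketch
N150 = 0.8 * np.ones(len(ell))
N_coadd = coadd_noise([N90, N150])                  # Eq. 2
M = binning_matrix(ell, np.arange(30, 20001, 300))  # uniform Delta ell = 300
N_binned = M @ N_coadd                              # Eq. 3
```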
### Residual Extragalactic Foregrounds
The observed CMB is contaminated by astrophysical foregrounds, which can be both Galactic and extragalactic in origin. Galactic foregrounds dominate on relatively large scales, i.e. \(\ell<500\), and are frequency dependent. As mentioned above, we assume CMB-HD only measures multipoles above \(\ell=1000\), and will use lower multipoles from SO for \(TT,TE\), and \(EE\), which will already be sample variance limited over that range. SO will have observations over at least six different frequencies, and we assume in this work that SO can remove the Galactic foreground contribution to below the CMB sample variance limit for \(TT,TE\), and \(EE\). We assume CMB-HD will use the \(BB\) measurement from ASO below \(\ell=1000\), and we assume that ASO will remove Galactic foregrounds to below their noise levels for \(BB\) over this range.
For extragalactic foregrounds, the six times higher resolution and more than three times lower noise of CMB-HD compared to precursor wide-area CMB surveys, in addition to seven frequency channels, will allow for much greater foreground subtraction. These extragalactic foregrounds include the thermal and kinetic Sunyaev-Zel'dovich effects (tSZ and kSZ) from hot gas in galaxy clusters, groups, and the intergalactic medium [23, 24, 25], the cosmic infrared background (CIB) from dusty star-forming galaxies, and the radio emission from active galactic nuclei. For CMB-HD, we assume the foreground levels obtained in [15]; in particular, these residual foreground levels assume a CMB-HD-type experiment will template-subtract CIB sources above 0.03 and 0.008 mJy at 150 and 90 GHz, respectively, in part exploiting CMB-HD observations at 220 and 280 GHz to find sources and extrapolate their fluxes to lower frequencies.7 These levels also assume radio sources above 0.03 and 0.04 mJy at 150 and 90 GHz, respectively, are template subtracted, as well as all tSZ sources detected at 3\(\sigma\) significance or higher. We do not assume the kSZ effect from reionization can be removed, since that is quite challenging given its frequency independence and high redshift; however, we do assume the reionization kSZ only adds Gaussian noise uncorrelated with the lensing signal. We assume that the late-time kSZ effect can be removed from the CMB-HD maps through innovative "de-kSZ-ing" techniques that are currently in development [26], and do not include it here. We stress again that the residual foreground levels assumed are what preliminary studies suggest are possible and necessary for CMB-HD to achieve an unbiased CMB lensing power spectrum over the lensing multipole range of \(L\in[5000,20000]\)[15]. However, we leave it to future work to robustly demonstrate with realistic simulations and an end-to-end pipeline that these residual foreground levels can be achieved in practice.
Footnote 7: In the case of the CIB, see [15] for discussion of how these flux levels were obtained, accounting for confusion from blended sources and CIB spectral index uncertainties.
The total residual extragalactic foreground power from the sources described above is shown as the dashed orange curve in the top panel of Fig. 1 for the coadded 90 and 150 GHz temperature power spectra; this residual foreground power is included in the total noise spectra for the CMB-HD temperature power spectrum. We include only instrumental noise for the \(TE,EE\), and \(BB\) spectra since we assume that the combination of high resolution and seven frequency channels for CMB-HD will be sufficient to reduce polarized extragalactic foregrounds to residual levels below the instrument noise.
As mentioned in Section II.1, while we do not explicitly include residual extragalactic foreground levels in the temperature spectra for SO and CMB-S4, we only include multipoles below \(\ell=3000\) in those spectra, assuming higher multipoles will be dominated by foreground uncertainty.
### CMB Lensing Power Spectrum Noise
Lensing breaks the Gaussianity of the primordial CMB, introducing mode-coupling in the CMB on different scales.
Figure 1: Expected CMB \(TT\), \(TE\), \(EE\), \(BB\), and \(\kappa\)\(\kappa\) power spectra for a CMB-HD-type survey, with instrument noise and residual extragalactic foregrounds coadded from 90 and 150 GHz. We show the theory spectra (blue), the per-mode beam-deconvolved instrument noise (purple), the residual temperature foregrounds (orange), and the expected CMB-HD error bars, plotting the diagonals of the binned covariance matrix (red) (see Sections II.1, II.2, and II.3 for details). We also show the noise curve for the lensing power spectrum, \(\kappa\kappa\), (pink), which is a combination of noise from the primordial CMB, instrument, and residual foregrounds (see Section II.4 for details). Note that in many cases the error bars are smaller than the point indicating the bin center.
A quadratic estimator [27, 28, 29, 30] takes advantage of this lensing-induced mode-coupling of the CMB to reconstruct the lensing potential from pairs of CMB maps that are filtered to isolate this mode-coupling. The lensing reconstruction can then be used to estimate the lensing potential power spectrum, which is the connected (non-Gaussian) part of the CMB four-point function. The disconnected (Gaussian) part of the four-point function, called the Gaussian \(N_{L}^{(0)}\) bias, arises from the primordial CMB and instrument noise even in the absence of lensing [31]; we assume that this will be subtracted with traditional realization-dependent (RDN0) subtraction techniques [32].
We use two types of quadratic estimators in this work. The first is the traditional estimator from [27, 28, 29], henceforth called H&O, which uses the same CMB multipole ranges in both CMB maps that go into the estimator. The other quadratic estimator we use is from [30], henceforth called HDV, which allows one map to contain only large scales (\(\ell<2000\)), called a gradient map, and one map to contain only small scales (\(\ell\in(5000,20000)\)). The HDV estimator is well suited to returning the small-scale signal, and allows one to minimize bias from non-Gaussian astrophysical foregrounds by utilizing two CMB maps that do not overlap in scales [15].
While we can remove the Gaussian noise _bias_ by RDN0 subtraction techniques, the Gaussian part of the four-point function still contributes _noise_ to the lensing power spectrum which cannot be completely removed.8 We hereafter call this lensing spectrum noise \(N_{L}^{\kappa\kappa}\) to distinguish it from the CMB noise spectra \(N_{\ell}^{XY}\) for \(XY\in[TT,\,EE,\,BB]\).
Footnote 8: We note that split-based lensing estimators can remove the contribution to \(N_{L}\) from instrument noise [33], but not from the primordial CMB. In this work, we do not assume split-based estimators are used, but their use is a potential avenue for gaining improved signal-to-noise.
We use the public CLASS delens package [12]9 to calculate \(N_{L}^{\kappa\kappa}\) for the H&O estimator [29].10 For SO-like and CMB-S4-like experiments, we use the H&O estimator to generate \(N_{L}^{\kappa\kappa}\) curves using the CMB multipole ranges specified in Table 1 and four-point lensing estimators from pairs of \(TT,TE,EE,EB\), and \(TB\) maps, noting that the other possible four-point combinations contribute less significantly to the overall signal-to-noise. For all the H&O \(N_{L}^{\kappa\kappa}\) curves, discussed here and below, we reduce their noise levels further by iterative delensing, which we describe in Section II.5. We coadd these \(N_{L}^{\kappa\kappa}\) spectra to obtain a minimum variance (MV) noise curve, \(MV\)\(N_{L}^{\kappa\kappa}\).
Footnote 10: We cross-checked that the CLASS delens package returns the same \(N_{L}^{\kappa\kappa}\) as the public tempura package ([https://github.com/simonsobs/tempura](https://github.com/simonsobs/tempura)). We use the CLASS delens package because it can iteratively delens all the estimator combinations, not just \(EB\), as we discuss in Section II.5.
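The minimum-variance coadd of the individual estimator noise curves can be sketched as an inverse-variance combination; this assumes negligible cross-correlation between the reconstructions from different estimators, which is a simplification of the full MV weighting.

```python
import numpy as np

def mv_noise(nl_list):
    """Minimum-variance coadd of lensing noise curves, assuming the
    individual reconstructions are uncorrelated (a simplification)."""
    return 1.0 / sum(1.0 / nl for nl in nl_list)

# Toy noise curves standing in for, e.g., the TT and EB estimators:
L = np.arange(30, 5001)
nl_tt = 1e-8 * (1.0 + (L / 3000.0)**2)
nl_eb = 5e-9 * (1.0 + (L / 2000.0)**4)
nl_mv = mv_noise([nl_tt, nl_eb])   # lies below both input curves
```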
For the CMB-HD-like experiment, for lensing multipoles below \(L=5000\), we generate \(N_{L}^{\kappa\kappa}\) curves using all CMB multipoles within \(\ell\in[30,20000]\) for \(T,E\), and \(B\)11, but note that there is minimal change to \(N_{L}^{\kappa\kappa}\) for \(L<5000\) if we limit the \(T\) spectra to \(\ell\in[30,10000]\) due to foreground concerns. We calculate \(N_{L}^{\kappa\kappa}\) with the H&O estimator for \(TT,TE,EE,EB\), and \(TB\) maps, reducing these noise levels further by iterative delensing, and coadd these to obtain an \(MV\)\(N_{L}^{\kappa\kappa}\). We assume the \(C_{L}^{\kappa\kappa}\) covariance matrix for \(L\in[30,5000]\) has only diagonal
Figure 2: We show the theoretical lensing convergence power spectrum, \(\kappa\kappa\), (black solid) along with the CMB-HD minimum-variance (MV) lensing power spectrum noise, \(N_{L}^{\kappa\kappa}\) (red dashed). As discussed in Section II.4, the latter is a combination of the estimators of [28] (H&O) and [30] (HDV); in the lensing multipole range \(L\in[30,5000]\), we use the H&O MV reconstruction noise from the \(TT\) (blue) and \(TE\), \(EE\), \(EB\), and \(TB\) estimators (the MV of the latter four estimators are shown in purple). For each of these estimators, we assume iterative delensing will be done, and lower the noise curves accordingly following the method of [12] and summarized in Section II.5. In the range \(L\in[5000,20000]\), to reduce bias from residual extragalactic foregrounds, we replace the \(N_{L}^{\kappa\kappa}\) from the H&O \(TT\) estimator with that from the HDV \(TT\) estimator, which we obtain from the diagonal elements of the simulation-based covariance matrix of [15] (cyan solid); to this we coadd the H&O MV \(N_{L}^{\kappa\kappa}\) from \(TE\), \(EE\), \(EB\), and \(TB\). For comparison, we also show the analytically-derived HDV \(TT\)\(N_{L}^{\kappa\kappa}\) (cyan dotted); the excess variance of the simulation-based HDV \(N_{L}^{\kappa\kappa}\) compared to the analytic one is likely due to higher-order lensing corrections and non-Gaussian fluctuations of the matter power spectrum that are significant on these small scales and are captured by the simulations, as discussed in [15, 34]. See Section II.4 for full details.
contributions since the lensing convergence is assumed to be Gaussian on relatively large scales.
In Fig. 2, we show \(N_{L}^{\kappa\kappa}\) for CMB-HD from the H&O estimator from \(TT\) alone (solid dark blue curve), and from everything other than \(TT\) (i.e. minimum variance coadd of \(TE,EE,EB,TB\)) (solid purple curve). From the comparison of these curves, we see that the polarization estimators dominate the signal-to-noise below \(L=2500\); however, on smaller scales, the \(TT\) estimator has significantly lower noise than all the other estimators combined. For this reason, on small scales, one must be concerned about the impact of astrophysical foregrounds in the temperature maps and how they may bias the lensing power spectrum. We also show in this figure the \(N_{L}^{\kappa\kappa}\) for the HDV quadratic estimator using only \(TT\) and \(\ell\in[30,2000]\) in the gradient map and \(\ell\in[5000,20000]\) in the small-scale map (dotted cyan curve). This noise curve was made using the symlens package12. We see that the HDV curve matches the H&O curve well on small scales; however, it is not as useful on large scales. We note that all the \(N_{L}^{\kappa\kappa}\) curves shown in Fig. 2 were calculated including both the instrument noise and residual foregrounds described in Sections II.2 and II.3 for the CMB spectra.
Footnote 12: [https://github.com/simonsobs/symlens](https://github.com/simonsobs/symlens)
For CMB-HD, on small lensing scales from \(L\in[5000,20000]\), we use the simulation-based lensing covariance matrix from [15] for \(TT\), which incorporates the HDV quadratic estimator with the \(\ell\) ranges for large-scale and small-scale CMB maps given above, as well as higher-order lensing corrections and non-Gaussian fluctuations of the matter power spectrum that are significant on these small scales and are captured by the simulations. The solid cyan curve in Fig. 2 shows the lensing noise obtained from the diagonals of this covariance matrix. This \(C_{L}^{\kappa\kappa}\) covariance matrix also includes off-diagonal contributions (which we show in Fig. 6); in generating this covariance matrix, simulation-based calculations of the RDN0 and \(N_{1}\) biases (the latter is a higher-order correction) are subtracted, which reduces the off-diagonal correlations significantly, as discussed in [15] and [34]. Restricting the HDV gradient map to \(\ell<2000\) is also important for removing the \(N_{2}\) bias [35, 34]. For the final CMB-HD \(N_{L}^{\kappa\kappa}\) spectra in the range of \(L\in[5000,20000]\), we coadd the simulation-based \(TT\) HDV \(N_{L}^{\kappa\kappa}\) with the H&O \(N_{L}^{\kappa\kappa}\) from \(TE,EE,EB\), and \(TB\). For \(L\in[30,5000]\), we use the H&O minimum variance \(N_{L}^{\kappa\kappa}\) coadding \(TT,TE,EE,EB\), and \(TB\) noise curves. We show this CMB-HD MV \(N_{L}^{\kappa\kappa}\) curve in Fig. 1 (dashed pink) and in Fig. 2 (dashed red).
For CMB-HD, the \(EB\) estimator dominates below \(L=2500\), and foreground biases are a negligible concern. Above \(L=5000\), we attempt to be conservative by adopting the HDV estimator and a simulation-based covariance matrix; we note that there are other more optimal estimators that can be used to reconstruct lensing at small scales that may yield a higher signal-to-noise ratio and which also hold promise for mitigating foreground bias [36, 37, 38, 9]. In the regime of \(L\in[2500,5000]\), where the \(TT\) estimator dominates, we are optimistic that foreground bias can be mitigated, but emphasize that this still needs to be demonstrated with realistic simulations; as mentioned above, the \(TT\) spectrum can be cut to only include \(\ell\in[30,10000]\) as opposed to \(\ell\in[30,20000]\), with no loss of lensing signal-to-noise for \(L<5000\).
### Creation of Delensed CMB Spectra
Given the ultra low noise of a CMB-HD-type survey, it will be possible to remove a significant amount of the lensing in the CMB spectra, a procedure called delensing. Delensing has the effect of tightening cosmological parameter constraints because it sharpens the acoustic peaks, removes uncertainty in the lensing realization that distorted the primordial CMB, and reduces lensing-induced off-diagonal power spectrum covariances [11, 12]. In order to achieve tighter parameter constraints via delensing, it is important to replace the lost lensing information in the CMB power spectra with that from the reconstructed lensing power spectrum by combining them in the likelihood, properly accounting for the delensed CMB and CMB lensing power spectrum covariances [11].
Figure 3: The residual lensing power, \(C_{L}^{\kappa\kappa,\rm res}\), remaining after delensing (Eq. 4), shown as a fraction of the full lensing power spectrum, \(C_{L}^{\kappa\kappa}\), for SO, CMB-S4, and CMB-HD-like experiments with the specifications listed in Table 1. For CMB-HD, the change near \(L=5000\) is due to using different lensing estimators above and below that scale, as discussed in Section II.4. Below \(L=650\), CMB-HD removes over 90% of the lensing power.
To predict the amount of lensing that can be removed from the CMB power spectra for each experiment discussed in Section II.1, we first calculate the residual lensing power that will remain in the CMB maps after delensing. We then lens unlensed CMB spectra by this remaining lensing power using CAMB, following [8]. The residual lensing power, \(C_{L}^{\kappa\kappa,\rm res}\), is calculated by subtracting the Wiener-filtered lensing power spectrum from the total theoretical \(C_{L}^{\kappa\kappa}\), as follows
\[C_{L}^{\kappa\kappa,\rm res}=C_{L}^{\kappa\kappa}\left(1-\frac{C_{L}^{\kappa\kappa}}{C_{L}^{\kappa\kappa}+N_{L}^{\kappa\kappa}}\right), \tag{4}\]
where \(N_{L}^{\kappa\kappa}\) is the expected lensing reconstruction noise for the MV estimator described in Section II.4. This removes lensing on scales where the lensing power spectrum is detected with high signal-to-noise ratio, but not on scales where the lensing reconstruction is too noisy to estimate and remove the lensing signal. We show the residual lensing power as a fraction of the total power for SO, CMB-S4, and CMB-HD-like experiments in Fig. 3. We see that below \(L=650\), delensing with CMB-HD removes over 90% of the lensing power. The abrupt change near \(L=5000\) is due to using different lensing estimators above and below that scale, as discussed in Section II.4.
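A minimal sketch of Eq. 4; the toy numbers simply illustrate that the Wiener filter removes nearly all of the lensing power where the reconstruction is signal-dominated, and little of it where the reconstruction is noisy:

```python
import numpy as np

def residual_lensing(clkk, nlkk):
    """Residual lensing power after delensing (Eq. 4)."""
    wiener = clkk / (clkk + nlkk)   # Wiener filter of the lensing reconstruction
    return clkk * (1.0 - wiener)

clkk = np.ones(3)                    # toy lensing signal
nlkk = np.array([0.1, 1.0, 10.0])    # signal-dominated -> noise-dominated
print(residual_lensing(clkk, nlkk) / clkk)  # ~[0.09, 0.5, 0.91]
```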
As mentioned in Section II.4, for all of the H&O \(N_{L}^{\kappa\kappa}\) curves discussed above, we reduce their noise levels further by iterative delensing. Iterative delensing of the \(EB\) lensing estimator is traditionally assumed in forecasts of future CMB experiments [5]. The reason is that, assuming no primordial \(B\)-mode power (i.e., assuming the \(B\)-mode power is entirely due to lensing), there are no _unlensed_ \(B\)-modes to contribute to the variance of the lensing reconstruction. Thus the \(EB\) lensing reconstruction noise can be reduced by delensing, which lowers the \(B\)-mode power. Then the delensed \(E\) and \(B\) maps can be used to obtain a reconstruction of the residual lensing, which can be used to further delens the \(E\) and \(B\) maps; this procedure can be iterated on until converged [39, 40, 41]. On small scales, where most of the CMB power in the temperature and \(E\)-mode polarization maps is due to lensing, the amount of delensing can be improved by a similar iterative approach [12], where all the estimators are iterated instead of only \(EB\).
As mentioned in Section II.4, we use the CLASS delens code [12] to calculate the expected lensing reconstruction noise when using the iterative delensing procedure, lowering the final \(MV\) lensing noise spectrum, as well as
Figure 4: Shown is the amount of lensing power remaining in CMB-HD \(TT\), \(TE\), and \(EE\) power spectra after delensing, \(\Delta C_{\ell}^{\rm delensed}\), as a fraction of the total amount of lensing present in the CMB spectra, \(\Delta C_{\ell}^{\rm lensed}\). \(\Delta C_{\ell}\) is defined as the difference between the lensed/delensed CMB power spectra and the unlensed power spectra. Since both \(\Delta C_{\ell}\) curves cross zero at the same multipoles, we bin them before taking their ratios, and show the bin centers as points. The lensing in the CMB-HD spectra is reduced on all scales, and about 90% of the lensing is removed in the CMB spectra for \(\ell<2000\) (indicated by points below the dotted line). See Section II.5 for details.
Figure 5: We show the amount of lensing that can be removed from any \(BB\) power spectrum via delensing with CMB-HD. In particular, we show the lensed and delensed CMB \(BB\) power spectra (top panel), and their ratio (lower panel), in the region where delensing is most effective (\(\ell<2000\)). The bottom panel shows that CMB-HD delensing can remove over 90% of the lensing signal in the \(BB\) spectrum for \(\ell<1000\); this is the case even when assuming CMB-HD will not measure \(\ell<1000\) and instead using ASO-like \(BB\) spectra and instrument noise for \(\ell\in[30,1000)\), as discussed in Section II.2. This is because the higher CMB multipoles of CMB-HD are being used to reconstruct low \(L\) lensing multipoles used for delensing.
increasing the amount of delensing of the CMB spectra, accordingly. We show the effect of iterative delensing on the CMB \(TT\), \(TE\), and \(EE\) power spectra in Fig. 4, and on the CMB \(BB\) power spectrum in Fig. 5. We see that CMB-HD removes about 90% of the lensing in the \(TT\), \(TE\), and \(EE\) spectra for \(\ell<2000\), and 90% of the lensing in the \(BB\) spectra for \(\ell<1000\).
In this work we assume that the bias that arises from internally delensing the acoustic peaks [42; 10] can either be removed following a method similar to [8] or will not arise as is the case for forward modeling methods [9]. We leave for future work the demonstration of the mitigation of this bias.
## III Covariance matrix calculation
In addition to the signal and noise spectra described in Section II, a full covariance matrix of the expected data is required to forecast cosmological parameter constraints.
### CMB Covariance Matrix
To construct the covariance matrix for the CMB experiments discussed above, we assume we will have \(TT,TE,EE,BB\), and \(\kappa\kappa\) spectra for each experiment for the multipole ranges listed in Table 1. We analytically calculate the covariance matrix following [12] as described below.
For the diagonal components of the covariance matrix, for the blocks giving the variance between different CMB spectra (see Fig. 6), we calculate
\[\mathbf{C}^{XY,WZ}_{\ell_{1}\ell_{2},\mathrm{G}}=\frac{\delta_{\ell_{1}\ell_{2}}f_{\mathrm{sky}}^{-1}}{2\ell_{1}+1}\left[C^{XW,\mathrm{tot}}_{\ell_{1}}C^{YZ,\mathrm{tot}}_{\ell_{1}}+C^{XZ,\mathrm{tot}}_{\ell_{1}}C^{YW,\mathrm{tot}}_{\ell_{1}}\right]. \tag{5}\]
Here \(\mathbf{C}^{\mathrm{tot}}_{\ell}\) is the total CMB signal plus noise spectrum (i.e. \(\mathbf{C}^{\mathrm{tot}}_{\ell}=\mathbf{C}_{\ell}+N_{\ell}\)), and combinations of any two of \(X\), \(Y\), \(Z\), and \(W\) can each be \(TT\), \(TE\), \(EE\), or \(BB\). The sky fraction observed by each experiment is indicated by \(f_{\mathrm{sky}}\). We label this term given by Eq. 5 with the subscript \(\mathrm{G}\) for "Gaussian" to indicate that it arises from the Gaussian fluctuations of the lensed or delensed CMB field.
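A minimal sketch of the Gaussian block of Eq. 5, with placeholder power-law spectra standing in for the total (signal plus noise) spectra; the lensing block of Eq. 10 below has the same diagonal structure.

```python
import numpy as np

def gaussian_cov_block(ell, cl_tot, XY, WZ, fsky=0.6):
    """Gaussian (diagonal) covariance block of Eq. 5 between C^XY and C^WZ;
    cl_tot is a dict of total (signal + noise) spectra keyed by 'TT', 'TE', ..."""
    X, Y = XY
    W, Z = WZ
    def c(a, b):
        # Symmetric lookup, e.g. c('E', 'T') returns cl_tot['TE'].
        return cl_tot[a + b] if a + b in cl_tot else cl_tot[b + a]
    var = (c(X, W) * c(Y, Z) + c(X, Z) * c(Y, W)) / (fsky * (2.0 * ell + 1.0))
    return np.diag(var)

# Placeholder spectra; a real run would use CAMB outputs plus the coadded noise.
ell = np.arange(2, 5001, dtype=float)
cl_tot = {'TT': 1e3 / ell**2, 'EE': 5e1 / ell**2, 'TE': 1e2 / ell**2}
cov_TT_TE = gaussian_cov_block(ell, cl_tot, 'TT', 'TE')
```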
As discussed in Section II.4, lensing induces mode coupling between different multipoles of the unlensed CMB field, giving the originally Gaussian field a non-Gaussian structure. The uncertainty in the amount of mode coupling, arising from uncertainty in either the lensing potential or the unlensed CMB field due to sample variance, generates off-diagonal elements in the CMB x CMB blocks [43]. Following [12], we calculate these off-diagonal terms as
\[\mathbf{C}^{XY,WZ}_{\ell_{1}\ell_{2},\mathrm{NG}} =\sum_{L}\frac{\partial C^{XY}_{\ell_{1}}}{\partial C^{\phi\phi}_{L}}\frac{2f_{\mathrm{sky}}^{-1}}{2L+1}\left(C^{\phi\phi}_{L}\right)^{2}\frac{\partial C^{WZ}_{\ell_{2}}}{\partial C^{\phi\phi}_{L}}\] \[\quad+\sum_{\ell}\left[\frac{\partial C^{XY}_{\ell_{1}}}{\partial C^{XY,\mathrm{u}}_{\ell}}\mathbf{C}^{XY,WZ,\mathrm{u}}_{\ell\ell,G}\frac{\partial C^{WZ}_{\ell_{2}}}{\partial C^{WZ,\mathrm{u}}_{\ell}}\right](1-\delta_{\ell_{1}\ell_{2}}), \tag{6}\]
where
\[\mathbf{C}^{XY,WZ,\mathrm{u}}_{\ell\ell,G}=f_{\mathrm{sky}}^{-1}\left(C^{XW, \mathrm{u}}_{\ell}C^{YZ,\mathrm{u}}_{\ell}+C^{XZ,\mathrm{u}}_{\ell}C^{YW, \mathrm{u}}_{\ell}\right) \tag{7}\]
is the sample variance of the unlensed CMB spectra \(C^{\mathrm{u}}_{\ell}\), and again combinations of any two of \(X\), \(Y\), \(Z\), and \(W\) can each be \(TT\), \(TE\), \(EE\), or \(BB\).13 Similarly \(2f_{\mathrm{sky}}^{-1}(C^{\phi\phi}_{L})^{2}/(2L+1)\) is the sample variance of the lensing potential spectrum. The lensing potential power spectrum, \(C^{\phi\phi}_{L}\), is related to the lensing deflection power spectrum, \(C^{dd}_{L}\), and the lensing convergence power spectrum, \(C^{\kappa\kappa}_{L}\), by
Footnote 13: Note that when a \(BB\) combination is involved, we substitute the unlensed \(EE\) spectrum in the second term of Eq. 6 and in Eq. 7, following [12; 44].
\[C^{\kappa\kappa}_{L}=\frac{L(L+1)}{4}C^{dd}_{L}=\frac{L^{2}(L+1)^{2}}{4}C^{\phi\phi}_{L}. \tag{8}\]
The subscript NG in Eq. 6 indicates that it is the "non-Gaussian" component. The full covariance matrix for the CMB x CMB blocks is given by
\[\mathbf{C}^{XY,WZ}_{\ell_{1}\ell_{2}}=\mathbf{C}^{XY,WZ}_{\ell_{1}\ell_{2},\mathrm{G}}+\mathbf{C}^{XY,WZ}_{\ell_{1}\ell_{2},\mathrm{NG}}. \tag{9}\]
For the CMB lensing x CMB lensing block of the covariance matrix, the variance of the lensing potential spectrum, after RDN0 subtraction, is approximately Gaussian [44] and is given by
\[\mathbf{C}^{\phi\phi,\phi\phi}_{L_{1}L_{2}}=\delta_{L_{1}L_{2}}\frac{2f_{ \mathrm{sky}}^{-1}}{2L_{1}+1}\left(C^{\phi\phi}_{L_{1}}+N^{\phi\phi}_{L_{1}} \right)^{2}. \tag{10}\]
For CMB-HD, we use Eq. 10 for \(L<5000\), and for \(L\geq 5000\) we replace the analytic covariance matrix with a simulation-based one from [15] when including foregrounds, or from [34] when not including foregrounds, both of which use the HDV \(TT\) estimator as discussed in Section II.4. Note importantly that the simulation-based covariance matrices have off-diagonal terms, as discussed in more detail in Section II.4, and as shown in Fig. 6 for the \(\kappa\kappa\) block.14
We calculate the correlations between the lensing potential spectra and the CMB spectra from Eq. 36 in [44], which can also be obtained by extending Eq. 5 in [12] to allow either \(XY\) or \(WZ\) to be \(\phi\phi\), as
\[\mathsf{C}_{L_{1}\ell_{2}}^{\phi\phi,XY}=\frac{2}{2L_{1}+1}\left(C_{L_{1}}^{ \phi\phi}\right)^{2}\frac{\partial C_{\ell_{2}}^{XY}}{\partial C_{L_{1}}^{ \phi\phi}}. \tag{11}\]
This covariance arises because the same lenses that make up the lensing potential are also responsible for lensing the CMB spectra.
The terms given by Eqs. 5, 6, 10, and 11 are the most important terms in the analytic covariance matrix calculation, with the diagonal/Gaussian terms having the largest impact on parameters. However, there are some additional terms that are not included here [44, 45, 46], which we discuss below. The first term we neglect is the super-sample covariance [46] arising from lenses on scales larger than the survey area. Since all three experiments listed in Table 1 will survey half the sky, this effect is expected to be negligible as suggested by [46].
For the covariance between the lensing potential spectrum and the CMB spectra, there are four terms given in Eq. 33 of [44]. As discussed in [44], the first two terms cancel if one does RDN0 subtraction, which we assume will be done for all the CMB experiments considered here, as mentioned in Section II.4. The third term in Eq. 33 of [44] is Eq. 11 above, and we found that the fourth term (Type B Primary Trispectrum) had negligible impact on the parameter forecasts and is thus omitted.
In this work, we neglect a first-order correction to the lensing power spectrum known as the "N1 bias" [47, 35]. While this term is called a "bias", it is in fact a higher-order correction to the lensing signal that can be calculated from first principles similar to the zeroth-order lensing spectrum. This N1 signal is of similar or larger amplitude than the standard CMB power spectrum at high multipoles, and its inclusion may yield a higher lensing signal-to-noise ratio than we forecast here. We leave further exploration of this to future work, and note that ignoring this term is likely conservative.
Figure 6: _Left:_ We show the binned analytic correlation matrices for a CMB-HD survey between the lensed CMB \(TT\), \(TE\), \(EE\), and \(BB\) power spectra (first four rows), and between the lensed CMB and lensing \(\kappa\kappa\) power spectrum (last row). We set the diagonals of each block to zero to highlight the off-diagonal elements, which arise due to the non-Gaussian covariances, as discussed in Section III. We also show the simulation-based \(\kappa\kappa\times\kappa\kappa\) covariance matrix from [15] which we use on scales \(L,L^{\prime}\gtrsim 5000\); note that this is shown with a different color scale than the analytic correlation matrices. _Right:_ We show the corresponding binned correlation matrices for the delensed CMB-HD spectra. We find that the correlation coefficients for the off-diagonal terms of the binned analytic correlation matrices (not including \(\kappa\kappa\times\kappa\kappa\)) are less than \(\pm 1\%\) everywhere.
We use the CLASS delens package [12] to evaluate the derivatives \(\partial C^{XY}_{\ell}/\partial C^{\phi\phi}_{L}\) and \(\partial C^{XY}_{\ell}/\partial C^{XY,\mathrm{u}}_{\ell^{\prime}}\) in Eqs. 6 and 11.15 With these derivatives, we assemble the final covariance matrix combining the terms in Eqs. 5, 6, 10, and 11. We cross-checked the CLASS delens \(\partial C^{XY}_{\ell}/\partial C^{\phi\phi}_{L}\) derivatives by comparing to those we obtained using the lenscov16 package [44] up to \(\ell,L=10,000\). Since lenscov does not calculate covariances involving delensed CMB spectra, nor does it compute the derivatives \(\partial C^{XY}_{\ell}/\partial C^{XY,\mathrm{u}}_{\ell^{\prime}}\), we constructed covariance matrices from the CLASS delens and lenscov packages using lensed CMB spectra and Eqs. 5, 6, 10, and 11 (excluding the second term in Eq. 6). We confirmed that the forecasted parameter errors obtained from either covariance matrix were consistent (within \(<0.05\%\) for each parameter).
Footnote 15: We increase the CLASS precision parameters beyond the default values to the following: accurate_lensing = 1, delta_l_max = 2000, perturbations_sampling_stepsize = 0.05, k_max_tau0_over_l_max = 50, and P_k_max_l/Mpc = 500 to be consistent with the accuracy settings used for CAMB (see Appendix A.).
We generated covariance matrices for both lensed and delensed CMB spectra, by setting the \(C_{\ell_{1}}\) in Eq. 5 to either lensed or delensed spectra, with the latter obtained as described in Section II.5. For the derivatives in Eqs. 6 and 11, CLASS delens provides an option to output them for delensed CMB spectra, given the lensing spectrum noise, \(N_{L}^{\kappa\kappa}\).
We bin our final covariance matrix using the same binning matrix, \(M_{b\ell}\), as used for the one-dimensional spectra (see Eq. 3 of Section II.2), so that we obtain a binned covariance matrix given by
\[\mathbf{C}^{XY,WZ}_{\ell_{b}\ell_{b^{\prime}}}=\sum_{\ell\ell^{\prime}}M_{b\ell}\mathbf{C}^{XY,WZ}_{\ell\ell^{\prime}}M_{b^{\prime}\ell^{\prime}}^{T}, \tag{12}\]
for \(XY\) and \(WZ\) in \([TT,\,TE,\,EE,\,BB,\,\kappa\kappa]\). We then form a single block covariance matrix from the 25 binned blocks. We replace the binned CMB-HD \(\kappa\kappa\times\kappa\kappa\) covariance matrix on scales \(L_{1}\), \(L_{2}\gtrsim 5000\), with the simulation-based one from [15] when including foregrounds, or [34] when not including foregrounds, as mentioned above.
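As a concrete illustration, the binning and block assembly of Eq. 12 amount to a pair of matrix products per block. The sketch below is a minimal NumPy version, assuming a hypothetical `blocks` dictionary keyed by spectrum pairs; it is not the released analysis code.

```python
import numpy as np

def bin_covariance_block(cov, M_bl):
    """Bin one unbinned covariance block per Eq. 12: C_bb' = M C M^T.

    cov  : (n_ell, n_ell) unbinned covariance block, e.g. TT x TT
    M_bl : (n_bins, n_ell) binning matrix (the same M used for the 1D spectra)
    """
    return M_bl @ cov @ M_bl.T

def assemble_block_covariance(blocks, M_bl, order=("TT", "TE", "EE", "BB", "kk")):
    """Assemble the 25 binned blocks into a single block covariance matrix."""
    binned = [[bin_covariance_block(blocks[(xy, wz)], M_bl) for wz in order]
              for xy in order]
    return np.block(binned)  # shape: (5 * n_bins, 5 * n_bins)
```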
We show in Fig. 6 the binned analytic correlation matrices corresponding to the lensed and delensed covariance matrices, calculated using Eqs. 5, 6, 10, and 11, for the CMB-HD experiment. We also show the binned simulation-based \(\kappa\kappa\times\kappa\kappa\) correlation matrix from [15] which is used on scales \(L_{1}\), \(L_{2}\gtrsim 5000\). We calculate the unbinned elements of the correlation matrix, \(\rho^{XY,WZ}_{\ell_{1}\ell_{2}}\), between \(C^{XY}_{\ell_{1}}\) and \(C^{WZ}_{\ell_{2}}\) by [12; 44]
\[\rho^{XY,WZ}_{\ell_{1}\ell_{2}}=\frac{\mathbf{C}^{XY,WZ}_{\ell_{1}\ell_{2}}}{\sqrt{\mathbf{C}^{XY,XY}_{\ell_{1}\ell_{1}}\,\mathbf{C}^{WZ,WZ}_{\ell_{2}\ell_{2}}}}, \tag{13}\]
where \(XY\) and \(WZ\) are \(TT\), \(TE\), \(EE\), \(BB\), or \(\kappa\kappa\) (we have used lowercase \(\ell\)'s for simplicity). We then bin this in the same way as Eq. 12 to generate Fig. 6. For this figure, in order to highlight the off-diagonal elements, the diagonal elements of each block were set to zero. In the CMB \(\times\) CMB blocks for lensed spectra at low multipoles (Fig. 6, left), we see the familiar "checkerboard" pattern from the lensing-induced peak-smoothing, as noted by [43; 44; 11; 45]. For the delensed correlation matrix (Fig. 6, right), correlations in these blocks are reduced at low multipoles, as also seen in [12; 11]; in fact they are nearly zero on scales where delensing removes much of the lensing from the CMB spectra (as shown in Fig. 4). In the CMB \(\times\) \(\kappa\kappa\) blocks, correlations are visible between large-scale (low-\(L\)) lensing and lensed CMB spectra for a wide range of scales (up to \(\ell\sim 5000\)), as also discussed in [11]. The \(\kappa\kappa\times\kappa\kappa\) blocks show off-diagonal terms from higher-order lensing corrections and non-Gaussian fluctuations of the matter power spectrum that are captured by simulations [15]. We find that in both the lensed and delensed binned analytic correlation matrices (not including \(\kappa\kappa\times\kappa\kappa\)), the correlation coefficients, \(\rho^{XY,WZ}_{\ell_{1}\ell_{2}}\), for the off-diagonal terms are less than \(\pm 1\%\).
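For reference, the correlation coefficients of Eq. 13 can be computed block by block from the covariance; a minimal NumPy sketch (with a `zero_diagonal` option mirroring the presentation of Fig. 6) follows. The function name is illustrative, not from the public code.

```python
import numpy as np

def correlation_block(cov_xy_wz, cov_xy_xy, cov_wz_wz, zero_diagonal=True):
    """Correlation coefficients per Eq. 13 for one (XY, WZ) block."""
    # Denominator: sqrt of the outer product of the two auto-block diagonals.
    denom = np.sqrt(np.outer(np.diag(cov_xy_xy), np.diag(cov_wz_wz)))
    rho = cov_xy_wz / denom
    if zero_diagonal:
        np.fill_diagonal(rho, 0.0)  # highlight off-diagonal structure, as in Fig. 6
    return rho
```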
### BAO Covariance Matrix
We also construct a covariance matrix for BAO data from a DESI-like survey. The BAO data measures the transverse and radial scales, \(r_{s}/d_{A}(z)\) and \(H(z)r_{s}\), respectively, where \(r_{s}\) is the comoving sound horizon at the end of the baryon drag epoch (i.e. \(r_{s}(z_{d})\)), \(d_{A}(z)\) is the angular diameter distance to redshift \(z\), and \(H(z)\) is the expansion rate at redshift \(z\)[16]. These are combined into a distance measurement defined by [48]
\[d_{V}(z)\equiv\left[(1+z)^{2}d_{A}^{2}(z)\,\frac{cz}{H(z)}\right]^{1/3}. \tag{14}\]
We assume a Gaussian (diagonal) covariance matrix for the DESI BAO data. We calculate the uncertainty \(\sigma_{j}\) on \(r_{s}/d_{V}(z_{j})\) following the approach of [49] as
\[\sigma_{j}\equiv\sigma\left(\frac{r_{s}}{d_{V}(z_{j})}\right)=\frac{1}{3}\frac{r_{s}}{d_{V}(z_{j})}\sqrt{\left[\frac{\sigma\left(H(z_{j})r_{s}\right)}{H(z_{j})r_{s}}\right]^{2}+\left[2\,\frac{\sigma\left(d_{A}(z_{j})/r_{s}\right)}{d_{A}(z_{j})/r_{s}}\right]^{2}}. \tag{15}\]
We use CAMB to calculate the theoretical BAO signal \(r_{s}/d_{V}(z_{j})\), at each redshift \(z_{j}\), for the _Planck_ fiducial cosmology used in this work. We use the forecasted fractional uncertainties on the quantities \(H(z_{j})r_{s}\) and \(d_{A}(z_{j})/r_{s}\), at redshifts \(z_{j}\), given by Table 2.3 (for \(z\in[0.65,1.85]\)) and Table 2.5 (for \(z\in[0.05,0.45]\)) in [16] for the DESI experiment covering 14,000 square degrees of
sky (i.e., the terms under the square root in the second line of Eq. 15). The diagonal elements of the covariance matrix are \(\sigma_{j}^{2}\), with all off-diagonal elements set to zero.
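To make the construction explicit, the sketch below evaluates Eqs. 14 and 15 with NumPy; the inputs (distances, expansion rates, and the DESI fractional errors) are assumed to come from CAMB and Tables 2.3 and 2.5 of [16], and the function names are ours.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def d_V(z, d_A, H):
    """Volume-averaged distance of Eq. 14; d_A in Mpc, H in km/s/Mpc."""
    return ((1.0 + z)**2 * d_A**2 * C_KM_S * z / H)**(1.0 / 3.0)

def sigma_rs_over_dV(rs_over_dV, frac_sig_Hrs, frac_sig_dA_over_rs):
    """1-sigma error on r_s/d_V(z_j) per Eq. 15, from the DESI fractional
    errors on H(z_j) r_s and d_A(z_j)/r_s."""
    return (rs_over_dV / 3.0) * np.sqrt(frac_sig_Hrs**2
                                        + (2.0 * frac_sig_dA_over_rs)**2)

# The BAO covariance matrix is then diagonal: cov_bao = np.diag(sigma_j**2)
```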
## IV Methods for parameter forecasts
We use both a Fisher matrix method (described in Section IV.1) and a likelihood plus Markov chain Monte Carlo (MCMC) method (described in Section IV.2) to forecast cosmological parameter constraints from the combination of CMB and BAO data. These forecasts are presented in Section V.
We forecast uncertainties on the six parameters of the \(\Lambda\)CDM model: the physical baryon and cold dark matter densities, \(\Omega_{\rm b}h^{2}\) and \(\Omega_{\rm c}h^{2}\), respectively; the primordial comoving curvature power spectrum amplitude, \(\ln(10^{10}A_{\rm s})\), and scalar spectral index, \(n_{\rm s}\), defined at the pivot scale \(k_{0}=0.05\) Mpc\({}^{-1}\); the reionization optical depth, \(\tau\); and the Hubble constant \(H_{0}\). For MCMC runs, we treat \(H_{0}\) as a derived parameter, and instead sample over the parameter \(100\theta_{\rm MC}\), where \(\theta_{\rm MC}\) is the cosmoMC17 approximation to the angular scale of the sound horizon at last scattering. We additionally vary the effective number of relativistic species \(N_{\rm eff}\) and the sum of the neutrino masses \(\sum m_{\nu}\). We refer to these eight parameters (listed in Table 2) as the \(\Lambda\)CDM+\(N_{\rm eff}+\sum m_{\nu}\) model. Since the CMB experiments considered here do not observe very large scales (\(\ell<30\)), we apply a prior on the optical depth \(\tau\) of \(\sigma(\tau)=0.007\) from _Planck_[1].
Footnote 17: [https://cosmologist.info/cosmomc/readme.html](https://cosmologist.info/cosmomc/readme.html)
Both the Fisher and MCMC methods assume that the CMB and BAO data can be described by a Gaussian likelihood function \(\mathcal{L}(\hat{d}|\vec{\theta})\) for the probability distribution of the data \(\hat{d}\) given a set of model parameters \(\vec{\theta}\). For the CMB data described in Sections II and III.1, our likelihood function is given by
\[-2\ln\mathcal{L}_{\rm CMB}\left(\hat{C}_{\ell_{b}}|\vec{\theta}\right)=\sum_{\ell_{b}\ell_{b}^{\prime}}\Delta C_{\ell_{b}}(\vec{\theta})\,\mathbf{C}^{-1}_{\ell_{b}\ell_{b}^{\prime}}\,\Delta C_{\ell_{b}^{\prime}}(\vec{\theta}), \tag{16}\]
where \(\Delta C_{\ell_{b}}(\vec{\theta})=\hat{C}_{\ell_{b}}-C_{\ell_{b}}(\vec{\theta})\), \(\hat{C}_{\ell_{b}}\) is the binned (lensed or delensed) data spectra18 with bin centers \(\ell_{b}\), \(C_{\ell_{b}}(\vec{\theta})\) is the binned theory spectra evaluated at a given set of cosmological parameters \(\vec{\theta}\), and \(\mathbf{C}_{\ell_{b}\ell_{b}^{\prime}}\) is the binned covariance matrix of the data. \(\hat{C}_{\ell_{b}}\) and \(C_{\ell_{b}}(\vec{\theta})\) hold the binned \(TT\), \(TE\), \(EE\), \(BB\), and \(\kappa\kappa\) spectra (in the same format and order as the covariance matrix, \(\mathbf{C}_{\ell_{b}\ell_{b}^{\prime}}\)).19
Footnote 18: We use theory spectra at our fiducial cosmology, with no scatter, to simulate the data spectra. We verified that this yields the same error bars in our MCMC analysis as we would get if we had added scatter to the spectra drawn from our covariance matrix.
Footnote 19: While the \(BB\) power spectrum does not add significantly to the parameter constraints, we include it for completeness.
For the BAO data, given by \(f_{j}\equiv r_{\rm s}/d_{V}(z_{j})\) and described in Section III.2, we use the likelihood function
\[-2\ln\mathcal{L}_{\rm BAO}\left(\hat{f}_{j}\ |\vec{\theta}\right)=\sum_{j} \frac{\left[\hat{f}_{j}-f_{j}(\vec{\theta})\right]^{2}}{\sigma_{j}^{2}}, \tag{17}\]
where \(\hat{f}_{j}\) is the BAO data with variance \(\sigma_{j}^{2}\) at redshift \(z_{j}\), and \(f_{j}(\vec{\theta})\) is the theory evaluated at the set of cosmological parameters \(\vec{\theta}\)[50].
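Both likelihoods reduce to simple quadratic forms once the data, theory, and covariance are assembled. A minimal sketch, assuming NumPy arrays in the ordering described above (function names are ours, not the public likelihood's), is:

```python
import numpy as np

def neg2_loglike_cmb(data_vec, theory_vec, cov_inv):
    """-2 ln L for the CMB data, per Eq. 16. The vectors stack the binned
    TT, TE, EE, BB, and kappa-kappa spectra in the covariance ordering."""
    delta = data_vec - theory_vec
    return delta @ cov_inv @ delta

def neg2_loglike_bao(f_data, f_theory, sigma):
    """-2 ln L for the BAO data, per Eq. 17 (diagonal covariance)."""
    return np.sum(((f_data - f_theory) / sigma)**2)

# The data sets are independent, so the total chi-square is the sum:
# chi2_tot = neg2_loglike_cmb(...) + neg2_loglike_bao(...)
```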
We make both our Fisher and likelihood code publicly available, as mentioned above, and describe them in more detail below.
### Fisher Matrix Estimation
Fisher matrices can be used as an alternative to MCMC runs for parameter forecasts. While the Fisher approach is much quicker than a full MCMC run, it assumes that the parameters follow Gaussian posterior distributions; this is a good approximation for the six \(\Lambda\)CDM parameters, but may not be suitable when allowing other parameters (such as neutrino mass) to vary. Thus we cross check our Fisher matrix results with a likelihood/MCMC analysis and find good agreement (see Appendix B).
To form a Fisher matrix, we calculate the change in the theoretical signal as each parameter is varied (i.e. the numerical derivatives of spectra with respect to each parameter); the covariance matrix of the data tells us how well we can detect this change, and therefore how well we can constrain parameters. We assume that the data can be described by a Gaussian likelihood, such as the ones in Eqs. 16 and 17. We also assume that the likelihood is maximized when evaluated at a fiducial set of parameters \(\vec{\theta}_{0}\), and Taylor expand about this point. The elements \(F_{\alpha\beta}\) of the Fisher matrix \(F\) are given by the coefficients of the quadratic terms, i.e.
\[F_{\alpha\beta}\equiv-\left\langle\frac{\partial^{2}\ln\mathcal{L}}{\partial\theta_{\alpha}\partial\theta_{\beta}}\right\rangle\bigg|_{\vec{\theta}_{0}},\]
where the indices \(\alpha\) and \(\beta\) correspond to specific parameters. This characterizes how rapidly the likelihood changes as the parameters move away from their fiducial values. The covariance matrix _for the parameters_ is found by taking the inverse of the Fisher matrix, i.e. \(\mathrm{cov}\left(\theta_{\alpha},\ \theta_{\beta}\right)=\left(F^{-1}\right)_{\alpha\beta}\). The forecasted marginalized error \(\sigma_{\alpha}\) on parameter \(\theta_{\alpha}\) is then found from \(\sigma_{\alpha}=\sqrt{\left(F^{-1}\right)_{\alpha\alpha}}\).
For the CMB likelihood function given by Eq. 16, the elements of the Fisher matrix are
\[F_{\alpha\beta}^{\rm CMB}=\sum_{\ell_{b}\ell_{b}^{\prime}}\left.\left(\frac{\partial C_{\ell_{b}}}{\partial\theta_{\alpha}}\,\mathbf{C}^{-1}_{\ell_{b}\ell_{b}^{\prime}}\,\frac{\partial C_{\ell_{b}^{\prime}}}{\partial\theta_{\beta}}\right)\right|_{\vec{\theta}_{0}}. \tag{18}\]
We use Eq. 18 to compute the CMB Fisher matrix for a given set of cosmological parameters, numerically evaluating the derivatives with respect to parameters using a finite difference method with the parameter step sizes listed in Table 2. We apply the Gaussian prior on \(\tau\) by adding its inverse variance \(1/\sigma^{2}(\tau)\) to the corresponding element \(F_{\tau\tau}\) of the Fisher matrix [51].
For the BAO likelihood function given by Eq. 17, the elements of the Fisher matrix are given by
\[F^{\text{BAO}}_{\alpha\beta}=\sum_{j}\left.\left(\frac{\partial f_{j}}{ \partial\theta_{\alpha}}\frac{1}{\sigma_{j}^{2}}\frac{\partial f_{j}}{ \partial\theta_{\beta}}\right)\right|_{\vec{\theta}_{0}}, \tag{19}\]
where the sum is taken over the redshifts \(z_{j}\), and the derivatives \(\partial f_{j}/\partial\theta_{\alpha}\) are calculated in the same way as in Eq. 18. To forecast parameter errors from the combination of CMB and BAO data, we take the sum of the two Fisher matrices (since they are from independent data sets), and then invert this sum to obtain the parameter covariance matrix and marginalized error bars [51].
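In code, Eqs. 18 and 19 are again quadratic forms in the derivative arrays. The sketch below is our own minimal version (not the released Fisher code); it also shows the \(\tau\) prior and the CMB+BAO combination described above.

```python
import numpy as np

def fisher_cmb(dC_dtheta, cov_inv):
    """CMB Fisher matrix per Eq. 18; dC_dtheta has shape (n_params, n_data),
    holding finite-difference derivatives of the stacked binned spectra."""
    return dC_dtheta @ cov_inv @ dC_dtheta.T

def fisher_bao(df_dtheta, sigma):
    """BAO Fisher matrix per Eq. 19; df_dtheta has shape (n_params, n_z)."""
    return (df_dtheta / sigma**2) @ df_dtheta.T

def marginalized_errors(F, tau_index=None, sigma_tau=0.007):
    """Add the Gaussian tau prior to F, invert, and take sqrt of the diagonal."""
    F = F.copy()
    if tau_index is not None:
        F[tau_index, tau_index] += 1.0 / sigma_tau**2  # prior adds its inverse variance
    return np.sqrt(np.diag(np.linalg.inv(F)))

# CMB and BAO are independent, so their Fisher matrices add before inversion:
# sigmas = marginalized_errors(fisher_cmb(dC, Cinv) + fisher_bao(df, sig), tau_index=4)
```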
The Fisher matrix method described above gives the statistical uncertainties on a set of parameters, assuming that the fiducial model used in the calculation of the numerical derivatives accurately describes the true universe. We can extend this method to calculate the expected _bias_ to the estimated parameter values due to an incorrect fiducial model, following the approach described in [52, 53, 54] and adopted by [12, 14]. If we assume a fiducial model with power spectra \(C_{\ell}^{\text{fid}}\), while the true model that describes our Universe is \(C_{\ell}^{\text{true}}\), the bias \(\Delta\theta_{\alpha}\) to the parameter \(\theta_{\alpha}\) is given by
\[\Delta\theta_{\alpha}=\sum_{\beta}F_{\alpha\beta}^{-1}\sum_{\ell_{b},\ell_{b}^{\prime}}\frac{\partial C_{\ell_{b}}^{\rm fid}}{\partial\theta_{\beta}}\,\mathbf{C}^{-1}_{\ell_{b}\ell_{b}^{\prime}}\left(C_{\ell_{b}^{\prime}}^{\rm true}-C_{\ell_{b}^{\prime}}^{\rm fid}\right). \tag{20}\]
Here the Fisher matrix \(F_{\alpha\beta}\) is calculated with the fiducial model. We note that the parameter biases obtained using Eq. 20 are technically conservative for the case where one has a prior on the value of a parameter (see Appendix A). This is because, while Eq. 20 allows one to specify a parameter uncertainty from prior data via the Fisher matrix \(F_{\alpha\beta}\), it does not allow one to specify the mean value of this prior, potentially resulting in artificially larger bias estimates than is actually the case. The bias estimates from Eq. 20 can be cross checked with an MCMC method to verify consistency, as done in Appendix A.
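A direct transcription of Eq. 20 is a single matrix chain; the sketch below (ours, for illustration) assumes the same array shapes as the Fisher sketch above.

```python
import numpy as np

def parameter_bias(F, dC_dtheta, cov_inv, cl_true, cl_fid):
    """Parameter biases per Eq. 20 when the fiducial model differs from the
    'true' model (e.g. CDM-only fiducial vs. CDM + AGN feedback truth).

    F          : (n_params, n_params) Fisher matrix at the fiducial model
    dC_dtheta  : (n_params, n_data) derivatives of the fiducial spectra
    cov_inv    : (n_data, n_data) inverse binned covariance
    """
    v = dC_dtheta @ cov_inv @ (cl_true - cl_fid)  # inner sums over ell bins
    return np.linalg.inv(F) @ v                   # Delta theta, per parameter
```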
### Likelihood and Markov Chain Monte Carlo
We use an MCMC method to verify our Fisher parameter forecasts. With an MCMC analysis, we want to recover the posterior probability distribution \(P(\vec{\theta}\mid\vec{d})\) for a set of model parameters \(\vec{\theta}\) that are most likely to describe the observed data \(\vec{d}\). The posterior distribution is proportional to the product of the likelihood function, \(\mathcal{L}(\vec{d}\mid\vec{\theta})\), and the prior probability distribution, \(P(\vec{\theta})\), for the parameter values \(\vec{\theta}\). The posterior distribution is estimated by drawing random samples of sets of parameters, \(\vec{\theta}\), via an MCMC sampler, and evaluating the likelihood and prior probability for each [55].
For the results presented in this work, we use the mcmc sampler [56, 57] from the Cobaya [17] package to sample from the likelihood functions defined in Eqs. 16 and 17, and we use CAMB to calculate the theory \(C_{\ell}(\vec{\theta})\) or \(f_{j}(\vec{\theta})\) at each step in parameter space.20 The priors adopted for each MCMC analysis are listed in Table 2; all priors are uniform except for the Gaussian prior on the reionization optical depth from _Planck_ of \(\tau=0.054\pm 0.007\)[1].
Footnote 20: Note that Cobaya calls CAMB itself, but we added an interface within our likelihood to obtain the delensed spectra from CAMB via Cobaya.
Footnote 21: Increasing the CAMB accuracy settings higher than this does not change the parameter error bars as long as the mock data is generated at the same accuracy; however, we show in Appendix A that low CAMB accuracy settings in general may result in a bias when running MCMC chains on real data.
MCMC runs in general converge faster if given an initial proposal matrix consisting of the anticipated covariance between the cosmological parameters. We generate such a proposal matrix by first using a Fisher matrix to obtain the conditional posterior width for each parameter, given by \(1/\sqrt{F_{\alpha\alpha}}\) for parameter \(\theta_{\alpha}\). We use this to determine the step size of each parameter in the sampler. We then generate mock data and run a quick preliminary MCMC using the default CAMB accuracy, except for setting lens_potential_accuracy=1 to turn on the non-linear matter power spectrum (see Appendix A).21 We use the resulting MCMC chains to calculate a parameter covariance matrix, which we use as a proposal matrix for subsequent MCMC runs with our baseline CAMB accuracy.
Footnote 22: [https://www.nersc.gov/](https://www.nersc.gov/)
We consider the MCMC parameter chains to be converged when they reach a value of about \(R-1\leq 0.01\) after removing the first 30% of each chain as "burn-in", where \(R-1\) is the Gelman-Rubin convergence parameter [58]. We find that the likelihood for delensed CMB-HD plus DESI BAO for a \(\Lambda\)CDM\(+N_{\rm eff}+\sum m_{\nu}\) model takes about 12 hours on NERSC22 using our baseline accuracy to reach \(R-1=0.03\), and about 16 hours to reach \(R-1=0.015\). These times are obtained after including a proposal matrix generated as described above. We confirm that the marginalized mean parameter values that we obtain match the fiducial values used to generate the mock data, listed in Table 2. We also confirm that we recover parameter errors that match those obtained from the Fisher analysis described above (see Appendix B and Fig. 18 for details). Since the MCMC run takes about a day, after we verify a few base cases, we use the Fisher method to explore variations of those cases (e.g. changes in \(\ell_{\rm max}\), inclusion of foregrounds or delensing, etc.).
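For orientation, a run of this kind can be configured in Cobaya roughly as follows. The likelihood class name and covmat file are placeholders for the public likelihood and proposal matrix described in the text, and only the \(\tau\) prior of the Table 2 priors is shown; this is a sketch, not our exact configuration.

```python
from cobaya.run import run

info = {
    # "hd_likelihood.CMBHDLike" is a hypothetical stand-in for the
    # public CMB-HD likelihood described in the text.
    "likelihood": {"hd_likelihood.CMBHDLike": None},
    "theory": {"camb": {"extra_args": {"lens_potential_accuracy": 30}}},
    "params": {
        # Gaussian Planck prior on tau; the remaining sampled parameters
        # (not shown) take the uniform priors of Table 2.
        "tau": {"prior": {"dist": "norm", "loc": 0.054, "scale": 0.007},
                "ref": 0.054, "proposal": 0.003},
    },
    "sampler": {"mcmc": {"Rminus1_stop": 0.01,           # Gelman-Rubin target
                         "covmat": "proposal_cov.txt"}},  # proposal matrix from a preliminary run
}
updated_info, sampler = run(info)
```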
## V Parameter forecasts
For the discussion below, we show parameter constraints using Fisher matrix estimation since it provides a faster way to explore the impact of several effects. However, as mentioned above, we confirm that our parameter forecasts are consistent when using either a likelihood and MCMC or Fisher matrix estimation for a subset of cases (see Appendix B for details). For the SO-like and S4-like experiments, we also confirm that our parameter constraints are consistent with those forecasted in [4] and [5], respectively.
Figure 7: Forecasted cosmological parameter constraints for a \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\) model from delensed CMB \(TT\), \(TE\), \(EE\), and \(BB\) power spectra and the lensing power spectrum, \(\kappa\kappa\), for SO-like, CMB-S4-like, and CMB-HD-like surveys. The experimental configurations for each survey are listed in Table 1. We also include expected DESI BAO data [16] in these forecasts. The parameters that see the most improvement from a CMB-HD type survey compared to precursor experiments are \(n_{\rm s}\) and \(N_{\rm eff}\). We give the one-dimensional marginalized parameter errors in Table 3.
| Parameter (\(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\)) | Fiducial | HD Lensed | HD Lensed + FG | HD Delensed + FG | **HD Delensed + FG + DESI BAO** | HD Delensed + FG + DESI BAO (\(\ell_{\rm max},L_{\rm max}=10{,}000\)) |
|---|---|---|---|---|---|---|
| \(\Omega_{\rm b}h^{2}\) | 0.022370 | 0.000032 | 0.000033 | 0.000027 | 0.000026 | 0.000026 |
| \(\Omega_{\rm c}h^{2}\) | 0.12000 | 0.00064 | 0.00065 | 0.00058 | 0.00041 | 0.00041 |
| \(\ln(10^{10}A_{\rm s})\) | 3.044 | 0.011 | 0.011 | 0.011 | 0.0098 | 0.010 |
| \(n_{\rm s}\) | 0.9649 | 0.0023 | 0.0024 | 0.0021 | 0.0013 | 0.0014 |
| \(\tau\) | 0.0544 | 0.0056 | 0.0057 | 0.0054 | 0.0052 | 0.0054 |
| \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] | 67.36 | 0.72 | 0.74 | 0.65 | 0.29 | 0.29 |
| \(N_{\rm eff}\) | 3.046 | 0.017 | 0.018 | 0.015 | 0.014 | 0.015 |
| \(\sum m_{\nu}\) [eV] | 0.06 | 0.050 | 0.051 | 0.047 | 0.025 | 0.026 |

Table 4: Forecasted cosmological parameter constraints from CMB-HD \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra for a \(\Lambda\)CDM + \(N_{\rm eff}\) + \(\sum m_{\nu}\) model. All forecasts include a \(\tau\) prior of \(\tau=0.054\pm 0.007\) from _Planck_ [1]. The first two columns list the parameters and their fiducial values. The following two columns list their forecasted marginalized \(1\sigma\) uncertainties when using lensed spectra with or without foregrounds in the temperature maps. The fifth column shows the forecast when delensing the CMB spectra, and the sixth column shows the change when including DESI BAO data [16]. In each of these cases we use CMB and CMB lensing multipoles out to \(\ell_{\rm max},L_{\rm max}=20{,}000\) for CMB-HD. The last column lists the same information as the sixth column, but instead using a maximum multipole of \(\ell_{\rm max},L_{\rm max}=10{,}000\) for both CMB and CMB lensing power spectra. We see in particular that \(n_{\rm s}\) and \(N_{\rm eff}\) constraints are tightened when including multipoles beyond 10,000 for a CMB-HD type survey. We show corresponding forecasts for \(\Lambda\)CDM and \(\Lambda\)CDM + \(N_{\rm eff}\) models in Appendix C.
| Parameter (\(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\)) | Fiducial | SO + DESI BAO | CMB-S4 + DESI BAO | CMB-HD + DESI BAO | CMB-HD / SO | CMB-HD / CMB-S4 |
|---|---|---|---|---|---|---|
| \(\Omega_{\rm b}h^{2}\) | 0.022370 | 0.000057 | 0.000039 | 0.000026 | 0.46 | 0.67 |
| \(\Omega_{\rm c}h^{2}\) | 0.12000 | 0.00074 | 0.00056 | 0.00041 | 0.55 | 0.73 |
| \(\ln(10^{10}A_{\rm s})\) | 3.044 | 0.012 | 0.011 | 0.0098 | 0.84 | 0.86 |
| \(n_{\rm s}\) | 0.9649 | 0.0030 | 0.0025 | 0.0013 | 0.43 | 0.52 |
| \(\tau\) | 0.0544 | 0.0061 | 0.0060 | 0.0052 | 0.85 | 0.87 |
| \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] | 67.36 | 0.36 | 0.32 | 0.29 | 0.81 | 0.91 |
| \(N_{\rm eff}\) | 3.046 | 0.043 | 0.030 | 0.014 | 0.33 | 0.47 |
| \(\sum m_{\nu}\) [eV] | 0.06 | 0.030 | 0.029 | 0.025 | 0.83 | 0.86 |

Table 3: Forecasted cosmological parameter constraints for a \(\Lambda\)CDM + \(N_{\rm eff}+\sum m_{\nu}\) model from SO-like, CMB-S4-like, and CMB-HD-like delensed \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra when combined with DESI BAO. The first two columns list the parameters and their fiducial values. The following three columns list the forecasted marginalized \(1\sigma\) uncertainties on each parameter for the three experiments, and the last two columns list the ratios of these values. The forecasts for CMB-HD include expected residual extragalactic foregrounds in the temperature power spectrum. All forecasts include a \(\tau\) prior of \(\tau=0.054\pm 0.007\) from _Planck_ [1]. We find improvement in all parameters considered for a CMB-HD-like experiment compared to precursor surveys; we also find the most significant improvement for \(n_{\rm s}\) and \(N_{\rm eff}\), of about a factor of two or more. These forecasts are also depicted in Fig. 7.
In Fig. 7, we show the cosmological parameter constraints for the \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\) model from delensed CMB \(TT\), \(TE\), \(EE\), \(BB\) spectra and lensing \(\kappa\kappa\) spectra when combined with DESI BAO data. For the CMB-HD-like experiment (red), we have included residual extragalactic foregrounds in the temperature data. For comparison, we forecast parameter constraints for S4-like (green) and SO-like (yellow) experiments. We show the \(1\sigma\) and \(2\sigma\) parameter contours as the dark and lighter colors, respectively, and we list the \(1\sigma\) marginalized parameter constraints for all three experiments in Table 3. We find improvement in all parameters considered for a CMB-HD-like experiment compared to precursor surveys; we also find the most significant improvement for \(n_{\rm s}\) and \(N_{\rm eff}\), of about a factor of two or more (see last two columns of Table 3). In particular, for CMB-HD, we find \(\sigma(n_{\rm s})=0.0013\) and \(\sigma(N_{\rm eff})=0.014\).
In the following sections, we focus on a CMB-HD-like experiment and discuss the effects of residual extragalactic foregrounds, delensing of the acoustic peaks, and the combination with DESI BAO data on cosmological parameter constraints. The resulting parameter uncertainties in each case for a \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\) model are listed in Table 4; we present results for a \(\Lambda\)CDM and a \(\Lambda\)CDM+\(N_{\rm eff}\) model in Appendix C. In the last column of Tables 4, 7, and 8, we examine the effect of using a lower maximum multipole of \(\ell_{\rm max},L_{\rm max}=10,000\) and find, in particular, that \(n_{\rm s}\) and \(N_{\rm eff}\) constraints are tightened when including multipoles beyond 10,000.
### Impact of Residual Foregrounds
We examine the effect of including residual extragalactic foregrounds in temperature maps, as described in Section II.3, on the parameter constraints for CMB-HD from lensed CMB \(TT\), \(TE\), \(EE\), and \(\kappa\kappa\) power spectra. In Fig. 8, we show the parameter constraints without (orange) and with (green) foregrounds for a subset of parameters from a \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\) model; the full set of forecasted parameter errors are listed in the third and fourth columns of Table 4. We see that the residual foregrounds do not significantly increase the errors on any of the parameters considered here. The ultra-high resolution and low noise of CMB-HD is critical to reducing the foreground levels to those shown in the top panel of Fig. 1.
### Impact of Delensing
In Fig. 9, we show the parameter constraints from CMB-HD lensed (green) and delensed (blue) CMB spectra and the lensing spectrum for a subset of parameters from a \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\) model. In both cases, we include residual extragalactic foregrounds. We see smaller uncertainties in both \(n_{\rm s}\) and \(N_{\rm eff}\) after delensing. We show the comparison between lensed and delensed forecasts for the full parameter set in Fig. 11. The fifth column of Table 4 gives the marginalized \(1\sigma\) parameter errors when including delensing, and shows considerable improvement for all the parameters considered, even when also including residual foregrounds. We attribute this to the removal of much of the off-diagonal covariance on scales where CMB-HD can delens efficiently, as shown in the right panel of Fig. 6. Tables 7 and 8 show the impact of delensing for \(\Lambda\)CDM and \(\Lambda\)CDM+\(N_{\rm eff}\) models, respectively. In Table 9, we show that even when including DESI BAO, delensing still improves parameter errors, especially for \(N_{\rm eff}\).
### Impact of Adding DESI BAO

In Fig. 10, we show the parameter constraints from CMB-HD delensed spectra with and without adding DESI BAO data; the inclusion of DESI BAO reduces uncertainties on \(H_{0}\) and \(n_{\rm s}\), but has less impact on \(N_{\rm eff}\). The sixth columns of Tables 4, 7, and 8 show the impact of adding DESI BAO for \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\), \(\Lambda\)CDM, and \(\Lambda\)CDM+\(N_{\rm eff}\) models, respectively.
### Impact of Baryonic Physics
The low noise and high resolution of CMB-HD allows it to be sensitive to small scales (\(k>0.5\,h\text{Mpc}^{-1}\)) where the effects of baryonic feedback on the matter distribution become important. On these scales, feedback from active galactic nuclei (AGN) emissions, for example, can push mass out from the centers of halos, while baryonic cores, formed from star-formation and cooling, concentrate mass in the center of halos. These effects shift the mass distribution, which can be measured with CMB lensing.
In addition, the hot gas traced by the thermal and kinetic Sunyaev-Zel'dovich effects (tSZ and kSZ, respectively) is also redistributed by baryonic feedback. This makes the kSZ and tSZ effects important external measurements of the amount of feedback that has occurred and the impact of that feedback. In particular, cross-correlations between CMB lensing and the kSZ/tSZ effects can provide powerful additional constraints on the behavior of baryonic effects [59].
To model the impact of baryonic feedback, we use the updated HMCode-2020 model from [60], and in particular adopt their single-parameter baryonic feedback model characterized by the parameter \(\log_{10}(T_{\text{AGN}}/\text{K})\)23; we use a fiducial value of \(\log_{10}(T_{\text{AGN}}/\text{K})=7.8\) and a step size of \(\pm 0.05\) in the Fisher analysis. This model was used in a recent analysis combining KiDS-1000 optical lensing data with tSZ measurements from _Planck_ and ACT. This analysis measured the cross-correlation between the lensing shear and tSZ maps, and measured \(\log_{10}(T_{\text{AGN}}/\text{K})\) with a \(1\sigma\) uncertainty better than 6% [59].
Footnote 23: Previous versions of HMCode did not include star-formation in their baryonic model, and only predicted power suppression [60]. The HMCode–2020 model has six free parameters, but [60] find a single-parameter variant is only slightly less accurate (e.g. see Fig 5 of [60]).
We show in Fig. 12 the impact of freeing this baryonic feedback parameter on a subset of parameters for a \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\)+baryonic feedback model from CMB-HD delensed \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra plus DESI BAO (light blue). We also show the marginalized \(1\sigma\) parameter errors in the second column of Table 5. We see that the parameter errors do not increase substantially when freeing this baryonic parameter. We also see that CMB-HD can constrain this baryonic feedback parameter to an accuracy of 0.45% without including any information from kSZ or tSZ.
Figure 10: Shown are the forecasted constraints for a \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\) model from CMB-HD delensed \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra, with and without including expected DESI BAO data [16] (blue and red, respectively). Here we show a subset of parameter constraints for \(H_{0}\), \(N_{\rm eff}\), and \(n_{\rm s}\), and show the full parameter constraints in Fig. 11. We see that the inclusion of DESI BAO data reduces uncertainties on \(H_{0}\) and \(n_{\rm s}\), but has less impact on \(N_{\rm eff}\).
Figure 9: Shown are forecasted constraints for a \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\) model from lensed (green) and delensed (blue) CMB-HD \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra. Here we show a subset of parameter constraints for \(H_{0}\), \(N_{\rm eff}\), and \(n_{\rm s}\), and show the full parameter constraints in Fig. 11. We see that delensing the CMB spectra reduces the parameter uncertainties.
This tight constraint on baryonic feedback is likely due to the precision with which CMB-HD will measure the small-scale lensing power spectrum.
We also add a prior on the baryonic feedback parameter \(\log_{10}(T_{\rm AGN}/\rm K)\) that we can expect from CMB-HD measurements of the tSZ and kSZ effects in cross-correlation with CMB lensing. We estimate this prior by extrapolating from the 6% constraint measured by [59] using KiDS-1000 lensing maps cross-correlated with _Planck_ plus ACT tSZ maps over 1000 square degrees of sky. We note that CMB-HD will survey about 24,000 square degrees of sky, gaining about a factor of 5 improvement over the current feedback constraint.
Figure 11: Shown are the forecasted constraints for a \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\) model for CMB-HD. Here we show constraints from lensed CMB \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra (green), delensed CMB \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra (blue), and delensed CMB \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra plus DESI BAO [16] (red). We see that delensing the CMB spectra and adding DESI BAO data reduce uncertainties on cosmological parameters.
In addition, the CMB-HD temperature map noise level will be over 20 times deeper than that of _Planck_ plus ACT. Moreover, the CMB-HD lensing map will be a factor of a few higher in signal-to-noise ratio than that of KiDS-1000, and we can anticipate also folding in information from the cross-correlation of kSZ maps with CMB lensing (not included in the KiDS-1000 analysis). To be conservative, we assume an overall improvement of two orders of magnitude in ability to constrain \(\log_{10}(T_{\rm AGN}/{\rm K})\) from CMB-HD tSZ and kSZ cross-correlations with CMB lensing compared to the current constraint. (As mentioned above, CMB-HD already can improve the constraint on \(\log_{10}(T_{\rm AGN}/{\rm K})\) by an order of magnitude without any SZ data.) Thus we apply a 0.06% prior on this baryonic feedback parameter, and show the results in Fig. 12 (dashed black) and Table 5 (third column). We see that the inclusion of this additional prior from SZ data returns the parameter errors to what they were before freeing the feedback parameter (shown in the first column of Table 5).
We use Eq. 20 in Section IV.1 to predict the bias on the estimated cosmological parameters when AGN feedback is neglected in the model. In this case, we allow \(C_{\ell}^{\rm fid}\) to be the CDM-only model whereas the true model \(C_{\ell}^{\rm true}\) is CDM+AGN feedback with \(\log_{10}(T_{\rm AGN}/{\rm K})=7.8\). We also use the combination of CMB-HD and DESI BAO data to calculate the Fisher matrix, \(F_{\alpha\beta}\), in Eq. 20. In Fig. 13, we show the forecasted parameter biases from the combination of lensed (purple) or delensed (blue) CMB-HD and DESI BAO data when assuming a fiducial CDM-only model, as opposed to the true CDM+feedback model. We can see in Fig. 13 that, while delensing reduces the bias due to an incorrect baryonic feedback model (as discussed in [12, 14, 61]), for many parameters the remaining bias is much larger than the statistical \(1\sigma\) error.
We also show in Fig. 13 the result of an MCMC run when marginalizing over the baryonic feedback parameter \(\log_{10}\left(T_{\rm AGN}/{\rm K}\right)\) discussed above, applying only a uniform prior of \(\log_{10}\left(T_{\rm AGN}/{\rm K}\right)\in[7.6,\,8.0]\) as suggested by [60]. We see that this marginalization over feedback models removes the parameter biases at the expense of some constraining power. However, as we show in Fig. 12, adding a prior on the baryonic feedback parameter anticipated from CMB-HD SZ measurements can mitigate this increase in parameter error. We note that the MCMC run with the baryonic feedback parameter free takes about the same time to converge on NERSC as a run with the feedback parameter fixed.
| Parameter | Baseline | + feedback | + SZ prior |
|---|---|---|---|
| \(\Omega_{\rm b}h^{2}\) | 0.000026 | 0.000028 | 0.000027 |
| \(\Omega_{\rm c}h^{2}\) | 0.00041 | 0.00046 | 0.00041 |
| \(\ln(10^{10}A_{\rm s})\) | 0.0095 | 0.010 | 0.010 |
| \(n_{\rm s}\) | 0.0013 | 0.0021 | 0.0013 |
| \(\tau\) | 0.0051 | 0.0056 | 0.0056 |
| \(100\theta_{\rm MC}\) | 0.000058 | 0.000064 | 0.000060 |
| \(N_{\rm eff}\) | 0.014 | 0.020 | 0.014 |
| \(\sum m_{\nu}\) [eV] | 0.024 | 0.026 | 0.027 |
| \(\log_{10}\left(T_{\rm AGN}/{\rm K}\right)\) | — | 0.040 | 0.0047 |
| \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] | 0.28 | 0.29 | 0.27 |
| \(\sigma_{8}\) | 0.0033 | 0.0033 | 0.0033 |

Table 5: Shown are the cosmological constraints from the combination of CMB-HD delensed \(TT\), \(TE\), \(EE\), \(BB\), and \(\kappa\kappa\) power spectra with DESI BAO for the baseline \(\Lambda\)CDM + \(N_{\rm eff}+\sum m_{\nu}\) model, a \(\Lambda\)CDM + \(N_{\rm eff}+\sum m_{\nu}\) + baryonic feedback model from HMCode-2020 [60], and a \(\Lambda\)CDM + \(N_{\rm eff}+\sum m_{\nu}\) + baryonic feedback model including a 0.06% prior on the feedback parameter \(\log_{10}(T_{\rm AGN}/{\rm K})\) expected from a joint analysis of CMB-HD kSZ, tSZ, and lensing data (see Section V.4 for details). All results shown here are from the likelihood and MCMC chains as opposed to Fisher forecasts. We also include \(100\theta_{\rm MC}\), and separate out the two derived parameters, \(H_{0}\) and \(\sigma_{8}\) (the root-mean-square of linear matter fluctuations today).
Figure 12: Shown is the impact of freeing the baryonic feedback parameter, \(\log_{10}(T_{\rm AGN}/{\rm K})\), discussed in Section V.4, on the forecasted constraints for a \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\)+baryonic feedback model (cyan) from CMB-HD delensed \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra plus DESI BAO. We also show the parameter constraints when fixing this parameter (red) or adding a prior on the baryonic feedback parameter (dashed black); we anticipate such a prior can be obtained from a joint analysis of CMB-HD kSZ, tSZ, and lensing data (see Section V.4 for details). Here we show a subset of parameter constraints for \(H_{0}\), \(N_{\rm eff}\), and \(n_{\rm s}\) from a Fisher analysis.
## VI Discussion
In this work, we present the parameter forecasts for a CMB-HD survey. We contrast these forecasts with precursor CMB experiments, showing that the lower noise and higher multipoles of a CMB-HD survey can lead to significant improvement in many of the cosmological parameters, most notably in the scalar spectral index, \(n_{\rm s}\), and in the effective number of light relativistic species, \(N_{\rm eff}\).
Figure 13: Here we show the expected bias to the parameter constraints from a combination of lensed or delensed (purple and blue, respectively) CMB-HD \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra plus DESI BAO, when baryonic feedback effects are neglected. These biases were computed from Eq. 20, using a CDM-only model as the fiducial model, and assuming the true model includes the effects of baryonic feedback. We center the CDM-only contours at the estimated biased parameter values, and take the error ellipses from the Fisher estimates. In cyan we show the parameter constraints for delensed CMB-HD plus DESI BAO data from an MCMC run where we marginalize over the feedback parameter \(\log_{10}\left(T_{\rm AGN}/{\rm K}\right)\) discussed in Section V.4. We see that marginalization over feedback models removes the parameter biases.
Specifically, we find for delensed CMB-HD \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) spectra plus DESI BAO:
\[\sigma\left(n_{\rm s}\right)=0.0013\quad(68\%,\;TT,TE,EE,BB,\kappa\kappa+{\rm BAO }),\]
\[\sigma\left(N_{\rm eff}\right)=0.014\quad(68\%,\;TT,TE,EE,BB,\kappa\kappa+{\rm BAO }).\]
We find that delensing all the CMB spectra as well as including DESI BAO data are both necessary to achieve these tight constraints. Including multipoles out to \(\ell_{\rm max},L_{\rm max}=20{,}000\) is also required. Since these multipoles are well into the non-linear regime, we must include baryonic effects in modelling the lensing in both the CMB power spectra and CMB lensing spectra. We find that marginalizing over baryonic effects can mitigate potential bias in parameters at the expense of some constraining power. However, tSZ and kSZ measurements by CMB-HD offer an independent handle on baryonic feedback effects, and folding in that additional information can effectively eliminate the increase in parameter errors due to uncertainty in the baryonic physics.
The \(N_{\rm eff}\) parameter uncertainty achieved by CMB-HD+DESI is particularly interesting since any new light particle that was in thermal equilibrium at any time after the Universe reheated must change \(N_{\rm eff}\) by at least \(\Delta N_{\rm eff}\geq 0.027\)[62; 63]. Here reheating refers to the end of inflation and the beginning of the "Big Bang". With \(\sigma\left(N_{\rm eff}\right)=0.014\), CMB-HD+DESI can either rule out or detect any new light particle species with at least 95% confidence.
As a specific example of why this \(N_{\rm eff}\) constraint is valuable, we consider the QCD axion, which is a well-motivated candidate for being the dark matter and solving the Strong CP problem [64; 65; 66]. We note that if the reheating temperature of the Universe is high enough that the QCD axion thermalized, the \(N_{\rm eff}\) constraint above can potentially rule out the QCD axion in a model-independent way, or lead to a detection.
We re-plot Fig. 3 of [67] in Fig. 14 to highlight the QCD axion masses, \(m_{\phi}\), that would be ruled out as a function of reheating temperature, \(T_{\rm R}\), if CMB-HD+DESI finds no increase in \(N_{\rm eff}\) at the 95% confidence level. To forecast these constraints, we use Eq. 2.11 of [67] to calculate the upper limit of the QCD axion coupling \(g_{d}\) for a given reheating temperature,
\[g_{d}<1.3\times 10^{-14}\;{\rm GeV}^{-2}\left(\frac{T_{\rm R}}{10^{10}\;{\rm GeV }}\right)^{-1/2}, \tag{21}\]
and use Eq. 16 of [68] to relate this to the coupling constant \(f_{a}\) via
\[g_{d}\approx\frac{2.4\times 10^{-16}\;{\rm e\;cm}}{f_{a}}, \tag{22}\]
where \(1\;{\rm cm}^{-1}=1.97\times 10^{-14}\;{\rm GeV}\) and \({\rm e}=0.3\). We then relate the coupling constant to the axion mass using
\[m_{\phi}=0.6\;{\rm eV}\left(\frac{10^{7}\;{\rm GeV}}{f_{a}}\right) \tag{23}\]
from [64; 69; 70]. Given the well-motivated nature of the QCD axion and the many efforts to detect it underway, it is worth highlighting this model-independent additional approach to probing this new physics.
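Chaining Eqs. 21-23 gives the excluded axion mass as a function of reheating temperature; a minimal sketch (our own, using the unit conversions stated in the text) is:

```python
def excluded_axion_mass_eV(T_R_GeV):
    """Smallest QCD axion mass (eV) excluded at a given reheating
    temperature T_R (GeV), chaining Eqs. 21-23: axions with g_d above the
    Eq. 21 bound thermalize, raising N_eff by at least 0.027."""
    g_d = 1.3e-14 * (T_R_GeV / 1e10)**-0.5  # Eq. 21 bound, in GeV^-2
    cm = 1.0 / 1.97e-14                     # 1 cm = 5.08e13 GeV^-1
    f_a = 2.4e-16 * 0.3 * cm / g_d          # Eq. 22 with e = 0.3, in GeV
    return 0.6 * (1e7 / f_a)                # Eq. 23, in eV

# Example: at T_R = 1e10 GeV, masses above roughly 2e-5 eV would be excluded.
print(excluded_axion_mass_eV(1e10))
```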
We also note that the \(n_{\rm s}\) constraint above is about a factor of two tighter than from precursor CMB surveys. While considerable attention has been devoted to improving constraints on primordial gravitational waves via the tensor-to-scalar ratio, \(r\), constraining the scalar spectral index can also rule out interesting inflationary scenarios [5].
## VII Conclusion
While we present parameter forecasts for CMB-HD above, we note that there are several ways that these forecasts can be improved and made more robust. As we have stressed a few times, the most challenging aspect of a CMB-HD survey will be removing and mitigating the impact of extragalactic foregrounds. Demonstrating that this can be achieved with realistic end-to-end simulations is an area of ongoing research, and will be critical for achieving the science presented here. In addition, more optimal lensing estimators exist
Figure 14: Here we show constraints on the QCD axion mass from delensed CMB-HD plus DESI BAO data. The shaded region shows the region of axion mass, \(m_{\phi}\), and reheating temperature, \(T_{\rm R}\), that would be excluded by a measurement ruling out \(\Delta N_{\rm eff}\geq 0.027\), which we show in this work CMB-HD+DESI BAO can do at the 95% confidence level.
and are being explored [36, 37, 38, 71, 72, 73, 74, 9, 33], with the potential to yield higher lensing signal-to-noise ratios than assumed in this work, as well as to provide better paths towards foreground immunity. We also leave to future work exploring the potential gain in lensing signal-to-noise ratio from exploiting the higher-order lensing corrections contained in the N1 signal. Furthermore, we have only explored cold dark matter models in this work, with and without baryonic feedback, but note that CMB-HD has sensitivity to alternate dark matter models as well, which will be investigated in subsequent work.
We make the Fisher estimation code used here public along with a Jupyter notebook detailing how to generate new Fisher derivatives for different models. We also make the likelihood code public and integrate it with Cobaya using CAMB. We hope that this will facilitate additional cosmological parameter forecasts for a CMB-HD survey.
###### Acknowledgements.
The authors thank Dongwon Han, Gil Holder, Selim Hotinli, Mathew Madhavacheril, Joel Meyers, Vivian Miranda, Rugged Fund, Cynthia Trendafilova, and Alexander van Engelen for useful discussions. NS also thanks Itay Bloch, Rouven Essig, Peter Graham, Maxim Pospelov, Mauro Valli, and the SCGP Lighting New Lampposts Workshop for useful discussions about the QCD axion. AM, NS, and MR acknowledge support from DOE award number DE-SC0020441. AM and NS also acknowledge support from NSF award number 1907657. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award HEP-ERCAPmp107. NS also acknowledges the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611, for supporting a workshop on new discoveries in the era of high-resolution, low-noise CMB experiments, which stimulated useful discussions.
## Appendix A Einstein-Boltzmann Code Accuracy Requirements
As discussed in [14], future high-resolution and low noise CMB experiments will require theory calculations for power spectra beyond the standard default CAMB accuracy settings for the Einstein-Boltzmann solver. To determine the accuracy settings required for CMB-HD, we focus on the parameter bias induced by modelling the expected data with accuracy settings that are lower than necessary. For this we use Eq. 20 discussed in Section IV.1, and we explore four accuracy parameters provided by CAMB (i.e. lens_potential_accuracy, AccuracyBoost, lAccuracyBoost, and lSampleBoost). We also by default set NonLinear=model.NonLinear_both to enable the non-linear calculation of both the matter power spectrum and the lensing spectrum. We note that as long as non-linear corrections are included, changing the four CAMB accuracy settings above does not change the error bars obtained from either Fisher or MCMC analyses, as long as the mock data is generated with the same accuracy.
We find that the lens_potential_accuracy setting has the largest impact on parameter biases. We show in the top panel of Fig. 15, the change in the lensing power spectrum for different values of lens_potential_accuracy, noting that for higher accuracy settings most of the change is at high lensing multipoles. We show in the bottom panel of Fig. 15, the change in the lensing power spectrum divided by the expected CMB-HD uncertainty per lensing multipole.
Figure 15: Here we compare the lensing power spectrum for different values of the CAMB accuracy parameter lens_potential_accuracy. The top panel shows the fractional difference between \(C_{L}^{\kappa\kappa}\) and \(C_{L}^{\kappa\kappa,\rm true}\), while the bottom panel shows the difference as a fraction of the CMB-HD error bar on the lensing power at each multipole, \(\sigma_{L}^{\rm HD}\). \(C_{L}^{\kappa\kappa,\rm true}\) uses the accuracy settings we describe in the text for our approximation of the “true Universe”. We see that the lensing power spectrum converges to the “true Universe” model when lens_potential_accuracy is increased, and for lens_potential_accuracy > 30 the difference between the spectra is less than 10% of the CMB-HD error bar for a given \(L\). Here we fix the other parameters to AccuracyBoost = 1.1, lAccuracyBoost = 3.0, and lSampleBoost = 3.0. Since the “true Universe” model has AccuracyBoost = 3.0, lAccuracyBoost = 5.0, and lSampleBoost = 5.0, we see that varying these other parameters within these ranges has minimal impact given the CMB-HD error bars.
We find that above lens_potential_accuracy = 30, the lensing power spectrum theory curves converge with a difference smaller than a tenth of the CMB-HD error bar per multipole. Thus we take lens_potential_accuracy = 40 to be the "true Universe" model for this work, and compare our results to this model.
We find that after setting lens_potential_accuracy = 40, varying the lAccuracyBoost and lSampleBoost parameters results in parameter biases that are well below \(1\sigma\) of the expected CMB-HD+BAO parameter errors for a \(\Lambda\)CDM + \(N_{\rm eff}+\sum m_{\nu}\) model. The default CAMB settings for these parameters are 1.0, and we increase the settings to 5.0 for each for our "true Universe" model to be conservative.
In addition, we find that varying the AccuracyBoost parameter also has minimal impact on parameter biases, which remain well below \(1\sigma\) of the expected CMB-HD+BAO parameter errors. The CAMB default setting is AccuracyBoost = 1.0, and we increase this to AccuracyBoost = 3.0 for our "true Universe" model.
We show in Fig. 16, the bias on each parameter divided by the expected CMB-HD+BAO \(1\sigma\) parameter error for a \(\Lambda\)CDM + \(N_{\rm eff}+\sum m_{\nu}\) model. We use Eq. 20 to calculate each parameter bias, using the "true Universe" model described above as \(C_{\ell}^{\rm true}\). We see that lens_potential_accuracy = 30 yields parameter biases that are less than 0.3\(\sigma\). We also find that varying the AccuracyBoost has minimal impact.
In addition, we find that varying the AccuracyBoost has the most significant impact on the computation time. We show in Fig. 17 the rapid increase in computation time with increasing AccuracyBoost. Since we see from Fig. 16 that an AccuracyBoost = 1.1 yields similar biases to a value double that, we choose as our baseline setting AccuracyBoost = 1.1; we note that AccuracyBoost = 1.0 is the CAMB default, and we find a significant reduction in bias for a small increase above that. Similarly, we find little reduction in bias for values of lAccuracyBoost and lSampleBoost above 3.0.
Thus, we use as our baseline CAMB settings for CMB-HD in this work:

```python
import camb
import numpy as np

lmax = 20100
pars = camb.CAMBparams()
```
Figure 16: Here we show the expected bias on each cosmological parameter as a fraction of its \(1\sigma\) error for CMB-HD+DESI when we calculate the delensed CMB and lensing power spectra with different values of the CAMB accuracy parameters lens_potential_accuracy and AccuracyBoost. The other parameters are fixed to values of lAccuracyBoost = 3.0 and lSampleBoost = 3.0, with the “true” model in Eq. 20 calculated with the accuracy settings AccuracyBoost = 3.0, lAccuracyBoost = 5.0, and lSampleBoost = 5.0 (see text). We see that the bias converges to below 0.3\(\sigma\) at lens_potential_accuracy = 30, and that increasing the AccuracyBoost does not significantly decrease the bias.
Figure 17: Here we show the increase in the total CAMB computation time to compute the lensed and delensed CMB spectra plus the CMB lensing power spectrum for a maximum \(\ell/L\) of 20,000 when the AccuracyBoost parameter is increased. Here the other accuracy parameters are fixed to lAccuracyBoost = 3.0, lSampleBoost = 3.0, and lens_potential_accuracy = 30. The computation times were measured on a NERSC _Perlmutter_ login node. The red star indicates the baseline accuracy settings used throughout this work, which shows that all the spectra mentioned above can be calculated in a total time of 11 seconds.
```python
# (continued from above)
pars.set_cosmology(H0=67.36, ombh2=0.02237, omch2=0.1200,
                   tau=0.0544, num_massive_neutrinos=1,
                   mnu=0.06, nnu=3.046)
pars.InitPower.set_params(As=np.exp(3.044) * 1e-10, ns=0.9649)
pars.set_for_lmax(int(lmax) + 500, lens_potential_accuracy=30,
                  lens_margin=2050)
pars.set_accuracy(AccuracyBoost=1.1, lSampleBoost=3.0,
                  lAccuracyBoost=3.0, DoLateRadTruncation=False)
pars.NonLinear = camb.model.NonLinear_both
pars.NonLinearModel.set_params("mead2016")
```
We find that the computation of the lensed and delensed \(TT,TE,EE,BB\) CMB power spectra plus the CMB lensing power spectrum takes 11 seconds total to run on the NERSC _Perlmutter_ machine using the CAMB accuracy settings above.
We also run an MCMC to verify the accuracy of the Fisher estimated biases for our baseline settings. We do this by generating mock data at the accuracy settings of the "true Universe" model described above, and running the MCMC with our baseline accuracy settings. We find consistency between the two methods of bias estimation, and confirm that the parameter biases are well below 1\(\sigma\) of the expected CMB-HD+BAO parameter errors.
## Appendix B Consistency of Fisher and MCMC Methods
To verify that our Fisher forecasts accurately predict the expected parameter errors, we create a likelihood and run MCMC chains. In Fig. 18 we show as solid lines/shaded contours our MCMC run for the baseline CMB-HD delensed \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra (including foregrounds), combined with mock DESI BAO data. We see that the MCMC recovers the input values of the data well, which are indicated by the grey dashed lines. We also overlay the results of the Fisher estimation method as red dotted contours. We find good agreement between the two methods. We also compare the \(1\sigma\) parameter errors from each method in Table 6, finding good consistency. Given the consistency of both methods, we use the Fisher method for all parameter forecasts in this work, unless stated otherwise.
## Appendix C Additional Parameter Forecasts
Below we show parameter constraints for CMB-HD in a \(\Lambda\)CDM model with fixed \(N_{\rm eff}=3.046\) and \(\sum m_{\nu}=0.06\) eV (Table 7), and in a \(\Lambda\)CDM + \(N_{\rm eff}\) model with fixed \(\sum m_{\nu}=0.06\) eV (Table 8). We see that \(\sigma(N_{\rm eff})\) and \(\sigma(n_{\rm s})\) do not change when the neutrino mass is fixed in the \(\Lambda\)CDM + \(N_{\rm eff}\) model compared to when it is free. In the \(\Lambda\)CDM model, \(\sigma(n_{\rm s})\) decreases slightly compared to the models where \(N_{\rm eff}\) is free. \(\sigma(H_{0})\) is about a factor of two smaller in the \(\Lambda\)CDM model compared to the \(\Lambda\)CDM + \(N_{\rm eff}+\sum m_{\nu}\) model.
In order to see how much improvement is gained from delensing even after adding DESI BAO, we compare the \(1\sigma\) parameter uncertainties obtained from lensed or delensed CMB-HD \(TT\), \(TE\), \(EE\), and \(BB\) power spectra plus the lensing \(\kappa\kappa\) spectrum and DESI BAO data in Table 9. We find that delensing does improve parameter constraints, with a 20% improvement for \(N_{\rm eff}\), which is important for pushing the error well below the 0.027 threshold for a spin-0 particle [5].
Finally, in Table 10, we show the forecasted 1\(\sigma\) parameter constraints from CMB-HD plus DESI BAO data for different values of the maximum multipole \(\ell_{\rm max}\) or \(L_{\rm max}\) used in the analysis. At higher multipoles the data is more sensitive to non-linear effects in the matter power spectrum, which are more complex to model. Thus we show the parameter constraints achievable from the linear and semi-linear regimes. We see that to obtain \(\sigma(n_{\rm s})=0.0013\) and \(\sigma(N_{\rm eff})=0.014\) requires the full range of multipoles out to 20,000.
Figure 18: Here we show the parameter constraints for a \(\Lambda\)CDM + \(N_{\rm eff}\) + \(\sum m_{\nu}\) model obtained from delensed CMB-HD \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra plus DESI BAO. The Fisher forecasts are shown in red (dotted lines), and the results from running MCMC chains are shown in blue (solid lines/shaded contours). The parameter values used to generate the data are indicated by the grey dashed lines. We find good agreement between parameter error estimates based on Fisher and MCMC analyses.
| Parameter (\(\Lambda\)CDM) | Fiducial | HD Lensed | HD Lensed + FG | HD Delensed + FG | **HD Delensed + FG + DESI BAO** | HD Delensed + FG + DESI BAO (\(\ell_{\rm max},L_{\rm max}=10{,}000\)) |
|---|---|---|---|---|---|---|
| \(\Omega_{\rm b}h^{2}\) | 0.022370 | 0.000017 | 0.000018 | 0.000017 | 0.000017 | 0.000017 |
| \(\Omega_{\rm c}h^{2}\) | 0.12000 | 0.00046 | 0.00048 | 0.00045 | 0.00037 | 0.00038 |
| \(\ln(10^{10}A_{\rm s})\) | 3.0440 | 0.0078 | 0.0080 | 0.0074 | 0.0062 | 0.0063 |
| \(n_{\rm s}\) | 0.9649 | 0.0013 | 0.0014 | 0.0013 | 0.0012 | 0.0013 |
| \(\tau\) | 0.0544 | 0.0045 | 0.0046 | 0.0043 | 0.0036 | 0.0036 |
| \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] | 67.36 | 0.17 | 0.18 | 0.17 | 0.14 | 0.14 |

Table 7: Forecasted cosmological parameter constraints from CMB-HD \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra for a \(\Lambda\)CDM model. All forecasts include a \(\tau\) prior of \(\tau=0.054\pm 0.007\) from _Planck_ [1]. The first two columns list the parameters and their fiducial values. The following two columns list their forecasted marginalized \(1\sigma\) uncertainties when using lensed spectra with or without foregrounds in the temperature maps. The fifth column shows the forecast when delensing the CMB spectra, and the sixth column shows the change when including DESI BAO data [16]. In each of these cases we use CMB and CMB lensing multipoles out to \(\ell_{\rm max},L_{\rm max}=20{,}000\) for CMB-HD. The last column lists the same information as the sixth column, but instead using a maximum multipole of \(\ell_{\rm max},L_{\rm max}=10{,}000\) for both CMB and CMB lensing power spectra.
| Parameter (\(\Lambda\)CDM+\(N_{\rm eff}\)) | Fiducial | HD Lensed | HD Lensed + FG | HD Delensed + FG | **HD Delensed + FG + DESI BAO** | HD Delensed + FG + DESI BAO (\(\ell_{\rm max},L_{\rm max}=10{,}000\)) |
|---|---|---|---|---|---|---|
| \(\Omega_{\rm b}h^{2}\) | 0.022370 | 0.000032 | 0.000033 | 0.000027 | 0.000026 | 0.000026 |
| \(\Omega_{\rm c}h^{2}\) | 0.12000 | 0.00051 | 0.00052 | 0.00047 | 0.00041 | 0.00041 |
| \(\ln(10^{10}A_{\rm s})\) | 3.0440 | 0.0078 | 0.0080 | 0.0075 | 0.0062 | 0.0063 |
| \(n_{\rm s}\) | 0.9649 | 0.0013 | 0.0015 | 0.0015 | 0.0013 | 0.0014 |
| \(\tau\) | 0.0544 | 0.0045 | 0.0046 | 0.0043 | 0.0036 | 0.0036 |
| \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] | 67.36 | 0.21 | 0.22 | 0.21 | 0.18 | 0.19 |
| \(N_{\rm eff}\) | 3.046 | 0.016 | 0.017 | 0.015 | 0.014 | 0.015 |

Table 8: Forecasted cosmological parameter constraints from CMB-HD \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra on a \(\Lambda\)CDM + \(N_{\rm eff}\) model. All forecasts include a \(\tau\) prior of \(\tau=0.054\pm 0.007\) from _Planck_ [1]. The first two columns list the parameters and their fiducial values. The following two columns list their forecasted marginalized \(1\sigma\) uncertainties when using lensed spectra with or without foregrounds in the temperature maps. The fifth column shows the forecast when delensing the CMB spectra, and the sixth column shows the change when including DESI BAO data [16]. In each of these cases we use CMB and CMB lensing multipoles out to \(\ell_{\rm max},L_{\rm max}=20{,}000\) for CMB-HD. The last column lists the same information as the sixth column, but instead using a maximum multipole of \(\ell_{\rm max},L_{\rm max}=10{,}000\) for both CMB and CMB lensing power spectra.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Parameter & & \multicolumn{5}{c}{\(\ell_{\rm max},\ L_{\rm max}\) for HD Delensed + FG + DESI BAO} \\ \cline{3-7} \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\) & Fiducial & 1000 & 3000 & 5000 & 10,000 & 20,000 \\ \hline \(\Omega_{\rm b}h^{2}\) & 0.022370 & 0.00016 & 0.000039 & 0.000027 & 0.000026 & 0.000026 \\ \(\Omega_{\rm c}h^{2}\) & 0.12000 & 0.0024 & 0.00054 & 0.00042 & 0.00041 & 0.00041 \\ \(\ln(10^{10}A_{\rm s})\) & 3.044 & 0.014 & 0.011 & 0.011 & 0.010 & 0.0098 \\ \(n_{\rm s}\) & 0.9649 & 0.0048 & 0.0023 & 0.0018 & 0.0014 & 0.0013 \\ \(\tau\) & 0.0544 & 0.0065 & 0.0060 & 0.0057 & 0.0054 & 0.0052 \\ \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & 67.36 & 0.80 & 0.32 & 0.29 & 0.29 & 0.29 \\ \(N_{\rm eff}\) & 3.046 & 0.15 & 0.030 & 0.018 & 0.015 & 0.014 \\ \(\sum m_{\nu}\) [eV] & 0.06 & 0.037 & 0.029 & 0.028 & 0.026 & 0.025 \\ \hline \end{tabular}
\end{table}
Table 10: Shown are the forecasted \(1\sigma\) uncertainties from delensed CMB-HD \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) spectra plus DESI BAO, when varying the maximum multipole \(\ell_{\rm max}\) or \(L_{\rm max}\) for the CMB spectra. The parameter names and fiducial values are listed in the first and second columns, respectively, while the remaining columns list the \(1\sigma\) parameter errors for the given maximum multipole. We see that to obtain \(\sigma(n_{\rm s})=0.0013\) and \(\sigma(N_{\rm eff})=0.014\), for example, requires the full range of multipoles out to 20,000.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Parameter & Fiducial & HD Lensed & HD Delensed \\ \(\Lambda\)CDM+\(N_{\rm eff}\)+\(\sum m_{\nu}\) & & + FG + DESI BAO & + FG + DESI BAO \\ \hline \(\Omega_{\rm b}h^{2}\) & 0.022370 & 0.000032 & 0.000026 \\ \(\Omega_{\rm c}h^{2}\) & 0.12000 & 0.00045 & 0.00041 \\ \(\ln(10^{10}A_{\rm s})\) & 3.044 & 0.010 & 0.0098 \\ \(n_{\rm s}\) & 0.9649 & 0.0014 & 0.0013 \\ \(\tau\) & 0.0544 & 0.0055 & 0.0052 \\ \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & 67.36 & 0.30 & 0.29 \\ \(N_{\rm eff}\) & 3.046 & 0.017 & 0.014 \\ \(\sum m_{\nu}\) [eV] & 0.06 & 0.025 & 0.025 \\ \hline \end{tabular}
\end{table}
Table 9: Here we compare the forecasted \(1\sigma\) parameter constraints from lensed (third column) and delensed (fourth column) CMB-HD \(TT\), \(TE\), \(EE\), \(BB\) and \(\kappa\kappa\) power spectra when including DESI BAO, in order to see how much improvement is gained from delensing alone. The parameter names and fiducial values are listed in the first and second columns, respectively. We find that delensing improves parameter constraints even after including DESI BAO, with the most improvement seen for \(N_{\rm eff}\).
2308.14674 | Spectroscopically resolved resonant interatomic Coulombic decay in
photoexcited large He nanodroplets | Interatomic Coulombic decay (ICD) processes play a crucial role in weakly
bound complexes exposed to intense or high-energy radiation. Using large helium
nanodroplets, we demonstrate that ICD is efficient even when the droplets are
irradiated by weak synchrotron radiation at relatively low photon energies.
Below the ionization threshold, resonant excitation of multiple centers
efficiently induces resonant ICD as previously observed for intense pulses [A.
C. LaForge et al., PRX 11, 021011 (2021)]. More surprisingly, we observe ICD
even above the ionization threshold due to recombination of photoelectrons and
ions into excited states which subsequently decay by ICD. This demonstrates the
importance of secondary processes, in particular electron scattering and
recombination, in inducing ICD in extended condensed phase systems. In
addition, we show that ICD can serve as a diagnostic tool for monitoring the
relaxation dynamics of highly-excited and ionized weakly-bound nanosystems. | L. Ben Ltaief, K. Sishodia, R. Richter, B. Bastian, J. D. Asmussen, S. Mandal, N. Pal, C. Medina, S. R. Krishnan, K. von Haeften, M. Mudrich | 2023-08-28T16:03:53Z | http://arxiv.org/abs/2308.14674v1 | Spectroscopically resolved resonant interatomic Coulombic decay in photoexcited large He nanodroplets
###### Abstract
Interatomic Coulombic decay (ICD) processes play a crucial role in weakly bound complexes exposed to intense or high-energy radiation. Using large helium nanodroplets, we demonstrate that ICD is efficient even when the droplets are irradiated by weak synchrotron radiation at relatively low photon energies. Below the ionization threshold, resonant excitation of multiple centers efficiently induces resonant ICD as previously observed for intense pulses [A. C. LaForge et al., PRX 11, 021011 (2021)]. More surprisingly, we observe ICD even above the ionization threshold due to recombination of photoelectrons and ions into excited states which subsequently decay by ICD. This demonstrates the importance of secondary processes, in particular electron scattering and recombination, in inducing ICD in extended condensed phase systems. In addition, we show that ICD can serve as a diagnostic tool for monitoring the relaxation dynamics of highly-excited and ionized weakly-bound nanosystems.
## I Introduction
When matter is exposed to ionizing radiation, both primary ionization and secondary processes may occur. In biological tissue, radiation damage is mostly induced by the latter, _e. g._ by multiple scattering of the primary photoelectron in the medium followed by dissociative attachment of low-energy electrons to vital biomolecules [1]. Another process creating slow, genotoxic low-energy electrons is interatomic Coulombic decay (ICD), where the energy deposited in one atom or molecule is transferred to another which in turn is ionized [2].
ICD has been discovered and characterized in detail for small van-der-Waals molecules and clusters [3]. More recently, the focus has shifted to more relevant condensed-phase systems such as liquid water [4; 5]. There, the light-matter interactions are more complex and the processes causing radiation damage are harder to decipher; in particular electron scattering tends to obscure the signatures of ICD in electron spectra [6].
In this work, we show that large He nanodroplets are efficiently multiply excited by weak quasi-continuous synchrotron radiation leading to the resonant variant of ICD, resonant ICD [7]. Moreover, we find that elastic electron scattering can facilitate ICD by inducing electron recombination into highly-excited states which subsequently decay by resonant ICD. He nanodroplets are a special type of condensed-phase system owing to their quantum fluid nature [8]; atoms and molecules inside them are highly mobile, and their electron spectra are often well-resolved [9; 10; 11; 7; 12]. However, electron scattering leading to low-energy electrons and electron-ion recombination occurs in other types of condensed-phase systems as well [13]. In particular, the decay of multiple excited states or excitons by ICD-type processes has been observed for solid rare-gas clusters [14; 15], nanoplasmas [16; 17; 18; 10], solid nanostructures [19], and thin films [20; 21; 22].
He nanodroplets have previously proven well-suited as a model system for elucidating ICD and related processes. In those studies, either high-energy photons were used to excite into high-lying or ionized states of He [23; 24; 25; 26; 27], or intense pulses were used to multiply excite or ionize the droplets [16; 17; 18; 12; 16; 18; 19; 28; 29; 30]. Using extreme ultraviolet (EUV) pulses from a tunable free-electron laser (FEL), the transition from the regime of ICD of weakly excited He droplets to the regime of ultrafast collective autoionization (CAI) of multiply excited He droplets was tracked [12; 29]. EUV-pump, UV-probe studies of multiply excited He droplets indicated that ICD predominantly occurs in pairs of nearest-neighbor He\({}^{*}\) excited atoms within \(\gtrsim 0.4\) ps, facilitated by the merging of void bubbles forming around each He\({}^{*}\)[7].
An important aspect of the present study is that only weak radiation with photon energies below or just above the ionization threshold of He is used for inducing resonant ICD. The He droplets are multiply excited or ionized owing to their large size \(\gtrsim 20\) nm and absorption cross section \(\gtrsim 2\times 10^{5}\) Å\({}^{2}\). In such bulk-like systems, inelastic and multiple elastic scattering efficiently slows down photoelectrons such that they are recaptured by the
photoions to populate both fluorescing and metastable states, denoted as \(\mathrm{He^{*}}\)[31, 32, 27]. Resonant ICD then proceeds in the droplets according to the reaction \(\mathrm{He^{*}}+\mathrm{He^{*}}\rightarrow\mathrm{He}+\mathrm{He^{+}}+e_{\mathrm{ ICD}}^{-}\)[33]. Our results indicate that this ICD reaction most likely occurs for pairs of metastable \(\mathrm{He^{*}}\) that have migrated to the surface of the He droplets.
Additionally, by measuring the energy of the emitted ICD electron \(e_{\mathrm{ICD}}^{-}\), we gain detailed insight into the relaxation of the photo-excited system. We find that different states of \(\mathrm{He^{*}}\) are populated prior to ICD in different regimes of resonant excitation, autoionization, or direct photoionization of the droplets. This is in agreement with previous studies of the relaxation dynamics of singly excited He droplets, which have shown that electronically excited He droplets relax into the lowest excited singlet state \(\mathrm{1s2s\,^{1}S}\) within \(\lesssim 1\) ps [34, 35, 36]. When the droplets are excited above their adiabatic ionization energy \(E_{i}^{\mathrm{drop}}\approx 23\) eV, additionally triplet states are populated by electron-ion recombination which relax by fluorescence emission and droplet-induced electronic relaxation into the metastable \(\mathrm{1s2s\,^{3}S}\) state [37, 36]. Surprisingly, the ICD spectra reveal that in large He droplets, electronic relaxation into the \(\mathrm{1s2s\,^{1}S}\) state occurs even well above \(E_{i}^{\mathrm{drop}}\) and recombination into the \(\mathrm{1s2s\,^{3}S}\) state occurs up to several eV above the vertical ionization threshold of He, \(E_{i}=24.6\) eV. Thus, in extended systems multiple electron scattering and electron-ion recombination is another efficient route to creating multiple excitations which efficiently decay by ICD.
## II Experimental Setup
To probe the ICD electrons emitted from He droplets at variable photon energy, a He nanodroplet apparatus combined with a photoelectron-photoion coincidence velocity-map imaging (PEPICO-VMI) detector [38] was used at the GasPhase beamline of the Elettra synchrotron facility in Trieste, Italy. Kinetic energy distributions of electrons were inferred from the VMIs using the MEVELER inversion method [39]. In a second arrangement, a hemispherical electron analyzer (HEA, model VG-220i) with a resolution of \(<0.1\) eV was mounted at the magic angle and combined with the He nanodroplet apparatus to measure high-resolution ICD electron spectra (see Fig. 1).
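As an aside, the final step of any such VMI analysis, converting the inverted radial (speed) distribution into an electron kinetic-energy spectrum, can be sketched in a few lines. The sketch below is illustrative only: the function name, array shapes, and calibration constant are assumptions, and the MEVELER inversion itself is not reproduced.

```python
import numpy as np

def speed_to_energy_spectrum(r, P_r, calib=1.0e-3):
    """Convert a radial (speed) distribution P(r) from an inverted VMI into
    a kinetic-energy spectrum P(E), assuming the calibration E = calib * r**2."""
    E = calib * r**2  # electron energy scales as v^2, hence as radius^2
    # Jacobian dE/dr = 2*calib*r; dividing by it preserves the integrated counts
    P_E = np.divide(P_r, 2.0 * calib * r, out=np.zeros_like(P_r), where=r > 0)
    return E, P_E

# Example: a sharp ring at r ~ 129 px maps to a peak at ~16.6 eV
r = np.linspace(0.0, 256.0, 257)
P_r = np.exp(-0.5 * ((r - 129.0) / 2.0) ** 2)
E, P_E = speed_to_energy_spectrum(r, P_r)
```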
The He droplet apparatus has been described in detail elsewhere [40, 41]. Briefly, a continuous beam of He nanodroplets of variable droplet radii ranging from \(R=5\) nm for droplets containing an average number of He atoms \(\langle N\rangle\sim 10^{4}\) up to \(R=75\) nm (\(\langle N\rangle\sim 10^{8}\)) is generated by expanding He out of a cryogenic nozzle at a temperature ranging from 16 down to 8 K at 50 bar of He backing pressure. A mechanical chopper is used for discriminating the He droplet beam-correlated signals from the background.
In this study, the photon energy was tuned across the He absorption resonances and across \(E_{i}\), _i. e._ in the range \(h\nu=21.0\) - 28.0 eV. The use of a variable angle spherical grating monochromator ensured narrow-band radiation with a time-averaged photon flux \(\Phi\approx 5\times 10^{11}\) s\({}^{-1}\). At photon energies \(h\nu\leq 21.6\) eV, a Sn filter was inserted in the beamline to suppress higher-order radiation.
## III Results and Discussion
### Total electron and EUV fluorescence spectra
The strongest resonant absorption bands of large He nanodroplets are those correlating to the \(\mathrm{1s2s\,^{1}S}\) and \(\mathrm{1s2p\,^{1}P}\) states of He atoms at photon energies \(h\nu=21.0\) and \(h\nu=21.6\) eV, respectively [42, 40]. As these excitation energies stay below the adiabatic ionization energy \(E_{i}^{\mathrm{drop}}\), no direct electron emission is expected. Nevertheless, high yields of electrons are detected when the size of the He nanodroplets exceeds \(R\approx 20\) nm.
Electron VMIs measured under these conditions display a sharp-edged, perfectly isotropic ring structure, see the inset in Fig. 2 a) recorded at \(h\nu=21.0\) eV. Fig. 2 a) shows the total electron spectra inferred from the VMIs recorded at \(h\nu=21.0\) eV (black line), \(h\nu=21.6\) eV (red line) and \(h\nu=23.8\) eV (blue line). All electron spectra exhibit a sharp peak around 16.6 eV. At \(h\nu\)= 23.8 eV, an additional narrow feature close to zero electron kinetic energy is present in the spectrum which is due to autoionization of superexcited He droplets [43, 44, 9]. The peak present in all three spectra is centered at the electron energy expected for ICD of two \(\mathrm{He^{*}}\) in \(\mathrm{1s2s\,^{1}S}\) states, \(E_{e}=2E_{1S}-E_{i}=2\times 20.6\ \mathrm{eV}-24.6\ \mathrm{eV}=16.6\) eV, irrespective of the photon energy. Here, \(E_{1S}\) is the excitation energy of the \(\mathrm{1s2s\,^{1}S}\) state. This indicates that the He droplets mostly relax from the initially excited \(\mathrm{1s2s,\,1s2p}\) and \(\mathrm{1s3p}\)-correlated states of the droplet into
Figure 1: Sketch of the experimental setups used in this work. a) He nanodroplet beam source and coincidence velocity-map imaging (VMI)-time-of-flight (TOF) spectrometer. b) Hemispherical electron analyser (HEA) coupled to a microchannel plate (MCP) detector.
the lowest excited 1s2s \({}^{1}\)S singlet state of the He\({}^{*}\) atom prior to ICD. Fast vibronic relaxation preceding ICD has been observed before [7, 11]. This sets a lower bound on the ICD decay time of \(\gtrsim 1\) ps, the relaxation time of electronically excited He droplets previously measured by time-resolved photoelectron spectroscopy [34, 35, 36].
This ICD process was previously observed using FEL pulses at much higher intensity \(\gtrsim 10^{10}\) Wcm\({}^{-2}\)[7, 12], but not unambiguously using synchrotron radiation [44, 45, 46, 47, 48, 9, 11, 41]. In the present experiment the He droplets were produced by supercritical expansion of liquid He; in this regime the droplets are much larger (radius \(R>10\) nm, \(\langle N\rangle>10^{5}\) He atoms per droplet) than those conventionally used for He-nanodroplet isolation spectroscopy, \(\langle N\rangle\lesssim 10^{4}\)[49]. Accordingly, their absorption cross section is larger and the rate of resonant absorption of a droplet with _e. g._\(\langle N\rangle=10^{6}\) is \(r_{\rm abs}=\sigma_{2p}\langle N\rangle\Phi/w^{2}\approx 8\times 10^{3}\) s\({}^{-1}\). Here, the photon beam radius is \(w\approx 400\)\(\mu\)m and the absorption cross section at the 1s2p-resonance of He droplets at \(h\nu=21.6\) eV is estimated to be \(\sigma_{2p}=25\) Mbarn [41, 29]. Accordingly, the probability that this droplet resonantly absorbs one photon during its flight through the interaction region is \(p_{1}=\sigma_{2p}\langle N\rangle t_{\rm tr}\Phi/w^{2}\approx 1\,\%\) for a transit time of the droplets through the focus \(t_{\rm tr}\approx 1\)\(\mu\)s. As it takes two photons to excite a pair of He\({}^{*}\) atoms in one He droplet which decay by ICD, the ICD rate is \(r_{\rm ICD}=r_{\rm abs}/2\)[33]. For an estimated number density of He droplets in the jet of \(n_{\rm HeN}\sim 10^{6}\) cm\({}^{-3}\) and an active length of the focal volume of \(d\approx 2\) mm, the total ICD rate is \(R_{\rm ICD}=r_{\rm ICD}n_{\rm HeN}w^{2}d\approx 10^{6}\) s\({}^{-1}\). This value roughly matches the rate of detected ICD electrons in the experiment, considering that the detection efficiency of the HEA is \(\sim 10^{-3}\).
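These order-of-magnitude estimates are easy to verify numerically. The following minimal sketch reproduces them using only the quantities quoted above (all values taken from the text, converted to cgs units):

```python
# Back-of-the-envelope check of the absorption and ICD rates quoted above.
sigma_2p = 25e6 * 1e-24   # 25 Mbarn in cm^2
N_mean   = 1e6            # He atoms per droplet
flux     = 5e11           # photons per second
w        = 400e-4         # photon beam radius: 400 um in cm
t_tr     = 1e-6           # droplet transit time through the focus in s
n_drop   = 1e6            # droplet number density in cm^-3
d        = 0.2            # active focal length: 2 mm in cm

r_abs = sigma_2p * N_mean * flux / w**2    # -> about 7.8e3 per second
p1    = r_abs * t_tr                       # -> about 0.8 %, i.e. ~1 %
R_ICD = (r_abs / 2) * n_drop * w**2 * d    # -> about 1.2e6 per second
print(f"r_abs = {r_abs:.1e} 1/s, p1 = {p1:.1%}, R_ICD = {R_ICD:.1e} 1/s")
```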
Note that \(R_{\rm ICD}\propto\Phi\) scales linearly with photon flux \(\Phi\) although two or more photons have to be absorbed by one He droplet to induce ICD. Only at much higher intensities, as in previous FEL experiments, would the excited-state population and hence \(R_{\rm ICD}\) be saturated [28, 29]. This linear dependence of \(R_{\rm ICD}\) on \(\Phi\) is experimentally confirmed, see Fig. 1 in the supplemental material (SM). When varying the photon flux by gradually opening and closing the exit slit of the monochromator and measuring all ICD electrons produced at \(h\nu=21\) eV, we observe an essentially linear dependence over more than one order of magnitude variation of the photon flux, irrespective of the He nanodroplet size.
Using the HEA, the ICD feature seen at the electron energy \(E_{e}=16.6\) eV in the VMI spectra is much better resolved. Fig. 2 b) shows high-resolution total electron spectra measured using the HEA at various photon energies below and above \(E_{i}\). The spectra clearly show a substructure of the ICD peak. Additionally, a low-amplitude wing structure extends from \(E_{e}=16.2\) eV down to about 14 eV, indicating that 1s2s \({}^{3}\)S-excited He\({}^{*}\) atoms and correlated states of the He\({}_{2}^{**}\) dimer contribute to the ICD signal to a small extent. At \(h\nu=21.0\) and 21.6 eV, the electron spectra exhibit one peak at \(E_{e}=16.3\) eV originating from ICD of pairs of He\({}^{*}\) in the \({}^{1}\Sigma_{g}\) state correlating to two atoms each in the 1s2s \({}^{1}\)S singlet state [7], and a shoulder that extends up to 16.8 eV featuring two smaller peaks. Electron spectra recorded at 23.8 eV \(\leq h\nu\leq\) 24.6 eV exhibit three small maxima at \(E_{e}=15.0\), 15.5 and 15.8 eV in addition to the main ICD features. The weak peak at 15.0 eV is assigned to \({}^{5}\Sigma_{g}/^{3}\Sigma_{u,\,g}\) states correlating to two He\({}^{*}\) atoms in 1s2s \({}^{3}\)S triplet states [7]. The peaks at 15.5 eV and 15.8 eV can be ascribed to ICD out of \({}^{3}\Sigma_{u,\,g}\) states correlating to a mixed pair of metastable He\({}^{*}\)(\({}^{1}\)S, \({}^{3}\)S). The structure of these two peaks closely resembles that of the electron kinetic-energy features observed in earlier low-energy binary He\({}^{*}\)(\({}^{1}\)S, \({}^{3}\)S) collision experiments [50, 51]. Interestingly, all three of these small peaks that involve the 1s2s \({}^{3}\)S states only appear in the electron spectra at \(h\nu\geq E_{i}^{\rm drop}\). The main ICD features around \(E_{e}=16.5\) eV are still visible up to \(h\nu=25.0\) eV and disappear for \(h\nu\geq 26.0\) eV
Figure 2: Background-subtracted total electron spectra measured for pure large He nanodroplets (\(R=75\) nm) at different photon energies near the ionization energy of He using the VMI spectrometer a) and the HEA b). The inset in a) shows a raw total VMI recorded at \(h\nu=21.0\) eV. For better visibility, all of the spectra shown in b) in the kinetic energy range 14.0 - 16.0 eV are scaled up by a factor of 20. All electron spectra are normalized to the flux of the incident EUV photon beam.
whereas the small peaks at \(E_{e}<16\) eV are still faintly visible.
This indicates that autoionization or photoionization followed by electron-ion recombination preferentially populates the metastable 1s2s \({}^{3}\)S state, which in turn undergoes ICD by interaction with another 1s2s \({}^{3}\)S or 1s2s \({}^{1}\)S He atom [36; 37]. The structure of the high-resolution ICD spectra at \(E_{e}=16.0\) - 17.0 eV in Fig. 2 b) will be discussed in more detail in Sec. III C.
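The assignment of these peaks can be checked against the asymptotic pair energies. Below is a minimal sketch using the atomic excitation energies quoted in the text; molecular binding and droplet-induced shifts, which move the observed peaks slightly, are neglected here.

```python
# Asymptotic ICD electron energies, E_e = E(He*_a) + E(He*_b) - E_i,
# for the three combinations of metastable 1s2s states discussed above.
E_i = 24.6                       # eV, ionization energy of He
E = {"1S": 20.6, "3S": 19.8}     # eV, 1s2s singlet / triplet excitation energies

for a, b in [("1S", "1S"), ("3S", "3S"), ("1S", "3S")]:
    print(f"{a} + {b}: E_e = {E[a] + E[b] - E_i:.1f} eV")
# -> 16.6 eV, 15.0 eV, and 15.8 eV, matching the main peak, the triplet
#    pair peak, and the mixed-pair features, respectively.
```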
To get an overview of the ICD efficiency across the entire photoexcitation spectrum of He droplets, the photon energy was tuned from \(h\nu=20\) to 26.5 eV while measuring the yield of all ICD electrons with the HEA. Fig. 3 a) shows the HEA signal integrated in the electron energy range \(E_{e}=16\) - 17 eV for He droplets of various sizes in the range \(R=6\) - 75 nm.
The ICD yield exhibits four main features: A sharp peak at \(h\nu=21.0\) eV associated with the 1s2s \({}^{1}\)S droplet resonance and two broad features peaked at \(h\nu=21.4\) eV and \(h\nu=23.8\) eV associated with the 1s2p \({}^{1}\)P and 1s3p/1s4p droplet states, respectively [53; 42]. A fourth maximum appears at \(h\nu=E_{i}=24.6\) eV, where a high density of Rydberg states is expected. These features are invisible for small droplets (\(R<20\) nm) [see red line in Fig. 3 a)]. They are clearly visible at \(R=20\) nm and become more and more pronounced when the droplet radius is further increased to \(R=75\) nm [black line in Fig. 3 a)].
For comparison, Fig. 3 b) shows previously measured EUV fluorescence yield spectra of small He droplets (red lines) [53]. The dark line, showing the EUV fluorescence spectrum for large He droplets (\(R=75\) nm), features a resonance structure distorted similarly to that of the ICD spectra; the 1s2s \({}^{1}\)S-correlated peak at \(h\nu=21.0\) eV is enhanced, the 1s2p \({}^{1}\)P-correlated peak at \(h\nu=21.6\) eV is asymmetrically broadened, and the 1s3p/1s4p-correlated feature around \(h\nu=23.8\) eV is enhanced as compared to the fluorescence spectra for small He droplets (\(R\leq 6\) nm). The feature around \(h\nu=24.6\) eV remains sharp in the fluorescence spectra for all sizes, likely due to the contribution of free He atoms and small He clusters accompanying the He droplets in the jet. Note that the fluorescence spectra contain contributions from all singly excited He species decaying to the ground state, whereas the ICD spectra are selective to large He droplets which absorb at least two photons in the course of their interaction with the photon beam.
It is also interesting to note that, contrary to the ICD yields shown in Fig. 3 a), the EUV fluorescence yields for large He droplets (\(R>4.5\) nm) are reduced in intensity. This suggests that for large He droplets where two or more absorption events per droplet become probable, a large fraction of the excited He atoms decay by ICD instead of decaying by fluorescence emission. Hence, ICD competes with fluorescence emission; it would be interesting to quantify the branching ratio of ICD and fluorescence emission. In future experiments both channels should be measured simultaneously.
The most striking feature of the ICD spectra of large He droplets is the enhanced intensity of the peak at \(h\nu=21.0\) eV which becomes the highest peak for droplet sizes \(R>36\) nm. Note that the absorption cross section at the 1s2s \({}^{1}\)S resonance of medium-sized He droplets is smaller than the absorption cross section of the 1s2p \({}^{1}\)P resonance by a factor \(\approx 1/7\)[42]. The enhancement of
Figure 4: a) Droplet size-dependent total ICD electron yields measured at various photon energies. b) Integrated ICD signal measured in coincidence with He\({}^{+}\) (gray curve) and He\({}_{2}^{+}\) (black) at \(h\nu=21.6\) eV and as a function of droplet size. Droplet size-dependent relative intensity of \({}^{1}\)S ICD _vs._ \({}^{3}\)S ICD measured at \(h\nu=23.8\) eV c) and at \(h\nu=25.0\) eV d).
Figure 3: Total yield spectra of ICD electrons a) and EUV fluorescence b) measured for He nanodroplets of various sizes. The purple line is the absorption spectrum of bulk liquid He taken from [52]. The EUV fluorescence data shown in b) are reproduced from [53].
the peak at \(h\nu=21.0\) eV for large droplets can also be seen from the total electron spectra shown in Fig. 2 b) and in Fig. 4 a) which compares droplet size-dependent ICD electron yields measured at \(h\nu=21.0\) eV to those measured at higher photon energies.
The change of the structure of the ICD and EUV fluorescence spectra for increasing He droplet sizes \(R>20\) nm may lead to the assumption that the spectra for large He droplets approach the characteristic absorption spectrum of bulk superfluid He. Intriguingly, the latter (measured in reflection from the surface of liquid He) in fact resembles the spectra measured for small He droplets; see the purple line in Fig. 3 b) [52]. This indicates that the modified peak structure for large He droplets is related to their intrinsic properties. In particular, nano-optical effects, as observed in other types of nanoparticles [54], may be expected to influence the absorption and emission spectra in this range of photon energy and He droplet size; around the 1s2p \({}^{1}\)P resonance, the index of refraction is expected to deviate significantly from 1 by about \(\pm 0.5\)[55], which facilitates nano-focusing effects. Additionally, the wavelength of the EUV radiation, \(\lambda=59\) nm, matches the He droplet sizes studied here, which may lead to resonance effects and an enhancement of the light absorption. Further experiments and simulations should be done to investigate this interesting nano-optical system in detail.
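For reference, the quoted wavelength follows directly from the photon energy via the standard conversion \(hc\approx 1239.8\) eV nm:

\[\lambda=\frac{hc}{h\nu}\approx\frac{1239.8\ \mathrm{eV\,nm}}{21.0\ \mathrm{eV}}\approx 59\ \mathrm{nm},\]

which is indeed comparable to the droplet radii \(R=20\) - 75 nm considered here.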
At \(h\nu>E_{i}\), where direct emission of photoelectrons from He droplets is observed [9; 41], one might not expect to detect any ICD or EUV fluorescence signals. However, both ICD and EUV fluorescence are detected up to \(h\nu=26\) eV or even higher for large He droplets with \(R\gtrsim 20\) nm. The ICD electron yield appears as a broad feature peaked at \(h\nu=E_{i}=24.6\) eV which reaches up to \(h\nu=26\) eV. The EUV fluorescence signal appears as a tail that continuously drops even beyond \(h\nu=26\) eV. This indicates that electron-ion recombination is effective for photon energies exceeding \(E_{i}\) by several eV [36; 37; 56; 57; 58; 59; 60; 61; 62]. Electron-ion recombination may be expected to be particularly efficient for \(h\nu\lesssim E_{i}+V_{0}=25.6\) eV, where \(V_{0}\approx 1\) eV is the gap to the conduction band edge of superfluid He [63; 64; 59]. Photoelectrons created with kinetic energy \(E_{e}\lesssim V_{0}\) promptly localize in the droplet by forming bubbles, which facilitates the recombination with their parent ions. The excited He\({}^{*}\) atoms formed in this way subsequently decay either by fluorescence emission or by ICD.
### Electron-ion coincidence spectra
More detailed insights into the relaxation of large He nanodroplets are obtained from electron and ion spectra recorded by coincidence detection. Fig. 5 shows typical electron spectra measured in coincidence with He\({}^{+}\) (black lines) and He\({}_{2}^{+}\) (red lines) for large He nanodroplets (\(R=50\) nm) at \(h\nu=25.0\) eV a), \(23.8\) eV b) and \(21.6\) eV c). The corresponding raw electron VMIs in coincidence with He\({}_{2}^{+}\) are displayed as insets. At \(h\nu=21.6\) eV, one sharp-edged, perfectly isotropic ring structure is seen due to ICD electrons. At \(h\nu=23.8\) eV, the ICD ring is still present but an additional central bright spot appears which is due to emission of electrons by autoionization. At \(h\nu=25\) eV, an anisotropic small ring at the center of the image is due to direct emission of photoelectrons. Note that this ring features a forward/backward asymmetry with respect to the propagation direction of the photon beam; this "shadowing effect" [54] occurring in large He nanodroplets is discussed in detail in Ref. [32].
At \(h\nu=21.6\) eV, both electron spectra measured in coincidence with He\({}^{+}\) and He\({}_{2}^{+}\) exhibit only one main peak due to ICD of two He\({}^{*}\) atoms in the \({}^{1}\)S state, which is populated by electronic relaxation after exciting the He droplets to the 1s2p \({}^{1}\)P resonance [40; 42]. At \(h\nu=23.8\) eV, the ICD electron spectra measured in coincidence with He\({}_{2}^{+}\) [Fig. 5 b)] feature a double-peak structure. At this photon energy, the He droplets are excited into the 1s3p and 1s4p absorption bands [36; 34] which then decay by ultrafast electronic relaxation into the 1s2s \({}^{1}\)S atomic state and by autoionization. The latter pathway can further lead to dissociative electron-ion recombination, thereby preferentially populating the 1s2s \({}^{3}\)S state [36; 37; 60]. Therefore, the shoulder structure at about 15 eV appears at the electron energy expected for two He\({}^{*}\) in the \({}^{3}\)S state decaying by ICD, \(E_{e}=2E(^{3}S)-E_{i}=2\times 19.8\) eV \(-24.6\) eV = 15.0 eV. The peak at near-zero electron energy is due to electrons emitted by droplet autoionization that do not recombine [9; 44].
Note that the \({}^{3}\)S ICD feature is only seen in the electron spectrum measured in coincidence with He\({}_{2}^{+}\) and not in the electron spectrum measured in coincidence with He\({}^{+}\). In contrast, the \({}^{1}\)S ICD feature is present in both coincidence electron spectra. Thus \({}^{3}\)S ICD generates only He\({}_{2}^{+}\) ions, whereas \({}^{1}\)S ICD generates both He\({}^{+}\) and He\({}_{2}^{+}\) ions. This indicates qualitatively that there are different scenarios for ICD in the He nanodroplets: i) He\({}^{*}\) atoms excited in the \({}^{1}\)S state are formed by electronic relaxation accompanied by the migration of the He\({}^{*}\)'s to the droplet surface. There, two He\({}^{*}\) excited atoms undergo ICD with only little influence by the He droplet. Accordingly, mostly He\({}^{+}\) ions are produced. Low yields of He\({}_{2}^{+}\) are likely due to the binding of a He atom to the He\({}^{+}\) product as it escapes from the droplet. ii) He\({}^{*}\) atoms excited in the \({}^{3}\)S state are formed as a result of dissociation of excited He\({}_{2}^{*}\)'s being produced by electron-He\({}_{2}^{+}\) ion recombination that mainly occurs in the bulk of the droplets. Two He\({}^{*}\) excited atoms formed in this way undergo ICD prior to their ejection to the droplet surface. Therefore, the resulting He\({}^{+}\) product has a high chance of picking up another He atom to form a He\({}_{2}^{+}\) which is eventually ejected from the droplet by a non-thermal process.
Associative ionization, _i. e._ the formation of stable He\({}_{2}^{+}\) by ICD, can be ruled out as it is a minor channel [50].
When the photon energy is tuned across \(E_{i}\) up to \(h\nu=25\) eV, direct photoemission becomes the dominant process, see the sharp peak at \(E_{e}=0.4\) eV in Fig. 5 a) which matches the expected position of the photoline at \(E_{e}=h\nu-E_{i}\). Remarkably, ICD features are still clearly visible, implying the presence of two or more neutral excitations in one droplet. In this regime, ICD out of the 1s2s \({}^{3}\)S state is the main indirect decay channel in the electron spectra measured in coincidence with He\({}_{2}^{+}\). Thus, electron-ion recombination which populates the 1s2s \({}^{3}\)S state appears to contribute more abundantly to the electron-He\({}_{2}^{+}\) ion coincidences than electronic relaxation which mainly leads to the 1s2s \({}^{1}\)S state. This can also be seen from the He-droplet size dependence of the \({}^{1}\)S and \({}^{3}\)S ICD components measured at \(h\nu=25.0\) eV, see Fig. 4 d). Beyond the onset of ICD at a droplet radius \(R\approx 20\) nm, ICD of the \({}^{3}\)S state clearly dominates over \({}^{1}\)S ICD. The opposite is true at the photon energy \(h\nu=23.8\) eV just above \(E_{i}^{\rm drop}\), see Fig. 4 c) which is based on the electron spectra shown in SM Fig. 3.
This enhanced efficiency of \({}^{3}\)S ICD over \({}^{1}\)S ICD when detecting electron-ion coincidences can be rationalized by the atomic motion occurring in the encounter of two He\({}^{*}\) excited atoms along the He\({}_{2}^{**}\) potential energy curves, see Fig. 6 [7]. The atomic motion during the ICD process is indicated by pink arrows; see Sec. III.3 for a more detailed discussion. As the potential wells of the He(\({}^{3}\)S)-He(\({}^{3}\)S) dimer states are twice as deep as that of the He(\({}^{1}\)S)-He(\({}^{1}\)S) dimer state, the ion produced by \({}^{3}\)S ICD is released with higher kinetic energy and is therefore ejected out of the He droplet more efficiently.
The He\({}_{2}^{+}\) and He\({}^{+}\)-correlated electron spectra recorded at \(h\nu=23.8\) eV and \(h\nu=25.0\) eV contain another weak component in the range 7 - 13 eV which was also seen in FEL experiments at \(h\nu=23.8\) eV [grey line in Fig. 5 b)] [7]. This feature is interpreted as due to ICD involving He\({}_{2}^{*}\) excimers. Interestingly, it is only observed in the electron spectra recorded at \(h\nu\geq E_{i}^{\rm drop}\) and not in the electron spectra measured following photoexcitation of the He droplets into the 1s2p-correlated band at \(h\nu=21.6\) eV, see Fig. 5 c). This further corroborates the role of electron-ion recombination in the formation of He\({}_{2}^{*}\)'s upon autoionization or direct ionization of the He droplets [64; 27]. Note that in bulk liquid He, He\({}_{2}^{*}\) excimers can also be produced following primary ionization events [67; 68; 69; 70]. Both atomic and molecular triplet emission lines upon relaxation of He\({}_{2}^{*}\) were previously observed in fluorescence spectra of He clusters at 23.1 eV \(\leq h\nu\leq 24.6\) eV [37]. While electron recombination with He\({}_{2}^{+}\) in the droplet can be dissociative as mentioned above, a fraction of the He\({}_{2}^{*}\)'s formed by recombination inside a He droplet can also be stabilized by the cold He environment. When the stabilized He\({}_{2}^{*}\) excimer subsequently decays into the electronic ground state by ICD, the amount of energy transferred to the reaction partner (He\({}^{*}\) or He\({}_{2}^{*}\)) is significantly lower compared to ICD where an excited He\({}^{*}\) decays to the ground state; the He\({}_{2}^{*}\) excitation energy is lower by 2.5 eV and the He\({}_{2}\) ground-state potential is strongly repulsive at the He\({}_{2}^{*}\) equilibrium distance (1.08 Å), see the red line in Fig. 6 [65; 71]. This He\({}_{2}^{*}\) ICD feature is also observed in large droplets at \(h\nu\geq 44.4\) eV where inelastic scattering facilitates the population of excited states [27].
For even higher photon energies we expect that electron-ion recombination becomes the only way of inducing ICD, as the electron promoted into the conduction
Figure 5: Background-subtracted electron spectra of large pure He nanodroplets (\(R=50\) nm) measured in coincidence with He\({}^{+}\) and He\({}_{2}^{+}\) ions at photon energies below and above the ionization energy of He. The two dotted lines indicate the kinetic energies of electrons expected for ICD of two He atoms in their metastable 1s2s \({}^{1}\)S and 1s2s \({}^{3}\)S states. Insets show the raw electron VMIs in coincidence with He\({}_{2}^{+}\). The polarization of the EUV light was vertical and the EUV beam was incident on the droplets from the left-hand side. The electron spectra are normalized to the flux of the incident EUV photon beam.
band detaches from the He\({}^{+}\) core and has to undergo multiple elastic collisions in the droplet to lose enough energy and return to the He\({}^{+}\). Indeed, when tuning the photon energy from \(h\nu=21.6\) eV to \(28.0\) eV, we see a transition from \({}^{1}\)S ICD to \({}^{3}\)S ICD, and for \(h\nu>25\) eV, \({}^{3}\)S ICD clearly dominates the electron-ion spectra measured in coincidence with He\({}^{+}_{2}\), see Fig. 7 a) and b). The ratios of \({}^{1}\)S and \({}^{3}\)S ICD peak areas versus photoelectrons are shown in Fig. 7 c). They are obtained from fitting Gaussian functions to the \({}^{1}\)S and \({}^{3}\)S ICD peaks in the electron spectra measured in coincidence with He\({}^{+}\) and He\({}^{+}_{2}\), and as the weighted average of their \({}^{1}\)S and \({}^{3}\)S ICD peak heights, respectively [see SM Sec. III]. Below the He ionization threshold, \(h\nu\leq E_{i}\), the near-zero kinetic energy peak resulting from droplet autoionization is fitted instead of the photoline at \(h\nu>E_{i}\). In this representation of the data, the transition from relaxation-dominated ICD, leading mostly to the \({}^{1}\)S ICD peak, to recombination-dominated ICD, leading to the \({}^{3}\)S peak, occurs right at \(h\nu=E_{i}\). The \({}^{1}\)S ICD signal drops to the noise level for \(h\nu\gtrsim 25.5\) eV whereas recombination-induced ICD remains visible even at \(h\nu=28\) eV.
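To illustrate the kind of fitting procedure described above, the sketch below fits two Gaussians with centers fixed at the nominal \({}^{1}\)S and \({}^{3}\)S ICD energies to a synthetic spectrum. The peak widths, amplitudes, and the synthetic data are assumptions for demonstration only, not the actual analysis behind Fig. 7.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(E, a1, a3, w1, w3):
    """Two Gaussians with centers fixed at the nominal 1S (16.6 eV) and
    3S (15.0 eV) ICD electron energies."""
    return (a1 * np.exp(-0.5 * ((E - 16.6) / w1) ** 2)
            + a3 * np.exp(-0.5 * ((E - 15.0) / w3) ** 2))

rng = np.random.default_rng(0)
E = np.linspace(13.5, 18.0, 200)
y = two_gauss(E, 1.0, 0.4, 0.15, 0.20) + 0.02 * rng.standard_normal(E.size)

popt, _ = curve_fit(two_gauss, E, y, p0=[1.0, 0.5, 0.2, 0.2])
a1, a3, w1, w3 = popt
area_1S = a1 * w1 * np.sqrt(2 * np.pi)   # Gaussian area = amplitude * sigma * sqrt(2*pi)
area_3S = a3 * w3 * np.sqrt(2 * np.pi)
print(f"1S/3S area ratio: {area_1S / area_3S:.2f}")
```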
ization and ICD) the He\({}_{2}^{+}\) ions are ejected out of the He droplet by a non-thermal, impulsive process in the course of vibrational relaxation [73].
The kinetic energy distributions of He\({}^{+}\) ions are shown in Fig. 8 a). Interestingly, all He\({}^{+}\) ion energy distributions recorded at \(h\nu<E_{i}\) are similar in shape, with a pronounced maximum at 0.27 eV. They can only originate from ionization of excited He\({}^{*}\)'s by ICD following two-photon absorption by the droplets. Furthermore, the shape of these spectra clearly differs from the shape of the He\({}^{+}\) spectrum measured at \(h\nu=25.0\) eV, where only one main broad feature peaking around 0.1 eV with a tail extending to 0.5 eV is observed. The latter broad feature is also visible in the ion spectra recorded for small He droplets. Remarkably, it does not change structure when varying the He droplet size and when tuning the photon energy above \(E_{i}\), see SM Fig. 2 a). We interpret this generic distribution of ion energies as arising from photoionization into the repulsive \(A\) state of the He\({}_{2}^{+}\) molecular ion, see the blue line in Fig. 6.
For small He droplets (\(R<20\) nm), an additional sharp peak near 0 eV is present in the ion spectra recorded at \(h\nu>E_{i}\), see SM Fig. 2 a). It is due to photoionization of the free He atoms accompanying the He droplets in the jet, as discussed in [72]. Note that the He\({}^{+}\) and He\({}_{2}^{+}\) ion spectra recorded at \(h\nu>E_{i}\) for large He nanodroplets (\(R>20\) nm) should include a contribution of He\({}^{+}\) and He\({}_{2}^{+}\) ions created by ICD. However, these ICD ions are hard to identify due to the overwhelming contribution from ions created by direct photoionization.
The proposed relaxation pathway of two He\({}^{*}\)'s formed by absorption of two EUV photons by one He nanodroplet (resonant excitation or electron-ion recombination), leading to the ejection of an ICD electron and ion, is illustrated in Fig. 6 [65; 66; 7]. Following photoexcitation, the two He\({}^{*}\)'s are accelerated toward each other from large interatomic distance \(R\) along the attractive \({}^{1}\Sigma_{g}\) potential curve of the doubly excited He dimer, He\({}_{2}^{**}\); when they reach shorter distances \(R\), the ICD probability rises and ICD likely occurs near the well of the potential around \(R=4\) Å, leading to the emission of an ICD electron with an energy corresponding to the difference potential between the initial He\({}_{2}^{**}\) state and the final He\({}_{2}^{+}\) state at the distance \(R\). The maximum kinetic energy acquired by the two He\({}^{*}\) atoms is given by the depth of the potential well with respect to the He\({}^{*}\)+He\({}^{*}\) atomic asymptote, \(\Delta E\). As the kinetic energy acquired by the two colliding He atoms in the \({}^{1}\Sigma_{g}\) state is not significantly affected by the ICD process, the He and He\({}^{+}\) atoms in the final state continue their trajectory toward short \(R\) where they are reflected at the hard-core potential of the He\({}_{2}^{+}\) ground state \(X\). In this process, the ICD electron energy is reduced by the amount by which the kinetic energy of the products increases in the course of the collision; therefore the ICD electron spectrum can be transformed into a kinetic energy distribution of the He\({}^{+}\) ICD ion according to
\[E_{\rm ion}=(2E_{\rm He^{*}}-E_{i}-E_{e}+dE)/2. \tag{1}\]
Here, \(E_{\rm He^{*}}=20.62\) eV is the excitation energy of each He\({}^{*}\) and \(dE\)=0.3 eV is a droplet-induced energy shift. The factor 1/2 accounts for equal sharing of the kinetic energy released to the two dissociating He atoms. The ion kinetic energy distribution calculated in this way from the high-resolution electron spectrum measured at \(h\nu=21.6\) eV [red line in Fig. 2 b)] matches the corresponding He\({}^{+}\) ion energy distribution surprisingly well, see the dotted black line in Fig. 8 a). This indicates that the He\({}^{+}\) created by ICD are indeed ejected from the He nanodroplet by a binary collision-like process where the He and He\({}^{+}\) products dissociate without undergoing further scattering. This confirms our conjecture that ICD happens predominantly out of relaxed He\({}^{*}\) atoms that have migrated to the He droplet surface. The asymmetric three-peak structure seen in the ion spectra measured at \(h\nu\leq E_{i}\) is likely due to quantum interference effects in the entrance channel of the colliding pair of metastable atoms, as observed earlier in low-energy binary collisions [50; 51].
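A minimal sketch of this transformation, Eq. (1), applied to a toy electron spectrum (the Gaussian input below merely stands in for the measured \(h\nu=21.6\) eV HEA spectrum):

```python
import numpy as np

E_He_star = 20.62   # eV, 1s2s 1S excitation energy of He*
E_i       = 24.6    # eV, ionization energy of He
dE        = 0.3     # eV, droplet-induced energy shift

def electron_to_ion_spectrum(E_e, I_e):
    """Map an ICD electron spectrum onto a He+ ion kinetic-energy
    distribution via Eq. (1). |dE_ion/dE_e| = 1/2 is constant, so the
    line shape is preserved up to a constant intensity factor."""
    E_ion = (2 * E_He_star - E_i - E_e + dE) / 2.0
    return E_ion, 2.0 * I_e   # factor 2 keeps the integral normalized

# Toy example: the ICD peak at E_e = 16.3 eV maps to E_ion ~ 0.3 eV,
# close to the measured He+ maximum at 0.27 eV.
E_e = np.linspace(14.0, 17.0, 301)
I_e = np.exp(-0.5 * ((E_e - 16.3) / 0.1) ** 2)
E_ion, I_ion = electron_to_ion_spectrum(E_e, I_e)
```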
## IV Conclusion
In summary, we have studied in detail the decay of multiply excited He nanodroplets by resonant ICD. Owing to the large absorption cross section of He droplets of sizes \(\gtrsim 20\) nm, even low-intensity monochromatic EUV synchrotron radiation can induce multiple excitations in one droplet. Using the advanced techniques of high-resolution electron spectroscopy and photoelectron-photoion coincidence velocity-map imaging, the individual
Figure 8: He\({}^{+}\) and He\({}_{2}^{+}\) ion kinetic energy distributions measured for pure large He nanodroplets of radius \(R=50\) nm at different photon energies below and above the He ionization energy. All ion spectra in a) and b) are background subtracted and normalized to the EUV photon flux. The black dotted curve in a) is obtained by linear transformation of the electron spectrum measured at \(h\nu=21.6\) eV [see red line in Fig. 2 b)] according to Eq. (1).
steps of the ICD process out of \({}^{1}\)S and \({}^{3}\)S states are unravelled at \(h\nu\) below and up to a few eV above \(E_{i}\). The main results obtained in this work can be summarized as follows: i) At \(h\nu=21.6\) eV, where the He nanodroplet is excited into the 1s2p-correlated absorption band, the highly resolved electron spectra and the perfectly isotropic distribution of the emitted electrons indicate that ICD takes place between two fully relaxed excited He atoms in metastable states that roam about the He droplet surface. Therefore, this type of ICD may be expected to be a slow process with a time constant on the order of 10 - 100 ps, mainly determined by the roaming dynamics. ii) The significant changes of the absorption spectrum of large He droplets point to nano-optical effects occurring in such large He nanodroplets (nano-focusing, resonance enhancement of the radiation). iii) ICD is efficient even at photon energies exceeding the adiabatic ionization energy of He droplets, and above \(E_{i}\) by up to a few eV, due to electron-ion recombination into excited He states. iv) The electron spectra measured in coincidence with He\({}^{+}\) and He\({}^{+}_{2}\) show that \({}^{1}\)S ICD occurs by electronic relaxation in the entire range of resonant photoexcitation, even at \(h\nu\) exceeding \(E_{i}\) by about 1 eV. v) Droplet-induced electronic relaxation of excited He evolves into electron-ion recombination by which mainly He\({}^{*}\) atoms populated in triplet states are formed. vi) In the electron spectra recorded in coincidence with He\({}^{+}_{2}\), \({}^{3}\)S ICD appears more prominently due to the enhanced ejection of He ions formed in this way; a crossing of the \({}^{3}\)S ICD yield and the \({}^{1}\)S ICD yield occurs when tuning \(h\nu\) across \(E_{i}\). vii) The strongly differing abundances of He\({}^{+}\) and He\({}^{+}_{2}\) products for \({}^{1}\)S _vs._\({}^{3}\)S ICD indicate different scenarios of ICD taking place at the surface or in the bulk of the droplets, respectively. viii) ICD involving He\({}^{*}_{2}\) excimers occurs only at \(h\nu\geq E_{i}^{\rm drop}\) due to the formation of stabilized He\({}^{*}_{2}\)'s by electron-ion recombination.
The individual steps of the \({}^{3}\)S ICD process occurring at photon energies exceeding \(E_{i}\) due to electron-ion recombination are schematically illustrated in Fig. 9. Following photoionization of two He atoms, the emitted electrons perform a diffusion-like motion inside the droplets by which they lose their kinetic energy. Owing to the long-range Coulomb attraction, the electrons are drawn back to the ions, which tend to form He\({}_{2}^{+}\) dimer ions by interaction with the surrounding He. Electron-ion recombination then leads to the formation of \({}^{3}\)S-excited He\({}^{*}\) atoms or \({}^{3}\Sigma\)-excited He\({}_{2}^{*}\) excimers. These metastable species tend to be expelled toward the droplet surface where they meet and decay by ICD.
To assess the general relevance of this process, other types of nanosystems should be studied in a similar size range, _e. g._ heavier rare-gas clusters and molecular clusters such as water clusters and nanodroplets. Some of these can be resonantly excited and ionized with conventional lasers [15]. Electron-ion recombination to create highly reactive excited species may play an even more important role in bulk liquids and biological systems when exposed to ionizing radiation.
## Acknowledgement
M.M. and L.B.L. acknowledge financial support by Deutsche Forschungsgemeinschaft (project BE 6788/1-1), by the Danish Council for Independent Research Fund (DFF) via Grant No. 1026-00299B and by the Carlsberg Foundation. We thank the Danish Agency for Science, Technology, and Innovation for funding the instrument center DanScatt. SRK thanks Dept. of Science and Technology, Govt. of India, for support through the DST-DAAD scheme and Science and Eng. Research Board. SRK acknowledges support for this research through the Indo-French Center for Promotion of Advanced Research (CEFIPRA). SRK, KS and SD acknowledge the support of the Scheme for Promotion of Academic Research Collaboration, Min. of Edu., Govt. of India, and the Institute of Excellence programme at IIT-Madras via the
Figure 9: (a) Schematic illustration of ICD induced by electron-ion recombination in large He droplets following absorption of two ionizing EUV photons. (1) The emitted photoelectrons undergo elastic scattering inside the droplets and lose their kinetic energy. Depending on the excursion time of the electron, the He\({}^{+}\) photoions form stable He\({}_{2}^{+}\) dimers before recapturing the decelerated electrons (2). Once the electrons have recombined with the ions, excited He\({}_{2}^{*}\) dimers form which dissociate into one neutral He atom and one excited He atom in the \({}^{3}\)S state (3). The two \({}^{3}\)S He\({}^{*}\) atoms tend to migrate to the surface of the droplet where they decay by ICD (4). (b) Energy level diagram illustrating the dynamics (2), (3) and (4) leading to ICD as shown in (a).
Quantum Center for Diamond and Emergent Materials. SRK gratefully acknowledges support of the Max Planck Society's Partner group programme, and M.M. and S.R.K. acknowledge funding from the SPARC Programme, MHRD, India. The research leading to this result has been supported by the project CALIPSOplus under grant agreement 730872 from the EU Framework Programme for Research and Innovation HORIZON 2020 and by the COST Action CA21101 "Confined Molecular Systems: From a New Generation of Materials to the Stars (COSY)".
* Wiegandt _et al._ [2019]F. Wiegandt, F. Trinter, K. Henrichs, D. Metz, M. Pitzer, M. Waitz, E. Jabbour al Maalouf, C. Janke, J. Rist, N. Wechselberger, T. Miteva, S. Kazandjian, M. Schoffler, N. Sisourat, T. Jahnke, and R. Dorner, Direct observation of interatomic Coulombic decay and subsequent ion-atom scattering in helium nanodroplets, Phys. Rev. A **100**, 022707 (2019).
* Ltaief _et al._ [2020]L. B. Ltaief, M. Shcherbinin, S. Mandal, S. Krishnan, R. Richter, T. Pfeifer, M. Bauer, A. Ghosh, M. Mudrich, K. Gokhberg, _et al._, Electron transfer mediated decay of alkali dimers attached to He nanodroplets, Phys. Chem. Chem. Phys. **22**, 8557 (2020).
* Ben Ltaief _et al._ [2023]L. Ben Ltaief, K. Sishodia, S. Mandal, S. De, S. R. Krishnan, C. Medina, N. Pal, R. Richter, T. Fennel, and M. Mudrich, Efficient indirect interatomic Coulombic decay induced by photoelectron impact excitation in large pure helium nanodroplets, Phys. Rev. Lett. **131**, 023001 (2023).
* LaForge _et al._ [2014]A. C. LaForge, M. Drabbels, N. B. Brauer, M. Coreno, M. Devetta, M. Di Fraia, P. Finetti, C. Grazioli, R. Katzy, V. Lyamayev, T. Mazza, M. Mudrich, P. O'Keeffe, Y. Ovcharenko, P. Piseri, O. Plekan, K. C. Prince, R. Richter, S. Stranges, C. Callegari, T. Moller, and F. Stienkemeier, Collective autoionization in multiply-excited systems: A novel ionization process observed in helium nanodroplets, Sci. Rep. **4**, 3621 (2014).
* Ovcharenko _et al._ [2014]Y. Ovcharenko, V. Lyamayev, R. Katzy, M. Devetta, A. LaForge, P. O'Keeffe, O. Plekan, P. Finetti, M. Di Fraia, M. Mudrich, M. Krikunova, P. Piseri, M. Coreno, N. B. Brauer, T. Mazza, S. Stranges, C. Grazioli, R. Richter, K. C. Prince, M. Drabbels, C. Callegari, F. Stienkemeier, and T. Moller, Novel collective autoionization process observed in electron spectra of He clusters, Phys. Rev. Lett. **112**, 073401 (2014).
* Michiels _et al._ [2021]R. Michiels, M. Abu-samha, L. B. Madsen, M. Binz, U. Bangert, L. Bruder, R. Duim, A. Wituschek, A. C. LaForge, R. J. Squibb, R. Feifel, C. Callegari, M. Di Fraia, M. Danailov, M. Manfredda, O. Plekan, K. C. Prince, P. Rebernik, M. Zangrando, F. Stienkemeier, and M. Mudrich, Enhancement of above threshold ionization in resonantly excited helium nanodroplets, Phys. Rev. Lett. **127**, 093201 (2021).
* Asmussen _et al._ [2023]J. D. Asmussen, L. Ben Ltaief, K. Sishodia, A. R. Abid, B. Bastian, S. Krishnan, H. B. Pedersen, and M. Mudrich, Dopant ionization and efficiency of ion and electron ejection from helium nanodroplets, J. Chem. Phys. **159** (2023).
* Asmussen _et al._ [2023]J. D. Asmussen, K. Sishodia, B. Bastian, A. R. Abid, L. B. Ltaief, H. B. Pedersen, S. De, C. Medina, N. Pal, R. Richter, _et al._, Electron energy loss and angular asymmetry induced by elastic scattering in superfluid helium nanodroplets, Nanoscale, Advance Article (2023).
* Kuleff _et al._ [2010]A. I. Kuleff, K. Gokhberg, S. Kopelke, and L. S. Cederbaum, Ultrafast interatomic electronic decay in multiply excited clusters, Phys. Rev. Lett. **105**, 043004 (2010).
* Ziemkiewicz _et al._ [2015]M. P. Ziemkiewicz, D. M. Neumark, and O. Gessner, Ultrafast electronic dynamics in helium nanodroplets, Int. Rev. Phys. Chem. **34**, 239 (2015).
* Mudrich _et al._ [2020]M. Mudrich, A. LaForge, A. Ciavardini, P. O'Keeffe, C. Callegari, M. Coreno, A. Demidovich, M. Devetta, M. Di Fraia, M. Drabbels, _et al._, Ultrafast relaxation of photoexcited superfluid He nanodroplets, Nat. Commun. **11**, 1 (2020).
* Asmussen _et al._ [2021]J. D. Asmussen, R. Michiels, K. Dultz, A. Ngai, U. Bangert, M. Barranco, M. Binz, L. Bruder, M. Danailov, M. Di Fraia, J. Eloranta, R. Feifel, L. Giannessi, M. Pi, O. Plekan, K. C. Prince, R. J. Squibb, D. Uhl, A. Wituschek, M. Zangrando, C. Callegari, F. Stienkemeier, and M. Mudrich, Unravelling the full relaxation dynamics of superexcited helium nanodroplets, Phys. Chem. Chem. Phys. **23**, 15138 (2021).
* von Haeften _et al._ [1997]K. von Haeften, A. R. B. de Castro, M. Joppien, L. Moussavizadeh, R. von Pietrowski, and T. Moller, Discrete visible luminescence of helium atoms and molecules desorbing from helium clusters: The role of electronic, vibrational, and rotational energy transfer, Phys. Rev. Lett. **78**, 4371 (1997).
* O'Keeffe _et al._ [2011]P. O'Keeffe, P. Bolognesi, M. Coreno, A. Moise, R. Richter, G. Cautero, L. Stebel, R. Sergo, L. Pravica, Y. Ovcharenko, and L. Avaldi, A photoelectron velocity map imaging spectrometer for experiments combining synchrotron and laser radiations, Rev. Sci. Instrum. **82**, 033109 (2011).
* Dick [2014]B. Dick, Inverting ion images without abel inversion: maximum entropy reconstruction of velocity maps, Phys. Chem. Chem. Phys. **16**, 570 (2014).
* Buchta _et al._ [2013]D. Buchta, S. R. Krishnan, N. B. Brauer, M. Drabbels, P. O'Keeffe, M. Devetta, M. Di Fraia, C. Callegari, R. Richter, M. Coreno, K. C. Prince, F. Stienkemeier, R. Moshammer, and M. Mudrich, Charge transfer and Penning ionization of dopants in or on helium nanodroplets exposed to EUV radiation, J. Phys. Chem. A **117**, 4394 (2013).
* Buchta _et al._ [2013]D. Buchta, S. R. Krishnan, N. B. Brauer, M. Drabbels, P. O'Keeffe, M. Devetta, M. Di Fraia, C. Callegari, R. Richter, M. Coreno, K. C. Prince, F. Stienkemeier, J. Ullrich, R. Moshammer, and M. Mudrich, Extreme ultraviolet ionization of pure he nanodroplets: Mass-correlated photoelectron imaging, Penning ionization, and electron energy-loss spectra, J. Chem. Phys. **139**, 084301 (2013).
* Joppien _et al._ [1993]M. Joppien, R. Karnbach, and T. Moller, Electronic excitations in liquid helium: The evolution from small clusters to large droplets, Phys. Rev. Lett. **71**, 2654 (1993).
* Frochenticht _et al._ [1996]R. Frochenticht, U. Henne, J. P. Toennies, A. Ding, M. Fieber-Erdmann, and T. Drewello, The photoionization of large pure and doped helium droplets, J. Chem. Phys. **104**, 2548 (1996).
* Peterka _et al._ [2003]D. S. Peterka, A. Lindinger, L. Poisson, M. Ahmed, and D. M. Neumark, Photoelectron imaging of helium droplets, Phys. Rev. Lett. **91**, 043401 (2003).
* Wang _et al._ [2008]C. C. Wang, O. Kornilov, O. Gessner, J. H. Kim, D. S. Peterka, and D. M. Neumark, Photoelectron imaging of helium droplets doped with Xe and Kr atoms, J. Phys. Chem. **112**, 9356 (2008).
* Shcherbinin _et al._ [2018]M. Shcherbinin, A. C. LaForge, M. Hanif, R. Richter, and M. Mudrich, Penning ionization of acene molecules by helium nanodroplets, J. Phys. Chem. A **122**, 1855 (2018).
* LaForge _et al._ [2019]A. LaForge, M. Shcherbinin, F. Stienkemeier, R. Richter, R. Moshammer, T. Pfeifer, and M. Mudrich, Highly efficient double ionization of mixed alkali dimers by intermolecular Coulombic decay, Nat. Phys. **15**, 247 (2019).
* Mandal _et al._ [2019]S. Mandal, R. Gopal, M. Shcherbinin, A. D'Elia, H. Srinivas, R. Richter, M. Coreno, B. Bapat, M. Mudrich, S. Krishnan, and V. Sharma, Penning spectroscopy and structure of acetylene oligomers in He nanodroplets, Phys. Chem. Chem. Phys. (2020).
* Toennies and Vilesov [2004]J. P. Toennies and A. F. Vilesov, Superfluid helium droplets: A uniquely cold nanomatrix for molecules and molecular complexes, Angew. Chem. **43**, 2622 (2004).
* Muller _et al._ [1987]M. W. Muller, W. Busert, M. W. Ruf, H. Hotop, and W. Meyer, New oscillatory structure in electron energy spectra from autoionizing quasi-molecules: Subthermal collisions of he(2\({}^{3}\)s) atoms with he(2\({}^{1}\)s,2\({}^{3}\)s) atoms, Phys. Rev. Lett. **59**, 2279 (1987).
* Muller _et al._ [1991]M. Muller, A. Merz, M.-W. Ruf, H. Hotop, W. Meyer, and M. Movre, Experimental and theoretical studies of the bi-excited collision systems He\({}^{*}\)(2 \({}^{3}\)S) + He\({}^{*}\)(2 \({}^{3}\)S, 2 \({}^{1}\)S) at thermal and subthermal kinetic energies, Z. Phys. D **21**, 89 (1991).
* Surko _et al._ [1969]C. M. Surko, G. J. Dick, F. Reif, and W. C. Walker, Spectroscopic study of liquid helium in the vacuum ultraviolet, Phys. Rev. Lett. **23**, 842 (1969).
* von Haeften _et al._ [2011]K. von Haeften, T. Laarmann, H. Wabnitz, T. Moller, and K. Fink, Size and isotope effects of helium clusters and droplets: Identification of surface and bulk-volume excitations, J. Phys. Chem. A **115**, 7316 (2011).
* Signorell _et al._ [2016]R. Signorell, M. Goldmann, B. L. Yoder, A. Bodi, E. Chasovskikh, L. Lang, and D. Luckhaus, Nanofocusing, shadowing, and electron mean free path in the photoemission from aerosol droplets, Chem. Phys. Lett. **658**, 1 (2016).
* Rupp _et al._ [2017]D. Rupp, N. Monserud, B. Langbehn, M. Sauppe, J. Zimmermann, Y. Ovcharenko, T. Moller, F. Frassetto, L. Poletto, A. Trabattoni, _et al._, Coherent diffractive imaging of single helium nanodroplets with a high harmonic generation source, Nat. Commun. **8**, 493 (2017).
* Carata _et al._ [1999]L. Carata, A. E. Orel, and A. Suzor-Weiner, Dissociative recombination of He\({}_{2}^{+}\) molecular ions, Phys. Rev. A **59**, 2804 (1999).
* Coman _et al._ [1999]L. Coman, M. Guna, L. Simons, and K. A. Hardy, First measurement of the rotational constants for the homonuclear molecular ion He\({}_{2}^{+}\), Phys. Rev. Lett. **83**, 2715 (1999).
* Urbain _et al._ [2004]X. Urbain, N. Djuric, C. Safvan, M. Jensen, H. Pedersen, L. V. Sogaard, and L. Andersen, Storage ring study of the dissociative recombination of He\({}_{2}^{+}\), J. Phys. B **38**, 43 (2004).
* von Haeften _et al._ [2005]K. von Haeften, T. Laarmann, H. Wabnitz, and T. Moller, The electronically excited states of helium clusters: an unusual example for the presence of Rydberg states in condensed matter, J. Phys. B **38**, 373 (2005).
* Pedersen _et al._ [2005]H. B. Pedersen, H. Buhr, S. Altevogt, V. Andrianarijaona, H. Kreckel, L. Lammich, N. de Ruette, E. M. Staicu-Casagrande, D. Schwam, D. Strasser, X. Urbain, D. Zajfman, and A. Wolf, Dissociative recombination and low-energy inelastic electron collisions of the helium dimer ion, Phys. Rev. A **72**, 012712 (2005).
* Royal and Orel [2007]J. Royal and A. E. Orel, Resonant dissociative excitation and vibrational excitation of He\({}_{2}^{+}\), Phys. Rev. A **75**, 052706 (2007).
* Buhr _et al._ [2008]H. Buhr, H. B. Pedersen, S. Altevogt, V. M. Andrianarijaona, H. Kreckel, L. Lammich, S. Novotny, D. Strasser, J. Hoffmann, M. Lange, M. Lestinsky, M. B. Mendes, M. Motsch, O. Novotny, D. Schwalm, X. Urbain, D. Zajfman, and A. Wolf, Inelastic electron collisions of the isotopically symmetric helium dimer ion \({}^{4}\)He\({}_{2}^{+}\) in a storage ring, Phys. Rev. A **77**, 032719 (2008).
* Mauracher _et al._ [2018]A. Mauracher, O. Echt, A. Ellis, S. Yang, D. Bohme, J. Postler, A. Kaiser, S. Denifl, and P. Scheier, Cold physics and chemistry: Collisions, ionization and reactions inside helium nanodroplets close to zero K, Phys. Rep. **751**, 1 (2018).
* Buchenau _et al._ [1991]H. Buchenau, J. P. Toennies, and J. A. Northby, Excitation and ionization of \({}^{4}\)He clusters by electrons, J. Chem. Phys. **95**, 8134 (1991).
* Sheng _et al._ [2020]X. Sheng, J. P. Toennies, and K. T. Tang, Conformal analytical potential for all the rare gas dimers over the full range of internuclear distances, Phys. Rev. Lett. **125**, 253402 (2020).
* Carrington _et al._ [1995]A. Carrington, C. H. Pyne, and P. J. Knowles, Microwave electronic spectrum of the He\({}_{2}^{+}\) ion, J. Chem. Phys. **102**, 5979 (1995).
* Benderskii _et al._ [1999]A. Benderskii, R. Zadoyan, N. Schwentner, and V. Apkarian, Photodynamics in superfluid helium: Femtosecond laser-induced ionization, charge recombination, and preparation of molecular Rydberg states, J. Chem. Phys. **110**, 1542 (1999).
* Gao _et al._ [2015]J. Gao, A. Marakov, W. Guo, B. Pawlowski, S. Van Sciver, G. Ihas, D. McKinsey, and W. Vinen, Producing and imaging a thin line of He\({}_{2}^{+}\) molecular tracers in helium-4, Rev. Sci. Instrum. **86** (2015).
* Dennis _et al._ [1969]W. Dennis, E. Durbin Jr, W. Fitzsimmons, O. Heybey, and G. Walters, Spectroscopic identification of excited atomic and molecular states in electron-bombarded liquid helium, Phys. Rev. Lett. **23**, 1083 (1969).
* Hill _et al._ [1971]J. Hill, O. Heybey, and G. Walters, Evidence of metastable atomic and molecular bubble states in electron-bombarded superfluid liquid helium, Phys. Rev. Lett. **26**, 1213 (1971).
* Fiedler and Eloranta [2014]S. L. Fiedler and J. Eloranta, Interaction of helium Rydberg state atoms with superfluid helium, J. Low Temp. Phys. **174**, 269 (2014).
* Shcherbinin _et al._ [2019]M. Shcherbinin, F. Westergaard, M. Hanif, S. Krishnan, A. LaForge, R. Richter, T. Pfeifer, and M. Mudrich, Inelastic scattering of photoelectrons from He nanodroplets, J. Chem. Phys. **150** (2019).
* Callicoatt _et al._ [1998]B. E. Callicoatt, K. Forde, L. F. Jung, T. Ruchti, and K. C. Janda, Fragmentation of ionized liquid helium droplets: A new interpretation, J. Chem. Phys. **109**, 10195 (1998).
**Supplementary Materials**
**Interatomic Coulombic decay induced by electron-ion recombination in large He nanodroplets**
L. Ben Ltaief _et al._
## I Dependency of the ICD electron yield on photon flux
To probe the dependency of the ICD electron yield on the intensity of the photon beam we recorded the total yield of ICD electrons with the HEA in the electron energy range \(E_{e}=16.2\) - \(16.7\) eV at \(h\nu=21.0\) eV and for different droplet sizes \(R\), see SM Fig. 1. The photon flux was varied by gradually opening and closing the exit slit of the monochromator of the beamline. It was measured using a photodiode placed at the end of the beamline as well as by measuring the current at a mesh placed into the photon beam. The latter two currents were perfectly proportional to one another. In the range of lowest photon flux, where the slit was nearly fully closed, the ICD electron yield appears to show a slightly nonlinear dependency on the photon flux. This may be due to changes in the size and shape of the intensity distribution in the interaction region caused by diffraction effects of the light passing through the narrow slit.
## II Ion kinetic energy distributions
A compilation of He\({}^{+}\) and He\({}_{2}^{+}\) ion kinetic energy distributions measured for various He nanodroplet radii \(R\) and photon energies \(h\nu\) is shown in SM Fig. 2. The He\({}_{2}^{+}\) spectra have the same shape up to variable amplitude; they all feature a maximum around \(0.3\) eV and a broad tail that extends up to \(1.6\) eV. This generic kinetic energy distribution of He\({}_{2}^{+}\) was previously observed and interpreted by an impulsive ejection mechanism [1]. The only remarkable trend is a dropping amplitude for large He droplets (\(R=50\) nm). This is likely due to the tendency of large He droplets to efficiently trap ions as any ion tends to solvate in liquid He by forming a dense solvation complex [2].
Figure 1: Yield of ICD electrons as a function of photon flux measured at \(h\nu=21.0\) eV for He droplets of various droplet sizes set by controlling the temperature \(T\) of the He nozzle. The photon flux was measured as photocurrents at a photodiode and at a mesh inserted into the photon beam.

The He\({}^{+}\) spectra feature two peaks: a sharp one near zero kinetic energy, which is most prominent for small droplets (\(R=4\) nm), and a broad one peaked around \(0.1\) eV, which is present for all values of \(R\) and \(h\nu\). The former is due to free He atoms accompanying the He droplet jet, whereas the latter is characteristic of photoionization of He nanodroplets. For small nanodroplets the two-photon ionization probability is negligibly small. Therefore, Coulomb explosion of two He\({}^{+}\) photoions created in the same droplet can be ruled out. Moreover, one would expect a shift of the most probable energy of ejected He\({}^{+}\) ions as a function of droplet size, as ions created in the bulk of large droplets would likely undergo binary collisions leading to a reduction of their kinetic energy [3]. However, the 0.1-eV feature remains unchanged up to amplitude variations.
Therefore, we rationalize the observation of He\({}^{+}\) ions with kinetic energies peaked at 0.1 eV by photoionization of nearest-neighbor pairs of He atoms into the repulsive \(A\) state of the He\({}_{2}^{+}\) molecular ion, see the blue line in Fig. 6 in the main text. The transition from the He\({}_{2}\) ground state to the \(A\) state is forbidden in the free He\({}_{2}\) system. However, it may become partly allowed due to symmetry breaking when the transition takes place in the He droplet environment. The estimated ion kinetic energy released following the dissociation along the \(A\)-state potential curve is 0.06 eV, which is in decent agreement with the experimental value (0.1 eV).
## III Electron spectra
In addition to Fig. 5 in the main text, further information on the droplet size-dependent ICD electron yield for pairs of \({}^{1}\)S and \({}^{3}\)S He atoms can be obtained from the electron spectra shown in SM Fig. 3. Panels a) and b) show electron spectra of He nanodroplets of variable droplet sizes (\(R=4.5\) nm up to 75 nm) measured at \(h\nu=23.8\) eV in coincidence with He\({}_{2}^{+}\) and He\({}^{+}\), respectively. Clearly, ICD starts to occur for droplets with radius \(R\gtrsim 20\) nm and becomes increasingly pronounced as the He droplet size increases. Electron spectra measured in coincidence with He\({}^{+}\) exhibit only one feature at 16.6 eV, which is assigned to ICD out of the \({}^{1}\)S state, whereas the electron spectra measured in coincidence with He\({}_{2}^{+}\) show two features: one \({}^{1}\)S ICD feature at 16.6 eV and one at 15.0 eV attributed to ICD out of the \({}^{3}\)S state. The droplet-size dependence of the integrated \({}^{1}\)S and \({}^{3}\)S ICD electron yields is shown in SM Fig. 3 c) and d). Surprisingly, the \({}^{3}\)S ICD feature in the He\({}_{2}^{+}\) coincidence spectra appears more pronounced at smaller droplet sizes, whereas at larger droplet sizes \(R\gtrsim 40\) nm the \({}^{1}\)S ICD feature again dominates.
Figure 2: Ion kinetic energy distributions of He\({}^{+}\) [panel a)] and He\({}_{2}^{+}\) [panel b)] measured for different droplet conditions and different photon energies \(h\nu\) at and above \(E_{i}=24.6\) eV. All the ion spectra in a) and b) are background subtracted and normalized to the EUV photon flux.
## IV Relative ICD intensity
The relative experimental ICD intensities \(I_{ICD}[\mathrm{He}^{+}]\) and \(I_{ICD}[\mathrm{He}^{+}_{2}]\) plotted in Fig. 7 b) in the main text are obtained from the calculated ratios of the integrated ICD electron yields \(S\), measured in coincidence with He\({}^{+}\) and He\({}^{+}_{2}\), respectively, to the yield of the photoline. Both of these ratios contain correction factors to account for the second-order radiation (\(2h\nu\)) present in the photon beam,
\[I_{ICD}[\mathrm{He}^{+}]=\frac{S_{ICD}[\mathrm{He}^{+}]}{S_{pl}[\mathrm{He}^{+}]}-\epsilon_{1}\frac{S_{pl(2h\nu)}[\mathrm{He}^{+}]}{S_{pl}[\mathrm{He}^{+}]}, \tag{1}\]
\[I_{ICD}[\mathrm{He}^{+}_{2}]=\frac{S_{ICD}[\mathrm{He}^{+}_{2}]}{S_{pl}[\mathrm{He}^{+}_{2}]}-\epsilon_{2}\frac{S_{pl(2h\nu)}[\mathrm{He}^{+}_{2}]}{S_{pl}[\mathrm{He}^{+}_{2}]}. \tag{2}\]
Here, \(\epsilon_{1}=S_{ICD(h\nu^{\prime})}[\mathrm{He}^{+}]/S_{pl(h\nu^{\prime})}[\mathrm{He}^{+}]\) and \(\epsilon_{2}=S_{ICD(h\nu^{\prime})}[\mathrm{He}^{+}_{2}]/S_{pl(h\nu^{\prime})}[\mathrm{He}^{+}_{2}]\) are taken from [4] and defined as the efficiencies of \(ICD[\mathrm{He}^{+}]\) and \(ICD[\mathrm{He}^{+}_{2}]\), respectively, at the higher photon energy \(h\nu^{\prime}=2h\nu\geq 44.4\) eV.
The relative experimental \({}^{1}\)S and \({}^{3}\)S ICD yields plotted in Fig. 7 c) as well as those plotted in Fig. 4 c) and d) are evaluated by
\[I_{{}^{1}S-ICD}=\alpha_{{}^{1}S}\times I_{{}^{1}S-ICD}[\mathrm{He}^{+}]+\beta_{{}^{1}S}\times I_{{}^{1}S-ICD}[\mathrm{He}^{+}_{2}], \tag{3}\]
\[I_{{}^{3}S-ICD}=\alpha_{{}^{3}S}\times I_{{}^{3}S-ICD}[\mathrm{He}^{+}]+\beta_{{}^{3}S}\times I_{{}^{3}S-ICD}[\mathrm{He}^{+}_{2}]. \tag{4}\]
Here, \(I_{{}^{1}S-ICD}[\mathrm{He}^{+}]\) and \(I_{{}^{3}S-ICD}[\mathrm{He}^{+}]\) are obtained separately but in a similar way as in equation (1). \(I_{{}^{1}S-ICD}[\mathrm{He}^{+}_{2}]\) and \(I_{{}^{3}S-ICD}[\mathrm{He}^{+}_{2}]\) are obtained separately but in a similar way as in equation (2). \(\alpha_{{}^{1}S}\), \(\beta_{{}^{1}S}\), \(\alpha_{{}^{3}S}\) and \(\beta_{{}^{3}S}\) are weighting factors

\[\alpha_{{}^{1}S}=\frac{A_{{}^{1}S-ICD}[\text{He}^{+}]}{A_{{}^{1}S-ICD}[\text{He}^{+}]+A_{{}^{1}S-ICD}[\text{He}^{+}_{2}]},\]

\[\beta_{{}^{1}S}=\frac{A_{{}^{1}S-ICD}[\text{He}^{+}_{2}]}{A_{{}^{1}S-ICD}[\text{He}^{+}_{2}]+A_{{}^{1}S-ICD}[\text{He}^{+}]},\]

\[\alpha_{{}^{3}S}=\frac{A_{{}^{3}S-ICD}[\text{He}^{+}]}{A_{{}^{3}S-ICD}[\text{He}^{+}]+A_{{}^{3}S-ICD}[\text{He}^{+}_{2}]},\]

\[\beta_{{}^{3}S}=\frac{A_{{}^{3}S-ICD}[\text{He}^{+}_{2}]}{A_{{}^{3}S-ICD}[\text{He}^{+}_{2}]+A_{{}^{3}S-ICD}[\text{He}^{+}]}.\]

Here \(A_{{}^{1}S-ICD}[\text{He}^{+}]\) and \(A_{{}^{1}S-ICD}[\text{He}^{+}_{2}]\) denote the peak heights of the \({}^{1}\)S ICD signals measured in coincidence with He\({}^{+}\) and He\({}^{+}_{2}\), respectively. \(A_{{}^{3}S-ICD}[\text{He}^{+}]\) and \(A_{{}^{3}S-ICD}[\text{He}^{+}_{2}]\) denote the peak heights of the \({}^{3}\)S ICD signals measured in coincidence with He\({}^{+}\) and He\({}^{+}_{2}\), respectively.

Figure 3: a) & b) Electron spectra of He nanodroplets measured in coincidence with He\({}_{2}^{+}\) and He\({}^{+}\), respectively, for different He droplet sizes and at \(h\nu=23.8\) eV. c) & d) Integrated \({}^{1}\)S and \({}^{3}\)S ICD signals measured in coincidence with He\({}_{2}^{+}\) and He\({}^{+}\), respectively, as a function of droplet radius.
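Note that, directly from these definitions, the weighting factors of each spin channel sum to unity,

\[\alpha_{{}^{1}S}+\beta_{{}^{1}S}=1,\qquad\alpha_{{}^{3}S}+\beta_{{}^{3}S}=1,\]

so equations (3) and (4) are weighted averages of the coincidence-resolved ICD yields.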
# Coherence conditions for the characters of trivial source modules and strong isotypies

John Revere McHugh (arXiv:2310.10880)
###### Abstract
We introduce a new type of equivalence between blocks of finite group algebras called a _strong isotypy_. A strong isotypy is equivalent to a \(p\)-permutation equivalence and restricts to an isotypy in the sense of Broue. To prove these results we first establish that the group \(T_{\mathcal{O}}(B)\) of trivial source \(B\)-modules, where \(B\) is a block of a finite group algebra, is isomorphic to groups of "coherent character tuples." This provides a refinement of work by Boltje and Carman which characterizes the ring \(T_{\mathcal{O}}(G)\) of trivial source \(\mathcal{O}G\)-modules, where \(G\) is a finite group, in terms of coherent character tuples.
## 1 Introduction
Let \(G\) be a finite group, let \(p\) be a prime number, and let \((\mathbb{K},\mathcal{O},F)\) be a \(p\)-modular system large enough for \(G\). Let \(A\) be a block of \(\mathcal{O}G\) with defect group \(D\) and let \(B\) denote the Brauer correspondent of \(A\) (which, we recall, is a block of \(\mathcal{O}N_{G}(D)\)). In [4] Broue conjectured that if \(D\) is abelian then the bounded derived categories \(D^{b}({}_{A}\mathbf{mod})\) and \(D^{b}({}_{B}\mathbf{mod})\) are equivalent. Later, in [8], Rickard refined Broue's conjecture by postulating the existence of a special type of derived equivalence between \(A\) and \(B\): what is known now as a splendid Rickard equivalence. Turning to Grothendieck groups, a splendid Rickard equivalence induces a \(p\)-permutation equivalence between \(A\) and \(B\) as defined by Boltje and Perepelitsky in [2]. The existence of a splendid Rickard equivalence or of a \(p\)-permutation equivalence in the situation of the abelian defect group conjecture would provide an explanation for the phenomenon of an isotypy between \(A\) and \(B\): a block equivalence defined at the "character level" which has been observed in all examples computed to date.
The main aim of the present article is to provide a closer examination of the construction of an isotypy from a \(p\)-permutation equivalence (or, less generally, from a splendid Rickard equivalence). In this pursuit we are led to a new type of block equivalence that we call a _strong isotypy_ -- see Definition 8.2. The terminology has been chosen because a strong isotypy can be viewed as an extension of Broue's original conception of isotypy, which appeared in [4]. Let \(G\) and \(H\) be finite groups, \(A\) a block of \(\mathcal{O}G\) and \(B\) a block of \(\mathcal{O}H\). Recall that an isotypy between \(A\) and \(B\) is defined relative to a maximal \(A\)-Brauer pair \((D,e)\), a maximal \(B\)-Brauer pair \((E,f)\), and an isomorphism of fusion systems \(\phi:\mathcal{F}_{(E,f)}(B)\stackrel{{\sim}}{{\to}}\mathcal{F}_{( D,e)}(A)\), and consists of a family of "compatible" perfect isometries \(\mu_{Q}\), \(Q\leq E\), between the centralizers of the Brauer pairs corresponding under \(\phi\) (see [2, Definition 15.3]). The requirement that the perfect isometries \(\mu_{Q}\) be "compatible" with one another has two equivalent formulations: one in terms of commutative diagrams involving the generalized decomposition maps and another in terms of character value relations [4, Proposition 4.7]. A strong isotypy is defined relative to the same data as an isotypy and again consists of a family of "compatible" virtual characters \(\chi_{Q}\) indexed by the subgroups \(Q\leq E\); but now the characters \(\chi_{Q}\) are defined on the _normalizers_ of the relevant Brauer pairs rather than on the centralizers. The "compatibility" requirement placed on the characters \(\chi_{Q}\) in the definition of strong isotypy is expressed in terms of character value relations. A strong isotypy induces an isotypy a la Broue simply by restricting to centralizers -- see Theorem 9.2. We also find commutative diagrams associated to a strong isotypy that extend the diagrams appearing in the compatibility condition of the definition of "isotypy." These diagrams again involve perfect isometries and generalized decomposition maps. Roughly speaking, however, our diagrams exist at the "normalizer level" rather than the "centralizer level," as in the case of an isotypy. See Theorem 10.2.
We are reassured that our notion of strong isotypy is the correct extension of an isotypy in the sense of Broue by the results of Section 8. In this section we show that a \(p\)-permutation equivalence between blocks \(A\) and \(B\) induces a strong isotypy between \(A\) and \(B\) and conversely that every strong isotypy "comes from" a \(p\)-permutation equivalence. In fact, there is a bijection between the set of \(p\)-permutation equivalences between \(A\) and \(B\) and the set of strong isotypies between \(A\) and \(B\) -- this is stated precisely and shown in Theorem 8.8. If an isotypy is the shadow of a \(p\)-permutation equivalence, as is expected in the situation of the abelian defect group conjecture, then it is really only a partial shadow: a strong isotypy fills in completely the missing compatibility criteria for the characters associated to a \(p\)-permutation equivalence.
In [1, Theorem A] Boltje and Carman introduced a ring of so-called "coherent character tuples" that is isomorphic to the trivial source ring \(T_{\mathcal{O}}(G)\) of a finite group \(G\) (we recall their result in Theorem 6.1). In this situation a coherent character tuple is a tuple of virtual characters \((\chi_{P})_{P\in S_{p}(G)}\) indexed by the \(p\)-subgroups of \(G\) such that \(\chi_{P}\in R_{\mathbb{K}}(N_{G}(P)/P)\) for each \(P\in S_{p}(G)\). In Section 6 we provide refinements of Boltje and Carman's work that describe the trivial source group \(T_{\mathcal{O}}(B)\), where \(B\) is a block of \(\mathcal{O}G\), in terms of coherent character tuples. The main difference here is that the coherent character tuples that describe \(T_{\mathcal{O}}(B)\) can be indexed by the set of all \(B\)-Brauer pairs (see Theorem 6.5) or by the subgroups of a fixed defect group of \(B\) (see Theorem 6.8). These theorems provide the avenue to the definition of strong isotypy and to the results described in the previous paragraphs.
Throughout this note \(p\) will denote a prime number and any \(p\)-modular system \((\mathbb{K},\mathcal{O},F)\) will be assumed "large enough" for the finite groups under consideration. We write \(\overline{\cdot}:\mathcal{O}\twoheadrightarrow F\) for the canonical surjection.
If \(G\) is a finite group then \(S_{p}(G)\) denotes the set of \(p\)-subgroups of \(G\). We write \(G_{p^{\prime}}\) for the set of \(g\in G\) of order not divisible by \(p\). The elements of \(G_{p^{\prime}}\) are also called \(p^{\prime}\)_-elements_ of \(G\). We write \(g\sim_{G}h\) if \(g\) and \(h\) are \(G\)-conjugate elements of \(G\).
If \(g,h\in G\) then we write \(c_{g}(h)={}^{g}h=ghg^{-1}\).
Recall that if \(g\in G\) then there exists a \(p\)-element \(g_{p}\in G\) and a \(p^{\prime}\)-element \(g_{p^{\prime}}\in G\) for which \(g=g_{p}g_{p^{\prime}}=g_{p^{\prime}}g_{p}\). Moreover, \(g_{p},g_{p^{\prime}}\in\langle g\rangle\) and the pair \((g_{p},g_{p^{\prime}})\) is unique. The element \(g_{p}\) is called the \(p\)-_part of_\(g\) and the element \(g_{p^{\prime}}\) is the \(p^{\prime}\)-_part of_\(g\).
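For instance, take \(p=2\) and \(g\in G\) of order \(12\). Then

\[g_{p}=g^{9}\qquad\text{and}\qquad g_{p^{\prime}}=g^{4},\]

since \(g^{9}\) has order \(4\), \(g^{4}\) has order \(3\), these powers commute, and \(g^{9}g^{4}=g^{13}=g\).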
If \(G\) and \(H\) are finite groups then \(p_{1}:G\times H\twoheadrightarrow G\) and \(p_{2}:G\times H\twoheadrightarrow H\) denote the canonical projections. If \(X\leq G\times H\) then we set
\[k_{1}(X):=\left\{g\in G|(g,1)\in X\right\}\qquad\text{and}\qquad k_{2}(X):= \left\{h\in H|(1,h)\in X\right\}.\]
One has \(k_{i}(X)\trianglelefteq p_{i}(X)\) for \(i=1,2\) and the projections \(p_{i}\) induce isomorphisms \(X/(k_{1}(X)\times k_{2}(X))\stackrel{{\sim}}{{\to}}p_{i}(X)/k_{ i}(X)\).
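To verify the last isomorphism for \(i=1\), note that the composite \(X\to p_{1}(X)/k_{1}(X)\), \((g,h)\mapsto gk_{1}(X)\), is surjective with kernel

\[\{(g,h)\in X\,|\,g\in k_{1}(X)\}=k_{1}(X)\times k_{2}(X),\]

since \((g,h)\in X\) with \((g,1)\in X\) forces \((1,h)=(g,1)^{-1}(g,h)\in X\). The case \(i=2\) is symmetric.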
If \(Q\leq H\), \(P\leq G\), and \(\phi:Q\stackrel{{\sim}}{{\to}}P\) is an isomorphism, set
\[\Delta(P,\phi,Q):=\left\{(\phi(y),y)|y\in Q\right\}\leq G\times H.\]
Subgroups of \(G\times H\) of the form \(\Delta(P,\phi,Q)\) are called _twisted diagonal_ subgroups. Write \(S_{p}^{\Delta}(G\times H)\) for the collection of twisted diagonal \(p\)-subgroups of \(G\times H\). Note that \(S_{p}^{\Delta}(G\times H)\) is closed under \(G\times H\)-conjugation and closed under taking subgroups. In fact, if \(\Delta(P,\phi,Q)\) is a twisted diagonal subgroup of \(G\times H\) then
\[{}^{(g,h)}\Delta(P,\phi,Q)=\Delta({}^{g}P,c_{g}\phi c_{h}^{-1},{}^{h}Q)\]
for any \((g,h)\in G\times H\).
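Indeed, for every \(y\in Q\) one computes

\[{}^{(g,h)}(\phi(y),y)=({}^{g}\phi(y),{}^{h}y)=\big((c_{g}\phi c_{h}^{-1})({}^{h}y),{}^{h}y\big),\]

and \({}^{h}y\) runs over \({}^{h}Q\) as \(y\) runs over \(Q\).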
All modules are assumed finitely generated unless stated otherwise. If \(R\) is a commutative ring and \(G\) and \(H\) are finite groups we will always assume that the induced \(R\)-module structures on an \((RG,RH)\)-bimodule coincide. In other words, if \(M\) is an \((RG,RH)\)-bimodule we will assume that \(rm=mr\) for all \(r\in R\) and \(m\in M\). Any \((RG,RH)\)-bimodule \(M\) may be viewed as a left \(R[G\times H]\)-module (and vice versa) by defining \((g,h)m=gmh^{-1}\) for all \(g\in G\), \(h\in H\), and \(m\in M\). One obtains an isomorphism of categories \({}_{RG}\mathbf{mod}_{RH}\cong{}_{R[G\times H]}\mathbf{mod}\) in this way.
If \(R\) is a commutative ring and \(G\) is a finite group then \((\cdot)^{*}:RG\to RG\) will denote the antipode of \(RG\), defined by \(g^{*}=g^{-1}\) for all \(g\in G\). The antipode is an \(R\)-module isomorphism and satisfies \((\alpha\beta)^{*}=\beta^{*}\alpha^{*}\) for all \(\alpha,\beta\in RG\).
A construction that will be used several times in Section 6 is the following: let \(G\) act on a nonempty set \(X\). For each \(x\in X\), suppose that \(A_{x}\) is an abelian group. Suppose also that for each \(g\in G\) and \(x\in X\) we have a group isomorphism \(\varphi_{g,x}:A_{x}\stackrel{{\sim}}{{\rightarrow}}A_{{}^{g}x}\) such that
1. \(\varphi_{1,x}=\operatorname{id}_{A_{x}}\) for all \(x\in X\); and
2. \(\varphi_{h,{}^{g}x}\circ\varphi_{g,x}=\varphi_{hg,x}\) for all \(g,h\in G\) and all \(x\in X\).
The product \(A=\prod_{x\in X}A_{x}\) can then be given a \(\mathbb{Z}G\)-module structure by defining
\[{}^{g}(a_{x}):=(\varphi_{g,{}^{g^{-1}}x}(a_{{}^{g^{-1}}x}))_{x\in X}\]
for all \(g\in G\) and \((a_{x})_{x\in X}\in A\). In other words, if \(g\in G\) and \((a_{x})\in A\) then the \(x\)-entry of \({}^{g}(a_{x})\) is \(\varphi_{g,{}^{g^{-1}}x}(a_{{}^{g^{-1}}x})\). Note that the subgroup of \(G\)-fixed points \(A^{G}\) consists of all tuples \((a_{x})\in A\) such that \(\varphi_{g,x}(a_{x})=a_{{}^{g}x}\) for all \(x\in X\) and \(g\in G\).
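Conditions (i) and (ii) ensure that this is indeed an action: for \(g,h\in G\) the \(x\)-entry of \({}^{h}({}^{g}(a_{x}))\) is

\[\varphi_{h,{}^{h^{-1}}x}\big(\varphi_{g,{}^{g^{-1}h^{-1}}x}(a_{{}^{g^{-1}h^{-1}}x})\big)=\varphi_{hg,{}^{(hg)^{-1}}x}(a_{{}^{(hg)^{-1}}x}),\]

which is the \(x\)-entry of \({}^{hg}(a_{x})\); the equality uses (ii) applied at the point \({}^{(hg)^{-1}}x\).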
## 2 Brauer pairs and fusion systems
Throughout this section \(p\) denotes a prime number and \((\mathbb{K},\mathcal{O},F)\) is a \(p\)-modular system large enough for the finite groups under consideration. Many of the results of this section hold over both \(\mathcal{O}\) and \(F\), so for brevity we let \(R\in\{\mathcal{O},F\}\) and work over \(R\). If \(a\in\mathcal{O}\) we write \(\overline{a}\) for the image of \(a\) under the canonical projection \(\mathcal{O}\twoheadrightarrow F\). If \(G\) is a finite group then the canonical projection \(\mathcal{O}\twoheadrightarrow F\) extends to an \(\mathcal{O}\)-algebra homomorphism \(\mathcal{O}G\twoheadrightarrow FG\) and we write \(\overline{\alpha}\) for the image of \(\alpha\in\mathcal{O}G\) under this map. If \(\alpha\in FG\) we set \(\overline{\alpha}=\alpha\).
Let \(G\) be a finite group. We write \(\operatorname{Bl}(RG)\) for the set of block algebras of \(RG\) and \(\operatorname{bli}(RG)\) for the set of block idempotents of \(RG\). If \(B\in\operatorname{Bl}(RG)\) then
\(e_{B}\in\operatorname{bli}(RG)\) denotes the identity of \(B\). Recall that the coefficient reduction map \(\overline{\cdot}:\mathcal{O}G\twoheadrightarrow FG\) induces a bijection between the blocks of \(\mathcal{O}G\) and the blocks of \(FG\).
If \(P\) is a \(p\)-subgroup of \(G\) then there is a surjective homomorphism of \(RN_{G}(P)\)-algebras
\[\operatorname{br}_{P}^{G}:(RG)^{P}\twoheadrightarrow FC_{G}(P),\qquad\sum_{g \in G}a_{g}g\mapsto\sum_{g\in C_{G}(P)}\overline{a_{g}}g\]
called the _Brauer homomorphism_. When the overgroup \(G\) is contextually clear we may write \(\operatorname{br}_{P}\) in place of \(\operatorname{br}_{P}^{G}\). Note that if \(\alpha\in(RG)^{P}\) then \(\alpha^{*}\in(RG)^{P}\) and \(\operatorname{br}_{P}(\alpha^{*})=\operatorname{br}_{P}(\alpha)^{*}\). Note also that if \(H\leq G\) and \(P\) is a \(p\)-subgroup of \(H\) then \({}^{g}\operatorname{br}_{P}^{H}(\alpha)=\operatorname{br}_{{}^{g}P}^{{}^{g}H}({}^{g}\alpha)\) for any \(g\in G\) and \(\alpha\in(RH)^{P}\).
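As a simple illustration, let \(g\in G\) and let \(\sigma\) denote the sum of the elements in the \(P\)-conjugacy orbit of \(g\). Then \(\sigma\in(RG)^{P}\) and

\[\operatorname{br}_{P}(\sigma)=\begin{cases}g&\text{if }g\in C_{G}(P),\\ 0&\text{otherwise},\end{cases}\]

because \({}^{u}g\in C_{G}(P)\) for some \(u\in P\) if and only if \(g\in C_{G}(P)\), in which case the orbit is the singleton \(\{g\}\).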
### Brauer pairs
An \(RG\)-_Brauer pair_ is an ordered pair \((P,e)\) where \(P\) is a \(p\)-subgroup of \(G\) and \(e\) is a block idempotent of \(RC_{G}(P)\). The set of \(RG\)-Brauer pairs is denoted \(\mathcal{BP}_{R}(G)\). The group \(G\) acts by conjugation on \(\mathcal{BP}_{R}(G)\) and the stabilizer of \((P,e)\in\mathcal{BP}_{R}(G)\) is denoted \(N_{G}(P,e)\). Note that for any \(RG\)-Brauer pair \((P,e)\) one has \(PC_{G}(P)\leq N_{G}(P,e)\leq N_{G}(P)\). If \((Q,f),(P,e)\in\mathcal{BP}_{R}(G)\) write \((Q,f)\trianglelefteq(P,e)\) if \(Q\leq P\leq N_{G}(Q,f)\) and \(\operatorname{br}_{P}(f)\overline{e}=\overline{e}\). The transitive closure of this relation makes \(\mathcal{BP}_{R}(G)\) into a partially ordered set, and the action of \(G\) by conjugation on \(\mathcal{BP}_{R}(G)\) respects this order -- in other words, \(\mathcal{BP}_{R}(G)\) is a \(G\)-poset.
Both of the maps \(\mathcal{BP}_{R}(G)\to\mathcal{BP}_{R}(G)\), \((P,e)\mapsto(P,e^{*})\), and \(\mathcal{BP}_{\mathcal{O}}(G)\to\mathcal{BP}_{F}(G)\), \((P,e)\mapsto(P,\overline{e})\) are \(G\)-poset isomorphisms.
An important fact about \(RG\)-Brauer pairs which will be used repeatedly in the sequel is that if \((P,e)\in\mathcal{BP}_{R}(G)\) and \(Q\leq P\) then there exists a unique block \(f\in\operatorname{bli}(RC_{G}(Q))\) such that \((Q,f)\leq(P,e)\). See [7, Theorem 2.10(a)] for a proof of this fact in a more general setting.
Let \(H\leq G\). If \((P,e)\in\mathcal{BP}_{R}(H)\) and \(g\in G\) then we set \({}^{g}(P,e):=({}^{g}P,{}^{g}e)\in\mathcal{BP}_{R}({}^{g}H)\). Notice that \({}^{1}(P,e)=(P,e)\) and \({}^{g}({}^{g^{\prime}}(P,e))={}^{gg^{\prime}}(P,e)\) for any \(g,g^{\prime}\in G\) and \((P,e)\in\mathcal{BP}_{R}(H)\). If \((Q,f),(P,e)\in\mathcal{BP}_{R}(H)\) are such that \((Q,f)\leq(P,e)\) then \({}^{g}(Q,f)\leq{}^{g}(P,e)\) for any \(g\in G\). It follows that for each \(g\in G\) we have an isomorphism of posets
\[{}^{g}(\cdot):\mathcal{BP}_{R}(H) \stackrel{{\sim}}{{\to}}\mathcal{BP}_{R}({}^{g}H)\] \[(P,e) \mapsto{}^{g}(P,e).\]
Let \(B\in\operatorname{Bl}(RG)\). An \(RG\)-Brauer pair \((P,e)\)_belongs to_\(B\) if \(\operatorname{br}_{P}(e_{B})\overline{e}=\overline{e}\). If \((P,e)\) belongs to \(B\) we also say that \((P,e)\) is a \(B\)-_Brauer pair_. In the sequel
the set of \(RG\)-Brauer pairs that belong to \(B\) will be denoted by \(\mathcal{BP}_{R}(G,B)\), \(\mathcal{BP}_{R}(G,e_{B})\), or simply by \(\mathcal{BP}_{R}(B)\). Recall that \(\mathcal{BP}_{R}(B)\) is a \(G\)-subposet of \(\mathcal{BP}_{R}(G)\) and that if \((Q,f),(P,e)\in\mathcal{BP}_{R}(G)\) are such that \((Q,f)\leq(P,e)\) then \((Q,f)\) is a \(B\)-Brauer pair if and only if \((P,e)\) is a \(B\)-Brauer pair. If \((P,e)\in\mathcal{BP}_{R}(G,B)\) then \((P,e^{*})\in\mathcal{BP}_{R}(G,B^{*})\). The \(G\)-poset isomorphism \(\mathcal{BP}_{\mathcal{O}}(G)\stackrel{{\sim}}{{\to}}\mathcal{BP }_{F}(G)\) described above restricts to a \(G\)-poset isomorphism \(\mathcal{BP}_{\mathcal{O}}(B)\stackrel{{\sim}}{{\to}}\mathcal{BP }_{F}(\overline{B})\) for any block \(B\) of \(\mathcal{OG}\). If \(H\leq G\), \(B\in\operatorname{Bl}(RH)\), and \(g\in G\) then the map \({}^{g}(\cdot):\mathcal{BP}_{R}(H)\stackrel{{\sim}}{{\to}} \mathcal{BP}_{R}({}^{g}H)\) defined above restricts to a poset isomorphism \(\mathcal{BP}_{R}(H,B)\stackrel{{\sim}}{{\to}}\mathcal{BP}_{R}({}^ {g}H,{}^{g}B)\).
A _Brauer element of_\(RG\) is an ordered pair \((u,e)\) where \(u\) is a \(p\)-element of \(G\) and \(e\) is a block idempotent of \(RC_{G}(u)\). Write \(\mathcal{BE}_{R}(G)\) for the set of Brauer elements of \(RG\). Notice that if \((u,e)\in\mathcal{BE}_{R}(G)\) then \((\langle u\rangle,e)\in\mathcal{BP}_{R}(G)\). The group \(G\) acts on \(\mathcal{BE}_{R}(G)\) by conjugation and the map \(\mathcal{BE}_{R}(G)\to\mathcal{BP}_{R}(G)\) which sends \((u,e)\) to \((\langle u\rangle,e)\) is \(G\)-equivariant.
Let \(B\in\operatorname{Bl}(RG)\) and let \((u,e)\in\mathcal{BE}_{R}(G)\). Say \((u,e)\)_belongs to_\(B\) or is a \(B\)-_Brauer element_ if \(\operatorname{br}_{\langle u\rangle}(e_{B})\overline{e}=\overline{e}\). In what follows the set of Brauer elements of \(RG\) that belong to \(B\) will be denoted by \(\mathcal{BE}_{R}(G,B)\), by \(\mathcal{BE}_{R}(G,e_{B})\), or by \(\mathcal{BE}_{R}(B)\). Notice that if \((u,e)\) is a Brauer element of \(RG\) then \((u,e)\) belongs to \(B\) if and only if \((\langle u\rangle,e)\) belongs to \(B\). The set of \(B\)-Brauer elements \(\mathcal{BE}_{R}(B)\) is stable under conjugation by \(G\). Note also that the subsets \(\mathcal{BE}_{R}(B)\), where \(B\) runs through the blocks of \(RG\), form a partition of \(\mathcal{BE}_{R}(G)\).
**Lemma 2.1**.: _Suppose that \(P\) is a normal \(p\)-subgroup of \(G\). Then every central idempotent of \(RG\) belongs to \(RC_{G}(P)\). More generally, if \(P,Q\in S_{p}(G)\) and \(P\trianglelefteq G\) then every central idempotent of \(RC_{G}(Q)\) belongs to \(RC_{G}(PQ)\)._
Let \(N\) be a normal subgroup of \(G\). Then \(G\) permutes the blocks of \(RN\) by conjugation. If \(c\in\operatorname{bli}(RN)\) then \(\operatorname{tr}^{G}_{\operatorname{Stab}_{G}(c)}(c)\) is a (nonzero) central idempotent of \(RG\) contained in \(RN\). Recall that a block idempotent \(b\) of \(RG\) is said to _cover_\(c\) if \(b\operatorname{tr}^{G}_{\operatorname{Stab}_{G}(c)}(c)\neq 0\), or equivalently if \(bc\neq 0\). We refer to [7, IV.6] for more information about block covering. If \(b\in\operatorname{bli}(RG)\) then there exists a unique \(G\)-orbit of blocks of \(RN\) that are covered by \(b\), and if \(c\in\operatorname{bli}(RN)\) then there exists a block \(b\in\operatorname{bli}(RG)\) covering \(c\). Thus we have a surjective map
\[\operatorname{bli}(RG) \twoheadrightarrow G\backslash\operatorname{bli}(RN)\] \[b \mapsto\left\{c\in\operatorname{bli}(RN)|bc\neq 0\right\}.\]
Note that if the map above is also injective, then
\[\operatorname{bli}(RG)=\left\{\operatorname{tr}^{G}_{\operatorname{Stab}_{G}(c )}(c)|c\in\operatorname{bli}(RN)\right\}.\]
**Lemma 2.2**.: _Assume that \(P,Q\in S_{p}(G)\) and \(P\trianglelefteq G\). Then_
\[\operatorname{bli}(RC_{G}(Q))=\left\{\operatorname{tr}_{\operatorname{Stab}_{C_{G }(Q)}(e)}^{C_{G}(Q)}(e)|e\in\operatorname{bli}(RC_{G}(PQ))\right\}.\]
_In particular, if \(P\) is a normal \(p\)-subgroup of \(G\) then_
\[\operatorname{bli}(RG)=\left\{\operatorname{tr}_{N_{G}(P,e)}^{G}(e)|e\in \operatorname{bli}(RC_{G}(P))\right\}.\]
Proof.: We have \(C_{G}(PQ)=C_{G}(P)\cap C_{G}(Q)\trianglelefteq C_{G}(Q)\) since \(P\trianglelefteq G\). Now if \(b\in\operatorname{bli}(RC_{G}(Q))\) then \(b\) is a central idempotent of \(RC_{G}(PQ)\) by Lemma 2.1. It follows that \(b\) is equal to the sum of the blocks of \(RC_{G}(PQ)\) covered by \(b\). In particular, the map
\[\operatorname{bli}(RC_{G}(Q)) \twoheadrightarrow C_{G}(Q)\backslash\operatorname{bli}(RC_{G}(PQ))\] \[b \mapsto\left\{c\in\operatorname{bli}(RC_{G}(PQ))|bc\neq 0\right\}\]
is injective. The result follows.
**Lemma 2.3**.: _Let \((P,e)\in\mathcal{BP}_{R}(G)\). Then \(e\) is a block idempotent of \(RN_{G}(P,e)\)._
Proof.: Set \(I=N_{G}(P,e)\). Then \(P\trianglelefteq I\) and \(C_{I}(P)=C_{G}(P)\). So by Lemma 2.2, \(\operatorname{tr}_{N_{I}(P,e)}^{I}(e)=e\) is a block idempotent of \(RI\).
**Lemma 2.4**.: _Let \(B\in\operatorname{Bl}(RG)\). If \((P,e)\in\mathcal{BP}_{R}(B)\) write \(I_{(P,e)}=N_{G}(P,e)\). Fix a \(B\)-Brauer pair \((P,e)\in\mathcal{BP}_{R}(B)\) and let \((Q,\epsilon)\in\mathcal{BP}_{R}(I_{(P,e)},e)\)._
1. _There exists a block_ \(f\in\operatorname{bli}(RC_{G}(PQ))\) _such that_ \(\epsilon f\neq 0\)_. Any two_ \(f,f^{\prime}\in\operatorname{bli}(RC_{G}(PQ))\) _with_ \(\epsilon f\neq 0\neq\epsilon f^{\prime}\) _are_ \(C_{I_{(P,e)}}(Q)\)_-conjugate._
2. _Let_ \(f\in\operatorname{bli}(RC_{G}(PQ))\) _such that_ \(\epsilon f\neq 0\)_. Then_ \((PQ,f)\in\mathcal{BP}_{R}(B)\)_,_ \((P,e)\trianglelefteq(PQ,f)\)_, and_ \[\epsilon=\operatorname{tr}_{\operatorname{Stab}_{C_{I_{(P,e)}}(Q)}(f)}^{C_{I_{ (P,e)}}(Q)}(f).\] (1) _Moreover, we have_ \[\operatorname{Stab}_{C_{I_{(P,e)}}(Q)}(f)=C_{I_{(P,e)}\cap I_{(PQ,f)}}(Q)=C_{ I_{(PQ,f)}}(Q)\cap N_{G}(P).\] (2)
Proof.: Since \(C_{G}(P)\trianglelefteq I_{(P,e)}\) we have \(C_{G}(PQ)\trianglelefteq C_{I_{(P,e)}}(Q)\). Now \(\epsilon\) is a block idempotent of \(RC_{I_{(P,e)}}(Q)\), so there exists a unique \(C_{I_{(P,e)}}(Q)\)-orbit of blocks of \(RC_{G}(PQ)\) covered by \(\epsilon\). Part (a) follows.
Next let \(f\in\operatorname{bli}(RC_{G}(PQ))\) such that \(\epsilon f\neq 0\). Since \(P,Q\in S_{p}(I_{(P,e)})\), \(P\trianglelefteq I_{(P,e)}\), and \(C_{I_{(P,e)}}(PQ)=C_{G}(PQ)\), Lemma 2.2 gives us the equality in
(1). Now certainly \((PQ,f)\in\mathcal{BP}_{R}(G)\). If we show that \((P,e)\trianglelefteq(PQ,f)\) then since \((P,e)\) is a \(B\)-Brauer pair it will follow that \((PQ,f)\) is a \(B\)-Brauer pair as well. Clearly \(P\leq PQ\leq I_{(P,e)}\) so to prove that \((P,e)\trianglelefteq(PQ,f)\) it remains to see that \(\operatorname{br}_{PQ}(e)\overline{f}=\overline{f}\), or equivalently that \(\operatorname{br}_{PQ}(e)\overline{f}\neq 0\). Suppose, by way of contradiction, that \(\operatorname{br}_{PQ}(e)\overline{f}=0\). One may readily compute that \(\operatorname{br}_{PQ}(e)=\operatorname{br}_{Q}^{I_{(P,e)}}(e)\), so then \(\operatorname{br}_{Q}^{I_{(P,e)}}(e)\overline{f}=0\). It follows that \(\operatorname{br}_{Q}^{I_{(P,e)}}(e)\overline{x}\overline{f}=0\) for any \(x\in C_{I_{(P,e)}}(Q)\), hence \(\operatorname{br}_{Q}^{I_{(P,e)}}(e)\overline{\epsilon}=0\). But this contradicts the assumption that \((Q,\epsilon)\in\mathcal{BP}_{R}(I_{(P,e)},e)\). Thus we find that \(\operatorname{br}_{PQ}(e)\overline{f}\neq 0\) and \((P,e)\trianglelefteq(PQ,f)\).
To complete the proof of part (b) it remains to verify the equalities in (2). Let \(g\in C_{I_{(P,e)}}(Q)\) such that \({}^{g}f=f\). Since \(C_{I_{(P,e)}}(Q)\leq N_{G}(PQ)\) it follows that \(g\in I_{(PQ,f)}\), hence \(g\in C_{I_{(P,e)}\cap I_{(PQ,f)}}(Q)\). So \(\operatorname{Stab}_{C_{I_{(P,e)}}(Q)}(f)\leq C_{I_{(P,e)}\cap I_{(PQ,f)}}(Q)\). Because \(I_{(P,e)}\leq N_{G}(P)\) we have that \(C_{I_{(P,e)}\cap I_{(PQ,f)}}(Q)\leq C_{I_{(PQ,f)}}(Q)\cap N_{G}(P)\). Finally, let \(h\in C_{I_{(PQ,f)}}(Q)\cap N_{G}(P)\). Conjugating the containment \((P,e)\trianglelefteq(PQ,f)\) by \(h\) yields \((P,{}^{h}e)\trianglelefteq(PQ,f)\), so \({}^{h}e=e\) by uniqueness. Therefore \(h\in I_{(P,e)}\). Since \(h\) centralizes \(Q\) and fixes \(f\) we have \(h\in\operatorname{Stab}_{C_{I_{(P,e)}}(Q)}(f)\). The element \(h\) was arbitrary, so we conclude that \(C_{I_{(PQ,f)}}(Q)\cap N_{G}(P)\leq\operatorname{Stab}_{C_{I_{(P,e)}}(Q)}(f)\). The proof is complete.
**Lemma 2.5**.: _Let \(B\in\operatorname{Bl}(RG)\) and let \((P,e)\in\mathcal{BP}_{R}(B)\). Write \(I_{(P,e)}=N_{G}(P,e)\) and let \(Q\) be a \(p\)-subgroup of \(I_{(P,e)}\). Set_
\[\mathcal{E}=\{(\epsilon,f)|\epsilon\in\operatorname{bli}(RC_{I_{(P,e)}}(Q)),f\in\operatorname{bli}(RC_{G}(PQ))\] \[\text{such that }(Q,\epsilon)\in\mathcal{BP}_{R}(I_{(P,e)},e) \text{ and }\epsilon f\neq 0\}\]
_and set_
\[\mathcal{F}=\{f\in\operatorname{bli}(RC_{G}(PQ))|(P,e)\trianglelefteq(PQ,f)\}\,.\]
_Then the map \(\mathcal{E}\to\mathcal{F}\), \((\epsilon,f)\mapsto f\), is a well-defined bijection._
Proof.: The map is well-defined and injective by part (b) of Lemma 2.4. Let \(f\in\mathcal{F}\). Then \(\epsilon:=\operatorname{tr}_{\operatorname{Stab}_{C_{I_{(P,e)}}(Q)}(f)}^{C_{I _{(P,e)}}(Q)}(f)\) is a block idempotent of \(RC_{I_{(P,e)}}(Q)\) by Lemma 2.2. Now \((P,e)\trianglelefteq(PQ,f)\) so for any \(x\in C_{I_{(P,e)}}(Q)\) we have \((P,e)\trianglelefteq(PQ,{}^{x}f)\), hence \(\operatorname{br}_{PQ}(e)^{\overline{x}}\overline{f}=\overline{x}\overline{f}\). Since \(\operatorname{br}_{PQ}(e)=\operatorname{br}_{Q}^{I_{(P,e)}}(e)\) it follows that \(\operatorname{br}_{Q}^{I_{(P,e)}}(e)\overline{\epsilon}=\overline{\epsilon}\). In other words, \((Q,\epsilon)\in\mathcal{BP}_{R}(I_{(P,e)},e)\). The pair \((\epsilon,f)\) is an element of \(\mathcal{E}\) that maps to \(f\), so the map \(\mathcal{E}\to\mathcal{F}\) is surjective.
Now let \(G\) and \(H\) be finite groups and assume that the \(p\)-modular system \((\mathbb{K},\mathcal{O},F)\) is large enough for \(G\times H\). It is not hard to see that if \(P\leq G\times H\) then
\[C_{G\times H}(P)=C_{G}(p_{1}(P))\times C_{H}(p_{2}(P))\]
where \(p_{1}:G\times H\twoheadrightarrow G\) and \(p_{2}:G\times H\twoheadrightarrow H\) denote the canonical projections. Thus, after identifying \(RC_{G\times H}(P)\) with \(R[C_{G}(p_{1}(P))]\otimes_{R}R[C_{H}(p_{2}(P))]\), every block idempotent of \(RC_{G\times H}(P)\) is of the form \(e\otimes f\) for uniquely determined block idempotents \(e\in\operatorname{bli}(RC_{G}(p_{1}(P)))\) and \(f\in\operatorname{bli}(RC_{H}(p_{2}(P)))\). In particular, every \(R[G\times H]\)-Brauer pair is of the form \((P,e\otimes f)\) where \(P\in S_{p}(G\times H)\), \(e\in\operatorname{bli}(RC_{G}(p_{1}(P)))\), and \(f\in\operatorname{bli}(RC_{H}(p_{2}(P)))\). Note that if \((P,e\otimes f)\in\mathcal{BP}_{R}(G\times H)\) then \((p_{1}(P),e)\in\mathcal{BP}_{R}(G)\) and \((p_{2}(P),f)\in\mathcal{BP}_{R}(H)\).
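In the same vein, if \(\alpha\in(RG)^{p_{1}(P)}\) and \(\beta\in(RH)^{p_{2}(P)}\) then \(\alpha\otimes\beta\in(R[G\times H])^{P}\) and

\[\operatorname{br}_{P}^{G\times H}(\alpha\otimes\beta)=\operatorname{br}_{p_{1}(P)}^{G}(\alpha)\otimes\operatorname{br}_{p_{2}(P)}^{H}(\beta),\]

since a basis element \((g,h)\) of \(R[G\times H]\) lies in \(C_{G\times H}(P)\) if and only if \(g\in C_{G}(p_{1}(P))\) and \(h\in C_{H}(p_{2}(P))\). This identity is used repeatedly in the proofs below.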
**Lemma 2.6**.: _Let \(G\) and \(H\) be finite groups._
1. _Let_ \((Q,c\otimes d),(P,e\otimes f)\in\mathcal{BP}_{R}(G\times H)\)_. If_ \((Q,c\otimes d)\trianglelefteq(P,e\otimes f)\) _then_ \((p_{1}(Q),c)\trianglelefteq(p_{1}(P),e)\) _in_ \(\mathcal{BP}_{R}(G)\) _and_ \((p_{2}(Q),d)\trianglelefteq(p_{2}(P),f)\) _in_ \(\mathcal{BP}_{R}(H)\)_._
2. _Let_ \(Q,P\in S_{p}(G\times H)\) _and assume_ \(Q\trianglelefteq P\)_. Let_ \((p_{1}(Q),c),(p_{1}(P),e)\in\mathcal{BP}_{R}(G)\) _and_ \((p_{2}(Q),d),(p_{2}(P),f)\in\mathcal{BP}_{R}(H)\) _be such that_ \((p_{1}(Q),c)\leq(p_{1}(P),e)\) _and_ \((p_{2}(Q),d)\leq(p_{2}(P),f)\)_. Then_ \((Q,c\otimes d)\trianglelefteq(P,e\otimes f)\) _in_ \(\mathcal{BP}_{R}(G\times H)\)_._
3. _Let_ \((Q,c\otimes d),(P,e\otimes f)\in\mathcal{BP}_{R}(G\times H)\)_. Then_ \((Q,c\otimes d)\leq(P,e\otimes f)\) _if and only if_ \(Q\leq P\)_,_ \((p_{1}(Q),c)\leq(p_{1}(P),e)\) _in_ \(\mathcal{BP}_{R}(G)\) _and_ \((p_{2}(Q),d)\leq(p_{2}(P),f)\) _in_ \(\mathcal{BP}_{R}(H)\)_._
Proof.: (a) Assume that \((Q,c\otimes d)\trianglelefteq(P,e\otimes f)\) is a normal containment of \(R[G\times H]\)-Brauer pairs. Then by definition \(Q\leq P\leq N_{G\times H}(Q,c\otimes d)\) and \(\operatorname{br}_{P}^{G\times H}(c\otimes d)\cdot\overline{e\otimes f}= \overline{e\otimes f}\). Since \(Q\trianglelefteq P\) we have \(p_{i}(Q)\trianglelefteq p_{i}(P)\) for \(i=1,2\). Let \(x\in p_{1}(P)\). Then there exists an element \(y\in H\) such that \((x,y)\in P\). Since \(P\leq N_{G\times H}(Q,c\otimes d)\) we have \(y\in N_{H}(p_{2}(Q))\). Now \(x\in N_{G}(p_{1}(Q))\) and \(c\in\operatorname{bli}(RC_{G}(p_{1}(Q)))\) so \({}^{x}c\in\operatorname{bli}(RC_{G}(p_{1}(Q)))\). Likewise \({}^{y}d\in\operatorname{bli}(RC_{H}(p_{2}(Q)))\). Since \(c\otimes d={}^{(x,y)}(c\otimes d)={}^{x}c\otimes{}^{y}d\) it follows that \({}^{x}c=c\). Thus we have shown that \(p_{1}(P)\leq N_{G}(p_{1}(Q),c)\). A similar argument shows that \(p_{2}(P)\leq N_{H}(p_{2}(Q),d)\). Since \(\operatorname{br}_{P}^{G\times H}(c\otimes d)=\operatorname{br}_{p_{1}(P)}^{G }(c)\otimes\operatorname{br}_{p_{2}(P)}^{H}(d)\) in \(FC_{G\times H}(P)=(FC_{G}(p_{1}(P)))\otimes_{F}(FC_{H}(p_{2}(P)))\) the equality \(\operatorname{br}_{P}^{G\times H}(c\otimes d)\cdot\overline{e\otimes f}= \overline{e\otimes f}\) gives that
\[\operatorname{br}_{p_{1}(P)}^{G}(c)\overline{e}\otimes\operatorname{br}_{p_{2} (P)}^{H}(d)\overline{f}=\overline{e}\otimes\overline{f}.\]
So \(\operatorname{br}_{p_{1}(P)}^{G}(c)\overline{e}=\overline{e}\) and \(\operatorname{br}_{p_{2}(P)}^{H}(d)\overline{f}=\overline{f}\). Thus we find that \((p_{1}(Q),c)\trianglelefteq(p_{1}(P),e)\) in \(\mathcal{BP}_{R}(G)\) and \((p_{2}(Q),d)\trianglelefteq(p_{2}(P),f)\) in \(\mathcal{BP}_{R}(H)\).
(b) Let \(Q\trianglelefteq P\) be \(p\)-subgroups of \(G\times H\) and let \((p_{1}(Q),c),(p_{1}(P),e)\in\mathcal{BP}_{R}(G)\), \((p_{2}(Q),d),(p_{2}(P),f)\in\mathcal{BP}_{R}(H)\) be such that \((p_{1}(Q),c)\leq(p_{1}(P),e)\) and \((p_{2}(Q),d)\leq(p_{2}(P),f)\). Since \(p_{i}(Q)\trianglelefteq p_{i}(P)\) for \(i=1,2\), [2, Proposition
4.2(b)] implies that \((p_{1}(Q),c)\trianglelefteq(p_{1}(P),e)\) and \((p_{2}(Q),d)\trianglelefteq(p_{2}(P),f)\). In particular, we have \(p_{1}(P)\leq N_{G}(p_{1}(Q),c),p_{2}(P)\leq N_{H}(p_{2}(Q),d),\operatorname{br} ^{G}_{p_{1}(P)}(c)\overline{e}=\overline{e}\), and \(\operatorname{br}^{H}_{p_{2}(P)}(d)\overline{f}=\overline{f}\). Let \((x,y)\in P\). Then \(x\in p_{1}(P)\) so \({}^{x}c=c\) and \(y\in p_{2}(P)\) so \({}^{y}d=d\). It follows that \({}^{(x,y)}(c\otimes d)={}^{x}c\otimes{}^{y}d=c\otimes d\) and hence \(P\leq N_{G\times H}(Q,c\otimes d)\). Now \(\operatorname{br}^{G\times H}_{P}(c\otimes d)=\operatorname{br}^{G}_{p_{1}(P) }(c)\otimes\operatorname{br}^{H}_{p_{2}(P)}(d)\), so
\[\operatorname{br}^{G\times H}_{P}(c\otimes d)\cdot(\overline{e \otimes f}) =(\operatorname{br}^{G}_{p_{1}(P)}(c)\otimes\operatorname{br}^{H}_ {p_{2}(P)}(d))\cdot(\overline{e}\otimes\overline{f})\] \[=\operatorname{br}^{G}_{p_{1}(P)}(c)\overline{e}\otimes \operatorname{br}^{H}_{p_{2}(P)}(d)\overline{f}\] \[=\overline{e}\otimes\overline{f}\] \[=\overline{e\otimes f}.\]
We conclude that \((Q,c\otimes d)\trianglelefteq(P,e\otimes f)\), as desired.
(c) Let \((Q,c\otimes d),(P,e\otimes f)\in\mathcal{BP}_{R}(G\times H)\). If \((Q,c\otimes d)\leq(P,e\otimes f)\) then \(Q\leq P\) and part (a) implies that \((p_{1}(Q),c)\leq(p_{1}(P),e)\) in \(\mathcal{BP}_{R}(G)\) and \((p_{2}(Q),d)\leq(p_{2}(P),f)\) in \(\mathcal{BP}_{R}(H)\). Suppose conversely that \(Q\leq P\), \((p_{1}(Q),c)\leq(p_{1}(P),e)\), and \((p_{2}(Q),d)\leq(p_{2}(P),f)\). Let
\[Q=Q_{0}\trianglelefteq Q_{1}\trianglelefteq\dots\trianglelefteq Q_{n}=P\]
be a subnormal chain of \(p\)-subgroups in \(G\times H\). Then
\[p_{i}(Q)=p_{i}(Q_{0})\trianglelefteq p_{i}(Q_{1})\trianglelefteq\dots \trianglelefteq p_{i}(Q_{n})=p_{i}(P)\]
is a subnormal chain of \(p\)-subgroups in \(G\) (if \(i=1\)) or \(H\) (if \(i=2\)). By [2, Proposition 4.2(a)] there exist Brauer pairs \((p_{1}(Q_{i}),c_{i})\in\mathcal{BP}_{R}(G)\) and \((p_{2}(Q_{i}),d_{i})\in\mathcal{BP}_{R}(H)\), \(i=0,\dots,n\), such that
\[(p_{1}(Q),c)=(p_{1}(Q_{0}),c_{0})\trianglelefteq(p_{1}(Q_{1}),c_{1}) \trianglelefteq\dots\trianglelefteq(p_{1}(Q_{n}),c_{n})=(p_{1}(P),e)\]
in \(\mathcal{BP}_{R}(G)\) and
\[(p_{2}(Q),d)=(p_{2}(Q_{0}),d_{0})\trianglelefteq(p_{2}(Q_{1}),d_{1}) \trianglelefteq\dots\trianglelefteq(p_{2}(Q_{n}),d_{n})=(p_{2}(P),f)\]
in \(\mathcal{BP}_{R}(H)\). Part (b) then implies that \((Q_{i},c_{i}\otimes d_{i})\trianglelefteq(Q_{i+1},c_{i+1}\otimes d_{i+1})\) for each \(i=0,\dots,n-1\). We conclude that \((Q,c\otimes d)\leq(P,e\otimes f)\).
**Lemma 2.7**.: _Let \(G\) and \(H\) be finite groups. The maps \(\pi_{1}:\mathcal{BP}_{R}(G\times H)\to\mathcal{BP}_{R}(G)\) and \(\pi_{2}:\mathcal{BP}_{R}(G\times H)\to\mathcal{BP}_{R}(H)\) defined by \(\pi_{1}(P,e\otimes f)=(p_{1}(P),e)\) and \(\pi_{2}(P,e\otimes f)=(p_{2}(P),f)\) are surjective morphisms of posets. Furthermore, if \((g,h)\in G\times H\) and \((P,e\otimes f)\in\mathcal{BP}_{R}(G\times H)\) then one has_
\[\pi_{1}({}^{(g,h)}(P,e\otimes f))={}^{g}\pi_{1}(P,e\otimes f)\]
_and_
\[\pi_{2}({}^{(g,h)}(P,e\otimes f))={}^{h}\pi_{2}(P,e\otimes f).\]
Proof.: The maps \(\pi_{1}\) and \(\pi_{2}\) are morphisms of posets by part (c) of Lemma 2.6. Let \((P,e)\in\mathcal{BP}_{R}(G)\). If \(f\) is any block idempotent of \(RH\) then \((P\times\left\{1\right\},e\otimes f)\in\mathcal{BP}_{R}(G\times H)\) and \(\pi_{1}(P\times\left\{1\right\},e\otimes f)=(P,e)\). So \(\pi_{1}\) is surjective. In a similar way one can show that \(\pi_{2}\) is surjective. If \((g,h)\in G\times H\) and \((P,e\otimes f)\in\mathcal{BP}_{R}(G\times H)\) then
\[\pi_{1}({}^{(g,h)}(P,e\otimes f)) =\pi_{1}({}^{(g,h)}P,{}^{g}e\otimes{}^{h}f)\] \[=(p_{1}({}^{(g,h)}P),{}^{g}e)\] \[=({}^{g}p_{1}(P),{}^{g}e)\] \[={}^{g}(p_{1}(P),e)\] \[={}^{g}\pi_{1}(P,e\otimes f).\]
A similar computation shows that \(\pi_{2}({}^{(g,h)}(P,e\otimes f))={}^{h}\pi_{2}(P,e\otimes f)\). The proof is complete.
**Lemma 2.8**.: _Let \(A\in\mathrm{Bl}(RG)\), \(B\in\mathrm{Bl}(RH)\), and let \((P,e\otimes f)\in\mathcal{BP}_{R}(G\times H)\). Then \((P,e\otimes f)\in\mathcal{BP}_{R}(A\otimes_{R}B)\) if and only if \((p_{1}(P),e)\in\mathcal{BP}_{R}(A)\) and \((p_{2}(P),f)\in\mathcal{BP}_{R}(B)\). In particular, if \(\pi_{1}\) and \(\pi_{2}\) are as in Lemma 2.7 then_
\[\pi_{1}(\mathcal{BP}_{R}(A\otimes_{R}B))=\mathcal{BP}_{R}(A)\qquad\text{and} \qquad\pi_{2}(\mathcal{BP}_{R}(A\otimes_{R}B))=\mathcal{BP}_{R}(B).\]
Proof.: Let \(e_{A}\) and \(f_{B}\) denote the identities of \(A\) and \(B\), respectively. Then \(e_{A}\otimes f_{B}\) is the identity of \(A\otimes_{R}B\). Note that
\[\mathrm{br}_{P}^{G\times H}(e_{A}\otimes f_{B})=\mathrm{br}_{p_{1}(P)}^{G}(e_ {A})\otimes\mathrm{br}_{p_{2}(P)}^{H}(f_{B}).\]
Now by definition \((P,e\otimes f)\in\mathcal{BP}_{R}(A\otimes_{R}B)\) if and only if \(\mathrm{br}_{P}^{G\times H}(e_{A}\otimes f_{B})\cdot(\overline{e\otimes f})= \overline{e\otimes f}\). This equality holds if and only if \(\mathrm{br}_{p_{1}(P)}^{G}(e_{A})\overline{e}\otimes\mathrm{br}_{p_{2}(P)}^{H }(f_{B})\overline{f}=\overline{e}\otimes\overline{f}\), which in turn holds if and only if \(\mathrm{br}_{p_{1}(P)}^{G}(e_{A})\overline{e}=\overline{e}\) and \(\mathrm{br}_{p_{2}(P)}^{H}(f_{B})\overline{f}=\overline{f}\). Thus we see that \((P,e\otimes f)\in\mathcal{BP}_{R}(A\otimes_{R}B)\) if and only if \((p_{1}(P),e)\in\mathcal{BP}_{R}(A)\) and \((p_{2}(P),f)\in\mathcal{BP}_{R}(B)\).
Now let \(\pi_{1}\) and \(\pi_{2}\) be as in Lemma 2.7. By what we have just shown, \(\pi_{1}(\mathcal{BP}_{R}(A\otimes_{R}B))\subseteq\mathcal{BP}_{R}(A)\) and \(\pi_{2}(\mathcal{BP}_{R}(A\otimes_{R}B))\subseteq\mathcal{BP}_{R}(B)\). If \((P,e)\in\mathcal{BP}_{R}(A)\) then the \(R[G\times H]\)-pair \((P\times\left\{1\right\},e\otimes f_{B})\) belongs to \(A\otimes_{R}B\) and maps to \((P,e)\) under \(\pi_{1}\). This shows that \(\pi_{1}(\mathcal{BP}_{R}(A\otimes_{R}B))=\mathcal{BP}_{R}(A)\). Similarly, \(\pi_{2}(\mathcal{BP}_{R}(A\otimes_{R}B))=\mathcal{BP}_{R}(B)\).
### Fusion systems
In this subsection we establish some facts about block fusion systems for later use. We refer the reader to [7] for the definitions of any terms not recalled here.
If \(\mathcal{F}\) is a fusion system over a \(p\)-group \(S\) then elements \(x,y\in S\) are said to be \(\mathcal{F}\)-_conjugate_ if there exists an \(\mathcal{F}\)-isomorphism \(\varphi:\langle x\rangle\stackrel{{\sim}}{{\to}}\langle y\rangle\) satisfying \(\varphi(x)=y\). If \(Q\leq S\) then \(\mathcal{N}_{\mathcal{F}}(Q)\) and \(\mathcal{C}_{\mathcal{F}}(Q)\) respectively denote the normalizer and centralizer subsystems of \(Q\) in \(\mathcal{F}\). If \(\mathcal{E}\) is another fusion system defined over a \(p\)-group \(R\) then a group isomorphism \(\phi:S\stackrel{{\sim}}{{\to}}R\) induces an isomorphism of fusion systems \(\phi:\mathcal{F}\stackrel{{\sim}}{{\to}}\mathcal{E}\) if
\[\operatorname{Hom}_{\mathcal{E}}(\phi(P),\phi(Q))=\phi\circ\operatorname{Hom }_{\mathcal{F}}(P,Q)\circ\phi^{-1}\]
for all \(P,Q\leq S\).
If \(\mathcal{E}\) and \(\mathcal{F}\) are fusion systems defined over \(p\)-groups \(R\) and \(S\), respectively, then \(\mathcal{E}\times\mathcal{F}\) is the fusion system over \(R\times S\) generated by the set of all morphisms of the form \((\psi_{1},\psi_{2})\in\operatorname{Hom}(P_{1}\times Q_{1},P_{2}\times Q_{2})\) with \(\psi_{1}\in\operatorname{Hom}_{\mathcal{E}}(P_{1},P_{2})\) and \(\psi_{2}\in\operatorname{Hom}_{\mathcal{F}}(Q_{1},Q_{2})\). If \(U,V\leq R\times S\) then \(\operatorname{Hom}_{\mathcal{E}\times\mathcal{F}}(U,V)\) consists of those group homomorphisms \(\psi:U\to V\) for which there exist \(\psi_{1}\in\operatorname{Hom}_{\mathcal{E}}(p_{1}(U),p_{1}(V))\) and \(\psi_{2}\in\operatorname{Hom}_{\mathcal{F}}(p_{2}(U),p_{2}(V))\) such that \((\psi_{1},\psi_{2})|_{U}=\psi\).
**Lemma 2.9**.: _Let \(\mathcal{E}\) and \(\mathcal{F}\) be fusion systems over \(p\)-groups \(R\) and \(S\), respectively. Assume that at least one of \(\mathcal{E}\) or \(\mathcal{F}\) is saturated. Let \(\phi:S\stackrel{{\sim}}{{\to}}R\) be an isomorphism that induces an isomorphism of fusion systems \(\phi:\mathcal{F}\stackrel{{\sim}}{{\to}}\mathcal{E}\). Let \(Q\) be a fully \(\mathcal{F}\)-normalized subgroup of \(S\) and set \(P=\phi(Q)\). Then \(\Delta(P,\phi,Q)\) is a fully \(\mathcal{E}\times\mathcal{F}\)-normalized subgroup of \(R\times S\)._
Proof.: Since \(\phi:\mathcal{F}\stackrel{{\sim}}{{\to}}\mathcal{E}\) is an isomorphism, \(P\) is a fully \(\mathcal{E}\)-normalized subgroup of \(R\). Notice that we have
\[p_{1}(N_{R\times S}(\Delta(P,\phi,Q)))=N_{R}(P), p_{2}(N_{R\times S}(\Delta(P,\phi,Q)))=N_{S}(Q),\] \[k_{1}(N_{R\times S}(\Delta(P,\phi,Q)))=C_{R}(P), k_{2}(N_{R\times S}(\Delta(P,\phi,Q)))=C_{S}(Q).\]
Therefore
\[N_{R}(P)/C_{R}(P)\cong N_{R\times S}(\Delta(P,\phi,Q))/(C_{R}(P)\times C_{S}( Q))\cong N_{S}(Q)/C_{S}(Q),\]
and in particular
\[|N_{R\times S}(\Delta(P,\phi,Q))|=|N_{S}(Q)|\cdot|C_{R}(P)|=|N_{R}(P)|\cdot|C _{S}(Q)|.\]
Now let \(V\leq R\times S\) and suppose that \(\psi:\Delta(P,\phi,Q)\stackrel{{\sim}}{{\to}}V\) is an \(\mathcal{E}\times\mathcal{F}\)-isomorphism. Then there exists an \(\mathcal{E}\)-isomorphism \(\psi_{1}:P\stackrel{{\sim}}{{\to}}p_{1}(V)\) and an \(\mathcal{F}\)-isomorphism \(\psi_{2}:Q\stackrel{{\sim}}{{\to}}p_{2}(V)\) such that \((\psi_{1},\psi_{2})|_{\Delta(P,\phi,Q)}=\psi\). In particular, \(V=\Delta(p_{1}(V),\psi_{1}\phi\psi_{2}^{-1},p_{2}(V))\). It follows that
\[p_{1}(N_{R\times S}(V))\leq N_{R}(p_{1}(V)), p_{2}(N_{R\times S}(V))\leq N_{S}(p_{2}(V)),\] \[k_{1}(N_{R\times S}(V))=C_{R}(p_{1}(V)), k_{2}(N_{R\times S}(V))=C_{S}(p_{2}(V)).\]
Suppose that \({\cal E}\) is saturated. Then \(P\) is fully \({\cal E}\)-centralized, so \(|C_{R}(P)|\geq|C_{R}(p_{1}(V))|\). We also have \(|N_{S}(Q)|\geq|N_{S}(p_{2}(V))|\) since \(Q\) is fully \({\cal F}\)-normalized. It follows that
\[|N_{R\times S}(V)| =|p_{2}(N_{R\times S}(V))|\cdot|C_{R}(p_{1}(V))|\] \[\leq|N_{S}(Q)|\cdot|C_{R}(P)|=|N_{R\times S}(\Delta(P,\phi,Q))|,\]
and hence \(\Delta(P,\phi,Q)\) is fully \({\cal E}\times{\cal F}\)-normalized. A similar argument achieves the same result if instead \({\cal F}\) is saturated.
Let \(B\) be a block of \(RG\) and let \((D,e_{D})\in{\cal BP}_{R}(B)\) be a maximal \(B\)-Brauer pair. For each subgroup \(P\leq D\) let \(e_{P}\) denote the unique block idempotent of \(RC_{G}(P)\) such that \((P,e_{P})\leq(D,e_{D})\). The _fusion system of_\(B\)_associated to_\((D,e_{D})\) is the subcategory \({\cal F}={\cal F}_{(D,e_{D})}(G,B)\) of the category of all finite groups whose objects are the subgroups of \(D\) and with morphism sets \({\rm Hom}_{\cal F}(P,Q)\), for two subgroups \(P\) and \(Q\) of \(D\), defined as the set of group homomorphisms \(\varphi:P\to Q\) for which there exists a \(g\in G\) satisfying \(\varphi=c_{g}\) and \({}^{g}(P,e_{P})\leq(Q,e_{Q})\). By the results of [7, IV.3] the fusion system \({\cal F}\) is saturated. Since any two maximal \(B\)-Brauer pairs are \(G\)-conjugate (see [7, IV, Theorem 2.20]) the isomorphism class of \({\cal F}\) does not depend on the choice of maximal \(B\)-Brauer pair. If \(B\in{\rm Bl}({\cal O}G)\) note that \({\cal F}_{(D,e_{D})}(G,B)={\cal F}_{(D,\overline{e_{D}})}(G,\overline{B})\).
Keep the notation set above. If \(P,Q\leq D\) and \(\varphi:P\to Q\) is a morphism in \({\cal F}\) then \(\varphi\) is an \({\cal F}\)-isomorphism if and only if \(\varphi=c_{g}\) for some \(g\in G\) such that \({}^{g}(P,e_{P})=(Q,e_{Q})\). In fact, if \(\varphi:P\stackrel{{\sim}}{{\to}}Q\) is an \({\cal F}\)-isomorphism and \(g\in G\) is such that \(\varphi=c_{g}\) and \({}^{g}(P,e_{P})\leq(Q,e_{Q})\) then necessarily \({}^{g}(P,e_{P})=(Q,e_{Q})\).
Note that if \((D,e_{D})\in{\cal BP}_{R}(B)\) is a maximal \(B\)-Brauer pair then \((D,e_{D}^{*})\) is a maximal \(B^{*}\)-Brauer pair and \({\cal F}_{(D,e_{D})}(G,B)={\cal F}_{(D,e_{D}^{*})}(G,B^{*})\).
**Lemma 2.10**.: _Let \(B\in{\rm Bl}(RG)\) and let \((D,e_{D})\in{\cal BP}_{R}(B)\) be a maximal \(B\)-Brauer pair. If \(P\leq D\) write \(e_{P}\) for the unique block idempotent of \(RC_{G}(P)\) such that \((P,e_{P})\leq(D,e_{D})\) and set \(I_{P}=N_{G}(P,e_{P})\). Let \({\cal F}={\cal F}_{(D,e_{D})}(G,B)\). Fix \(P\leq D\)._
1. \((N_{D}(P),e_{N_{D}(P)})\) _is an_ \(RI_{P}e_{P}\)_-Brauer pair which is maximal if and only if_ \(P\) _is fully_ \({\cal F}\)_-normalized. In this case_ \({\cal N}_{\cal F}(P)={\cal F}_{(N_{D}(P),e_{N_{D}(P)})}(I_{P},e_{P})\)_._
2. _If_ \(Q\leq N_{D}(P)\) _write_ \(\epsilon_{Q}\) _for the unique block idempotent of_ \(RC_{I_{P}}(Q)\) _such that_ \((Q,\epsilon_{Q})\leq(N_{D}(P),e_{N_{D}(P)})\) _is a containment of_ \(RI_{P}e_{P}\)_-Brauer pairs. Then_ \(\epsilon_{Q}\) _is the unique block of_ \(RC_{I_{P}}(Q)\) _covering_ \(e_{PQ}\)_. Moreover,_ \[\epsilon_{Q}={\rm tr}_{{\rm Stab}_{C_{I_{P}}(Q)}(e_{PQ})}^{C_{I_{P}}(Q)}(e_{PQ})\] _and_ \({\rm Stab}_{C_{I_{P}}(Q)}(e_{PQ})=C_{I_{P}\cap I_{PQ}}(Q)\)_._
Proof.: Part (a) is well-known: see [7, IV, Theorem 3.19]. Note that if \(Q\leq N_{D}(P)\) then \(Q\leq I_{P}\) and \(C_{I_{P}}(PQ)=C_{G}(PQ)\trianglelefteq C_{I_{P}}(Q)\), so by Lemma 2.2 there is a unique block of \(RC_{I_{P}}(Q)\) that covers \(e_{PQ}\). We must show that for each subgroup \(Q\leq N_{D}(P)\) the block \(\epsilon_{Q}\) covers \(e_{PQ}\). Suppose first that \(P\leq Q\leq N_{D}(P)\). Then \(C_{I_{P}}(Q)=C_{G}(Q)\), so \((Q,\epsilon_{Q})\) is an \(RG\)-Brauer pair. The containment \((Q,\epsilon_{Q})\leq(N_{D}(P),e_{N_{D}(P)})\) then implies the corresponding containment of \(RG\)-Brauer pairs. Hence \(\epsilon_{Q}=e_{Q}=e_{PQ}\) and \(\epsilon_{Q}\) clearly covers \(e_{PQ}\) in this case. Now suppose that \(P\not\leq Q\). By what we have just shown \(\epsilon_{PQ}=e_{PQ}\). Let \(Q=Q_{0}\trianglelefteq Q_{1}\trianglelefteq\cdots\trianglelefteq Q_{n}=PQ\) be a subnormal chain of subgroups. Then for each \(i=0,\ldots,n-1\) we have a normal containment of \(RI_{P}e_{P}\)-Brauer pairs \((Q_{i},\epsilon_{Q_{i}})\trianglelefteq(Q_{i+1},\epsilon_{Q_{i+1}})\). Now Lemma 2.1 implies that \(\epsilon_{Q_{i}}\in RC_{I_{P}}(PQ_{i})=RC_{G}(PQ)\), so \(\operatorname{br}_{Q_{i+1}}^{I_{P}}(\epsilon_{Q_{i}})=\overline{\epsilon_{Q_ {i}}}\) for each \(i\). It follows that \(\overline{\epsilon_{Q_{i}}\epsilon_{Q_{i+1}}}=\overline{\epsilon_{Q_{i+1}}}\), which in turn implies that \(\overline{\epsilon_{Q}e_{PQ}}\neq 0\). Thus we see that \(\epsilon_{Q}\) covers \(e_{PQ}\) for each \(Q\leq N_{D}(P)\). The final statement follows from part (b) of Lemma 2.4.
In the next couple of lemmas \(G\) and \(H\) denote finite groups and the \(p\)-modular system \((\mathbb{K},\mathcal{O},F)\) is assumed large enough for \(G\times H\).
**Lemma 2.11**.: _Let \(A\in\operatorname{Bl}(RG)\) and \(B\in\operatorname{Bl}(RH)\). Let \((D,e_{D})\in\mathcal{BP}_{R}(A)\) and \((E,f_{E})\in\mathcal{BP}_{R}(B)\) be maximal Brauer pairs. Then \((D\times E,e_{D}\otimes f_{E})\) is a maximal \(A\otimes_{R}B\)-Brauer pair._
Proof.: By Lemma 2.8 we know that \((D\times E,e_{D}\otimes f_{E})\in\mathcal{BP}_{R}(A\otimes_{R}B)\). Suppose that \((P,e\otimes f)\in\mathcal{BP}_{R}(A\otimes_{R}B)\) is such that \((D\times E,e_{D}\otimes f_{E})\leq(P,e\otimes f)\). Then \((D,e_{D})\leq(p_{1}(P),e)\) and \((E,f_{E})\leq(p_{2}(P),f)\) by Lemma 2.7. By Lemma 2.8 we have \((p_{1}(P),e)\in\mathcal{BP}_{R}(A)\) and \((p_{2}(P),f)\in\mathcal{BP}_{R}(B)\). Since \((D,e_{D})\) and \((E,f_{E})\) are maximal Brauer pairs it follows that \((D,e_{D})=(p_{1}(P),e)\) and \((E,f_{E})=(p_{2}(P),f)\). Then \(P\leq p_{1}(P)\times p_{2}(P)=D\times E\), so we have \((D\times E,e_{D}\otimes f_{E})=(P,e\otimes f)\). Therefore \((D\times E,e_{D}\otimes f_{E})\) is a maximal \(A\otimes_{R}B\)-Brauer pair.
**Lemma 2.12**.: _Let \(A\in\operatorname{Bl}(RG)\) and \(B\in\operatorname{Bl}(RH)\). Let \((D,e_{D})\in\mathcal{BP}_{R}(A)\) and \((E,f_{E})\in\mathcal{BP}_{R}(B)\) be maximal Brauer pairs. If \(P\leq D\) write \(e_{P}\) for the unique block idempotent of \(RC_{G}(P)\) such that \((P,e_{P})\leq(D,e_{D})\) and if \(Q\leq E\) write \(f_{Q}\) for the unique block idempotent of \(RC_{H}(Q)\) such that \((Q,f_{Q})\leq(E,f_{E})\)._
1. _If_ \(S\leq D\times E\) _then_ \[(S,e_{p_{1}(S)}\otimes f_{p_{2}(S)})\leq(D\times E,e_{D}\otimes f_{E})\] _is a containment of_ \(A\otimes_{R}B\)_-Brauer pairs._
2. _Set_ \(\mathcal{A}=\mathcal{F}_{(D,e_{D})}(G,A)\) _and_ \(\mathcal{B}=\mathcal{F}_{(E,f_{E})}(H,B)\)_. Then_ \[\mathcal{F}_{(D\times E,e_{D}\otimes f_{E})}(G\times H,A\otimes_{R}B)=\mathcal{ A}\times\mathcal{B}.\]
Proof.: (a) Let \(S\leq D\times E\). Then we have \((p_{1}(S),e_{p_{1}(S)})\leq(D,e_{D})\) and \((p_{2}(S),f_{p_{2}(S)})\leq(E,f_{E})\), so \((S,e_{p_{1}(S)}\otimes f_{p_{2}(S)})\leq(D\times E,e_{D}\otimes f_{E})\) by Part (c) of Lemma 2.6.
(b) Recall from Lemma 2.11 that \((D\times E,e_{D}\otimes f_{E})\) is a maximal \(A\otimes_{R}B\)-Brauer pair. So the fusion system \(\mathcal{F}:=\mathcal{F}_{(D\times E,e_{D}\otimes f_{E})}(G\times H,A\otimes_ {R}B)\) is well-defined. Now \(\mathcal{F}\) and \(\mathcal{A}\times\mathcal{B}\) are both fusion systems over \(D\times E\) so to show \(\mathcal{F}=\mathcal{A}\times\mathcal{B}\) we only need to compare their morphism sets. Let \(P_{1},P_{2}\leq D\), \(Q_{1},Q_{2}\leq E\), \(\psi_{1}\in\operatorname{Hom}_{\mathcal{A}}(P_{1},P_{2})\), and \(\psi_{2}\in\operatorname{Hom}_{\mathcal{B}}(Q_{1},Q_{2})\). Then \(\psi_{1}=c_{g}\) for some \(g\in G\) such that \({}^{g}(P_{1},e_{P_{1}})\leq(P_{2},e_{P_{2}})\) and \(\psi_{2}=c_{h}\) for some \(h\in H\) such that \({}^{h}(Q_{1},f_{Q_{1}})\leq(Q_{2},f_{Q_{2}})\). Observe that \((\psi_{1},\psi_{2})=c_{(g,h)}\) and
\[{}^{(g,h)}(P_{1}\times Q_{1},e_{P_{1}}\otimes f_{Q_{1}}) =({}^{g}P_{1}\times{}^{h}Q_{1},{}^{g}e_{P_{1}}\otimes{}^{h}f_{Q_{1 }})\] \[\leq(P_{2}\times Q_{2},e_{P_{2}}\otimes f_{Q_{2}})\]
by Part (c) of Lemma 2.6. So \((\psi_{1},\psi_{2})\in\operatorname{Hom}_{\mathcal{F}}(P_{1}\times Q_{1},P_{2 }\times Q_{2})\). Since morphisms of the form \((\psi_{1},\psi_{2})\) generate \(\mathcal{A}\times\mathcal{B}\) it follows that \(\mathcal{A}\times\mathcal{B}\subseteq\mathcal{F}\). Now let \(S,T\leq D\times E\) and let \(\psi\in\operatorname{Hom}_{\mathcal{F}}(S,T)\). Then by definition \(\psi=c_{(g,h)}\) for some \((g,h)\in G\times H\) such that
\[{}^{(g,h)}(S,e_{p_{1}(S)}\otimes f_{p_{2}(S)})\leq(T,e_{p_{1}(T)}\otimes f_{p _{2}(T)}).\]
Note that this implies \({}^{g}(p_{1}(S),e_{p_{1}(S)})\leq(p_{1}(T),e_{p_{1}(T)})\) and \({}^{h}(p_{2}(S),f_{p_{2}(S)})\leq(p_{2}(T),f_{p_{2}(T)})\) by Lemma 2.7. Therefore if we set \(\psi_{1}=c_{g}:p_{1}(S)\to p_{1}(T)\) and \(\psi_{2}=c_{h}:p_{2}(S)\to p_{2}(T)\) then \(\psi_{1}\in\operatorname{Hom}_{\mathcal{A}}(p_{1}(S),p_{1}(T))\) and \(\psi_{2}\in\operatorname{Hom}_{\mathcal{B}}(p_{2}(S),p_{2}(T))\). Now consider the morphism \((\psi_{1},\psi_{2}):p_{1}(S)\times p_{2}(S)\to p_{1}(T)\times p_{2}(T)\). By definition, \((\psi_{1},\psi_{2})\) is a morphism in \(\mathcal{A}\times\mathcal{B}\). Since \((\psi_{1},\psi_{2})|_{S}=\psi\) we see that \(\psi\) is a morphism in \(\mathcal{A}\times\mathcal{B}\). Thus we have \(\mathcal{F}\subseteq\mathcal{A}\times\mathcal{B}\) and the proof is complete.
## 3 Class functions and characters
Let \(G\) be a finite group and let \((\mathbb{K},\mathcal{O},F)\) stand for a \(p\)-modular system large enough for \(G\). Write \(\operatorname{Irr}_{\mathbb{K}}(G)=\operatorname{Irr}(\mathbb{K}G)\) for the set of characters of irreducible \(\mathbb{K}G\)-modules. Let \(CF(G;\mathbb{K})\) denote the \(\mathbb{K}\)-algebra of \(\mathbb{K}\)-valued class functions on \(G\). Recall that \(CF(G;\mathbb{K})\) is endowed with the scalar product
\[(\chi,\psi)_{G}=\frac{1}{|G|}\sum_{g\in G}\chi(g)\psi(g^{-1})\]
where \(\chi,\psi\in CF(G;\mathbb{K})\). The set \(\mathrm{Irr}_{\mathbb{K}}(G)\) is an orthonormal basis for \(CF(G;\mathbb{K})\). Write \(R_{\mathbb{K}}(G)=R(\mathbb{K}G)\) for the \(\mathbb{Z}\)-span of \(\mathrm{Irr}_{\mathbb{K}}(G)\) in \(CF(G;\mathbb{K})\). Recall that \(R_{\mathbb{K}}(G)\) is isomorphic to the Grothendieck ring of the category \({}_{\mathbb{K}G}\mathbf{mod}\) of finite-dimensional \(\mathbb{K}G\)-modules and that \(\mathrm{Irr}_{\mathbb{K}}(G)\) forms a \(\mathbb{Z}\)-basis of \(R_{\mathbb{K}}(G)\).
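For instance, if \(G=\langle g\rangle\) is cyclic of order \(3\) and \(\omega\in\mathbb{K}\) is a primitive cube root of unity (which exists since \(\mathbb{K}\) is large enough), the linear characters \(\chi_{i}:g\mapsto\omega^{i}\) for \(i=0,1,2\) satisfy
\[(\chi_{i},\chi_{j})_{G}=\frac{1}{3}\sum_{k=0}^{2}\omega^{ik}\omega^{-jk}=\frac{1}{3}\sum_{k=0}^{2}\omega^{(i-j)k}=\delta_{ij},\]
illustrating the orthonormality of \(\operatorname{Irr}_{\mathbb{K}}(G)\).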
If \(N\) is a normal subgroup of \(G\) we write \(R_{\mathbb{K}}(G/N)=R(\mathbb{K}[G/N])\) for the \(\mathbb{Z}\)-span in \(R_{\mathbb{K}}(G)\) of the irreducible characters of \(\mathbb{K}G\) that contain \(N\) in their kernel. In other words, we will identify each character of the quotient group \(G/N\) with its inflation to \(G\).
Let \(H\leq G\) and let \(g\in G\). Then the map
\[{}^{g}(\cdot):CF(H;\mathbb{K}) \overset{\sim}{\to}CF({}^{g}H;\mathbb{K})\] \[\chi \mapsto({}^{g}\chi:{}^{g}h\mapsto\chi(h)\text{ for all }h\in H)\]
is a \(\mathbb{K}\)-algebra isomorphism and satisfies \(({}^{g}\chi,{}^{g}\psi)_{{}^{g}H}=(\chi,\psi)_{H}\) for all \(\chi,\psi\in CF(H;\mathbb{K})\). Note that \({}^{g}(\cdot)\) restricts to a ring isomorphism \(R_{\mathbb{K}}(H)\overset{\sim}{\to}R_{\mathbb{K}}({}^{g}H)\). More generally, if \(N\trianglelefteq H\leq G\) and \(g\in G\) then \({}^{g}(\cdot)\) restricts to a ring isomorphism \(R_{\mathbb{K}}(H/N)\overset{\sim}{\to}R_{\mathbb{K}}({}^{g}H/{}^{g}N)\). Note also that if \(\chi\in CF(H;\mathbb{K})\) and \(g_{1},g_{2}\in G\) then \({}^{g_{2}}({}^{g_{1}}\chi)={}^{g_{2}g_{1}}\chi\).
A \(\mathbb{K}\)-linear map \(\chi:\mathbb{K}G\to\mathbb{K}\) is called a _central function_ if \(\chi(\alpha\beta)=\chi(\beta\alpha)\) for all \(\alpha,\beta\in\mathbb{K}G\). Write \(\mathrm{CentFun}(\mathbb{K}G)\) for the space of all central functions on \(\mathbb{K}G\). We make \(\mathrm{CentFun}(\mathbb{K}G)\) into a \(Z(\mathbb{K}G)\)-module by defining \((z\cdot\chi)(\alpha)=\chi(\alpha z)\) for all \(z\in Z(\mathbb{K}G)\), \(\chi\in\mathrm{CentFun}(\mathbb{K}G)\), and \(\alpha\in\mathbb{K}G\). Each class function \(G\to\mathbb{K}\) extends uniquely to a central function on \(\mathbb{K}G\), and this gives rise to a \(\mathbb{K}\)-isomorphism \(CF(G;\mathbb{K})\overset{\sim}{\to}\mathrm{CentFun}(\mathbb{K}G)\). Thus \(CF(G;\mathbb{K})\) inherits a \(Z(\mathbb{K}G)\)-module structure given by \((z\cdot\chi)(g)=\chi(gz)\) for all \(z\in Z(\mathbb{K}G)\), \(\chi\in CF(G;\mathbb{K})\), and \(g\in G\). Note that \(\chi(gz)\) is the value of the extension of \(\chi\) to \(\mathbb{K}G\) at \(gz\). We will identify each class function on \(G\) with its extension to \(\mathbb{K}G\) without further comment.
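For example, let \(z=\sum_{x\in\mathcal{C}}x\) be the sum over a conjugacy class \(\mathcal{C}\) of \(G\) and let \(\chi\in\operatorname{Irr}_{\mathbb{K}}(G)\). In an irreducible representation affording \(\chi\) the central element \(z\) acts as a scalar by Schur's lemma, and taking traces shows that
\[z\cdot\chi=\frac{|\mathcal{C}|\,\chi(x_{0})}{\chi(1)}\,\chi\qquad(x_{0}\in\mathcal{C}),\]
the familiar central character value.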
If \(e\) is a central idempotent of \(\mathbb{K}G\) set \(CF(G,e;\mathbb{K}):=e\cdot CF(G;\mathbb{K})\). Notice that
\[CF(G,e;\mathbb{K})=\left\{\chi\in CF(G;\mathbb{K})|\chi(ge)=\chi(g)\text{ for all }g\in G\right\}.\]
Since we are assuming \(\mathbb{K}\) is large enough, the set \(\mathrm{Irr}_{\mathbb{K}}(G,e)=\mathrm{Irr}(\mathbb{K}Ge)\) of characters of irreducible \(\mathbb{K}Ge\)-modules forms a basis of \(CF(G,e;\mathbb{K})\). Write \(R_{\mathbb{K}}(G,e)=R(\mathbb{K}Ge)\) for the \(\mathbb{Z}\)-span of \(\mathrm{Irr}_{\mathbb{K}}(G,e)\) in \(CF(G,e;\mathbb{K})\). Recall that \(R_{\mathbb{K}}(G,e)\) is isomorphic to the Grothendieck group of the category \({}_{\mathbb{K}Ge}\mathbf{mod}\) of finite-dimensional \(\mathbb{K}Ge\)-modules and that \(\mathrm{Irr}_{\mathbb{K}}(G,e)\) forms a \(\mathbb{Z}\)-basis of \(R_{\mathbb{K}}(G,e)\). Note also that \(R_{\mathbb{K}}(G,e)=e\cdot R_{\mathbb{K}}(G)\).
If \(H\leq G\), \(g\in G\), and \(e\in Z(\mathbb{K}H)\) is an idempotent then the isomorphism \({}^{g}(\cdot):R_{\mathbb{K}}(H)\overset{\sim}{\to}R_{\mathbb{K}}({}^{g}H)\) restricts to a group isomorphism \(R_{\mathbb{K}}(H,e)\overset{\sim}{\to}R_{\mathbb{K}}({}^{g}H,{}^{g}e)\).
If \(I\) is a set of pairwise orthogonal idempotents of \(Z(\mathbb{K}G)\) whose sum equals \(1\) then we have orthogonal decompositions
\[CF(G;\mathbb{K})=\bigoplus_{e\in I}CF(G,e;\mathbb{K})\qquad\text{and}\qquad R_{ \mathbb{K}}(G)=\bigoplus_{e\in I}R_{\mathbb{K}}(G,e).\]
In particular, for any \(\chi,\psi\in CF(G;\mathbb{K})\) one has
\[(\chi,\psi)_{G}=\sum_{e\in I}(e\chi,e\psi)_{G}.\]
If \(\overline{e}\in Z(FG)\) is an idempotent with lift \(e\in Z(\mathcal{O}G)\subseteq Z(\mathbb{K}G)\) then we write
\[CF(G,\overline{e};\mathbb{K}):=CF(G,e;\mathbb{K})\qquad\text{and}\qquad R_{ \mathbb{K}}(G,\overline{e}):=R_{\mathbb{K}}(G,e).\]
Since every central idempotent of \(FG\) has a unique lift to a central idempotent of \(\mathcal{O}G\), for any idempotent \(f\in Z(FG)\) we may speak of \(CF(G,f;\mathbb{K})\) or \(R_{\mathbb{K}}(G,f)\) without explicitly naming a lift of \(f\) to \(Z(\mathcal{O}G)\).
Let \(N\trianglelefteq G\) and let \(e\in Z(\mathbb{K}G)\) be an idempotent. We set
\[R_{\mathbb{K}}(G/N,e):=R_{\mathbb{K}}(G/N)\cap R_{\mathbb{K}}(G,e).\]
Note that \(R_{\mathbb{K}}(G/N,e)\) is the subgroup of \(R_{\mathbb{K}}(G)\) spanned by those \(\chi\in\operatorname{Irr}_{\mathbb{K}}(G,e)\) that contain \(N\) in their kernel. If \(\overline{e}\in Z(FG)\) is an idempotent with lift \(e\in Z(\mathcal{O}G)\) set \(R_{\mathbb{K}}(G/N,\overline{e}):=R_{\mathbb{K}}(G/N,e)\).
Let \(N\trianglelefteq H\leq G\) and let \(e\in Z(\mathbb{K}H)\) be an idempotent. If \(g\in G\) then the isomorphism \({}^{g}(\cdot):R_{\mathbb{K}}(H)\to R_{\mathbb{K}}({}^{g}H)\) restricts to a group isomorphism \(R_{\mathbb{K}}(H/N,e)\stackrel{{\sim}}{{\to}}R_{\mathbb{K}}({}^{g }H/{}^{g}N,{}^{g}e)\).
If \(\chi\in CF(G;\mathbb{K})\) we denote by \(\chi^{\circ}\) the \(\mathbb{K}\)-valued class function on \(G\) defined by
\[\chi^{\circ}(g):=\chi(g^{-1})\qquad g\in G.\]
Note that if \(\alpha\in\mathbb{K}G\) then \(\chi^{\circ}(\alpha)=\chi(\alpha^{*})\), where \(\alpha^{*}\) is the image of \(\alpha\) under the antipode of \(\mathbb{K}G\). It follows easily that if \(e\) is a central idempotent of \(\mathbb{K}G\) and \(\chi\in CF(G,e;\mathbb{K})\) then \(\chi^{\circ}\in CF(G,e^{*};\mathbb{K})\). If \(M\) is a \(\mathbb{K}G\)-module affording the character \(\chi\) then \(\chi^{\circ}\) is the character of the dual (left) \(\mathbb{K}G\)-module \(M^{\circ}:=\operatorname{Hom}_{\mathbb{K}}(M,\mathbb{K})\). In particular, if \(\chi\in\operatorname{Irr}(\mathbb{K}G)\) then \(\chi^{\circ}\in\operatorname{Irr}(\mathbb{K}G)\).
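For example, if \(\chi\) is a linear character of \(G\) then \(\chi^{\circ}(g)=\chi(g^{-1})=\chi(g)^{-1}\), so \(\chi^{\circ}=\chi^{-1}\) in the character group. More generally, since \(\mathbb{K}\) has characteristic \(0\) and is large enough, each \(g\in G\) acts on a module affording \(\chi\) diagonalizably with root-of-unity eigenvalues \(\zeta_{1},\dots,\zeta_{n}\in\mathbb{K}\), and
\[\chi^{\circ}(g)=\zeta_{1}^{-1}+\cdots+\zeta_{n}^{-1}.\]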
Now let \(G\) and \(H\) be finite groups (and assume that \(\mathbb{K}\) is large enough for \(G\times H\)). We identify \(\mathbb{K}[G\times H]\) with \(\mathbb{K}G\otimes_{\mathbb{K}}\mathbb{K}H\) via \((g,h)\mapsto g\otimes h\). Recall that the space of \(\mathbb{K}\)-valued class functions on \(G\times H\) is identified with the space of \(\mathbb{K}\)-valued central functions on \(\mathbb{K}[G\times H]\). Since we identify \(\mathbb{K}[G\times H]\) with \(\mathbb{K}G\otimes_{\mathbb{K}}\mathbb{K}H\), a central function on \(\mathbb{K}[G\times H]\) is the same thing as a
\(\mathbb{K}\)-bilinear map \(\chi:\mathbb{K}G\times\mathbb{K}H\to\mathbb{K}\) that satisfies \(\chi(\alpha\alpha^{\prime},\beta\beta^{\prime})=\chi(\alpha^{\prime}\alpha,\beta ^{\prime}\beta)\) for all \(\alpha,\alpha^{\prime}\in\mathbb{K}G\) and all \(\beta,\beta^{\prime}\in\mathbb{K}H\).
Let \(e\) be a central idempotent of \(\mathbb{K}G\) and let \(f\) be a central idempotent of \(\mathbb{K}H\). Then \(e\otimes f\) is a central idempotent of \(\mathbb{K}[G\times H]\). Under our identifications, we have
\[CF(G\times H,e\otimes f;\mathbb{K})=\left\{\chi\in CF(G\times H;\mathbb{K})| \chi(ge,hf)=\chi(g,h)\text{ for all }g\in G,h\in H\right\}.\]
Let \(\mu\in CF(G\times H,e\otimes f;\mathbb{K})\). Then for any \(g\in G\)
\[\mu(g,\cdot)\in CF(H,f;\mathbb{K})\]
and for any \(h\in H\)
\[\mu(\cdot,h)\in CF(G,e;\mathbb{K}).\]
More generally, for any \(\alpha\in\mathbb{K}G\) we have
\[\mu(\alpha,\cdot)\in CF(H,f;\mathbb{K})\]
and for any \(\beta\in\mathbb{K}H\) we have
\[\mu(\cdot,\beta)\in CF(G,e;\mathbb{K}).\]
Now let \(\mu\in CF(G\times H,e\otimes f^{*};\mathbb{K})\) where again \(e\) is a central idempotent of \(\mathbb{K}G\) and \(f\) is a central idempotent of \(\mathbb{K}H\) (so that \(f^{*}\) is a central idempotent of \(\mathbb{K}H\)). If \(\nu\in CF(H,f;\mathbb{K})\) define \(\mu\otimes_{H}\nu\in CF(G,e;\mathbb{K})\) by the formula
\[(\mu\otimes_{H}\nu)(g)=\frac{1}{|H|}\sum_{h\in H}\mu(g,h)\nu(h)\qquad g\in G.\]
If \(M\) is a \((\mathbb{K}Ge,\mathbb{K}Hf)\)-bimodule (i.e., a left \(\mathbb{K}[G\times H](e\otimes f^{*})\)-module) with character \(\mu\) and \(N\) is a \(\mathbb{K}Hf\)-module with character \(\nu\) then the character of \(M\otimes_{\mathbb{K}H}N\) is \(\mu\otimes_{H}\nu\), whence the definition above. (See [3, Lemma 7.1.3] for a proof of this character formula.) This construction yields a \(\mathbb{K}\)-bilinear map
\[-\otimes_{H}-:CF(G\times H,e\otimes f^{*};\mathbb{K})\times CF(H,f;\mathbb{K} )\to CF(G,e;\mathbb{K})\]
defined by \((\mu,\nu)\mapsto\mu\otimes_{H}\nu\), and this bilinear map restricts to a biadditive map
\[R_{\mathbb{K}}(G\times H,e\otimes f^{*})\times R_{\mathbb{K}}(H,f)\to R_{ \mathbb{K}}(G,e).\]
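As a sanity check of the character formula above, take \(G=H\), \(e=f=1\), and \(M=\mathbb{K}G\) regarded as a \((\mathbb{K}G,\mathbb{K}G)\)-bimodule; its character is \(\mu(g,h)=|\{x\in G\mid gxh^{-1}=x\}|\), which equals \(|C_{G}(g)|\) if \(h\) is \(G\)-conjugate to \(g\) and \(0\) otherwise. For \(\nu\in CF(G;\mathbb{K})\) one then finds
\[(\mu\otimes_{G}\nu)(g)=\frac{1}{|G|}\sum_{h\in\mathrm{cl}_{G}(g)}|C_{G}(g)|\,\nu(h)=\frac{|\mathrm{cl}_{G}(g)|\,|C_{G}(g)|}{|G|}\,\nu(g)=\nu(g),\]
consistent with the isomorphism \(\mathbb{K}G\otimes_{\mathbb{K}G}N\cong N\).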
If \(\chi\in\operatorname{Irr}(\mathbb{K}Ge)\) and \(\psi,\psi^{\prime}\in\operatorname{Irr}(\mathbb{K}Hf)\) then
\[(\chi\times\psi^{\circ})\otimes_{H}\psi^{\prime}=\begin{cases}\chi&\text{if } \psi=\psi^{\prime}\\ 0&\text{else}.\end{cases}\]
Notice that if \(\mu\in CF(G\times H,e\otimes f^{*};\mathbb{K})\), \(\nu\in CF(H,f;\mathbb{K})\) and \(g\in G\) then
\[(\mu\otimes_{H}\nu)(g)=(\mu(g,\cdot),\nu^{\circ})_{H}=(\mu(g,\cdot)^{\circ},\nu )_{H}.\]
More generally, for any \(\alpha\in\mathbb{K}G\) we have
\[(\mu\otimes_{H}\nu)(\alpha)=(\mu(\alpha,\cdot),\nu^{\circ})_{H}=(\mu(\alpha, \cdot)^{\circ},\nu)_{H}.\]
**Proposition 3.1**.: _Let \(G\) and \(H\) be finite groups, let \(e\) be a central idempotent of \(\mathbb{K}G\) and let \(f\) be a central idempotent of \(\mathbb{K}H\). The map_
\[CF(G\times H,e\otimes f^{*};\mathbb{K}) \to\operatorname{Hom}_{\mathbb{K}}(CF(H,f;\mathbb{K}),CF(G,e; \mathbb{K}))\] \[\mu \mapsto\mu\otimes_{H}-\]
_is an isomorphism of \(\mathbb{K}\)-vector spaces, with inverse_
\[\operatorname{Hom}_{\mathbb{K}}(CF(H,f;\mathbb{K}),CF(G,e; \mathbb{K})) \to CF(G\times H,e\otimes f^{*};\mathbb{K})\] \[I \mapsto\sum_{\psi\in\operatorname{Irr}(\mathbb{K}Hf)}I(\psi) \times\psi^{\circ}.\]
Proof.: Let \(\Phi\) and \(\Psi\) denote the maps defined first and second, respectively, in the statement of the proposition. We recall that if \(\theta\in CF(H,f;\mathbb{K})\) then \(I(\theta)\times\theta^{\circ}\in CF(G\times H,e\otimes f^{*};\mathbb{K})\) is the class function defined by \((I(\theta)\times\theta^{\circ})(g,h)=I(\theta)(g)\theta^{\circ}(h)\). Now if \(I:CF(H,f;\mathbb{K})\to CF(G,e;\mathbb{K})\) is a \(\mathbb{K}\)-linear map then
\[\Phi(\Psi(I))=(\sum_{\psi\in\operatorname{Irr}(\mathbb{K}Hf)}I(\psi)\times \psi^{\circ})\otimes_{H}-.\]
Let \(\psi^{\prime}\in\operatorname{Irr}(\mathbb{K}Hf)\) and let \(g\in G\). Then
\[[(\sum_{\psi\in\operatorname{Irr}(\mathbb{K}Hf)}I(\psi)\times\psi^ {\circ})\otimes_{H}\psi^{\prime}](g) =\sum_{\psi\in\operatorname{Irr}(\mathbb{K}Hf)}\frac{1}{|H|}\sum _{h\in H}I(\psi)(g)\psi^{\circ}(h)\psi^{\prime}(h)\] \[=\sum_{\psi\in\operatorname{Irr}(\mathbb{K}Hf)}I(\psi)(g)\cdot( \psi^{\prime},\psi)_{H}\] \[=I(\psi^{\prime})(g).\]
Since this holds for all \(g\in G\), and all \(\psi^{\prime}\in\operatorname{Irr}(\mathbb{K}Hf)\) we find that \(\Phi(\Psi(I))=I\). In particular, \(\Phi\) is surjective. A dimension count then shows that \(\Phi\) is an isomorphism, hence also \(\Phi^{-1}=\Psi\). The proof is complete.
**Corollary 3.2**.: _Let \(G\) and \(H\) be finite groups, let \(e\) be a central idempotent of \(\mathbb{K}G\) and let \(f\) be a central idempotent of \(\mathbb{K}H\). The map_
\[R_{\mathbb{K}}(G\times H,e\otimes f^{*}) \to\operatorname{Hom}_{\mathbb{Z}}(R_{\mathbb{K}}(H,f),R_{\mathbb{ K}}(G,e))\] \[\mu \mapsto\mu\otimes_{H}-\]
_is a group isomorphism, with inverse_
\[\operatorname{Hom}_{\mathbb{Z}}(R_{\mathbb{K}}(H,f),R_{\mathbb{ K}}(G,e)) \to R_{\mathbb{K}}(G\times H,e\otimes f^{*})\] \[I \mapsto\sum_{\psi\in\operatorname{Irr}(\mathbb{K}Hf)}I(\psi) \times\psi^{\circ}.\]
**Lemma 3.3**.: _Suppose that \(G^{\prime}\leq G\) and \(H^{\prime}\leq H\). Let \(e\) be a central idempotent of \(\mathbb{K}G^{\prime}\) and let \(f\) be a central idempotent of \(\mathbb{K}H^{\prime}\). If \(\mu\in CF(G^{\prime}\times H^{\prime},e\otimes f^{*};\mathbb{K})\) and \(\nu\in CF(H^{\prime},f;\mathbb{K})\) then for any \((x,y)\in G\times H\) we have_
\[({}^{(x,y)}\mu)\otimes_{{}^{y}H^{\prime}}({}^{y}\nu)={}^{x}(\mu\otimes_{H^{\prime}}\nu).\]
## 4 Brauer characters and the generalized decomposition map
Let \(G\) be a finite group and let \((\mathbb{K},\mathcal{O},F)\) be a \(p\)-modular system large enough for \(G\). Let \(CF_{p^{\prime}}(G;\mathbb{K})\) denote the subspace of \(CF(G;\mathbb{K})\) consisting of class functions \(\chi:G\to\mathbb{K}\) for which \(\chi(g)=0\) if \(g\notin G_{p^{\prime}}\).
Write \(\operatorname{IBr}_{F}(G)=\operatorname{IBr}(FG)\) for the set of irreducible Brauer characters of \(FG\). By convention we view each irreducible Brauer character of \(FG\) as an element of \(CF_{p^{\prime}}(G;\mathbb{K})\); or in other words, we extend each irreducible Brauer character to a class function on \(G\) that vanishes on \(G-G_{p^{\prime}}\). With this convention, \(\operatorname{IBr}_{F}(G)\) is a basis of \(CF_{p^{\prime}}(G;\mathbb{K})\). Another basis for this space is given by the set \(\operatorname{PrInd}_{\mathcal{O}}(G)=\operatorname{PrInd}(\mathcal{O}G)\), which is the set of characters of projective indecomposable \(\mathcal{O}G\)-modules. Write \(R_{F}(G)\) for the subgroup of \(CF_{p^{\prime}}(G;\mathbb{K})\) spanned by \(\operatorname{IBr}_{F}(G)\). Then \(R_{F}(G)\) is isomorphic to the Grothendieck ring of the category \({}_{FG}\mathbf{mod}\) of finite-dimensional \(FG\)-modules and \(\operatorname{IBr}_{F}(G)\) is a \(\mathbb{Z}\)-basis of \(R_{F}(G)\).
Let \(u\in G\) be a \(p\)-element. If \(\chi\in CF(G;\mathbb{K})\) set \(d^{u}_{G}(\chi)\in CF_{p^{\prime}}(C_{G}(u);\mathbb{K})\) equal to the class function on \(C_{G}(u)\) defined by
\[d^{u}_{G}(\chi)(s):=\begin{cases}\chi(us)&\text{if }s\in C_{G}(u)_{p^{\prime}}\\ 0&\text{if }s\notin C_{G}(u)_{p^{\prime}}.\end{cases}\]
This construction yields a \(\mathbb{K}\)-linear map \(d_{G}^{u}:CF(G;\mathbb{K})\to CF_{p^{\prime}}(C_{G}(u);\mathbb{K})\) called the _generalized decomposition map_ (_associated to_\(u\)). When \(u=1\) we obtain the usual decomposition map \(d_{G}:CF(G;\mathbb{K})\to CF_{p^{\prime}}(G;\mathbb{K})\).
Note that for any \(g\in G\) and any \(\chi\in CF(G;\mathbb{K})\) one has \({}^{g}d_{G}^{u}(\chi)=d_{G}^{{}^{g}u}(\chi)\). Note also that for any \(p\)-element \(u\in G\) and any \(\chi\in CF(G;\mathbb{K})\) one has \(d_{G}^{u}(\chi^{\circ})=d_{G}^{u^{-1}}(\chi)^{\circ}\).
**Proposition 4.1**.: _Let \(\mathcal{U}\) be a set of representatives for the \(G\)-conjugacy classes of \(p\)-elements of \(G\). Then_
\[\bigoplus_{u\in\mathcal{U}}d_{G}^{u}:CF(G;\mathbb{K})\to\bigoplus_{u\in \mathcal{U}}CF_{p^{\prime}}(C_{G}(u);\mathbb{K})\]
_is an isomorphism of \(\mathbb{K}\)-vector spaces. Moreover, for any \(\chi,\psi\in CF(G;\mathbb{K})\) one has_
\[(\chi,\psi)_{G}=\sum_{u\in\mathcal{U}}(d_{G}^{u}(\chi),d_{G}^{u^{-1}}(\psi))_{ C_{G}(u)}.\]
Proof.: For each \(u\in\mathcal{U}\) let \(\mathcal{S}_{u}\) be a set of representatives of the \(C_{G}(u)\)-conjugacy classes of \(p^{\prime}\)-elements of \(C_{G}(u)\). Then \(\mathcal{G}:=\cup_{u\in\mathcal{U}}\left\{us|s\in\mathcal{S}_{u}\right\}\) is a set of representatives for the \(G\)-conjugacy classes of \(G\). It follows that
\[\dim_{\mathbb{K}}CF(G;\mathbb{K})=\dim_{\mathbb{K}}\bigoplus_{u\in\mathcal{U} }CF_{p^{\prime}}(C_{G}(u);\mathbb{K}).\]
In particular, to show \(\oplus_{u}d_{G}^{u}\) is a \(\mathbb{K}\)-isomorphism it is enough to show that it is injective.
Let \(\chi\in\ker(\oplus_{u}d_{G}^{u})\). Then \(d_{G}^{u}(\chi)=0\) for all \(u\in\mathcal{U}\). In particular, \(\chi(us)=d_{G}^{u}(\chi)(s)=0\) for all \(u\in\mathcal{U}\) and \(s\in\mathcal{S}_{u}\). But every element of \(G\) is conjugate to an element of the form \(us\) for some \(u\in\mathcal{U}\) and \(s\in\mathcal{S}_{u}\). Therefore \(\chi=0\), which proves that \(\oplus_{u}d_{G}^{u}\) is a \(\mathbb{K}\)-isomorphism.
Finally, if \(\chi,\psi\in CF(G;\mathbb{K})\) then
\[\sum_{u\in\mathcal{U}}(d^{u}_{G}(\chi),d^{u^{-1}}_{G}(\psi))_{C_{G}(u)} =\sum_{u\in\mathcal{U}}\frac{1}{|C_{G}(u)|}\sum_{s\in C_{G}(u)}d^{u }_{G}(\chi)(s)d^{u^{-1}}_{G}(\psi)(s^{-1})\] \[=\sum_{u\in\mathcal{U}}\frac{1}{|C_{G}(u)|}\sum_{s\in C_{G}(u)_{p ^{\prime}}}\chi(us)\psi(u^{-1}s^{-1})\] \[=\sum_{u\in\mathcal{U}}\frac{1}{|C_{G}(u)|}\sum_{s\in\mathcal{S}_ {u}}\frac{|C_{G}(u)|}{|C_{C_{G}(u)}(s)|}\chi(us)\psi((us)^{-1})\] \[=\sum_{u\in\mathcal{U}}\sum_{s\in\mathcal{S}_{u}}\frac{1}{|C_{G}( us)|}\chi(us)\psi((us)^{-1})\] \[=\sum_{g\in\mathcal{G}}\frac{1}{|C_{G}(g)|}\chi(g)\psi(g^{-1})\] \[=\frac{1}{|G|}\sum_{g\in G}\chi(g)\psi(g^{-1})\] \[=(\chi,\psi)_{G}.\]
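To illustrate the dimension count in the proof, take \(G=S_{3}\) and \(p=3\), with \(\mathcal{U}=\{1,(123)\}\). The space \(CF_{p^{\prime}}(C_{G}(1);\mathbb{K})=CF_{p^{\prime}}(S_{3};\mathbb{K})\) is \(2\)-dimensional, supported on the classes of \(1\) and \((12)\), while \(C_{G}((123))=\langle(123)\rangle\) has \(1\) as its only \(p^{\prime}\)-element, so \(CF_{p^{\prime}}(C_{G}((123));\mathbb{K})\) is \(1\)-dimensional. Thus
\[\dim_{\mathbb{K}}CF(S_{3};\mathbb{K})=3=2+1=\dim_{\mathbb{K}}CF_{p^{\prime}}(S_{3};\mathbb{K})+\dim_{\mathbb{K}}CF_{p^{\prime}}(\langle(123)\rangle;\mathbb{K}),\]
as the proposition requires.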
We recall the definition of the _generalized decomposition numbers_, following Radha Kessar's treatment in [7, IV.5.1]. Let \(u\in G\) be a \(p\)-element and let \(\chi\in\operatorname{Irr}_{\mathbb{K}}(G)\). Write
\[\operatorname{Res}^{G}_{C_{G}(u)}\chi=\sum_{\zeta\in\operatorname{Irr}_{ \mathbb{K}}(C_{G}(u))}n_{\chi,\zeta}\zeta\qquad n_{\chi,\zeta}\in\mathbb{N}_{0}.\]
Since \(u\in Z(C_{G}(u))\), if \(\zeta\in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u))\) then there exists a root of unity \(\lambda_{u,\zeta}\in\mathbb{K}\) of \(p\)-power order such that
\[\zeta(uy)=\lambda_{u,\zeta}\zeta(y)\qquad\text{for all $y\in C_{G}(u)$.}\]
For each \(\zeta\in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u))\) write
\[d^{1}_{C_{G}(u)}(\zeta)=\sum_{\tau\in\operatorname{IBr}_{F}(C_{G}(u))}d^{(u)} _{\zeta,\tau}\tau\qquad d^{(u)}_{\zeta,\tau}\in\mathbb{N}_{0}.\]
Note that the \(d^{(u)}_{\zeta,\tau}\) are just the usual decomposition numbers for the character \(\zeta\) of the group \(C_{G}(u)\). Now define, for each \(\tau\in\operatorname{IBr}_{F}(C_{G}(u))\), the _generalized decomposition number_
\[d^{u}_{\chi,\tau}=\sum_{\zeta\in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u))}n_{ \chi,\zeta}\lambda_{u,\zeta}d^{(u)}_{\zeta,\tau}.\]
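For instance, let \(G=\langle u\rangle\) be cyclic of order \(p\) and let \(\chi\in\operatorname{Irr}_{\mathbb{K}}(G)\). Then \(C_{G}(u)=G\), \(n_{\chi,\zeta}=\delta_{\chi,\zeta}\), \(\lambda_{u,\chi}=\chi(u)\), and \(\operatorname{IBr}_{F}(G)=\{\tau\}\) consists of the trivial Brauer character alone, with \(d^{(u)}_{\zeta,\tau}=1\) for every \(\zeta\). Hence
\[d^{u}_{\chi,\tau}=\chi(u),\]
a root of unity of \(p\)-power order rather than a rational integer, in contrast with the ordinary decomposition numbers.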
**Proposition 4.2**.: _Let \(u\in G\) be a \(p\)-element and let \(\chi\in\operatorname{Irr}_{\mathbb{K}}(G)\). Keep the notation set above. Then_
\[d^{u}_{G}(\chi)=\sum_{\tau\in\operatorname{IBr}_{F}(C_{G}(u))}d^{u}_{\chi,\tau}\tau.\]
_In particular, the matrix of \(d^{u}_{G}:CF(G;\mathbb{K})\to CF_{p^{\prime}}(C_{G}(u);\mathbb{K})\) with respect to the bases \(\operatorname{Irr}_{\mathbb{K}}(G)\) of \(CF(G;\mathbb{K})\) and \(\operatorname{IBr}_{F}(C_{G}(u))\) of \(CF_{p^{\prime}}(C_{G}(u);\mathbb{K})\) is the matrix whose entry in row \(\tau\), column \(\chi\) is \(d^{u}_{\chi,\tau}\)._
Proof.: Let \(s\in C_{G}(u)\). We must show that \(d^{u}_{G}(\chi)(s)=\sum_{\tau}d^{u}_{\chi,\tau}\tau(s)\). This is clear if \(s\notin C_{G}(u)_{p^{\prime}}\), so assume that \(s\in C_{G}(u)_{p^{\prime}}\). Then
\[\chi(us) =\operatorname{Res}^{G}_{C_{G}(u)}(\chi)(us)=\sum_{\zeta\in \operatorname{Irr}_{\mathbb{K}}(C_{G}(u))}n_{\chi,\zeta}\zeta(us)\] \[=\sum_{\zeta\in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u))}n_{ \chi,\zeta}\lambda_{u,\zeta}\zeta(s)\] \[=\sum_{\zeta\in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u))}n_{ \chi,\zeta}\lambda_{u,\zeta}\left(\sum_{\tau\in\operatorname{IBr}_{F}(C_{G}( u))}d^{(u)}_{\zeta,\tau}\tau(s)\right)\] \[=\sum_{\tau\in\operatorname{IBr}_{F}(C_{G}(u))}\left(\sum_{\zeta \in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u))}n_{\chi,\zeta}\lambda_{u,\zeta}d^ {(u)}_{\zeta,\tau}\right)\tau(s)\] \[=\sum_{\tau\in\operatorname{IBr}_{F}(C_{G}(u))}d^{u}_{\chi,\tau} \tau(s).\]
So \(d^{u}_{G}(\chi)(s)=\sum_{\tau}d^{u}_{\chi,\tau}\tau(s)\), and the proof is complete.
If \(e\) is a central idempotent of \(FG\) write \(\operatorname{IBr}_{F}(G,e)=\operatorname{IBr}(FGe)\) for the set of Brauer characters of irreducible \(FGe\)-modules. If \(e\) is a central idempotent of \(\mathcal{O}G\) write \(\operatorname{PrInd}_{\mathcal{O}}(G,e)=\operatorname{PrInd}(\mathcal{O}Ge)\) for the set of characters of projective indecomposable \(\mathcal{O}Ge\)-modules.
Let \(e\in Z(\mathcal{O}G)\) be an idempotent. Set \(CF_{p^{\prime}}(G,e;\mathbb{K}):=e\cdot CF_{p^{\prime}}(G;\mathbb{K})\). Note that \(\operatorname{PrInd}_{\mathcal{O}}(G,e)\) is a basis of \(CF_{p^{\prime}}(G,e;\mathbb{K})\). It follows that
\[CF_{p^{\prime}}(G,e;\mathbb{K})=CF_{p^{\prime}}(G;\mathbb{K})\cap CF(G,e; \mathbb{K}),\]
so \(CF_{p^{\prime}}(G,e;\mathbb{K})\) is equal to the subspace of \(CF(G;\mathbb{K})\) formed by the class functions \(\chi:G\to\mathbb{K}\) that satisfy \(\chi(g)=0\) if \(g\notin G_{p^{\prime}}\) and \(\chi(ge)=\chi(g)\) for all \(g\in G\). Since the Cartan matrix of \(FG\) is invertible, \(\operatorname{IBr}_{F}(G,\overline{e})\) is also a basis
of \(CF_{p^{\prime}}(G,e;\mathbb{K})\). Note that if \(I\) is a set of pairwise orthogonal idempotents of \(Z(\mathcal{O}G)\) whose sum equals \(1\) then
\[CF_{p^{\prime}}(G;\mathbb{K})=\bigoplus_{e\in I}CF_{p^{\prime}}(G,e;\mathbb{K}).\]
If \((u,e)\in\mathcal{B}\mathcal{E}_{\mathcal{O}}(G)\) then the _generalized decomposition map_ (_associated to \((u,e)\)_) is the \(\mathbb{K}\)-linear map
\[d_{G}^{u,e}:CF(G;\mathbb{K}) \to CF_{p^{\prime}}(C_{G}(u),e;\mathbb{K})\] \[\chi \mapsto e\cdot d_{G}^{u}(\chi).\]
In other words, \(d_{G}^{u,e}\) is the composition
\[CF(G;\mathbb{K})\xrightarrow{\ d_{G}^{u}\ }CF_{p^{\prime}}(C_{G}(u);\mathbb{K})\xrightarrow{\ e\cdot(-)\ }CF_{p^{\prime}}(C_{G}(u),e;\mathbb{K}).\]
Notice that if \(u\in G\) is a \(p\)-element then \(d_{G}^{u}=\sum_{e\in\operatorname{bli}(\mathcal{O}C_{G}(u))}d_{G}^{u,e}\).
**Proposition 4.3**.: _Let \((u,e)\in\mathcal{B}\mathcal{E}_{\mathcal{O}}(G)\) and let \(\chi\in CF(G;\mathbb{K})\). If \(s\in C_{G}(u)\) then_
\[d_{G}^{u,e}(\chi)(s)=\begin{cases}\chi(use)&\text{if }s\in C_{G}(u)_{p^{ \prime}}\\ 0&\text{if }s\notin C_{G}(u)_{p^{\prime}}.\end{cases}\]
Proof.: Since the map \(d_{G}^{u,e}\) is \(\mathbb{K}\)-linear we may assume without loss that \(\chi\in\operatorname{Irr}_{\mathbb{K}}(G)\). Let \(s\in C_{G}(u)\). Since \(d_{G}^{u,e}(\chi)\in CF_{p^{\prime}}(C_{G}(u);\mathbb{K})\) we know that \(d_{G}^{u,e}(\chi)(s)=0\) if \(s\notin C_{G}(u)_{p^{\prime}}\). So we assume that \(s\in C_{G}(u)_{p^{\prime}}\). For \(\zeta\in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u))\), \(\tau\in\operatorname{IBr}_{F}(C_{G}(u))\) let \(n_{\chi,\zeta}\), \(\lambda_{u,\zeta}\), and \(d_{\zeta,\tau}^{(u)}\) be as in the definition of the generalized decomposition numbers \(d_{\chi,\tau}^{u}\). By Proposition 4.2,
\[d_{G}^{u}(\chi)=\sum_{\tau\in\operatorname{IBr}_{F}(C_{G}(u))}d_{\chi,\tau}^{ u}\tau.\]
Now if \(\tau\in\operatorname{IBr}_{F}(C_{G}(u),\overline{e})\) then \(\tau\in CF_{p^{\prime}}(C_{G}(u),e;\mathbb{K})\), so \(e\cdot\tau=\tau\). If \(\tau\notin\operatorname{IBr}_{F}(C_{G}(u),\overline{e})\) then \(e\cdot\tau=0\). It follows that
\[d_{G}^{u,e}(\chi)=e\cdot d_{G}^{u}(\chi)=\sum_{\tau\in\operatorname{IBr}_{F}( C_{G}(u),\overline{e})}d_{\chi,\tau}^{u}\tau.\]
Now since \(use\in\mathcal{O}C_{G}(u)\) we have
\[\chi(use) =(\operatorname{Res}_{C_{G}(u)}^{G}\chi)(use)\] \[=\sum_{\zeta\in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u))}n_{\chi, \zeta}\zeta(use)\] \[=\sum_{\zeta\in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u),e)}n_{ \chi,\zeta}\zeta(us)\] \[=\sum_{\zeta\in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u),e)}n_{ \chi,\zeta}\lambda_{u,\zeta}\zeta(s)\] \[=\sum_{\zeta\in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u),e)}n_{ \chi,\zeta}\lambda_{u,\zeta}\left(\sum_{\tau\in\operatorname{IBr}_{F}(C_{G}(u) )}d_{\zeta,\tau}^{(u)}\tau(s)\right)\] \[=\sum_{\tau\in\operatorname{IBr}_{F}(C_{G}(u))}\left(\sum_{\zeta \in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u),e)}n_{\chi,\zeta}\lambda_{u, \zeta}d_{\zeta,\tau}^{(u)}\right)\tau(s)\] \[=\sum_{\tau\in\operatorname{IBr}_{F}(C_{G}(u),\overline{e})} \left(\sum_{\zeta\in\operatorname{Irr}_{\mathbb{K}}(C_{G}(u),e)}n_{\chi,\zeta} \lambda_{u,\zeta}d_{\zeta,\tau}^{(u)}\right)\tau(s)\] \[=\sum_{\tau\in\operatorname{IBr}_{F}(C_{G}(u),\overline{e})}d_{ \chi,\tau}^{u}\tau(s)\] \[=d_{G}^{u,e}(\chi)(s).\]
In the above, we have used the fact that the usual decomposition numbers \(d_{\zeta,\tau}^{(u)}\) are zero unless \(\zeta\) and \(\tau\) belong to the same block of \(C_{G}(u)\). This completes the proof.
**Corollary 4.4**.: _Let \(H\leq G\), \((u,e)\in\mathcal{BE}_{\mathcal{O}}(H)\), and \(\chi\in CF(H;\mathbb{K})\). Then for any \(g\in G\) one has_
\[{}^{g}d_{H}^{u,e}(\chi)=d_{{}^{g}H}^{{}^{g}u,\,{}^{g}e}({}^{g}\chi).\]
Proposition 4.3 also implies that \(d_{G}^{u,e}(\chi^{\circ})=d_{G}^{u^{-1},e^{*}}(\chi)^{\circ}\) for any \((u,e)\in\mathcal{BE}_{\mathcal{O}}(G)\) and any \(\chi\in CF(G;\mathbb{K})\).
Note that the matrix of \(d_{G}^{u,e}:CF(G;\mathbb{K})\to CF_{p^{\prime}}(C_{G}(u),e;\mathbb{K})\) with respect to the bases \(\operatorname{Irr}_{\mathbb{K}}(G)\) and \(\operatorname{IBr}_{F}(C_{G}(u),\overline{e})\) has entry \(d_{\chi,\tau}^{u}\) in row \(\tau\), column \(\chi\). Brauer's 2nd Main Theorem states that certain columns of this matrix must be zero.
**Theorem 4.5**.: _(Brauer's 2nd Main Theorem) Let \((u,e)\in\mathcal{BE}_{\mathcal{O}}(G)\) and let \(B\in\operatorname{Bl}(\mathcal{O}G)\). If \((u,e)\notin\mathcal{BE}_{\mathcal{O}}(B)\) then \(d_{\chi,\tau}^{u}=0\) for any \(\chi\in\operatorname{Irr}_{\mathbb{K}}(G,B)\) and any \(\tau\in\operatorname{IBr}_{F}(C_{G}(u),\overline{e})\)._
Let \((u,e)\in\mathcal{BE}_{\mathcal{O}}(G)\). Then there exists a unique block \(B\in\operatorname{Bl}(\mathcal{O}G)\) such that \((u,e)\in\mathcal{BE}_{\mathcal{O}}(B)\). If \(A\) is any block of \(\mathcal{O}G\) different from \(B\) and \(\chi\in\operatorname{Irr}_{\mathbb{K}}(G,A)\) then Brauer's 2nd Main Theorem implies that \(d_{G}^{u,e}(\chi)=0\). It follows that the map \(d_{G}^{u,e}\) is completely determined by its restriction to the subspace \(CF(G,B;\mathbb{K})\) spanned by the irreducible characters of \(\mathbb{K}G\) that belong to \(B\).
**Proposition 4.6**.: _Let \(B\in\operatorname{Bl}(\mathcal{O}G)\) and let \(\mathcal{U}\) be a set of representatives for the \(G\)-conjugacy classes of \(\mathcal{BE}_{\mathcal{O}}(B)\). Then_
\[\bigoplus_{(u,e)\in\mathcal{U}}d_{G}^{u,e}:CF(G,B;\mathbb{K})\to\bigoplus_{(u,e )\in\mathcal{U}}CF_{p^{\prime}}(C_{G}(u),e;\mathbb{K})\]
_is an isomorphism of \(\mathbb{K}\)-vector spaces. Moreover, for any \(\chi,\psi\in CF(G,B;\mathbb{K})\) one has_
\[(\chi,\psi)_{G}=\sum_{(u,e)\in\mathcal{U}}(d_{G}^{u,e}(\chi),d_{G}^{u^{-1},e}( \psi))_{C_{G}(u)}.\]
Proof.: Let \(\chi\in CF(G,B;\mathbb{K})\) be such that \(d_{G}^{u,e}(\chi)=0\) for all \((u,e)\in\mathcal{U}\). I claim that \(d_{G}^{v}(\chi)=0\) for all \(p\)-elements \(v\in G\). Suppose not. Let \(v\in G\) be a \(p\)-element such that \(d_{G}^{v}(\chi)\neq 0\). Then there exists a block \(f\in\operatorname{bli}(\mathcal{O}C_{G}(v))\) such that \(f\cdot d_{G}^{v}(\chi)\neq 0\), or in other words, \(d_{G}^{v,f}(\chi)\neq 0\). Now \(\chi=\sum_{\theta\in\operatorname{Irr}_{\mathbb{K}}(B)}c_{\theta}\theta\) for some scalars \(c_{\theta}\in\mathbb{K}\), hence
\[0\neq d_{G}^{v,f}(\chi)=\sum_{\theta\in\operatorname{Irr}_{\mathbb{K}}(B)}c_{ \theta}d_{G}^{v,f}(\theta).\]
So \(d_{G}^{v,f}(\theta)\neq 0\) for some \(\theta\in\operatorname{Irr}_{\mathbb{K}}(B)\). Brauer's 2nd Main Theorem then implies that \((v,f)\in\mathcal{BE}_{\mathcal{O}}(B)\). Let \((u,e)\in\mathcal{U}\) and \(g\in G\) be such that \({}^{g}(v,f)=(u,e)\). Then one can verify that \({}^{g}d_{G}^{v,f}(\chi)=d_{G}^{u,e}(\chi)\). But by assumption \(d_{G}^{u,e}(\chi)=0\), so \(d_{G}^{v,f}(\chi)=0\), a contradiction. This proves the claim: we have \(d_{G}^{v}(\chi)=0\) for all \(p\)-elements \(v\in G\). Proposition 4.1 then gives that \(\chi=0\). Thus we have shown that the map \(\oplus_{(u,e)\in\mathcal{U}}d_{G}^{u,e}\) is injective, and it remains to see that the map is also surjective.
Let \(\sum_{(u,e)\in\mathcal{U}}\psi_{(u,e)}\in\oplus_{(u,e)\in\mathcal{U}}CF_{p^{ \prime}}(C_{G}(u),e;\mathbb{K})\). So \(\psi_{(u,e)}\in CF_{p^{\prime}}(C_{G}(u),e;\mathbb{K})\) for each \((u,e)\in\mathcal{U}\). If \((v,f)\in\mathcal{BE}_{\mathcal{O}}(B)\setminus\mathcal{U}\) then there exists an element \(g\in G\) such that \({}^{g}(v,f)\in\mathcal{U}\). Set \(\psi_{(v,f)}={}^{g^{-1}}\psi_{{}^{g}(v,f)}\). Note that \(\psi_{(v,f)}\in CF_{p^{\prime}}(C_{G}(v),f;\mathbb{K})\) and that the definition of \(\psi_{(v,f)}\) does not depend on the choice of \(g\). Now let \(\mathcal{V}\) be a set of representatives for the \(G\)-conjugacy classes of \(p\)-elements of \(G\). For each \(v\in\mathcal{V}\) set
\[\varphi_{v}:=\sum_{(v,f)\in\mathcal{BE}_{\mathcal{O}}(B)}\psi_{(v,f)}\in CF_{p ^{\prime}}(C_{G}(v);\mathbb{K}).\]
The sum above is taken over all \(B\)-Brauer elements whose first component is equal to \(v\). If there are no such Brauer elements then \(\varphi_{v}=0\). By Proposition 4.1 there exists a class function \(\chi\in CF(G;\mathbb{K})\) such that \(d_{G}^{v}(\chi)=\varphi_{v}\) for each \(v\in\mathcal{V}\). I claim that \(d_{G}^{u,e}(\chi)=\psi_{(u,e)}\) for all \((u,e)\in\mathcal{U}\). Let \((u,e)\in\mathcal{U}\). Since \(u\in G\) is a \(p\)-element of \(G\) there exists an element \(g\in G\) such that \(v:={}^{g}u\in\mathcal{V}\). Set \(f:={}^{g}e\), so \({}^{g}(u,e)=(v,f)\). Note that \((v,f)\in\mathcal{B}\mathcal{E}_{\mathcal{O}}(B)\). Now
\[d_{G}^{v,f}(\chi)=f\cdot d_{G}^{v}(\chi)=f\cdot\varphi_{v}=\psi_{(v,f)}={}^{g}\psi_{{}^{g^{-1}}(v,f)}={}^{g}\psi_{(u,e)}\]
and therefore \(d_{G}^{u,e}(\chi)={}^{g^{-1}}d_{G}^{v,f}(\chi)=\psi_{(u,e)}\). This proves the claim. Since each \((u,e)\in\mathcal{U}\) belongs to \(B\) we have \(d_{G}^{u,e}(\chi)=d_{G}^{u,e}(e_{B}\cdot\chi)\) for all \((u,e)\in\mathcal{U}\). Now \(e_{B}\cdot\chi\in CF(G,B;\mathbb{K})\) and we have
\[\left(\bigoplus_{(u,e)\in\mathcal{U}}d_{G}^{u,e}\right)(e_{B}\cdot\chi)= \sum_{(u,e)\in\mathcal{U}}\psi_{(u,e)}.\]
We have shown that \(\oplus_{(u,e)\in\mathcal{U}}d_{G}^{u,e}\) is surjective, hence is an isomorphism.
Now let \(\chi,\psi\in CF(G,B;\mathbb{K})\). Continue to let \(\mathcal{V}\) denote a set of representatives for the \(G\)-conjugacy classes of \(p\)-elements of \(G\). We compute:
\[\sum_{(u,e)\in\mathcal{U}}(d_{G}^{u,e}(\chi),d_{G}^{u^{-1},e}(\psi))_{C_{G}(u)} =\sum_{(u,e)\in\mathcal{U}}\frac{1}{|G:\operatorname{Stab}_{G}(u,e)|}\sum_{(v,f)\in\operatorname{Orb}_{G}(u,e)}(d_{G}^{v,f}(\chi),d_{G}^{v^{-1},f}(\psi))_{C_{G}(v)}\] \[=\sum_{(u,e)\in\mathcal{U}}\frac{|C_{G}(u)|}{|G|}\sum_{(v,f)\in\operatorname{Orb}_{G}(u,e)}(d_{G}^{v,f}(\chi),d_{G}^{v^{-1},f}(\psi))_{C_{G}(v)}\] \[=\sum_{(v,f)\in\mathcal{BE}_{\mathcal{O}}(B)}\frac{|C_{G}(v)|}{|G|}(d_{G}^{v,f}(\chi),d_{G}^{v^{-1},f}(\psi))_{C_{G}(v)}\] \[=\sum_{(v,f)\in\mathcal{BE}_{\mathcal{O}}(G)}\frac{|C_{G}(v)|}{|G|}(d_{G}^{v,f}(\chi),d_{G}^{v^{-1},f}(\psi))_{C_{G}(v)}\] \[=\sum_{\begin{subarray}{c}v\in G\\ \text{a }p\text{-element}\end{subarray}}\frac{|C_{G}(v)|}{|G|}\sum_{f\in\operatorname{bli}(\mathcal{O}C_{G}(v))}(f\cdot d_{G}^{v}(\chi),f\cdot d_{G}^{v^{-1}}(\psi))_{C_{G}(v)}\] \[=\sum_{\begin{subarray}{c}v\in G\\ \text{a }p\text{-element}\end{subarray}}\frac{|C_{G}(v)|}{|G|}(d_{G}^{v}(\chi),d_{G}^{v^{-1}}(\psi))_{C_{G}(v)}\] \[=\sum_{v\in\mathcal{V}}(d_{G}^{v}(\chi),d_{G}^{v^{-1}}(\psi))_{C_{G}(v)}\] \[=(\chi,\psi)_{G}.\]
The fourth equality above follows from Brauer's 2nd Main Theorem (Theorem 4.5) and the final equality follows from Proposition 4.1. The proof is complete.
Let \(B\in\operatorname{Bl}(\mathcal{O}G)\). If \(\chi\in CF(G,B;\mathbb{K})\) then Propositions 4.3 and 4.6 imply that \(\chi\) is completely determined by the values \(\chi(use)\) where \(u\in G\) is a \(p\)-element, \(s\in C_{G}(u)_{p^{\prime}}\), and \(e\in\operatorname{bli}(\mathcal{O}C_{G}(u))\) such that \((u,e)\in\mathcal{B}\mathcal{E}_{\mathcal{O}}(B)\).
Now let \(G\) and \(H\) be finite groups and assume that \((\mathbb{K},\mathcal{O},F)\) is large enough for \(G\times H\). Let \(e\in Z(\mathcal{O}G)\) and \(f\in Z(\mathcal{O}H)\) be central idempotents. Then \(e\otimes f^{*}\) is a central idempotent of \(\mathcal{O}[G\times H]\). Let \(\mu\in CF_{p^{\prime}}(G\times H,e\otimes f^{*};\mathbb{K})\). Then for any \(\nu\in CF(H,f;\mathbb{K})\) we have \(\mu\otimes_{H}\nu\in CF_{p^{\prime}}(G,e;\mathbb{K})\). Indeed, \(\mu\otimes_{H}\nu\) is certainly an element of \(CF(G,e;\mathbb{K})\) and if \(g\in G\setminus G_{p^{\prime}}\) then \((g,h)\notin(G\times H)_{p^{\prime}}\) for any \(h\in H\) hence
\[(\mu\otimes_{H}\nu)(g)=\frac{1}{|H|}\sum_{h\in H}\mu(g,h)\nu(h)=0.\]
**Proposition 4.7**.: _Let \(G\) and \(H\) be finite groups, let \(A\in\operatorname{Bl}(\mathcal{O}G)\), and let \(B\in\operatorname{Bl}(\mathcal{O}H)\). Let \(\mathcal{U}\) denote a set of representatives for the \(G\)-conjugacy classes of \(\mathcal{B}\mathcal{E}_{\mathcal{O}}(A)\) and let \(\mathcal{V}\) denote a set of representatives for the \(H\)-conjugacy classes of \(\mathcal{B}\mathcal{E}_{\mathcal{O}}(B)\). Let \(\mu\in CF(G\times H,A\otimes B^{*};\mathbb{K})\). Then the \(\mathbb{K}\)-linear map_
\[\bigoplus_{(v,f)\in\mathcal{V}}CF_{p^{\prime}}(C_{H}(v),f; \mathbb{K}) \to\bigoplus_{(u,e)\in\mathcal{U}}CF_{p^{\prime}}(C_{G}(u),e; \mathbb{K})\] \[\sum_{(v,f)\in\mathcal{V}}\nu_{(v,f)} \mapsto\sum_{(u,e)\in\mathcal{U}}\sum_{(v,f)\in\mathcal{V}}d_{G \times H}^{(u,v),e\otimes f^{*}}(\mu)\otimes_{C_{H}(v)}\nu_{(v,f)}\]
_is the unique map making the diagram below commute:_
\[\begin{array}{ccc}CF(H,B;\mathbb{K})&\xrightarrow{\ \mu\otimes_{H}-\ }&CF(G,A;\mathbb{K})\\ \bigoplus_{(v,f)\in\mathcal{V}}d_{H}^{v,f}\Big\downarrow&&\Big\downarrow\bigoplus_{(u,e)\in\mathcal{U}}d_{G}^{u,e}\\ \bigoplus_{(v,f)\in\mathcal{V}}CF_{p^{\prime}}(C_{H}(v),f;\mathbb{K})&\longrightarrow&\bigoplus_{(u,e)\in\mathcal{U}}CF_{p^{\prime}}(C_{G}(u),e;\mathbb{K})\end{array}\]
Proof.: Note that if \((u,e)\in\mathcal{U}\) and \((v,f)\in\mathcal{V}\) then \(((u,v),e\otimes f^{*})\) is a Brauer element of \(\mathcal{O}[G\times H]\). Therefore \(d_{G\times H}^{(u,v),e\otimes f^{*}}(\mu)\in CF_{p^{\prime}}(C_{G}(u)\times C _{H}(v),e\otimes f^{*};\mathbb{K})\). In particular, \(d_{G\times H}^{(u,v),e\otimes f^{*}}(\mu)\otimes_{C_{H}(v)}-\) defines a \(\mathbb{K}\)-linear map \(CF_{p^{\prime}}(C_{H}(v),f;\mathbb{K})\to CF_{p^{\prime}}(C_{G}(u),e; \mathbb{K})\). Therefore the map defined in the statement makes sense.
Let \(\nu\in CF(H,B;\mathbb{K})\). To see that the diagram commutes we must show that
\[\sum_{(u,e)\in\mathcal{U}}d_{G}^{u,e}(\mu\otimes_{H}\nu)=\sum_{(u,e)\in \mathcal{U}}\sum_{(v,f)\in\mathcal{V}}d_{G\times H}^{(u,v),e\otimes f^{*}}(\mu )\otimes_{C_{H}(v)}d_{H}^{v,f}(\nu).\]
To accomplish this we will check that
\[d_{G}^{u,e}(\mu\otimes_{H}\nu)=\sum_{(v,f)\in\mathcal{V}}d_{G\times H}^{(u,v),e \otimes f^{*}}(\mu)\otimes_{C_{H}(v)}d_{H}^{v,f}(\nu)\]
for any fixed \((u,e)\in\mathcal{U}\). Let \(s\in C_{G}(u)_{p^{\prime}}\). Note first that for any \((v,f)\in\mathcal{V}\) we have
\[d_{H}^{v,f^{*}}(\mu(use,\cdot))=d_{G\times H}^{(u,v),e\otimes f^{*}}(\mu)(s, \cdot).\]
We compute:
\[d_{G}^{u,e}(\mu\otimes_{H}\nu)(s) =(\mu\otimes_{H}\nu)(use)\] \[=(\nu,\mu(use,\cdot)^{\circ})_{H}\] \[=\sum_{(v,f)\in\mathcal{V}}(d_{H}^{v,f}(\nu),d_{H}^{v^{-1},f}( \mu(use,\cdot)^{\circ}))_{C_{H}(v)}\] \[=\sum_{(v,f)\in\mathcal{V}}(d_{H}^{v,f}(\nu),d_{H}^{v,f^{*}}(\mu (use,\cdot))^{\circ})_{C_{H}(v)}\] \[=\sum_{(v,f)\in\mathcal{V}}(d_{H}^{v,f}(\nu),d_{G\times H}^{(u,v ),e\otimes f^{*}}(\mu)(s,\cdot)^{\circ})_{C_{H}(v)}\] \[=\sum_{(v,f)\in\mathcal{V}}(d_{G\times H}^{(u,v),e\otimes f^{*}}( \mu)\otimes_{C_{H}(v)}d_{H}^{v,f}(\nu))(s).\]
The first equality above holds by Proposition 4.3 and the third by Proposition 4.6. Since \(s\) was an arbitrary element of \(C_{G}(u)_{p^{\prime}}\) we find that
\[d_{G}^{u,e}(\mu\otimes_{H}\nu)=\sum_{(v,f)\in\mathcal{V}}d_{G\times H}^{(u,v), e\otimes f^{*}}(\mu)\otimes_{C_{H}(v)}d_{H}^{v,f}(\nu).\]
It follows that
\[\sum_{(u,e)\in\mathcal{U}}d_{G}^{u,e}(\mu\otimes_{H}\nu)=\sum_{(u,e)\in \mathcal{U}}\sum_{(v,f)\in\mathcal{V}}d_{G\times H}^{(u,v),e\otimes f^{*}}( \mu)\otimes_{C_{H}(v)}d_{H}^{v,f}(\nu)\]
and since \(\nu\) was an arbitrary class function in \(CF(H,B;\mathbb{K})\) the diagram commutes.
The uniqueness of the map follows from the fact that the vertical arrows in the diagram are isomorphisms, thanks to Proposition 4.6.
## 5 Trivial source modules
Let \(G\) be a finite group and let \((\mathbb{K},\mathcal{O},F)\) be a \(p\)-modular system large enough for \(G\). Let \(R\in\{\mathcal{O},F\}\). In this section we establish some results about trivial
source \(RG\)-modules that will be needed in the sequel. Recall that an \(RG\)-module \(M\) is called a _trivial source_ or \(p\)-_permutation module_ if \(\operatorname{Res}_{P}^{G}M\) is a permutation \(RP\)-module for all \(p\)-subgroups \(P\) of \(G\). We write \({}_{RG}\mathbf{triv}\) for the category of (finitely generated) trivial source \(RG\)-modules and we let \(T_{R}(G)=T(RG)\) denote the Grothendieck ring of \({}_{RG}\mathbf{triv}\).
Recall that if \(N\) is a trivial source \(FG\)-module then there exists a unique (up to isomorphism) trivial source \(\mathcal{O}G\)-module \(M\) such that \(F\otimes_{\mathcal{O}}M\cong N\).
Let \(M\) be a trivial source \(\mathcal{O}G\)-module and let \(P\) be a \(p\)-subgroup of \(G\). We will write \(\overline{M}(P)\in{}_{FN_{G}(P)}\mathbf{triv}\) for the usual Brauer construction applied to \(M\) and we will write \(M(P)\in{}_{\mathcal{O}N_{G}(P)}\mathbf{triv}\) for a lift of \(\overline{M}(P)\). Of course, \(M(P)\) is only well-defined up to isomorphism. If \((P,e)\in\mathcal{BP}_{F}(G)\) we write
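For example, if \(\Omega\) is a finite \(G\)-set then the permutation module \(M=\mathcal{O}\Omega\) is a trivial source module, and the Brauer construction computes fixed points: for every \(p\)-subgroup \(P\leq G\) there is an isomorphism of \(FN_{G}(P)\)-modules
\[\overline{M}(P)\cong F\Omega^{P}.\]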
\[\overline{M}(P,e):=e\operatorname{Res}_{N_{G}(P,e)}^{N_{G}(P)}\overline{M}(P) \in{}_{FN_{G}(P,e)e}\mathbf{triv}\]
and if \((P,e)\in\mathcal{BP}_{\mathcal{O}}(G)\) then we write
\[M(P,e):=e\operatorname{Res}_{N_{G}(P,e)}^{N_{G}(P)}M(P)\in{}_{\mathcal{O}N_{ G}(P,e)e}\mathbf{triv}.\]
Note that \(M(P,e)\) is only well-defined up to isomorphism.
**Lemma 5.1**.: _Let \(B\in\operatorname{Bl}(\mathcal{O}G)\), \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\), and set \(I_{(P,e)}=N_{G}(P,e)\). Let \(M\in{}_{B}\mathbf{triv}\) and let \(Q\) be a \(p\)-subgroup of \(I_{(P,e)}\). Then there is an isomorphism of \(\mathcal{O}N_{I_{(P,e)}}(Q)\)-modules_
\[(M(P,e))(Q)\cong e(Q)\cdot\operatorname{Res}_{N_{I_{(P,e)}}(Q)}^{N_{G}(PQ)}(M (PQ))\]
_where \(e(Q)\) denotes the unique lift of \(\operatorname{br}_{Q}^{I_{(P,e)}}(e)\) to a central idempotent of \(\mathcal{O}N_{I_{(P,e)}}(Q)\)._
Proof.: Note that \((M(P,e))(Q)\) is an \(\mathcal{O}\)-lift of \(\overline{(\overline{M}(P,\overline{e}))}(Q)\) and that \(e(Q)\cdot\operatorname{Res}_{N_{I_{(P,e)}}(Q)}^{N_{G}(PQ)}(M(PQ))\) is an \(\mathcal{O}\)-lift of \(\operatorname{br}_{Q}^{I_{(P,e)}}(\overline{e})\cdot\operatorname{Res}_{N_{I_ {(P,e)}}(Q)}^{N_{G}(PQ)}(\overline{M}(PQ))\). Thus we only need to show that there is an isomorphism of \(FN_{I_{(P,e)}}(Q)\)-modules
\[\overline{(\overline{M}(P,\overline{e}))}(Q)\cong\operatorname{br}_{Q}^{I_{( P,e)}}(\overline{e})\cdot\operatorname{Res}_{N_{I_{(P,e)}}(Q)}^{N_{G}(PQ)}( \overline{M}(PQ)).\]
First note that there is an isomorphism of \(FN_{I_{(P,e)}}(Q)\)-modules
\[\overline{(\overline{M}(P,\overline{e}))}(Q)\cong\operatorname{br}_{Q}^{I_{(P,e)}}(\overline{e})\cdot\overline{(\operatorname{Res}^{N_{G}(P)}_{I_{(P,e)}}(\overline{M}(P)))}(Q)\]
by [2, Lemma 3.7] (with the group \(G\) of the Lemma replaced with \(I_{(P,e)}\), \(M\) replaced with \(\operatorname{Res}^{N_{G}(P)}_{I_{(P,e)}}(\overline{M}(P))\), \(P\) replaced with \(Q\), \(i\) with \(\overline{e}\), and \(H\) with \(I_{(P,e)}\)). Now observe that we have an isomorphism of \(FN_{I_{(P,e)}}(Q)\)-modules
\[\overline{(\operatorname{Res}^{N_{G}(P)}_{I_{(P,e)}}(\overline{M}(P)))}(Q) =\overline{(\operatorname{Res}^{I_{(P,e)}}_{N_{I_{(P,e)}}(Q)}\operatorname{Res}^{N_{G}(P)}_{I_{(P,e)}}(\overline{M}(P)))}(Q)\] \[=\overline{(\operatorname{Res}^{N_{G}(P)}_{N_{I_{(P,e)}}(Q)}(\overline{M}(P)))}(Q)\] \[=\operatorname{Res}^{N_{G}(P)\cap N_{G}(Q)}_{N_{I_{(P,e)}}(Q)}(\overline{(\overline{M}(P))}(Q))\] \[\cong\operatorname{Res}^{N_{G}(P)\cap N_{G}(Q)}_{N_{I_{(P,e)}}(Q)}(\operatorname{Res}^{N_{G}(PQ)}_{N_{G}(P)\cap N_{G}(Q)}(\overline{M}(PQ)))\] \[=\operatorname{Res}^{N_{G}(PQ)}_{N_{I_{(P,e)}}(Q)}(\overline{M}(PQ)).\]
So the result follows.
**Lemma 5.2**.: _Let \(G\) be a finite group, \(B\in\operatorname{Bl}(\mathcal{O}G)\). Let \(M\in{}_{B}\mathbf{triv}\), \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\), and \((U,\epsilon)\in\mathcal{BP}_{\mathcal{O}}(C_{G}(P),e)\) such that \(U\) is abelian. Then \((PU,\epsilon)\) is a \(B\)-Brauer pair, \((P,e)\trianglelefteq(PU,\epsilon)\), and there is an isomorphism of \(\mathcal{O}C_{G}(PU)\)-modules_
\[\operatorname{Res}^{N_{C_{G}(P)}(U,\epsilon)}_{C_{G}(PU)}((\operatorname{Res} ^{N_{G}(P,e)}_{C_{G}(P)}M(P,e))(U,\epsilon))\cong\operatorname{Res}^{N_{G}(PU,\epsilon)}_{C_{G}(PU)}M(PU,\epsilon).\]
Proof.: Since \(C_{C_{G}(P)}(U)=C_{G}(PU)\) the idempotent \(\epsilon\) is a block of \(\mathcal{O}C_{G}(PU)\). In particular, \((PU,\epsilon)\in\mathcal{BP}_{\mathcal{O}}(G)\). Since \((U,\epsilon)\in\mathcal{BP}_{\mathcal{O}}(C_{G}(P),e)\) we have that \(\operatorname{br}^{C_{G}(P)}_{U}(e)\overline{\epsilon}=\overline{\epsilon}\). But \(\operatorname{br}^{C_{G}(P)}_{U}(e)=\operatorname{br}^{G}_{PU}(e)\), so \((P,e)\trianglelefteq(PU,\epsilon)\) and in particular \((PU,\epsilon)\) belongs to \(B\). Now by Lemma 5.1 there is an isomorphism of \(\mathcal{O}[N_{G}(P,e)\cap N_{G}(U)]\)-modules
\[(M(P,e))(U)\cong e(U)\cdot\operatorname{Res}^{N_{G}(PU)}_{N_{G}(P,e)\cap N_{ G}(U)}M(PU)\]
where \(e(U)\) is the unique lift of \(\operatorname{br}^{N_{G}(P,e)}_{U}(e)\) to a central idempotent of \(\mathcal{O}[N_{G}(P,e)\cap N_{G}(U)]\). Since \(\operatorname{br}^{N_{G}(P,e)}_{U}(e)=\operatorname{br}^{G}_{PU}(e)\) we have \(e(U)\cdot\epsilon=\epsilon\). By restricting to \(C_{G}(PU)\) and cutting with \(\epsilon\) we obtain an isomorphism of \(\mathcal{O}C_{G}(PU)\)-modules
\[\epsilon\cdot\operatorname{Res}^{N_{G}(P,e)\cap N_{G}(U)}_{C_{G}( PU)}((M(P,e))(U)) \cong\epsilon\operatorname{Res}^{N_{G}(PU)}_{C_{G}(PU)}M(PU)\] \[=\operatorname{Res}^{N_{G}(PU,\epsilon)}_{C_{G}(PU)}M(PU,\epsilon).\]
By applying Remark 3.2(b) of [2], noting that \(U\leq C_{G}(PU)\) since \(U\) is
assumed to be abelian, we compute that
\[\epsilon\cdot \operatorname{Res}_{C_{G}(PU)}^{N_{G}(P,e)\cap N_{G}(U)}((M(P,e))(U))\] \[=\epsilon\cdot((\operatorname{Res}_{C_{G}(PU)}^{N_{G}(P,e)}(M(P,e )))(U))\] \[=\epsilon\cdot((\operatorname{Res}_{C_{G}(PU)}^{C_{G}(P)} \operatorname{Res}_{C_{G}(P)}^{N_{G}(P,e)}M(P,e))(U))\] \[=\epsilon\cdot\operatorname{Res}_{C_{G}(PU)}^{N_{C_{G}(P)}(U)}(( \operatorname{Res}_{C_{G}(P)}^{N_{G}(P,e)}M(P,e))(U))\] \[=\epsilon\cdot\operatorname{Res}_{C_{G}(PU)}^{N_{C_{G}(P)}(U, \epsilon)}\operatorname{Res}_{N_{C_{G}(P)}(U,\epsilon)}^{N_{C_{G}(P)}(U)}(( \operatorname{Res}_{C_{G}(P)}^{N_{G}(P,e)}M(P,e))(U))\] \[=\operatorname{Res}_{C_{G}(PU)}^{N_{C_{G}(P)}(U,\epsilon)}(( \operatorname{Res}_{C_{G}(P)}^{N_{G}(P,e)}M(P,e))(U,\epsilon)).\]
## 6 Coherence conditions
Let \(G\) be a finite group and let \((\mathbb{K},\mathcal{O},F)\) be a \(p\)-modular system large enough for \(G\). Consider the product
\[\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P).\]
If \((\chi_{P})_{P\in S_{p}(G)}\) is an element of this product and \(g\in G\) define \({}^{g}(\chi_{P})\) to be the tuple whose \(P\)th entry, for \(P\in S_{p}(G)\), is equal to \({}^{g}\chi_{{}^{g^{-1}}P}\). Thus
\[{}^{g}(\chi_{P})_{P\in S_{p}(G)}=({}^{g}\chi_{{}^{g^{-1}}P})_{P\in S_{p}(G)}.\]
This action makes \(\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P)\) into a \(\mathbb{Z}G\)-algebra. Therefore the subset \(\left(\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P)\right)^{G}\) of \(G\)-fixed tuples forms a unital subring. Note that this subring consists of all tuples \((\chi_{P})_{P\in S_{p}(G)}\) that satisfy \({}^{g}\chi_{P}=\chi_{{}^{g}P}\) for all \(g\in G\) and all \(p\)-subgroups \(P\leq G\).
In [1] Boltje and Carman introduce a ring homomorphism
\[\beta_{G}:=\beta:T_{\mathcal{O}}(G)\to\left(\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P)\right)^{G},\qquad[M]\mapsto(\chi_{M(P)})_{P\in S_{p}(G)},\]
where \(M\) is a trivial source \(\mathcal{O}G\)-module and \(\chi_{M(P)}\) is the character of an \(\mathcal{O}\)-lift \(M(P)\) of the Brauer construction of \(M\) at \(P\). Their main theorem about this homomorphism is given below.
**Theorem 6.1**.: _([1, Theorem A]) The ring homomorphism \(\beta_{G}\) is injective and its image consists of those tuples \((\chi_{P})\in\left(\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P)\right)^{G}\) that satisfy_
\[\chi_{P}(x)=\chi_{P\langle x_{p}\rangle}(x)\]
_for all \(P\in S_{p}(G)\) and \(x\in N_{G}(P)\), where \(x_{p}\) denotes the \(p\)-part of \(x\)._
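For instance, taking \(P=1\) in the tuple \(\beta([M])=(\chi_{M(P)})_{P}\), the condition reads
\[\chi_{M}(x)=\chi_{M(\langle x_{p}\rangle)}(x)\qquad\text{for all }x\in G,\]
so the ordinary character of \(M\) is already determined on \(p\)-singular elements by the Brauer constructions at cyclic \(p\)-subgroups. In particular \(\beta([\mathcal{O}])\) is the tuple of trivial characters \((1_{N_{G}(P)/P})_{P}\), which clearly satisfies all of the conditions.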
Alternatively, the image of \(\beta_{G}\) consists of those \(G\)-invariant tuples \((\chi_{P})\) in \(\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P)\) that satisfy
\[\chi_{P}(us)=\chi_{P\langle u\rangle}(us)\]
for all \(P\in S_{p}(G)\), all \(p\)-elements \(u\in N_{G}(P)\), and all \(s\in C_{N_{G}(P)}(u)_{p^{\prime}}\).
**Theorem 6.2**.: _([1, Corollary 3.3]) Let \(e\) be a central idempotent of \(\mathcal{O}G\). Then the image of \(T_{\mathcal{O}}(G,e)\) under \(\beta_{G}\) is the subgroup of tuples \((\chi_{P})\in\left(\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P,\mathrm{br}_{ P}(e))\right)^{G}\) that satisfy \(\chi_{P}(x)=\chi_{P\langle x_{p}\rangle}(x)\) for all \(P\in S_{p}(G)\) and \(x\in N_{G}(P)\)._
Let \(B\in\mathrm{Bl}(\mathcal{O}G)\). Our goal now is to give new "coherent character conditions" as in Theorems 6.1 and 6.2 that describe the group \(T_{\mathcal{O}}(B)\).
If \((P,e)\in\mathcal{BP}_{\mathcal{O}}(G)\) set \(I_{(P,e)}:=N_{G}(P,e)\). Note that \(N_{G}(P,e)=N_{G}(P,\overline{e})\).
Consider the product
\[\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P,e).\]
If \((\chi_{(P,e)})\) is a tuple in this product and \(g\in G\) define \({}^{g}(\chi_{(P,e)})=({}^{g}\chi_{{}^{g-1}(P,e)})\). This defines an action of \(G\) on \(\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P,e)\) via group automorphisms. The subgroup \(\left(\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P, e)\right)^{G}\) of \(G\)-fixed points consists of all tuples \((\chi_{(P,e)})\) such that \({}^{g}\chi_{(P,e)}=\chi_{{}^{g}(P,e)}\) for all \(g\in G\) and all \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\).
Define a map
\[\rho_{B}:=\rho:\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P, \mathrm{br}_{P}(e_{B})) \to\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{ (P,e)}/P,e)\] \[(\chi_{P})_{P\in S_{p}(G)} \mapsto(e\cdot\mathrm{Res}^{N_{G}(P)}_{I_{(P,e)}}(\chi_{P}))_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}.\]
Note that \(\rho\) is a \(G\)-equivariant group homomorphism. It is also injective; this follows from the next two lemmas.
**Lemma 6.3**.: _Let \((Q,f)\in\mathcal{BP}_{\mathcal{O}}(G)\), set \(I:=N_{G}(Q,f)\) and set \(e:=\mathrm{tr}^{N_{G}(Q)}_{I}(f)\in\mathrm{bli}(\mathcal{O}N_{G}(Q))\). The functor \(f\cdot\mathrm{Res}^{N_{G}(Q)}_{I}:{}_{\mathcal{O}N_{G}(Q)e}\mathbf{mod}\to{}_{\mathcal{O}If}\mathbf{mod}\) is an equivalence of categories, with inverse \(\mathrm{Ind}^{N_{G}(Q)}_{I}:{}_{\mathcal{O}If}\mathbf{mod}\to{}_{\mathcal{O}N_{G}(Q)e}\mathbf{mod}\). In particular, the map \(f\cdot\mathrm{Res}^{N_{G}(Q)}_{I}:R_{\mathbb{K}}(N_{G}(Q),e)\to R_{\mathbb{K}}(I,f)\) is a group isomorphism with inverse \(\mathrm{Ind}^{N_{G}(Q)}_{I}\)._
**Lemma 6.4**.: _The map \(\rho_{B}\) defined above is injective, and restricts to a group isomorphism_
\[\rho_{B}:\left(\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P,\mathrm{br}_{P}( e_{B}))\right)^{G}\stackrel{{\sim}}{{\to}}\left(\prod_{(P,e)\in \mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P,e)\right)^{G}.\]
Proof.: We first show that \(\rho=\rho_{B}\) is injective. Let \((\chi_{P})_{P\in S_{p}(G)}\in\ker(\rho)\). Then for all \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\) we have \(e\cdot\mathrm{Res}_{I_{(P,e)}}^{N_{G}(P)}(\chi_{P})=0\). Let \(P\in S_{p}(G)\). We must show that \(\chi_{P}=0\). This is clear if \(\mathrm{br}_{P}(e_{B})=0\), so assume \(\mathrm{br}_{P}(e_{B})\neq 0\). Write \(e(P)\) for the unique central idempotent of \(\mathcal{O}N_{G}(P)\) such that \(\overline{e(P)}=\mathrm{br}_{P}(e_{B})\). By Lemma 2.2 we may write
\[e(P)=\sum_{i=1}^{n}\mathrm{tr}_{I_{(P,e_{i})}}^{N_{G}(P)}(e_{i})\]
for some blocks \(e_{i}\in\mathrm{bli}(\mathcal{O}C_{G}(P))\). Note that for each \(i\) in the range \(1\leq i\leq n\) we have \(e(P)e_{i}=e_{i}\), hence \(\mathrm{br}_{P}(e_{B})\overline{e_{i}}=\overline{e_{i}}\) and \((P,e_{i})\in\mathcal{BP}_{\mathcal{O}}(B)\). Now by Lemma 6.3 we have an isomorphism
\[R_{\mathbb{K}}(N_{G}(P),\mathrm{br}_{P}(e_{B}))=\bigoplus_{i=1}^{n}R_{\mathbb{K}}(N_{G}(P),\mathrm{tr}_{I_{(P,e_{i})}}^{N_{G}(P)}(e_{i}))\xrightarrow{\ \bigoplus_{i}e_{i}\cdot\mathrm{Res}_{I_{(P,e_{i})}}^{N_{G}(P)}\ }\bigoplus_{i=1}^{n}R_{\mathbb{K}}(I_{(P,e_{i})},e_{i})\]
and the image of \(\chi_{P}\in R_{\mathbb{K}}(N_{G}(P),\mathrm{br}_{P}(e_{B}))\) under this isomorphism is \(\sum_{i=1}^{n}e_{i}\cdot\mathrm{Res}_{I_{(P,e_{i})}}^{N_{G}(P)}(\chi_{P})=0\). Therefore \(\chi_{P}=0\). This shows that \(\rho\) is injective.
Since \(\rho\) is \(G\)-equivariant we have that
\[\rho\left(\left(\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P,\mathrm{br}_{P }(e_{B}))\right)^{G}\right)\subseteq\left(\prod_{(P,e)\in\mathcal{BP}_{ \mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P,e)\right)^{G}.\]
To see that the reverse containment also holds, let \((\chi_{(P,e)})\) be a \(G\)-invariant tuple in \(\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P,e)\). Define a tuple \((\psi_{P})\in\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P,\mathrm{br}_{P}(e_ {B}))\) as follows: let \(P\in S_{p}(G)\). If \(\mathrm{br}_{P}(e_{B})=0\) set \(\psi_{P}=0\). Otherwise, let \(e(P)\) denote the lift of \(\mathrm{br}_{P}(e_{B})\) to a central idempotent of \(\mathcal{O}N_{G}(P)\) and choose blocks \(e_{1},\dots,e_{n}\in\mathrm{bli}(\mathcal{O}C_{G}(P))\) such that \(e(P)=\sum_{i=1}^{n}\mathrm{tr}_{I_{(P,e_{i})}}^{N_{G}(P)}(e_{i})\). Then \((P,e_{i})\in\mathcal{BP}_{\mathcal{O}}(B)\) for each \(1\leq i\leq n\). Note that
\[\mathrm{Ind}_{I_{(P,e_{i})}}^{N_{G}(P)}(\chi_{(P,e_{i})})\in R_{\mathbb{K}}(N_ {G}(P)/P,\mathrm{tr}_{I_{(P,e_{i})}}^{N_{G}(P)}(e_{i})).\]
Set
\[\psi_{P}:=\sum_{i=1}^{n}\mathrm{Ind}_{I_{(P,e_{i})}}^{N_{G}(P)}(\chi_{(P,e_{i} )})\in R_{\mathbb{K}}(N_{G}(P)/P,\mathrm{br}_{P}(e_{B})).\]
Since the tuple \((\chi_{(P,e)})\) is \(G\)-fixed the definition of \(\psi_{P}\) does not depend on the choice of blocks \(e_{1},\dots,e_{n}\). It follows also that \((\psi_{P})\) is a \(G\)-fixed tuple
in \(\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P,\operatorname{br}_{P}(e_{B}))\). So to complete the proof it remains to show that \(\rho(\psi_{P})=(\chi_{(P,e)})\), i.e., that for all \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\) we have
\[e\operatorname{Res}_{I_{(P,e)}}^{N_{G}(P)}(\psi_{P})=\chi_{(P,e)}.\]
Let \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\). Then \(\operatorname{br}_{P}(e_{B})\neq 0\). Let \(e(P),e_{1},\ldots,e_{n}\) be as above. Without loss of generality we may assume that \(e=e_{1}\). Then
\[e\operatorname{Res}_{I_{(P,e)}}^{N_{G}(P)}(\psi_{P}) =e_{1}\operatorname{Res}_{I_{(P,e_{1})}}^{N_{G}(P)}\left(\sum_{i= 1}^{n}\operatorname{Ind}_{I_{(P,e_{i})}}^{N_{G}(P)}(\chi_{(P,e_{i})})\right)\] \[=e_{1}\cdot\sum_{i=1}^{n}\operatorname{Res}_{I_{(P,e_{1})}}^{N_{G }(P)}(\operatorname{Ind}_{I_{(P,e_{i})}}^{N_{G}(P)}(\chi_{(P,e_{i})}))\] \[=e_{1}\operatorname{Res}_{I_{(P,e_{1})}}^{N_{G}(P)}(\operatorname {Ind}_{I_{(P,e_{1})}}^{N_{G}(P)}(\chi_{(P,e_{1})}))\] \[=\chi_{(P,e)}\]
where the last equality holds by Lemma 6.3. The proof is complete.
Now let
\[\alpha_{B}=\alpha:T_{\mathcal{O}}(B)\to\left(\prod_{(P,e)\in\mathcal{BP}_{ \mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P,e)\right)^{G}\]
denote the composite of the maps \(\beta\) and \(\rho\); in other words, \(\alpha=\rho\circ\beta\).
Note that if \(M\in{}_{B}\mathbf{triv}\) then
\[\alpha([M])=(\chi_{M(P,e)})_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}\]
where \(\chi_{M(P,e)}\) denotes the character of the trivial source \(\mathcal{O}I_{(P,e)}e\)-module \(M(P,e)=e\operatorname{Res}_{I_{(P,e)}}^{N_{G}(P)}M(P)\).
**Theorem 6.5**.: _Let \(B\in\operatorname{Bl}(\mathcal{O}G)\). The image of \(T_{\mathcal{O}}(B)\) under the map_
\[\alpha_{B}=\alpha:T_{\mathcal{O}}(B)\hookrightarrow\left(\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P,e)\right)^{G},\qquad[M]\mapsto(\chi_{M(P,e)})_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}\quad\text{for }M\in{}_{B}\mathbf{triv},\]
_is equal to the subgroup of tuples \((\chi_{(P,e)})\in\left(\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{ \mathbb{K}}(I_{(P,e)}/P,e)\right)^{G}\) that satisfy_
\[\left\{\begin{gathered}\chi_{(P,e)}(us\epsilon)=\operatorname{Ind}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{C_{I_{(P,e)}}(u)}(\operatorname{Res}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{I_{(P\langle u\rangle,f)}}(\chi_{(P\langle u\rangle,f)}))(s)\\ \text{for all }(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\text{, }(u,\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{(P,e)},e)\text{, }s\in C_{I_{(P,e)}}(u)_{p^{\prime}},\\ \text{and all }f\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\text{ such that }\epsilon\cdot f\neq 0.\end{gathered}\right.\] (C1)
_The image of \(T_{\mathcal{O}}(B)\) under \(\alpha\) is also equal to the subgroup of tuples \((\chi_{(P,e)})\in\left(\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{ \mathbb{K}}(I_{(P,e)}/P,e)\right)^{G}\) that satisfy_
\[\left\{\begin{gathered}\chi_{(P,e)}(us)=\sum_{\begin{subarray}{c}f\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\\ (P,e)\trianglelefteq(P\langle u\rangle,f)\\ s\in I_{(P\langle u\rangle,f)}\end{subarray}}\chi_{(P\langle u\rangle,f)}(s)\\ \text{for all }(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\text{, all }p\text{-elements }u\in I_{(P,e)},\\ \text{and all }s\in C_{I_{(P,e)}}(u)_{p^{\prime}}.\end{gathered}\right.\] (C2)
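As a consistency check on Condition (C2) (a remark, not part of the theorem), consider the case \(u=1\): then \(P\langle u\rangle=P\), the only block \(f\in\operatorname{bli}(\mathcal{O}C_{G}(P))\) with \((P,e)\trianglelefteq(P,f)\) is \(f=e\), and \(s\in I_{(P,e)}\) holds automatically, so the condition reduces to the tautology
\[\chi_{(P,e)}(s)=\chi_{(P,e)}(s).\]
Condition (C2) therefore carries information only for \(u\neq 1\).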
Proof.: Before beginning the proof, we note that if \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\) and \((u,\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{(P,e)},e)\) then by Lemma 2.4 there exists a block idempotent \(f\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\) satisfying \(\epsilon f\neq 0\), so Condition (C1) makes sense. In Condition (C2) the sum on the right is equal to \(0\) if no such block idempotent \(f\) exists.
For convenience, let \(A_{1}\) and \(A_{2}\) denote the collections of tuples \((\chi_{(P,e)})\in\left(\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{ \mathbb{K}}(I_{(P,e)}/P,e)\right)^{G}\) that satisfy Conditions (C1) and (C2), respectively. Observe that \(A_{1}\) and \(A_{2}\) are subgroups of \(\left(\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P, e)\right)^{G}\). We must show that
\[\alpha(T_{\mathcal{O}}(B))=A_{1}=A_{2}.\]
We first show that \(\alpha(T_{\mathcal{O}}(B))\subseteq A_{1}\). Since \(A_{1}\) is a subgroup it suffices to check that \(\alpha([M])\in A_{1}\) for all \(M\in{}_{B}\mathbf{triv}\). So let \(M\) be a trivial source \(B\)-module, let \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\), \((u,\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{(P,e)},e)\), and let
\(f\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\) be such that \(\epsilon\cdot f\neq 0\). Then by Lemma 2.4 we have \((P\langle u\rangle,f)\in\mathcal{BP}_{\mathcal{O}}(B)\), \((P,e)\trianglelefteq(P\langle u\rangle,f)\), and
\[\epsilon=\operatorname{tr}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{C_{I_{(P,e)}}(u)}(f).\]
We must show that
\[\chi_{M(P,e)}(us\epsilon)=\operatorname{Ind}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{C_{I_{(P,e)}}(u)}(\operatorname{Res}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{I_{(P\langle u\rangle,f)}}(\chi_{M(P\langle u\rangle,f)}))(s)\]
for all \(s\in C_{I_{(P,e)}}(u)_{p^{\prime}}\).
Now \(M(P,e)\) is a trivial source \(\mathcal{O}I_{(P,e)}e\)-module, so we may apply the map \(\beta_{I_{(P,e)}}\) of Theorem 6.2 to obtain a coherent character tuple
\[\beta_{I_{(P,e)}}([M(P,e)])=(\chi_{(M(P,e))(Q)})\in\prod_{Q\in S_{p}(I_{(P,e)} )}R_{\mathbb{K}}(N_{I_{(P,e)}}(Q)/Q,\operatorname{br}_{Q}^{I_{(P,e)}}(e)).\]
Notice that \(\chi_{(M(P,e))(1)}=\chi_{M(P,e)}\). The coherence condition of Theorem 6.2 then gives that
\[\chi_{M(P,e)}(us)=\chi_{(M(P,e))(\langle u\rangle)}(s)\]
for any \(s\in C_{I_{(P,e)}}(u)_{p^{\prime}}\). Now by Lemma 5.1 there is an isomorphism of \(\mathcal{O}N_{I_{(P,e)}}(\langle u\rangle)\)-modules
\[(M(P,e))(\langle u\rangle)\cong e(\langle u\rangle)\cdot\operatorname{Res}_{N _{I_{(P,e)}}(\langle u\rangle)}^{N_{G}(P\langle u\rangle)}(M(P\langle u \rangle))\]
where \(e(\langle u\rangle)\) is the unique central idempotent of \(\mathcal{O}N_{I_{(P,e)}}(\langle u\rangle)\) satisfying \(\overline{e(\langle u\rangle)}=\operatorname{br}_{\langle u\rangle}^{I_{(P,e) }}(e)\). Since \(e(\langle u\rangle)\in\mathcal{O}C_{I_{(P,e)}}(u)\) we have
\[\chi_{M(P,e)}(us)=\chi_{(M(P,e))(\langle u\rangle)}(s)=\chi_{e(\langle u\rangle)\cdot\operatorname{Res}_{N_{I_{(P,e)}}(\langle u\rangle)}^{N_{G}(P\langle u\rangle)}(M(P\langle u\rangle))}(s)=\chi_{e(\langle u\rangle)\cdot\operatorname{Res}_{C_{I_{(P,e)}}(u)}^{N_{G}(P\langle u\rangle)}(M(P\langle u\rangle))}(s)\]
for any \(s\in C_{I_{(P,e)}}(u)_{p^{\prime}}\). It follows that
\[d_{I_{(P,e)}}^{u}(\chi_{M(P,e)})=d_{C_{I_{(P,e)}}(u)}^{1}(\chi_{e(\langle u \rangle)\cdot\operatorname{Res}_{C_{I_{(P,e)}}(u)}^{N_{G}(P\langle u\rangle)}( M(P\langle u\rangle))}),\]
which is an equality of class functions in \(CF_{p^{\prime}}(C_{I_{(P,e)}}(u);\mathbb{K})\). Multiplying both sides of this equality by \(\epsilon\) we obtain
\[\begin{aligned}d^{u,\epsilon}_{I_{(P,e)}}(\chi_{M(P,e)})&=d^{1,\epsilon}_{C_{I_{(P,e)}}(u)}(\chi_{e(\langle u\rangle)\cdot\operatorname{Res}^{N_{G}(P\langle u\rangle)}_{C_{I_{(P,e)}}(u)}(M(P\langle u\rangle))})\\ &=d^{1}_{C_{I_{(P,e)}}(u)}(\epsilon\cdot\chi_{e(\langle u\rangle)\cdot\operatorname{Res}^{N_{G}(P\langle u\rangle)}_{C_{I_{(P,e)}}(u)}(M(P\langle u\rangle))})\\ &=d^{1}_{C_{I_{(P,e)}}(u)}(\chi_{\epsilon\cdot e(\langle u\rangle)\cdot\operatorname{Res}^{N_{G}(P\langle u\rangle)}_{C_{I_{(P,e)}}(u)}(M(P\langle u\rangle))})\\ &=d^{1}_{C_{I_{(P,e)}}(u)}(\chi_{\epsilon\cdot\operatorname{Res}^{N_{G}(P\langle u\rangle)}_{C_{I_{(P,e)}}(u)}(M(P\langle u\rangle))})\end{aligned}\]
where the last equality holds because \((u,\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{(P,e)},e)\), so that \(\epsilon\cdot e(\langle u\rangle)=\epsilon\). Since \(\epsilon=\operatorname{tr}^{C_{I_{(P,e)}}(u)}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}(f)\) we have a decomposition
\[\operatorname{Res}^{C_{I_{(P,e)}}(u)}_{C_{G}(P\langle u\rangle)}(\epsilon\cdot\operatorname{Res}^{N_{G}(P\langle u\rangle)}_{C_{I_{(P,e)}}(u)}(M(P\langle u\rangle)))=\bigoplus_{x\in[C_{I_{(P,e)}}(u)/C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)]}{}^{x}f\cdot\operatorname{Res}^{N_{G}(P\langle u\rangle)}_{C_{G}(P\langle u\rangle)}(M(P\langle u\rangle)).\]
The summands are permuted transitively by \(C_{I_{(P,e)}}(u)\), and the stabilizer of the \(f\)-component is \(C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)\). It follows that
\[\epsilon\cdot\operatorname{Res}^{N_{G}(P\langle u\rangle)}_{C_{I_{(P,e)}}(u)}(M(P\langle u\rangle))\cong\operatorname{Ind}^{C_{I_{(P,e)}}(u)}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}(\operatorname{Res}^{I_{(P\langle u\rangle,f)}}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}(M(P\langle u\rangle,f))).\]
So now we have
\[\begin{aligned}d^{u,\epsilon}_{I_{(P,e)}}(\chi_{M(P,e)})&=d^{1}_{C_{I_{(P,e)}}(u)}(\chi_{\epsilon\cdot\operatorname{Res}^{N_{G}(P\langle u\rangle)}_{C_{I_{(P,e)}}(u)}(M(P\langle u\rangle))})\\ &=d^{1}_{C_{I_{(P,e)}}(u)}(\operatorname{Ind}^{C_{I_{(P,e)}}(u)}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}(\operatorname{Res}^{I_{(P\langle u\rangle,f)}}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}(\chi_{M(P\langle u\rangle,f)}))).\end{aligned}\]
In particular, by Proposition 4.3 we obtain
\[\chi_{M(P,e)}(us\epsilon)=\operatorname{Ind}^{C_{I_{(P,e)}}(u)}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}(\operatorname{Res}^{I_{(P\langle u\rangle,f)}}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}(\chi_{M(P\langle u\rangle,f)}))(s)\]
for any \(s\in C_{I_{(P,e)}}(u)_{p^{\prime}}\), as desired.
We have shown that \(\alpha(T_{\mathcal{O}}(B))\subseteq A_{1}\). Next we show that \(A_{1}\subseteq A_{2}\). Let \((\chi_{(P,e)})\in A_{1}\). Let \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\), \(u\in I_{(P,e)}\) a \(p\)-element, and let
\(s\in C_{I_{(P,e)}}(u)_{p^{\prime}}\). By Brauer's 2nd Main Theorem we have
\[\chi_{(P,e)}(us)=d^{u}_{I_{(P,e)}}(\chi_{(P,e)})(s)=\sum_{\begin{subarray}{c} \epsilon\in\operatorname{bli}(\mathcal{O}C_{I_{(P,e)}}(u))\\ (u,\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{(P,e)},e)\end{subarray}}d^{u, \epsilon}_{I_{(P,e)}}(\chi_{(P,e)})(s).\]
For each \(\epsilon\in\operatorname{bli}(\mathcal{O}C_{I_{(P,e)}}(u))\) such that \((u,\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{(P,e)},e)\) choose a block \(f_{\epsilon}\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\) such that \(\epsilon\cdot f_{\epsilon}\neq 0\). Note that \(f_{\epsilon}\) exists for each \(\epsilon\) by Lemma 2.4 and that \(\operatorname{Stab}_{C_{I_{(P,e)}}(u)}(f_{\epsilon})=C_{I_{(P,e)}\cap I_{(P \langle u\rangle,f_{\epsilon})}}(u)\). Then for each \((u,\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{(P,e)},e)\) we have
\[\begin{aligned}\chi_{(P,e)}(us\epsilon)&=\operatorname{Ind}^{C_{I_{(P,e)}}(u)}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f_{\epsilon})}}(u)}(\operatorname{Res}^{I_{(P\langle u\rangle,f_{\epsilon})}}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f_{\epsilon})}}(u)}(\chi_{(P\langle u\rangle,f_{\epsilon})}))(s)\\ &=\sum_{\begin{subarray}{c}g\in[C_{I_{(P,e)}}(u)/\operatorname{Stab}_{C_{I_{(P,e)}}(u)}(f_{\epsilon})]\\ s^{g}\in\operatorname{Stab}_{C_{I_{(P,e)}}(u)}(f_{\epsilon})\end{subarray}}\chi_{(P\langle u\rangle,f_{\epsilon})}(s^{g})\\ &=\sum_{\begin{subarray}{c}f\in\operatorname{Orb}_{C_{I_{(P,e)}}(u)}(f_{\epsilon})\\ s\in\operatorname{Stab}_{C_{I_{(P,e)}}(u)}(f)\end{subarray}}\chi_{(P\langle u\rangle,f)}(s)\\ &=\sum_{\begin{subarray}{c}f\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\\ \epsilon f\neq 0\\ s\in I_{(P\langle u\rangle,f)}\end{subarray}}\chi_{(P\langle u\rangle,f)}(s).\end{aligned}\]
It follows that
\[\begin{aligned}\chi_{(P,e)}(us)&=\sum_{\begin{subarray}{c}\epsilon\in\operatorname{bli}(\mathcal{O}C_{I_{(P,e)}}(u))\\ (u,\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{(P,e)},e)\end{subarray}}\chi_{(P,e)}(us\epsilon)\\ &=\sum_{\begin{subarray}{c}\epsilon\in\operatorname{bli}(\mathcal{O}C_{I_{(P,e)}}(u))\\ (u,\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{(P,e)},e)\end{subarray}}\ \sum_{\begin{subarray}{c}f\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\\ \epsilon f\neq 0\\ s\in I_{(P\langle u\rangle,f)}\end{subarray}}\chi_{(P\langle u\rangle,f)}(s)\\ &=\sum_{\begin{subarray}{c}f\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\\ (P,e)\trianglelefteq(P\langle u\rangle,f)\\ s\in I_{(P\langle u\rangle,f)}\end{subarray}}\chi_{(P\langle u\rangle,f)}(s)\end{aligned}\]
where the last line follows from the bijection of Lemma 2.5. Thus we see that \((\chi_{(P,e)})\in A_{2}\), and that \(A_{1}\subseteq A_{2}\).
To complete the proof we must show that \(A_{2}\subseteq\alpha(T_{\mathcal{O}}(B))\). Let \((\chi_{(P,e)})\in A_{2}\). Recall the isomorphism \(\rho\) of Lemma 6.4. Write
\[(\psi_{P})_{P\in S_{p}(G)}:=\rho^{-1}((\chi_{(P,e)}))\in\left(\prod_{P\in S_{p}(G)}R_{\mathbb{K}}(N_{G}(P)/P,\operatorname{br}_{P}(e_{B}))\right)^{G}.\]
We claim that \((\psi_{P})\in\beta(T_{\mathcal{O}}(B))\). Note that if the claim is correct then it will follow that \((\chi_{(P,e)})\in\rho(\beta(T_{\mathcal{O}}(B)))=\alpha(T_{\mathcal{O}}(B))\) and the proof will be complete.
By Theorem 6.2, the tuple \((\psi_{P})\) belongs to \(\beta(T_{\mathcal{O}}(B))\) if and only if
\[\psi_{P}(x)=\psi_{P\langle x_{p}\rangle}(x)\]
for all \(P\in S_{p}(G)\) and \(x\in N_{G}(P)\), or equivalently,
\[\psi_{P}(us)=\psi_{P\langle u\rangle}(s)\]
for all \(P\in S_{p}(G)\), all \(p\)-elements \(u\in N_{G}(P)\), and all \(s\in C_{N_{G}(P)}(u)_{p^{\prime}}\). Recall from the proof of Lemma 6.4 how the characters \(\psi_{P}\), \(P\in S_{p}(G)\), are defined: if \(\operatorname{br}_{P}(e_{B})=0\) then \(\psi_{P}=0\). If \(\operatorname{br}_{P}(e_{B})\neq 0\) let \(e(P)\) denote the lift of \(\operatorname{br}_{P}(e_{B})\) to a central idempotent of \(\mathcal{O}N_{G}(P)\) and choose blocks \(e_{1},\dots,e_{n}\in\operatorname{bli}(\mathcal{O}C_{G}(P))\) such that \(e(P)=\sum_{i=1}^{n}\operatorname{tr}_{I_{(P,e_{i})}}^{N_{G}(P)}(e_{i})\). Then
\[\psi_{P}=\sum_{i=1}^{n}\operatorname{Ind}_{I_{(P,e_{i})}}^{N_{G}(P)}(\chi_{(P, e_{i})}).\]
In this case, the value \(\psi_{P}(us)\) for \(u\in N_{G}(P)\) a \(p\)-element and \(s\in C_{N_{G}(P)}(u)_{p^{\prime}}\) can be computed:
\[\begin{aligned}\psi_{P}(us)&=\sum_{i=1}^{n}\operatorname{Ind}_{I_{(P,e_{i})}}^{N_{G}(P)}(\chi_{(P,e_{i})})(us)\\ &=\sum_{i=1}^{n}\sum_{\begin{subarray}{c}g\in N_{G}(P)/I_{(P,e_{i})}\\ (us)^{g}\in I_{(P,e_{i})}\end{subarray}}\chi_{(P,e_{i})}((us)^{g})\\ &=\sum_{\begin{subarray}{c}e\in\operatorname{bli}(\mathcal{O}C_{G}(P))\\ (P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\\ us\in I_{(P,e)}\end{subarray}}\chi_{(P,e)}(us).\end{aligned}\]
Note that the formula
\[\psi_{P}(us)=\sum_{\begin{subarray}{c}e\in\operatorname{bli}(\mathcal{O}C_{G}(P)) \\ (P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\\ us\in I_{(P,e)}\end{subarray}}\chi_{(P,e)}(us)\]
holds for each \(P\in S_{p}(G)\), each \(p\)-element \(u\in N_{G}(P)\), and each \(s\in C_{N_{G}(P)}(u)_{p^{\prime}}\), regardless of whether \(\operatorname{br}_{P}(e_{B})=0\) or \(\operatorname{br}_{P}(e_{B})\neq 0\): in the former case \(\psi_{P}(us)=0\) and there do not exist \(B\)-Brauer pairs of the form \((P,e)\), so the sum on the right is \(0\) as well.
Now let \(P\in S_{p}(G)\), \(u\in N_{G}(P)\) a \(p\)-element, and let \(s\in C_{N_{G}(P)}(u)_{p^{\prime}}\). If \(us\in I_{(P,e)}\) for a Brauer pair \((P,e)\) then \(u\in I_{(P,e)}\) and \(s\in C_{I_{(P,e)}}(u)_{p^{\prime}}\), so by Condition (C2)
\[\psi_{P}(us)=\sum_{\begin{subarray}{c}e\in\operatorname{bli}(\mathcal{O}C_{G} (P))\\ (P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\\ us\in I_{(P,e)}\end{subarray}}\sum_{\begin{subarray}{c}f\in\operatorname{bli }(\mathcal{O}C_{G}(P\langle u\rangle))\\ (P,e)\trianglelefteqslant(P\langle u\rangle,f)\\ s\in I_{(P\langle u\rangle,f)}\end{subarray}}\chi_{(P\langle u\rangle,f)}(s).\]
The sum above can be reindexed after making the following observations: let \(\mathcal{I}\) denote the set of ordered pairs \((e,f)\) where \(e\in\operatorname{bli}(\mathcal{O}C_{G}(P))\) such that \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\) and \(f\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\) such that \((P,e)\trianglelefteqslant(P\langle u\rangle,f)\). Let \(\mathcal{J}\) denote the set of block idempotents \(f\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\) such that \((P\langle u\rangle,f)\in\mathcal{BP}_{\mathcal{O}}(B)\). The map \(\mathcal{I}\to\mathcal{J}\), \((e,f)\mapsto f\) is well-defined and is a bijection by the existence and uniqueness of Brauer pairs contained within a fixed Brauer pair (see [7, Part IV, Theorem 2.10]). Let \(\mathcal{I}^{\prime}\) denote the subset of \((e,f)\in\mathcal{I}\) for which \(us\in I_{(P,e)}\) and \(s\in I_{(P\langle u\rangle,f)}\) and let \(\mathcal{J}^{\prime}\) denote the subset of \(f\in\mathcal{J}\) such that \(s\in I_{(P\langle u\rangle,f)}\). The bijection \(\mathcal{I}\overset{\sim}{\to}\mathcal{J}\) clearly maps \(\mathcal{I}^{\prime}\) into \(\mathcal{J}^{\prime}\). In fact the image of \(\mathcal{I}^{\prime}\) is precisely \(\mathcal{J}^{\prime}\), for if \(f\in\mathcal{J}^{\prime}\) and if \(e\) is the unique block of \(\mathcal{O}C_{G}(P)\) such that \((P,e)\trianglelefteqslant(P\langle u\rangle,f)\) then \(u,s\in N_{G}(P)\cap I_{(P\langle u\rangle,f)}\leq I_{(P,e)}\), hence \(us\in I_{(P,e)}\) and \((e,f)\in\mathcal{I}^{\prime}\). Now the sum above is indexed by the elements of \(\mathcal{I}^{\prime}\). Reindexing by the elements of \(\mathcal{J}^{\prime}\) gives
\[\psi_{P}(us)=\sum_{\begin{subarray}{c}f\in\operatorname{bli}(\mathcal{O}C_{G} (P\langle u\rangle))\\ (P\langle u\rangle,f)\in\mathcal{BP}_{\mathcal{O}}(B)\\ s\in I_{(P\langle u\rangle,f)}\end{subarray}}\chi_{(P\langle u\rangle,f)}(s).\]
But this is precisely the formula for \(\psi_{P\langle u\rangle}(s)\) given previously. We conclude that \(\psi_{P}(us)=\psi_{P\langle u\rangle}(s)\) and that \((\psi_{P})\in\beta(T_{\mathcal{O}}(B))\), proving the claim and completing the proof of the theorem.
**Corollary 6.6**.: _Let \(B\in\operatorname{Bl}(\mathcal{O}G)\) and let \((\chi_{(P,e)})\in\alpha(T_{\mathcal{O}}(B))\). For each \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\) set_
\[\psi_{(P,e)}:=\operatorname{Res}_{C_{G}(P)}^{I_{(P,e)}}(\chi_{(P,e)})\in R_{ \mathbb{K}}(C_{G}(P),e).\]
_Let \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\) and let \((u,f)\in\mathcal{BE}_{\mathcal{O}}(C_{G}(P),e)\). Then \((P\langle u\rangle,f)\in\mathcal{BP}_{\mathcal{O}}(B)\) and_
\[\psi_{(P,e)}(usf)=\psi_{(P\langle u\rangle,f)}(s)\]
_for all \(s\in C_{G}(P\langle u\rangle)_{p^{\prime}}\)._
Proof.: Since \((u,f)\) is a Brauer element for \(\mathcal{OC}_{G}(P)\) we have by definition that \(u\) is a \(p\)-element of \(C_{G}(P)\) and \(f\) is a block idempotent of \(\mathcal{OC}_{C_{G}(P)}(u)=\mathcal{OC}_{G}(P\langle u\rangle)\), so \((P\langle u\rangle,f)\) is a Brauer pair of \(\mathcal{O}G\). Since \((u,f)\) belongs to \(e\) we have \(\mathrm{br}_{\langle u\rangle}^{C_{G}(P)}(e)\overline{f}=\overline{f}\). Now \(\mathrm{br}_{\langle u\rangle}^{C_{G}(P)}(e)=\mathrm{br}_{P\langle u\rangle}^{G }(e)\), so we see that \((P,e)\trianglelefteqslant(P\langle u\rangle,f)\) and in particular \((P\langle u\rangle,f)\in\mathcal{BP}_{\mathcal{O}}(B)\).
If \(s\in C_{G}(P\langle u\rangle)_{p^{\prime}}\) then Condition (C2) of Theorem 6.5 gives
\[\psi_{(P,e)}(us)=\sum_{\begin{subarray}{c}f^{\prime}\in\mathrm{bli}(\mathcal{OC }_{G}(P\langle u\rangle))\\ (P,e)\unlhd(P\langle u\rangle,f^{\prime})\end{subarray}}\chi_{(P\langle u \rangle,f^{\prime})}(s)=\sum_{\begin{subarray}{c}f^{\prime}\in\mathrm{bli}( \mathcal{OC}_{G}(P\langle u\rangle))\\ (P,e)\unlhd(P\langle u\rangle,f^{\prime})\end{subarray}}\psi_{(P\langle u \rangle,f^{\prime})}(s).\]
It follows that
\[d^{u}_{C_{G}(P)}(\psi_{(P,e)})=\sum_{\begin{subarray}{c}f^{\prime}\in\mathrm{ bli}(\mathcal{OC}_{G}(P\langle u\rangle))\\ (P,e)\unlhd(P\langle u\rangle,f^{\prime})\end{subarray}}d^{1}_{C_{G}(P\langle u \rangle)}(\psi_{(P\langle u\rangle,f^{\prime})}).\]
Projecting onto the \(f\)-component of this sum, we obtain
\[d^{u,f}_{C_{G}(P)}(\psi_{(P,e)})=d^{1}_{C_{G}(P\langle u\rangle)}(\psi_{(P \langle u\rangle,f)})\]
and therefore \(\psi_{(P,e)}(usf)=\psi_{(P\langle u\rangle,f)}(s)\) for all \(s\in C_{G}(P\langle u\rangle)_{p^{\prime}}\), as desired.
Now let \(B\in\mathrm{Bl}(\mathcal{O}G)\) and let \((D,e_{D})\in\mathcal{BP}_{\mathcal{O}}(B)\) be a maximal \(B\)-Brauer pair. If \(P\leq D\) write \(e_{P}\) for the unique block idempotent of \(\mathcal{OC}_{G}(P)\) such that \((P,e_{P})\leq(D,e_{D})\). Set \(\mathcal{F}=\mathcal{F}_{(D,e_{D})}(G,B)\) and for each subgroup \(P\leq D\) set
\[I_{P}:=I_{(P,e_{P})}=N_{G}(P,e_{P}).\]
Suppose that \(P,Q\leq D\) and that \(\varphi:P\stackrel{{\sim}}{{\to}}Q\) is an \(\mathcal{F}\)-isomorphism. Let \(g\in G\) be such that \(\varphi=c_{g}\) and \({}^{g}(P,e_{P})=(Q,e_{Q})\). Then \({}^{g}I_{P}=I_{Q}\) and conjugation by \(g\) induces a group isomorphism
\[{}^{g}(\cdot):R_{\mathbb{K}}(I_{P}/P,e_{P})\stackrel{{\sim}}{{\to }}R_{\mathbb{K}}(I_{Q}/Q,e_{Q}).\]
If \(h\in G\) is another element such that \(\varphi=c_{h}\) and \({}^{h}(P,e_{P})=(Q,e_{Q})\) then \(h^{-1}g\in I_{P}\), hence \({}^{g}(\cdot)={}^{h}(\cdot)\) as maps \(R_{\mathbb{K}}(I_{P}/P,e_{P})\to R_{\mathbb{K}}(I_{Q}/Q,e_{Q})\). In light of this, we obtain a well-defined group isomorphism
\[{}^{\varphi}(\cdot):R_{\mathbb{K}}(I_{P}/P,e_{P}) \stackrel{{\sim}}{{\to}}R_{\mathbb{K}}(I_{Q}/Q,e_{Q})\] \[\chi \mapsto{}^{g}\chi\]
where \(g\in G\) is any element such that \(\varphi=c_{g}\) and \({}^{g}(P,e_{P})=(Q,e_{Q})\), and we call this map _conjugation by_ \(\varphi\). Note that if \(\varphi:P\stackrel{{\sim}}{{\to}}Q\) and \(\psi:Q\stackrel{{\sim}}{{\to}}R\) are \(\mathcal{F}\)-isomorphisms and \(\chi\in R_{\mathbb{K}}(I_{P}/P,e_{P})\) then \({}^{\psi}({}^{\varphi}\chi)={}^{\psi\varphi}\chi\).
Now consider the product \(\prod_{P\leq D}R_{\mathbb{K}}(I_{P}/P,e_{P})\). Say a tuple \((\chi_{P})_{P\leq D}\in\prod_{P\leq D}R_{\mathbb{K}}(I_{P}/P,e_{P})\) is \(\mathcal{F}\)-_fixed_ if \({}^{\varphi}\chi_{P}=\chi_{\varphi(P)}\) for all \(P\leq D\) and all \(\mathcal{F}\)-isomorphisms \(\varphi:P\stackrel{{\sim}}{{\to}}Q\). The subset \(\bigl{(}\prod_{P\leq D}R_{\mathbb{K}}(I_{P}/P,e_{P})\bigr{)}^{\mathcal{F}}\) of \(\mathcal{F}\)-fixed tuples forms a subgroup.
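Observe (a remark we will not need explicitly) that \(\mathcal{F}\)-automorphisms impose no condition here: if \(\varphi\in\operatorname{Aut}_{\mathcal{F}}(P)\) then \(\varphi=c_{g}\) for some \(g\in I_{P}\), and since every \(\chi\in R_{\mathbb{K}}(I_{P}/P,e_{P})\) is a class function on \(I_{P}\) we get
\[{}^{\varphi}\chi={}^{g}\chi=\chi.\]
Thus the \(\mathcal{F}\)-fixed condition only ties together components indexed by distinct subgroups of \(D\).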
**Proposition 6.7**.: _The canonical projection_
\[\pi:\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P,e )\to\prod_{P\leq D}R_{\mathbb{K}}(I_{P}/P,e_{P})\]
_restricts to a group isomorphism_
\[\left(\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P,e)\right)^{G}\stackrel{{\sim}}{{\to}}\left(\prod_{P\leq D}R_{ \mathbb{K}}(I_{P}/P,e_{P})\right)^{\mathcal{F}}.\]
Proof.: Let \((\chi_{(P,e)})\in\Bigl{(}\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{ \mathbb{K}}(I_{(P,e)}/P,e)\Bigr{)}^{G}\). For each \(P\leq D\) set \(\chi_{P}:=\chi_{(P,e_{P})}\). So then \(\pi((\chi_{(P,e)}))=(\chi_{P})_{P\leq D}\). Now let \(P,Q\leq D\) and let \(\varphi:P\stackrel{{\sim}}{{\to}}Q\) be an \(\mathcal{F}\)-isomorphism. Say \(g\in G\) is such that \(\varphi=c_{g}\) and \({}^{g}(P,e_{P})=(Q,e_{Q})\). Then
\[{}^{\varphi}\chi_{P}={}^{g}\chi_{P}={}^{g}\chi_{(P,e_{P})}=\chi_{{}^{g}(P,e_ {P})}=\chi_{(Q,e_{Q})}=\chi_{Q}=\chi_{\varphi(P)},\]
so \(\pi\) restricts to a group homomorphism from \(\Bigl{(}\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/ P,e)\Bigr{)}^{G}\) to \(\bigl{(}\prod_{P\leq D}R_{\mathbb{K}}(I_{P}/P,e_{P})\bigr{)}^{\mathcal{F}}\).
Suppose that \((\chi_{(P,e)})\in\Bigl{(}\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{ \mathbb{K}}(I_{(P,e)}/P,e)\Bigr{)}^{G}\) is such that \(\pi((\chi_{(P,e)}))=0\). Then \(\chi_{(P,e_{P})}=0\) for all \(P\leq D\). Now if \((P,e)\) is any \(B\)-Brauer pair then there exists some element \(g\in G\) such that \({}^{g}(P,e)\leq(D,e_{D})\). Then \(0=\chi_{{}^{g}(P,e)}={}^{g}\chi_{(P,e)}\), and it follows that \(\chi_{(P,e)}=0\). This shows that the restriction of \(\pi\) to \(\Bigl{(}\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/ P,e)\Bigr{)}^{G}\) is injective.
It remains to show that \(\pi\) maps \(\Bigl{(}\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/ P,e)\Bigr{)}^{G}\) onto \(\bigl{(}\prod_{P\leq D}R_{\mathbb{K}}(I_{P}/P,e_{P})\bigr{)}^{\mathcal{F}}\). Let \((\chi_{P})\in\bigl{(}\prod_{P\leq D}R_{\mathbb{K}}(I_{P}/P,e_{P})\bigr{)}^{ \mathcal{F}}\). For each \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\) choose an element \(g\in G\) such that \({}^{g}(P,e)\leq(D,e_{D})\). Then set
\[\chi_{(P,e)}:={}^{g^{-1}}\chi_{{}^{g}P}\in R_{\mathbb{K}}(I_{(P,e)}/P,e).\]
Note that the definition of \(\chi_{(P,e)}\) does not depend on the choice of \(g\): indeed, if \(h\in G\) is another element such that \({}^{h}(P,e)\leq(D,e_{D})\) then \(\varphi=c_{gh^{-1}}:{}^{h}P\xrightarrow{\sim}{}^{g}P\) is an \(\mathcal{F}\)-isomorphism, hence \({}^{gh^{-1}}\chi_{{}^{h}P}={}^{\varphi}\chi_{{}^{h}P}=\chi_{{}^{g}P}\) and therefore \({}^{h^{-1}}\chi_{{}^{h}P}={}^{g^{-1}}\chi_{{}^{g}P}\). Observe that the tuple \((\chi_{(P,e)})\in\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I _{(P,e)}/P,e)\) just defined is \(G\)-fixed: to see this, let \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\) and let \(g\in G\). Choose an element \(h\in G\) such that \({}^{h}(P,e)\leq(D,e_{D})\). Then \(\chi_{(P,e)}={}^{h^{-1}}\chi_{{}^{h}P}\) by definition. Now \({}^{hg^{-1}}\big{(}{}^{g}(P,e)\big{)}\leq(D,e_{D})\), so \(\chi_{{}^{g}(P,e)}={}^{gh^{-1}}\chi_{{}^{hg^{-1}}gP}={}^{gh^{-1}}\chi_{{}^{h}P}\). It follows that
\[{}^{g}\chi_{(P,e)}={}^{g}\big{(}{}^{h^{-1}}\chi_{{}^{h}P}\big{)}=\chi_{{}^{g}( P,e)}.\]
So the tuple \((\chi_{(P,e)})\) is \(G\)-fixed. Finally, note that if \(P\leq D\) then \(\chi_{(P,e_{P})}=\chi_{P}\), so that \(\pi((\chi_{(P,e)}))=(\chi_{P})\). The proof is complete.
Let
\[\delta_{B}:=\delta:T_{\mathcal{O}}(B)\to\left(\prod_{P\leq D}R_{\mathbb{K}}(I _{P}/P,e_{P})\right)^{\mathcal{F}}\]
denote the composite of the maps \(\alpha\) and \(\pi\); in other words, \(\delta=\pi\circ\alpha\).
Note that if \(M\in{}_{B}\mathbf{triv}\) then
\[\delta([M])=(\chi_{M(P,e_{P})})_{P\leq D}\]
where \(\chi_{M(P,e_{P})}\) denotes the character of the trivial source \(\mathcal{O}I_{P}e_{P}\)-module \(M(P,e_{P})=e_{P}\operatorname{Res}_{I_{P}}^{N_{G}(P)}M(P)\).
**Theorem 6.8**.: _Let \(B\in\operatorname{Bl}(\mathcal{O}G)\) and let \((D,e_{D})\in\mathcal{BP}_{\mathcal{O}}(B)\) be a maximal \(B\)-Brauer pair. If \(P\leq D\) write \(e_{P}\) for the unique block idempotent of \(\mathcal{O}C_{G}(P)\) such that \((P,e_{P})\leq(D,e_{D})\) and set \(I_{P}=N_{G}(P,e_{P})\). Let \(\mathcal{F}=\mathcal{F}_{(D,e_{D})}(G,B)\). The image of \(T_{\mathcal{O}}(B)\) under the map_
\[\delta_{B}:=\delta:T_{\mathcal{O}}(B)\hookrightarrow\left(\prod_{P\leq D}R_{\mathbb{K}}(I_{P}/P,e_{P})\right)^{\mathcal{F}},\qquad[M]\mapsto(\chi_{M(P,e_{P})})_{P\leq D}\quad\text{for }M\in{}_{B}\mathbf{triv},\]
_is equal to the subgroup of tuples \((\chi_{P})\in\left(\prod_{P\leq D}R_{\mathbb{K}}(I_{P}/P,e_{P})\right)^{\mathcal{F}}\) that satisfy_
\[\left\{\begin{gathered}\chi_{P}(us\epsilon_{\langle u\rangle})=\mathrm{Ind}_{C_{I_{P}\cap I_{P\langle u\rangle}}(u)}^{C_{I_{P}}(u)}(\mathrm{Res}_{C_{I_{P}\cap I_{P\langle u\rangle}}(u)}^{I_{P\langle u\rangle}}(\chi_{P\langle u\rangle}))(s)\\ \text{for all }P\leq D\text{, }u\in N_{D}(P)\text{, and }s\in C_{I_{P}}(u)_{p^{\prime}},\\ \text{where }\epsilon_{\langle u\rangle}\text{ is the unique block of }\mathcal{O}C_{I_{P}}(u)\text{ covering }e_{P\langle u\rangle}.\end{gathered}\right.\] (C3)
Proof.: For ease, let \(A\) denote the subgroup of tuples \((\chi_{P})\in\left(\prod_{P\leq D}R_{\mathbb{K}}(I_{P}/P,e_{P})\right)^{ \mathcal{F}}\) that satisfy Condition (C3). We need to show \(\delta(T_{\mathcal{O}}(B))=A\). By definition of \(\delta\), this means we need to show that \(\pi(\alpha(T_{\mathcal{O}}(B)))=A\). Let \((\chi_{(P,e)})\in\alpha(T_{\mathcal{O}}(B))\). Then the tuple \((\chi_{(P,e)})\) satisfies Condition (C1) of Theorem 6.5. Let \(P\leq D\), \(u\in N_{D}(P)\), and \(s\in C_{I_{P}}(u)_{p^{\prime}}\). Set \(\epsilon_{\langle u\rangle}=\mathrm{tr}_{C_{I_{P}\cap I_{P\langle u\rangle}}( u)}^{C_{I_{P}}(u)}(e_{P\langle u\rangle})\). Then by Lemma 2.10, \((u,\epsilon_{\langle u\rangle})\in\mathcal{BE}_{\mathcal{O}}(I_{P},e_{P})\). Note that \(e_{P\langle u\rangle}\in\mathrm{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\) satisfies \(\epsilon_{\langle u\rangle}\cdot e_{P\langle u\rangle}\neq 0\). So by Condition (C1),
\[\chi_{(P,e_{P})}(us\epsilon_{\langle u\rangle})=\mathrm{Ind}_{C_{I_{P}\cap I_{P\langle u\rangle}}(u)}^{C_{I_{P}}(u)}(\mathrm{Res}_{C_{I_{P}\cap I_{P\langle u\rangle}}(u)}^{I_{P\langle u\rangle}}(\chi_{(P\langle u\rangle,e_{P\langle u\rangle})}))(s).\]
This shows that \(\pi((\chi_{(P,e)}))\in A\), hence \(\delta(T_{\mathcal{O}}(B))\subseteq A\).
It remains to show that \(A\subseteq\delta(T_{\mathcal{O}}(B))\). Since \(\delta(T_{\mathcal{O}}(B))=\pi(\alpha(T_{\mathcal{O}}(B)))\) and \(\pi\) restricts to an isomorphism
\[\left(\prod_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}R_{\mathbb{K}}(I_{(P,e)}/P,e)\right)^{G}\stackrel{{\sim}}{{\to}}\left(\prod_{P\leq D}R_{\mathbb{K}}(I_{P}/P,e_{P})\right)^{\mathcal{F}},\]
it is enough to show that \(\pi^{-1}(A)\subseteq\alpha(T_{\mathcal{O}}(B))\). Let \((\chi_{P})\in A\). Recall from the proof of Proposition 6.7 how \(\pi^{-1}((\chi_{P}))\) is defined: let \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\) and choose an element \(g\in G\) such that \({}^{g}(P,e)\leq(D,e_{D})\). Set \(\chi_{(P,e)}:={}^{g^{-1}}\chi_{{}^{g}P}\in R_{\mathbb{K}}(I_{(P,e)}/P,e)\). Then the definition of \(\chi_{(P,e)}\) does not depend on the choice of \(g\), and \(\pi^{-1}((\chi_{P}))=(\chi_{(P,e)})_{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)}\). We need to show \((\chi_{(P,e)})\in\alpha(T_{\mathcal{O}}(B))\). To achieve this, we show that the tuple \((\chi_{(P,e)})\) satisfies Condition (C1) of Theorem 6.5.
Let \(P\) be a fully \(\mathcal{F}\)-normalized subgroup of \(D\). Let \((u,\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{P},e_{P})\), let \(s\in C_{I_{P}}(u)_{p^{\prime}}\), and let \(f\in\mathrm{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\) be such that \(\epsilon\cdot f\neq 0\). Note that \((N_{D}(P),e_{N_{D}(P)})\) is a maximal \(\mathcal{O}I_{P}e_{P}\)-Brauer pair by Lemma 2.10. Therefore there exists an element \(g\in I_{P}\) such that \({}^{g}(\langle u\rangle,\epsilon)\leq(N_{D}(P),e_{N_{D}(P)})\). By Lemma 2.10 and the uniqueness of Brauer pairs it follows that \({}^{g}\epsilon=\epsilon_{\langle{}^{g}u\rangle}\). Since \(\epsilon\cdot f\neq 0\) we have that \(\epsilon_{\langle{}^{g}u\rangle}\cdot{}^{g}f\neq 0\), hence \({}^{g}f\) is \(C_{I_{P}}({}^{g}u)\)-conjugate to \(e_{P\langle{}^{g}u\rangle}\). Let \(h\in C_{I_{P}}({}^{g}u)\) be such that \({}^{hg}f=e_{P\langle{}^{g}u\rangle}\). Now by Condition (C3) we have that
\[\chi_{P}({}^{g}u\,{}^{g}s\,\epsilon_{\langle{}^{g}u\rangle})=\operatorname{Ind}_{C_{I_{P}\cap I_{P\langle{}^{g}u\rangle}}({}^{g}u)}^{C_{I_{P}}({}^{g}u)}(\operatorname{Res}_{C_{I_{P}\cap I_{P\langle{}^{g}u\rangle}}({}^{g}u)}^{I_{P\langle{}^{g}u\rangle}}(\chi_{P\langle{}^{g}u\rangle}))({}^{g}s).\]
Note that the left hand side of the above equation can be rewritten:
\[\chi_{P}({}^{g}u\,{}^{g}s\,\epsilon_{\langle{}^{g}u\rangle})=\chi_{P}({}^{g}(us\epsilon))=\chi_{P}(us\epsilon)\]
since \(g\in I_{P}\). The right hand side can also be rewritten: since \({}^{hg}(P\langle{u}\rangle,f)=(P\langle{}^{g}u\rangle,e_{P\langle{}^{g}u \rangle})\) we have \(\chi_{P\langle{}^{g}u\rangle}={}^{hg}\chi_{(P\langle{u}\rangle,f)}\), and therefore
\[\begin{aligned}\operatorname{Ind}_{C_{I_{P}\cap I_{P\langle{}^{g}u\rangle}}({}^{g}u)}^{C_{I_{P}}({}^{g}u)}&(\operatorname{Res}_{C_{I_{P}\cap I_{P\langle{}^{g}u\rangle}}({}^{g}u)}^{I_{P\langle{}^{g}u\rangle}}(\chi_{P\langle{}^{g}u\rangle}))({}^{g}s)\\ &=\operatorname{Ind}_{C_{I_{P}\cap I_{P\langle{}^{g}u\rangle}}({}^{g}u)}^{C_{I_{P}}({}^{g}u)}(\operatorname{Res}_{C_{I_{P}\cap I_{P\langle{}^{g}u\rangle}}({}^{g}u)}^{I_{P\langle{}^{g}u\rangle}}({}^{hg}\chi_{(P\langle u\rangle,f)}))({}^{g}s)\\ &={}^{hg}\operatorname{Ind}_{C_{I_{P}\cap I_{(P\langle u\rangle,f)}}(u)}^{C_{I_{P}}(u)}(\operatorname{Res}_{C_{I_{P}\cap I_{(P\langle u\rangle,f)}}(u)}^{I_{(P\langle u\rangle,f)}}(\chi_{(P\langle u\rangle,f)}))({}^{g}s)\\ &={}^{g}\operatorname{Ind}_{C_{I_{P}\cap I_{(P\langle u\rangle,f)}}(u)}^{C_{I_{P}}(u)}(\operatorname{Res}_{C_{I_{P}\cap I_{(P\langle u\rangle,f)}}(u)}^{I_{(P\langle u\rangle,f)}}(\chi_{(P\langle u\rangle,f)}))({}^{g}s)\\ &=\operatorname{Ind}_{C_{I_{P}\cap I_{(P\langle u\rangle,f)}}(u)}^{C_{I_{P}}(u)}(\operatorname{Res}_{C_{I_{P}\cap I_{(P\langle u\rangle,f)}}(u)}^{I_{(P\langle u\rangle,f)}}(\chi_{(P\langle u\rangle,f)}))(s).\end{aligned}\]
Thus we have
\[\chi_{P}(us\epsilon)=\operatorname{Ind}_{C_{I_{P}\cap I_{(P\langle u\rangle,f)}}(u)}^{C_{I_{P}}(u)}(\operatorname{Res}_{C_{I_{P}\cap I_{(P\langle u\rangle,f)}}(u)}^{I_{(P\langle u\rangle,f)}}(\chi_{(P\langle u\rangle,f)}))(s).\]
Now let \((P,e)\in\mathcal{BP}_{\mathcal{O}}(B)\), let \((u,\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{(P,e)},e)\), let \(s\in C_{I_{(P,e)}}(u)_{p^{\prime}}\), and let \(f\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle{u}\rangle))\) be such that \(\epsilon\cdot f\neq 0\). We need to show that
\[\chi_{(P,e)}(us\epsilon)=\operatorname{Ind}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{C_{I_{(P,e)}}(u)}(\operatorname{Res}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{I_{(P\langle u\rangle,f)}}(\chi_{(P\langle u\rangle,f)}))(s).\]
Let \(g\in G\) be such that \({}^{g}(P,e)\leq(D,e_{D})\). Since any subgroup of \(D\) is \(\mathcal{F}\)-conjugate to a fully \(\mathcal{F}\)-normalized subgroup of \(D\) we can assume that \({}^{g}P\) is fully \(\mathcal{F}\)-normalized. Recall that \(\chi_{(P,e)}={}^{g^{-1}}\chi_{{}^{g}P}\). Now \(({}^{g}u,{}^{g}\epsilon)\in\mathcal{BE}_{\mathcal{O}}(I_{{}^{g}P},e_{{}^{g}P})\), \({}^{g}s\in C_{I_{{}^{g}P}}({}^{g}u)_{p^{\prime}}\), and \({}^{g}f\) is a block idempotent of \(\mathcal{O}C_{G}({}^{g}P\langle{}^{g}u\rangle)\) satisfying \({}^{g}\epsilon\cdot{}^{g}f\neq 0\). So by the previous paragraph we have
\[\chi_{{}^{g}P}({}^{g}(us\epsilon))=\operatorname{Ind}_{C_{I_{{}^{g}P}\cap I_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}}({}^{g}u)}^{C_{I_{{}^{g}P}}({}^{g}u)}(\operatorname{Res}_{C_{I_{{}^{g}P}\cap I_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}}({}^{g}u)}^{I_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}}(\chi_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}))({}^{g}s).\]
The left hand side of this equation can be rewritten:
\[\chi_{{}^{g}P}({}^{g}(us\epsilon))={}^{g^{-1}}\chi_{{}^{g}P}(us\epsilon)=\chi _{(P,e)}(us\epsilon).\]
Since \(\chi_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}=\chi_{{}^{g}(P\langle u\rangle,f)}={}^{g}\chi_{(P\langle u\rangle,f)}\) the right hand side can also be rewritten:
\[\begin{aligned}\operatorname{Ind}_{C_{I_{{}^{g}P}\cap I_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}}({}^{g}u)}^{C_{I_{{}^{g}P}}({}^{g}u)}&(\operatorname{Res}_{C_{I_{{}^{g}P}\cap I_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}}({}^{g}u)}^{I_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}}(\chi_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}))({}^{g}s)\\ &=\operatorname{Ind}_{C_{I_{{}^{g}P}\cap I_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}}({}^{g}u)}^{C_{I_{{}^{g}P}}({}^{g}u)}(\operatorname{Res}_{C_{I_{{}^{g}P}\cap I_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}}({}^{g}u)}^{I_{({}^{g}P\langle{}^{g}u\rangle,{}^{g}f)}}({}^{g}\chi_{(P\langle u\rangle,f)}))({}^{g}s)\\ &={}^{g}\operatorname{Ind}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{C_{I_{(P,e)}}(u)}(\operatorname{Res}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{I_{(P\langle u\rangle,f)}}(\chi_{(P\langle u\rangle,f)}))({}^{g}s)\\ &=\operatorname{Ind}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{C_{I_{(P,e)}}(u)}(\operatorname{Res}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{I_{(P\langle u\rangle,f)}}(\chi_{(P\langle u\rangle,f)}))(s).\end{aligned}\]
Thus we have
\[\chi_{(P,e)}(us\epsilon)=\operatorname{Ind}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{C_{I_{(P,e)}}(u)}(\operatorname{Res}_{C_{I_{(P,e)}\cap I_{(P\langle u\rangle,f)}}(u)}^{I_{(P\langle u\rangle,f)}}(\chi_{(P\langle u\rangle,f)}))(s).\]
This shows that the tuple \((\chi_{(P,e)})\) satisfies Condition (C1) of Theorem 6.5, and hence that \((\chi_{(P,e)})\in\alpha(T_{\mathcal{O}}(B))\). By definition of \(\delta\) we obtain that \((\chi_{P})=\pi((\chi_{(P,e)}))\in\delta(T_{\mathcal{O}}(B))\), and since \((\chi_{P})\) was an arbitrary element of \(A\) it follows that \(A\subseteq\delta(T_{\mathcal{O}}(B))\). The proof is complete.
## 7 Coherence conditions for trivial source bimodules with twisted diagonal vertices
Throughout this section \(G\) and \(H\) denote finite groups and \((\mathbb{K},\mathcal{O},F)\) is a \(p\)-modular system large enough for \(G\times H\). We follow the conventions set up in [2]. In particular, if \(R\) is a commutative ring and \(M\) is an \((RG,RH)\)-bimodule we always assume that the induced left and right \(R\)-module structures on \(M\) coincide. Any \((RG,RH)\)-bimodule \(M\) may be viewed as a left \(R[G\times H]\)-module by defining \((g,h)m=gmh^{-1}\), and vice versa. One obtains an isomorphism of categories \({}_{RG}\mathbf{mod}_{RH}\cong{}_{R[G\times H]}\mathbf{mod}\) in this way. We also identify \(R[G\times H]\) with \((RG)\otimes_{R}(RH)\) via the isomorphism \((g,h)\mapsto g\otimes h\). If \(e\in Z(RG)\) and \(f\in Z(RH)\) are idempotents then an \((RGe,RHf)\)-bimodule \(M\) is the same thing as a left \(R[G\times H](e\otimes f^{*})\)-module with these conventions. Here \(f^{*}\) denotes the image of \(f\) under the antipode \((-)^{*}:RH\to RH\), \(h\mapsto h^{-1}\).
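Concretely (a restatement of the conventions just fixed, for ease of reference): for \(m\in M\), \((g,h)\in G\times H\), and an element of \(RH\) written in the group basis,
\[(g,h)\cdot m=gmh^{-1},\qquad\Big(\sum_{x\in H}\alpha_{x}x\Big)^{*}=\sum_{x\in H}\alpha_{x}x^{-1}.\]
In particular \((-)^{*}\) is an anti-automorphism of \(RH\), so it carries idempotents to idempotents and central elements to central elements.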
Set \(T_{\mathcal{O}}(G,H):=T(\mathcal{O}G,\mathcal{O}H):=T(\mathcal{O}[G\times H])\). More generally, if \(e\in Z(\mathcal{O}G)\) and \(f\in Z(\mathcal{O}H)\) are idempotents, set
\[T(\mathcal{O}Ge,\mathcal{O}Hf):=T(\mathcal{O}[G\times H](e\otimes f^{*})).\]
Let \(e\in Z(\mathcal{O}G)\) and \(f\in Z(\mathcal{O}H)\) be idempotents. Let \(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)=T^{\Delta}(\mathcal{O}[G\times H](e \otimes f^{*}))\) denote the subgroup of \(T(\mathcal{O}Ge,\mathcal{O}Hf)\) spanned by
the standard basis elements \([M]\) where \(M\) is an indecomposable trivial source \(\mathcal{O}[G\times H](e\otimes f^{*})\)-module with twisted diagonal vertices.
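Here a twisted diagonal subgroup of \(G\times H\) is one of the form
\[\Delta(P,\phi,Q)=\{(\phi(y),y):y\in Q\}\leq G\times H,\]
where \(P\leq G\), \(Q\leq H\), and \(\phi:Q\stackrel{{\sim}}{{\to}}P\) is a group isomorphism; we recall this shape (with the convention that \(\phi\) carries the \(H\)-side subgroup \(Q\) onto the \(G\)-side subgroup \(P\)) only to fix the notation used below. As before, \(S_{p}^{\Delta}(G\times H)\) denotes the set of twisted diagonal \(p\)-subgroups of \(G\times H\).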
Recall from Section 6 (specifically, Theorem 6.2) that we have an injective homomorphism
\[\beta_{G\times H}=\beta:T(\mathcal{O}Ge,\mathcal{O}Hf)\hookrightarrow\left( \prod_{P\in S_{p}(G\times H)}R_{\mathbb{K}}(N_{G\times H}(P)/P,\mathrm{br}_{P} (e\otimes f^{*}))\right)^{G\times H}.\]
Our next result characterizes the image of \(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\) under \(\beta\).
**Theorem 7.1**.: _Let \(G\) and \(H\) be finite groups and let \(e\in Z(\mathcal{O}G)\), \(f\in Z(\mathcal{O}H)\) be idempotents. With the notation above,_
\[\beta(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf))=\left\{(\chi_{P})\in\beta(T( \mathcal{O}Ge,\mathcal{O}Hf))|\chi_{P}=0\text{ if }P\notin S_{p}^{\Delta}(G\times H) \right\}.\]
Proof.: First note that the collection of character tuples \((\chi_{P})\in\beta(T(\mathcal{O}Ge,\mathcal{O}Hf))\) that satisfy \(\chi_{P}=0\) if \(P\notin S_{p}^{\Delta}(G\times H)\) is a subgroup of \(\beta(T(\mathcal{O}Ge,\mathcal{O}Hf))\). For ease, let us denote this subgroup by \(B\). So we need to show that \(\beta(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf))=B\). Let \(M\) be an indecomposable trivial source \(\mathcal{O}[G\times H](e\otimes f^{*})\)-module with twisted diagonal vertices (so \([M]\) is a standard basis element in \(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\)). Recall that \(\beta([M])=(\chi_{M(P)})_{P\in S_{p}(G\times H)}\). Let \(P\in S_{p}(G\times H)\setminus S_{p}^{\Delta}(G\times H)\) and suppose that \(\chi_{M(P)}\neq 0\). Then \(M(P)\neq 0\), hence \(\overline{M}(P)\neq 0\) also. By [2, Lemma 3.6(a)] \(P\) must be \(G\times H\)-conjugate to a subgroup of a vertex of \(M\). But this implies that \(P\) is a twisted diagonal subgroup of \(G\times H\), a contradiction. Thus \(\chi_{M(P)}=0\). This shows that \(\beta([M])\in B\). Since character tuples of the form \(\beta([M])\) generate \(\beta(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf))\) it follows that \(\beta(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf))\subseteq B\).
It remains to show that \(B\subseteq\beta(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf))\). It suffices to show that \(\beta^{-1}(B)\subseteq T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\). Suppose, by way of contradiction, that \(\beta^{-1}(B)\not\subseteq T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\). Let \(m\in\beta^{-1}(B)\setminus T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\). Write \(m=\sum a_{[M]}[M]\) where \([M]\) runs over the standard basis elements of \(T(\mathcal{O}Ge,\mathcal{O}Hf)\) -- i.e., the isomorphism classes of indecomposable trivial source \(\mathcal{O}[G\times H](e\otimes f^{*})\)-modules -- and where \(a_{[M]}\in\mathbb{Z}\). Since \(m\notin T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\) there exists a standard basis element \([M]\) such that \(a_{[M]}\neq 0\) and \(M\) has a vertex that is not twisted diagonal. Choose a \(p\)-subgroup \(P\in S_{p}(G\times H)\setminus S_{p}^{\Delta}(G\times H)\) maximal with respect to the property that \(P\) is a vertex of an indecomposable trivial source \(\mathcal{O}[G\times H](e\otimes f^{*})\)-module \(M\) with \(a_{[M]}\neq 0\). Since \(P\) is not a twisted diagonal \(p\)-subgroup of \(G\times H\) and \(\beta(m)\in B\) we have that \(\sum a_{[M]}\chi_{M(P)}=0\). Now if \(M\) is an indecomposable trivial source \(\mathcal{O}[G\times H](e\otimes f^{*})\)-module and \(P\) is not \(G\times H\)-conjugate to a subgroup of a vertex of \(M\) then \(M(P)=0\) by [2, Lemma 3.6(a)]. On the other hand, if \(P\) is \(G\times H\)-conjugate to a subgroup
of a vertex of \(M\) and \(a_{[M]}\neq 0\) then \(P\) must be a vertex of \(M\) by maximality. Thus we have that
\[\sum_{\begin{subarray}{c}[M]\\ P\in\operatorname{vtx}(M)\end{subarray}}a_{[M]}\chi_{M(P)}=0.\]
(Note: the sum above is taken over the set of isomorphism classes of indecomposable trivial source \(\mathcal{O}[G\times H](e\otimes f^{*})\)-modules that have \(P\) as a vertex.) Recall that the Brauer construction \(M\mapsto\overline{M}(P)\) induces a bijection between the set of isomorphism classes of indecomposable trivial source \(\mathcal{O}[G\times H]\)-modules with vertex \(P\) and the set of isomorphism classes of projective indecomposable \(F[N_{G\times H}(P)/P]\)-modules (see [2, Proposition 3.3(c)]). It follows that the "\(\mathcal{O}\)-lifted" Brauer construction \(M\mapsto M(P)\) induces a bijection between the set of isomorphism classes of indecomposable trivial source \(\mathcal{O}[G\times H]\)-modules with vertex \(P\) and the set of isomorphism classes of projective indecomposable \(\mathcal{O}[N_{G\times H}(P)/P]\)-modules. So we see that the equality \(\sum_{[M],\,P\in\operatorname{vtx}(M)}a_{[M]}\chi_{M(P)}=0\) is a nontrivial dependence relation between the characters of the projective indecomposable \(\mathcal{O}[N_{G\times H}(P)/P]\)-modules, contradicting the fact that such characters are always \(\mathbb{K}\)-linearly independent. Therefore we must have \(\beta^{-1}(B)\subseteq T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\), and hence \(B\subseteq\beta(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf))\). The proof is complete.
The corollary below follows from Theorems 6.2 and 7.1.
**Corollary 7.2**.: _Let \(G\) and \(H\) be finite groups and let \(e\in Z(\mathcal{O}G)\), \(f\in Z(\mathcal{O}H)\) be idempotents. The image of \(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\) under \(\beta_{G\times H}\) is the subgroup of tuples \((\chi_{P})\in\left(\prod_{P\in S_{p}(G\times H)}R_{\mathbb{K}}(N_{G\times H}(P )/P,\operatorname{br}_{P}^{G\times H}(e\otimes f^{*}))\right)^{G\times H}\) such that_
1. \(\chi_{P}(us,vt)=\chi_{P\langle(u,v)\rangle}(s,t)\) _for all_ \(P\in S_{p}(G\times H)\)_, all_ \(p\)_-elements_ \((u,v)\in N_{G\times H}(P)\)_, and all_ \((s,t)\in C_{N_{G\times H}(P)}(u,v)_{p^{\prime}}\)_; and_
2. \(\chi_{P}=0\) _if_ \(P\notin S_{p}^{\Delta}(G\times H)\)_._
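For example (a consequence of the corollary, recorded here as a sanity check), taking \(P=1\) in Condition (1) and combining with Condition (2) recovers the familiar vanishing behaviour of such character tuples: for \(p\)-elements \(u\in G\), \(v\in H\) and \(p^{\prime}\)-elements \(s\in C_{G}(u)\), \(t\in C_{H}(v)\),
\[\chi_{1}(us,vt)=\chi_{\langle(u,v)\rangle}(s,t)=0\qquad\text{whenever }|u|\neq|v|,\]
since the cyclic subgroup \(\langle(u,v)\rangle\) is twisted diagonal precisely when \(u\) and \(v\) have the same order.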
Let \(G\) and \(H\) be finite groups and let \(e\in Z(\mathcal{O}G)\), \(f\in Z(\mathcal{O}H)\) be idempotents. Note that if \(\Delta(P,\phi,Q)\) is a twisted diagonal \(p\)-subgroup of \(G\times H\) then
\[\operatorname{br}_{\Delta(P,\phi,Q)}^{G\times H}(e\otimes f^{*})=\operatorname {br}_{P}^{G}(e)\otimes\operatorname{br}_{Q}^{H}(f)^{*}.\]
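This identity holds because centralizers of twisted diagonal subgroups factor (a standard observation, included here as the one-line justification):
\[C_{G\times H}(\Delta(P,\phi,Q))=C_{G}(P)\times C_{H}(Q),\]
so the Brauer construction at \(\Delta(P,\phi,Q)\), applied to \(e\otimes f^{*}\in(\mathcal{O}G)^{P}\otimes_{\mathcal{O}}(\mathcal{O}H)^{Q}\), factors coordinatewise; moreover \(\operatorname{br}_{Q}^{H}\) commutes with \((-)^{*}\) since inversion preserves \(C_{H}(Q)\), whence \(\operatorname{br}_{Q}^{H}(f^{*})=\operatorname{br}_{Q}^{H}(f)^{*}\).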
Write \(\pi\) and \(i\) for the obvious projection and inclusion maps below:
\[\begin{gathered}\prod_{P\in S_{p}(G\times H)}R_{\mathbb{K}}(N_{G\times H}(P)/P,\operatorname{br}_{P}^{G\times H}(e\otimes f^{*}))\\ \pi\big\downarrow\ \big\uparrow i\\ \prod_{\Delta(P,\phi,Q)\in S_{p}^{\Delta}(G\times H)}R_{\mathbb{K}}(N_{G\times H}(\Delta(P,\phi,Q))/\Delta(P,\phi,Q),\operatorname{br}_{P}^{G}(e)\otimes\operatorname{br}_{Q}^{H}(f)^{*})\end{gathered}\]
Since \(\pi\) and \(i\) are \(G\times H\)-homomorphisms they restrict to maps on the respective subgroups of \(G\times H\)-invariant tuples. We will abusively denote these restrictions by \(\pi\) and \(i\) in what follows. Note that the image of (the restriction of) \(i\) is the subgroup of tuples \((\chi_{P})\) in \((\prod_{P\in S_{p}(G\times H)}R_{\mathbb{K}}(N_{G\times H}(P)/P,\operatorname{ br}_{P}^{G\times H}(e\otimes f^{*})))^{G\times H}\) that satisfy \(\chi_{P}=0\) if \(P\notin S_{p}^{\Delta}(G\times H)\).
**Corollary 7.3**.: _Let \(G\) and \(H\) be finite groups and let \(e\in Z(\mathcal{O}G)\), \(f\in Z(\mathcal{O}H)\) be idempotents. The composite \(\pi\beta\) is injective on \(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\), and the image of \(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\) under \(\pi\beta\) is the subgroup of tuples \((\chi_{\Delta(P,\phi,Q)})\) in \(\left(\prod_{\Delta(P,\phi,Q)\in S_{p}^{\Delta}(G\times H)}R_{\mathbb{K}}(N_{G\times H}(\Delta(P,\phi,Q))/\Delta(P,\phi,Q),\operatorname{br}_{P}^{G}(e)\otimes\operatorname{br}_{Q}^{H}(f)^{*})\right)^{G\times H}\) that satisfy:_
\[\chi_{\Delta(P,\phi,Q)}(us,vt)=\begin{cases}\chi_{\Delta(P,\phi,Q)\langle(u,v )\rangle}(s,t)&\text{if }\Delta(P,\phi,Q)\langle(u,v)\rangle\in S_{p}^{\Delta}(G \times H)\\ 0&\text{else}\end{cases}\]
_for all \(\Delta(P,\phi,Q)\in S_{p}^{\Delta}(G\times H)\), all \(p\)-elements \((u,v)\in N_{G\times H}(\Delta(P,\phi,Q))\), and all \((s,t)\in C_{N_{G\times H}(\Delta(P,\phi,Q))}(u,v)_{p^{\prime}}\)._
Proof.: We first show that \(\pi\beta\) is injective on \(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\). Let \(m\in T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf)\) and suppose that \(\pi\beta(m)=0\). Write \(\beta(m)=(\chi_{P})_{P\in S_{p}(G\times H)}\). By Theorem 7.1\(\chi_{P}=0\) if \(P\notin S_{p}^{\Delta}(G\times H)\). Since \(\pi\beta(m)=0\) also \(\chi_{P}=0\) if \(P\in S_{p}^{\Delta}(G\times H)\). Therefore \(\beta(m)=0\). But \(\beta\) is injective, so \(m=0\).
Now let \((\chi_{P})\in\beta(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf))\). Then
\[(\chi_{P})\in\left(\prod_{P\in S_{p}(G\times H)}R_{\mathbb{K}}(N_{G\times H}(P )/P,\operatorname{br}_{P}^{G\times H}(e\otimes f^{*}))\right)^{G\times H}\]
and \((\chi_{P})\) satisfies Conditions (1) and (2) of Corollary 7.2. Of course we have
\[\pi((\chi_{P}))\in\left(\prod_{\Delta(P,\phi,Q)\in S_{p}^{\Delta}(G\times H)}R_{\mathbb{K}}(N_{G\times H}(\Delta(P,\phi,Q))/\Delta(P,\phi,Q),\operatorname{br}_{P}^{G}(e)\otimes\operatorname{br}_{Q}^{H}(f)^{*})\right)^{G\times H}.\]
Let \(\Delta(P,\phi,Q)\in S_{p}^{\Delta}(G\times H)\), let \((u,v)\in N_{G\times H}(\Delta(P,\phi,Q))\) be a \(p\)-element, and let \((s,t)\in C_{N_{G\times H}(\Delta(P,\phi,Q))}(u,v)_{p^{\prime}}\). If \(\Delta(P,\phi,Q)\langle(u,v)\rangle\) is not twisted diagonal then \(\chi_{\Delta(P,\phi,Q)\langle(u,v)\rangle}=0\), so \(\chi_{\Delta(P,\phi,Q)}(us,vt)=0\). If \(\Delta(P,\phi,Q)\langle(u,v)\rangle\) is twisted diagonal then \(\chi_{\Delta(P,\phi,Q)}(us,vt)=\chi_{\Delta(P,\phi,Q)\langle(u,v)\rangle}(s,t)\) by Condition (1) of Corollary 7.2. This shows that \(\pi\beta(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf))\) is contained in the subgroup of character tuples specified in the statement of the corollary.
Conversely, suppose that
\[(\chi_{\Delta(P,\phi,Q)})\in\left(\prod_{\Delta(P,\phi,Q)\in S_{p}^{\Delta}(G\times H)}R_{\mathbb{K}}(N_{G\times H}(\Delta(P,\phi,Q))/\Delta(P,\phi,Q),\operatorname{br}_{P}^{G}(e)\otimes\operatorname{br}_{Q}^{H}(f)^{*})\right)^{G\times H}\]
and that \((\chi_{\Delta(P,\phi,Q)})\) satisfies
\[\chi_{\Delta(P,\phi,Q)}(us,vt)=\begin{cases}\chi_{\Delta(P,\phi,Q)\langle(u,v) \rangle}(s,t)&\text{if }\Delta(P,\phi,Q)\langle(u,v)\rangle\in S_{p}^{\Delta}(G\times H)\\ 0&\text{else}\end{cases}\]
for all \(\Delta(P,\phi,Q)\in S_{p}^{\Delta}(G\times H)\), all \(p\)-elements \((u,v)\in N_{G\times H}(\Delta(P,\phi,Q))\), and all \((s,t)\in C_{N_{G\times H}(\Delta(P,\phi,Q))}(u,v)_{p^{\prime}}\). It is straightforward to check that \(i((\chi_{\Delta(P,\phi,Q)}))\in\beta(T^{\Delta}(\mathcal{O}Ge,\mathcal{O}Hf))\) using Corollary 7.2. It follows that
\[(\chi_{\Delta(P,\phi,Q)})=\pi i((\chi_{\Delta(P,\phi,Q)}))\in\pi\beta(T^{ \Delta}(\mathcal{O}Ge,\mathcal{O}Hf)).\]
This completes the proof.
Now let \(G\) and \(H\) be finite groups, and let \(A\in\operatorname{Bl}(\mathcal{O}G)\), \(B\in\operatorname{Bl}(\mathcal{O}H)\) with respective identities \(e_{A}\) and \(f_{B}\). Then every Brauer pair of \(A\otimes_{\mathcal{O}}B^{*}\) is of the form \((P,e\otimes f^{*})\) where \(P\in S_{p}(G\times H)\), \((p_{1}(P),e)\in\mathcal{BP}_{\mathcal{O}}(A)\) and \((p_{2}(P),f)\in\mathcal{BP}_{\mathcal{O}}(B)\). Set
\[Y_{(P,e\otimes f^{*})}=N_{G\times H}(P,e\otimes f^{*})\]
for each \((P,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})\). Recall from Section 6 that we have an injective group homomorphism
\[\alpha:T(A,B)\hookrightarrow\left(\prod_{(P,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})}R_{\mathbb{K}}(Y_{(P,e\otimes f^{*})}/P,e\otimes f^{*})\right)^{G\times H}\]
defined by
\[\alpha([M])=(\chi_{M(P,e\otimes f^{*})})_{(P,e\otimes f^{*})\in\mathcal{BP}_{ \mathcal{O}}(A\otimes B^{*})}\qquad M\in{}_{A\otimes B^{*}}\text{\bf triv}.\]
**Theorem 7.4**.: _Let \(G\) and \(H\) be finite groups and let \(A\in\operatorname{Bl}(\mathcal{O}G)\), \(B\in\operatorname{Bl}(\mathcal{O}H)\) with respective identities \(e_{A}\) and \(f_{B}\). With the notation above,_
\[\alpha(T^{\Delta}(A,B))=\left\{(\chi_{(P,e\otimes f^{*})})\in\alpha(T(A,B))| \chi_{(P,e\otimes f^{*})}=0\text{ if }P\notin S_{p}^{\Delta}(G\times H) \right\}.\]
Proof.: Recall from Section 6 that \(\alpha\) is the composite of the maps \(\beta\) and \(\rho\), i.e., \(\alpha=\rho\circ\beta\) on \(T(A,B)\).
The isomorphism \(\rho\) maps a \(G\times H\)-fixed character tuple \((\chi_{P})_{P\in S_{p}(G\times H)}\) to the tuple whose \((P,e\otimes f^{*})\)-component is \((e\otimes f^{*})\cdot\operatorname{Res}_{Y_{(P,e\otimes f^{*})}}^{N_{G\times H} (P)}(\chi_{P})\); that is
\[\rho((\chi_{P})_{P\in S_{p}(G\times H)})=((e\otimes f^{*})\cdot\operatorname{ Res}_{Y_{(P,e\otimes f^{*})}}^{N_{G\times H}(P)}(\chi_{P}))_{(P,e\otimes f^{*}) \in\mathcal{B}\mathcal{P}_{\mathcal{O}}(A\otimes B^{*})}.\]
Let \((\chi_{P})\in\beta(T^{\Delta}(A,B))\). Then by Theorem 7.1 we have that \((\chi_{P})\in\beta(T(A,B))\) and \(\chi_{P}=0\) if \(P\notin S_{p}^{\Delta}(G\times H)\). For each \(A\otimes B^{*}\)-Brauer pair \((P,e\otimes f^{*})\) set \(\psi_{(P,e\otimes f^{*})}=(e\otimes f^{*})\cdot\operatorname{Res}_{Y_{(P,e \otimes f^{*})}}^{N_{G\times H}(P)}(\chi_{P})\), so that \(\rho((\chi_{P}))=(\psi_{(P,e\otimes f^{*})})\). Since \((\chi_{P})\in\beta(T(A,B))\) we have that \((\psi_{(P,e\otimes f^{*})})\in\alpha(T(A,B))\). Moreover if \((P,e\otimes f^{*})\) is an \(A\otimes B^{*}\)-Brauer pair such that \(P\) is not a twisted diagonal subgroup of \(G\times H\) then \(\chi_{P}=0\), hence \(\psi_{(P,e\otimes f^{*})}=0\). Since \(\alpha(T^{\Delta}(A,B))=\rho(\beta(T^{\Delta}(A,B)))\) this shows that
\[\alpha(T^{\Delta}(A,B))\subseteq\left\{(\chi_{(P,e\otimes f^{*})})\in\alpha(T (A,B))|\chi_{(P,e\otimes f^{*})}=0\text{ if }P\notin S_{p}^{\Delta}(G\times H) \right\}.\]
For the reverse containment, suppose that \((\chi_{(P,e\otimes f^{*})})\in\alpha(T(A,B))\) with the property that \(\chi_{(P,e\otimes f^{*})}=0\) if \(P\notin S_{p}^{\Delta}(G\times H)\). Set \((\psi_{P})=\rho^{-1}((\chi_{(P,e\otimes f^{*})}))\). Then \((\psi_{P})\in\beta(T(A,B))\). Furthermore if \(P\) is a \(p\)-subgroup of \(G\times H\) that is not twisted diagonal then the proof of Lemma 6.4 makes it clear that \(\psi_{P}=0\) (indeed, if \(\operatorname{br}_{P}^{G\times H}(e_{A}\otimes f_{B}^{*})=0\) then \(\psi_{P}=0\), and otherwise \(\psi_{P}\) is a sum of characters of the form \(\operatorname{Ind}_{Y_{(P,e\otimes f^{*})}}^{N_{G\times H}(P)}(\chi_{(P,e \otimes f^{*})})\), which are all \(0\)). So Theorem 7.1 tells us that \((\psi_{P})\in\beta(T^{\Delta}(A,B))\). It follows that \((\chi_{(P,e\otimes f^{*})})\in\alpha(T^{\Delta}(A,B))\). This gives the reverse containment and completes the proof.
The following corollary is immediate from Theorem 7.4 and Condition (C1) of Theorem 6.5.
**Corollary 7.5**.: _Let \(G\) and \(H\) be finite groups, let \(A\in\operatorname{Bl}(\mathcal{O}G)\) and let \(B\in\operatorname{Bl}(\mathcal{O}H)\) with respective identities \(e_{A}\) and \(f_{B}\). For each Brauer pair \((P,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})\) set \(Y_{(P,e\otimes f^{*})}=N_{G\times H}(P,e\otimes f^{*})\). The image of \(T^{\Delta}(A,B)\) under the map_
\[\alpha:T(A,B)\hookrightarrow\left(\prod_{(P,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})}R_{\mathbb{K}}(Y_{(P,e\otimes f^{*})}/P,e\otimes f^{*})\right)^{G\times H}\]
_is the subgroup of tuples \((\chi_{(P,e\otimes f^{*})})\in(\prod_{(P,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})}R_{\mathbb{K}}(Y_{(P,e\otimes f^{*})}/P,e\otimes f^{*}))^{G\times H}\) that satisfy \(\chi_{(P,e\otimes f^{*})}=0\) if \(P\notin S_{p}^{\Delta}(G\times H)\) and_
\[\chi_{(P,e\otimes f^{*})}((us,vt)\epsilon)=\operatorname{Ind}_{C_{Y_{(P,e\otimes f^{*})}\cap Y_{(P\langle(u,v)\rangle,e^{\prime}\otimes f^{\prime*})}}(u,v)}^{C_{Y_{(P,e\otimes f^{*})}}(u,v)}(\operatorname{Res}_{C_{Y_{(P,e\otimes f^{*})}\cap Y_{(P\langle(u,v)\rangle,e^{\prime}\otimes f^{\prime*})}}(u,v)}^{Y_{(P\langle(u,v)\rangle,e^{\prime}\otimes f^{\prime*})}}(\chi_{(P\langle(u,v)\rangle,e^{\prime}\otimes f^{\prime*})}))(s,t)\]
_for all \((P,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})\), \(((u,v),\epsilon)\in\mathcal{B}\mathcal{E}_{\mathcal{O}}(Y_{(P,e\otimes f^{*})},e \otimes f^{*})\), \((s,t)\in C_{Y_{(P,e\otimes f^{*})}}(u,v)_{p^{\prime}}\), and all \(e^{\prime}\in\operatorname{bli}(\mathcal{OC}_{G}(p_{1}(P)\langle u\rangle))\), \(f^{\prime}\in\operatorname{bli}(\mathcal{OC}_{H}(p_{2}(P)\langle v\rangle))\) such that \(\epsilon\cdot(e^{\prime}\otimes f^{\prime*})\neq 0\)._
**Corollary 7.6**.: _Let \(G\) and \(H\) be finite groups, let \(A\in\operatorname{Bl}(\mathcal{O}G)\) and let \(B\in\operatorname{Bl}(\mathcal{O}H)\) with respective identities \(e_{A}\) and \(f_{B}\). For each Brauer pair \((P,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})\) set \(Y_{(P,e\otimes f^{*})}=N_{G\times H}(P,e\otimes f^{*})\). The image of \(T^{\Delta}(A,B)\) under the map_
\[T_{\mathcal{O}}(A,B)\xhookrightarrow{\alpha}\ \left(\prod_{(P,e\otimes f^{*})\in \mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})}R_{\mathbb{K}}(Y_{(P,e\otimes f^{* })}/P,e\otimes f^{*})\right)^{G\times H}\]
_is the subgroup of tuples \((\chi_{(P,e\otimes f^{*})})\in(\prod_{(P,e\otimes f^{*})\in\mathcal{BP}_{ \mathcal{O}}(A\otimes B^{*})}R_{\mathbb{K}}(Y_{(P,e\otimes f^{*})}/P,e \otimes f^{*}))^{G\times H}\) that satisfy \(\chi_{(P,e\otimes f^{*})}=0\) if \(P\notin S_{p}^{\Delta}(G\times H)\) and_
\[\chi_{(P,e\otimes f^{*})}(us,vt)=\sum_{\begin{subarray}{c}e^{\prime}\in\operatorname{bli}(\mathcal{OC}_{G}(p_{1}(P)\langle u\rangle))\\ (p_{1}(P),e)\trianglelefteq(p_{1}(P)\langle u\rangle,e^{\prime})\\ s\in N_{G}(p_{1}(P)\langle u\rangle,e^{\prime})\end{subarray}}\ \sum_{\begin{subarray}{c}f^{\prime}\in\operatorname{bli}(\mathcal{OC}_{H}(p_{2}(P)\langle v\rangle))\\ (p_{2}(P),f)\trianglelefteq(p_{2}(P)\langle v\rangle,f^{\prime})\\ t\in N_{H}(p_{2}(P)\langle v\rangle,f^{\prime})\end{subarray}}\chi_{(P\langle(u,v)\rangle,e^{\prime}\otimes f^{\prime*})}(s,t)\]
_for all \((P,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})\), all \(p\)-elements \((u,v)\in Y_{(P,e\otimes f^{*})}\), and all \((s,t)\in C_{Y_{(P,e\otimes f^{*})}}(u,v)_{p^{\prime}}\)._
Proof.: From Theorem 7.4 we know that \(\alpha(T^{\Delta}(A,B))\) is the subgroup of tuples \((\chi_{(P,e\otimes f^{*})})\) in \((\prod_{(P,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})}R_{ \mathbb{K}}(Y_{(P,e\otimes f^{*})}/P,e\otimes f^{*}))^{G\times H}\) satisfying \(\chi_{(P,e\otimes f^{*})}=0\) if \(P\notin S_{p}^{\Delta}(G\times H)\) and Condition (C2) of Theorem 6.5, which translates literally to
\[\chi_{(P,e\otimes f^{*})}(us,vt)=\sum_{\begin{subarray}{c}\varphi\in\operatorname{bli}(\mathcal{OC}_{G\times H}(P\langle(u,v)\rangle))\\ (P,e\otimes f^{*})\trianglelefteq(P\langle(u,v)\rangle,\varphi)\\ (s,t)\in Y_{(P\langle(u,v)\rangle,\varphi)}\end{subarray}}\chi_{(P\langle(u,v)\rangle,\varphi)}(s,t)\]
for all \((P,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})\), all \(p\)-elements \((u,v)\in Y_{(P,e\otimes f^{*})}\), and all \((s,t)\in C_{Y_{(P,e\otimes f^{*})}}(u,v)_{p^{\prime}}\). But for fixed such \((P,e\otimes f^{*})\), \((u,v)\), and \((s,t)\) this sum can be reindexed, because the indexing set is in bijection with the set of ordered pairs \((e^{\prime},f^{\prime})\) where \(e^{\prime}\in\operatorname{bli}(\mathcal{OC}_{G}(p_{1}(P)\langle u\rangle))\) is such that \((p_{1}(P),e)\trianglelefteq(p_{1}(P)\langle u\rangle,e^{\prime})\) and \(s\in N_{G}(p_{1}(P)\langle u\rangle,e^{\prime})\) and where \(f^{\prime}\in\operatorname{bli}(\mathcal{OC}_{H}(p_{2}(P)\langle v\rangle))\) is such that \((p_{2}(P),f)\trianglelefteq(p_{2}(P)\langle v\rangle,f^{\prime})\) and \(t\in N_{H}(p_{2}(P)\langle v\rangle,f^{\prime})\): a bijection is given by \((e^{\prime},f^{\prime})\mapsto e^{\prime}\otimes f^{\prime*}\). Note that to prove this map is a well-defined bijection one makes use of Lemma 2.6. Reindexing the sum via this bijection gives the result.
Let \(G\) and \(H\) be finite groups and let \(A\in\operatorname{Bl}(\mathcal{O}G)\), \(B\in\operatorname{Bl}(\mathcal{O}H)\). Let \(\mathcal{BP}^{\Delta}_{\mathcal{O}}(A\otimes B^{*})\) denote the subset of \(\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})\) consisting of Brauer pairs with twisted diagonal first component. Note that \(\mathcal{BP}^{\Delta}_{\mathcal{O}}(A\otimes B^{*})\) is stable under \(G\times H\)-conjugation. Let \(\pi\) denote the projection
\[\pi:\prod_{(R,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})}R_{\mathbb{K}}(Y_{(R,e\otimes f^{*})}/R,e\otimes f^{*})\twoheadrightarrow\prod_{(R,e\otimes f^{*})\in\mathcal{BP}^{\Delta}_{\mathcal{O}}(A\otimes B^{*})}R_{\mathbb{K}}(Y_{(R,e\otimes f^{*})}/R,e\otimes f^{*})\]
onto the factors indexed by \(\mathcal{BP}^{\Delta}_{\mathcal{O}}(A\otimes B^{*})\), and let \(i\) denote the inclusion of the second product into the first that extends a tuple by zero in the remaining factors.
The maps \(\pi\) and \(i\) are both \(G\times H\)-equivariant, hence restrict to maps on the respective subgroups of \(G\times H\)-fixed tuples. The restriction of \(\pi\) to the respective subgroups of \(G\times H\)-fixed tuples is a surjective map, and the image of the restriction of \(i\) is equal to the subgroup of \(G\times H\)-fixed tuples \((\chi_{(R,e\otimes f^{*})})\) that satisfy \(\chi_{(R,e\otimes f^{*})}=0\) if \(R\notin S^{\Delta}_{p}(G\times H)\).
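With \(\pi\) and \(i\) described as above, we note an elementary relation that is implicit in the arguments below: extending a tuple by zero and then projecting recovers the original tuple, that is,
\[\pi\circ i=\operatorname{id}.\]
Thus \(i\) is a section of \(\pi\), and it identifies the product indexed by \(\mathcal{BP}^{\Delta}_{\mathcal{O}}(A\otimes B^{*})\) with the subgroup of tuples in the full product that vanish outside \(\mathcal{BP}^{\Delta}_{\mathcal{O}}(A\otimes B^{*})\).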
The following corollaries can be proved in the same way as Corollary 7.3.
**Corollary 7.7**.: _Let \(G\) and \(H\) be finite groups and let \(A\in\operatorname{Bl}(\mathcal{O}G)\), \(B\in\operatorname{Bl}(\mathcal{O}H)\). The composite \(\pi\alpha\) is injective on \(T^{\Delta}(A,B)\). The image of \(T^{\Delta}(A,B)\) under \(\pi\alpha\) is equal to the subgroup of tuples \((\chi_{(\Delta(P,\phi,Q),e\otimes f^{*})})\) in_
\[(\prod_{(\Delta(P,\phi,Q),e\otimes f^{*})\in\mathcal{BP}^{\Delta}_{\mathcal{O} }(A\otimes B^{*})}R_{\mathbb{K}}(Y_{(\Delta(P,\phi,Q),e\otimes f^{*})}/\Delta( P,\phi,Q),e\otimes f^{*}))^{G\times H}\]
_that satisfy: for all \((\Delta(P,\phi,Q),e\otimes f^{*})\in\mathcal{BP}^{\Delta}_{\mathcal{O}}(A \otimes B^{*})\), \(((u,v),\epsilon)\in\mathcal{BE}_{\mathcal{O}}(Y_{(\Delta(P,\phi,Q),e\otimes f ^{*})},e\otimes f^{*})\), \((s,t)\in C_{Y_{(\Delta(P,\phi,Q),e\otimes f^{*})}}(u,v)_{p^{\prime}}\), and all \(e^{\prime}\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\) and \(f^{\prime}\in\operatorname{bli}(\mathcal{O}C_{H}(Q\langle v\rangle))\) such that \(\epsilon\cdot(e^{\prime}\otimes f^{\prime*})\neq 0\), one has_
\[\chi_{(\Delta(P,\phi,Q),e\otimes f^{*})}((us,vt)\epsilon)=\operatorname{Ind}_{C\cap Y_{(\Delta(P,\phi,Q)\langle(u,v)\rangle,e^{\prime}\otimes f^{\prime*})}}^{C}(\operatorname{Res}_{C\cap Y_{(\Delta(P,\phi,Q)\langle(u,v)\rangle,e^{\prime}\otimes f^{\prime*})}}^{Y_{(\Delta(P,\phi,Q)\langle(u,v)\rangle,e^{\prime}\otimes f^{\prime*})}}(\chi_{(\Delta(P,\phi,Q)\langle(u,v)\rangle,e^{\prime}\otimes f^{\prime*})}))(s,t)\]
_if \(\Delta(P,\phi,Q)\langle(u,v)\rangle\in S^{\Delta}_{p}(G\times H)\), where for ease we write \(C=C_{Y_{(\Delta(P,\phi,Q),e\otimes f^{*})}}(u,v)\); and_
\[\chi_{(\Delta(P,\phi,Q),e\otimes f^{*})}((us,vt)\epsilon)=0\]
_if \(\Delta(P,\phi,Q)\langle(u,v)\rangle\notin S^{\Delta}_{p}(G\times H)\)._
**Corollary 7.8**.: _Let \(G\) and \(H\) be finite groups and let \(A\in\operatorname{Bl}(\mathcal{O}G)\), \(B\in\operatorname{Bl}(\mathcal{O}H)\). The composite \(\pi\alpha\) is injective on \(T^{\Delta}(A,B)\). The image of \(T^{\Delta}(A,B)\) under \(\pi\alpha\) is equal to the subgroup of tuples \((\chi_{(\Delta(P,\phi,Q),e\otimes f^{*})})\) in_
\[(\prod_{(\Delta(P,\phi,Q),e\otimes f^{*})\in\mathcal{BP}^{\Delta}_{\mathcal{O} }(A\otimes B^{*})}R_{\mathbb{K}}(Y_{(\Delta(P,\phi,Q),e\otimes f^{*})}/\Delta( P,\phi,Q),e\otimes f^{*}))^{G\times H}\]
_that satisfy: for all \((\Delta(P,\phi,Q),e\otimes f^{*})\in\mathcal{BP}^{\Delta}_{\mathcal{O}}(A\otimes B ^{*})\), all \(p\)-elements \((u,v)\in Y_{(\Delta(P,\phi,Q),e\otimes f^{*})}\), and all \((s,t)\in C_{Y_{(\Delta(P,\phi,Q),e\otimes f^{*})}}(u,v)_{p^{\prime}}\) one has_
\[\chi_{(\Delta(P,\phi,Q),e\otimes f^{*})}(us,vt)=\sum_{\begin{subarray}{c}e^{\prime}\in\operatorname{bli}(\mathcal{O}C_{G}(P\langle u\rangle))\\ (P,e)\preceq(P\langle u\rangle,e^{\prime})\\ s\in N_{G}(P\langle u\rangle,e^{\prime})\end{subarray}}\sum_{\begin{subarray}{c}f^{\prime}\in\operatorname{bli}(\mathcal{O}C_{H}(Q\langle v\rangle))\\ (Q,f)\preceq(Q\langle v\rangle,f^{\prime})\\ t\in N_{H}(Q\langle v\rangle,f^{\prime})\end{subarray}}\chi_{(\Delta(P,\phi,Q)\langle(u,v)\rangle,e^{\prime}\otimes f^{\prime*})}(s,t)\]
_if \(\Delta(P,\phi,Q)\langle(u,v)\rangle\in S^{\Delta}_{p}(G\times H)\), and_
\[\chi_{(\Delta(P,\phi,Q),e\otimes f^{*})}(us,vt)=0\]
_if \(\Delta(P,\phi,Q)\langle(u,v)\rangle\notin S^{\Delta}_{p}(G\times H)\)._
Let \(G\) and \(H\) be finite groups, let \(A\in\operatorname{Bl}(\mathcal{O}G)\) and let \(B\in\operatorname{Bl}(\mathcal{O}H)\). Let \((D,e_{D})\in\mathcal{BP}_{\mathcal{O}}(A)\) be a maximal \(A\)-Brauer pair and let \((E,f_{E})\in\mathcal{BP}_{\mathcal{O}}(B)\) be a maximal \(B\)-Brauer pair. If \(P\leq D\) write \(e_{P}\) for the unique block idempotent of \(\mathcal{O}C_{G}(P)\) such that \((P,e_{P})\leq(D,e_{D})\). Likewise if \(Q\leq E\) write \((Q,f_{Q})\leq(E,f_{E})\). Recall that \((E,f_{E}^{*})\) is a maximal \(B^{*}\)-Brauer pair and if \(Q\leq E\) then \((Q,f_{Q}^{*})\leq(E,f_{E}^{*})\). By Lemma 2.11, \((D\times E,e_{D}\otimes f_{E}^{*})\) is a maximal \(A\otimes_{\mathcal{O}}B^{*}\)-Brauer pair. If \(R\leq D\times E\) then
\[(R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})\leq(D\times E,e_{D}\otimes f_{E}^{*})\]
is a containment of \(A\otimes B^{*}\)-Brauer pairs by Lemma 2.12. Let \(\mathcal{A}=\mathcal{F}_{(D,e_{D})}(A,G)\) and let \(\mathcal{B}^{*}=\mathcal{F}_{(E,f_{E}^{*})}(B^{*},H)\). Note that \(\mathcal{B}^{*}=\mathcal{F}_{(E,f_{E})}(B,H)\) -- we prefer to write \(\mathcal{B}^{*}\) for consistency of notation. By Lemma 2.12 we have
\[\mathcal{F}_{(D\times E,e_{D}\otimes f_{E}^{*})}(A\otimes_{\mathcal{O}}B^{*},G\times H)=\mathcal{A}\times\mathcal{B}^{*}.\]
For each \(R\leq D\times E\) set
\[Y_{R}:=Y_{(R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})}=N_{G\times H}(R,e_{p_{1}( R)}\otimes f_{p_{2}(R)}^{*}).\]
Recall from Section 6 that we have an injective group homomorphism
\[\delta_{A\otimes B^{*}}=\delta:T(A,B)\hookrightarrow\left(\prod_{R\leq D \times E}R_{\mathbb{K}}(Y_{R}/R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})\right)^ {\mathcal{A}\times\mathcal{B}^{*}}.\]
If \(M\in{}_{A\otimes B^{*}}\)**triv** then
\[\delta([M])=(\chi_{M(R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})})_{R\leq D\times E}.\]
**Theorem 7.9**.: _Let \(G\) and \(H\) be finite groups, let \(A\in\mathrm{Bl}(\mathcal{O}G)\) and \(B\in\mathrm{Bl}(\mathcal{O}H)\) and choose maximal Brauer pairs \((D,e_{D})\in\mathcal{BP}_{\mathcal{O}}(A)\), \((E,f_{E})\in\mathcal{BP}_{\mathcal{O}}(B)\). With the notation set above,_
\[\delta(T^{\Delta}(A,B))=\left\{(\chi_{R})_{R\leq D\times E}\in\delta(T(A,B))| \chi_{R}=0\text{ if }R\notin S_{p}^{\Delta}(G\times H)\right\}.\]
Proof.: Recall that \(\delta\) factors as \(\delta=\pi\circ\alpha\), where here \(\pi\) denotes the projection onto the factors indexed by the Brauer pairs of the form \((R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})\) with \(R\leq D\times E\).
In particular, \(\delta(T^{\Delta}(A,B))=\pi(\alpha(T^{\Delta}(A,B)))\). By Theorem 7.4 we have that
\[\alpha(T^{\Delta}(A,B))=\left\{(\chi_{(R,e\otimes f^{*})})\in\alpha(T(A,B))| \chi_{(R,e\otimes f^{*})}=0\text{ if }R\notin S_{p}^{\Delta}(G\times H)\right\}.\]
Now let \((\chi_{(R,e\otimes f^{*})})\in\alpha(T^{\Delta}(A,B))\). Then
\[\pi((\chi_{(R,e\otimes f^{*})}))=(\chi_{(R,e\otimes f^{*})})_{R\leq D\times E }\in\delta(T(A,B))\]
and if \(R\leq D\times E\) is such that \(R\notin S_{p}^{\Delta}(G\times H)\) then \(\chi_{(R,e\otimes f^{*})}=0\). It follows that
\[\delta(T^{\Delta}(A,B))\subseteq\left\{(\chi_{R})_{R\leq D\times E}\in\delta( T(A,B))|\chi_{R}=0\text{ if }R\notin S_{p}^{\Delta}(G\times H)\right\}.\]
Conversely, let \((\chi_{R})_{R\leq D\times E}\in\delta(T(A,B))\) be such that \(\chi_{R}=0\) if \(R\notin S_{p}^{\Delta}(G\times H)\). Set \((\psi_{(R,e\otimes f^{*})})=\pi^{-1}((\chi_{R}))\). Recall from the proof of Proposition 6.7 that if \((R,e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(A\otimes B^{*})\) then
\[\psi_{(R,e\otimes f^{*})}={}^{(g,h)^{-1}}\chi_{{}^{(g,h)}R}\]
where \((g,h)\in G\times H\) is an element satisfying \({}^{(g,h)}(R,e\otimes f^{*})\leq(D\times E,e_{D}\otimes f^{*}_{E})\). Since \((\chi_{R})\in\delta(T(A,B))=\pi(\alpha(T(A,B)))\) we have that \((\psi_{(R,e\otimes f^{*})})\in\alpha(T(A,B))\). If \((R,e\otimes f^{*})\) is an \(A\otimes B^{*}\)-Brauer pair such that \(R\notin S_{p}^{\Delta}(G\times H)\) then \(\psi_{(R,e\otimes f^{*})}={}^{(g,h)^{-1}}\chi_{{}^{(g,h)}R}=0\) since \(\chi_{{}^{(g,h)}R}=0\). Thus we have that \((\psi_{(R,e\otimes f^{*})})\in\alpha(T^{\Delta}(A,B))\), and so \((\chi_{R})\in\pi(\alpha(T^{\Delta}(A,B)))=\delta(T^{\Delta}(A,B))\). The proof is complete.
The corollary below follows immediately from Theorems 6.8 and 7.9.
**Corollary 7.10**.: _Let \(G\) and \(H\) be finite groups, \(A\in\operatorname{Bl}(\mathcal{O}G)\), \(B\in\operatorname{Bl}(\mathcal{O}H)\). Let \((D,e_{D})\) and \((E,f_{E})\) be maximal Brauer pairs (over \(\mathcal{O}\)) for \(A\) and \(B\), respectively. For each \(P\leq D\) write \(e_{P}\) for the unique block idempotent of \(\mathcal{O}C_{G}(P)\) such that \((P,e_{P})\leq(D,e_{D})\). Similarly, for each \(Q\leq E\) write \((Q,f_{Q})\leq(E,f_{E})\). Set \(\mathcal{A}=\mathcal{F}_{(D,e_{D})}(A,G)\) and \(\mathcal{B}^{*}=\mathcal{F}_{(E,f_{E}^{*})}(B^{*},H)\). For each \(R\leq D\times E\) set \(Y_{R}=Y_{(R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})}=N_{G\times H}(R,e_{p_{1}(R )}\otimes f_{p_{2}(R)}^{*})\). Then the image of \(T^{\Delta}(A,B)\) under the map_
\[\delta:T(A,B)\hookrightarrow\left(\prod_{R\leq D\times E}R_{\mathbb{K}}(Y_{R}/R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})\right)^{\mathcal{A}\times\mathcal{B}^{*}}\]
_is equal to the subgroup of tuples \((\chi_{R})\in\left(\prod_{R\leq D\times E}R_{\mathbb{K}}(Y_{R}/R,e_{p_{1}(R)} \otimes f_{p_{2}(R)}^{*})\right)^{\mathcal{A}\times\mathcal{B}^{*}}\) that satisfy \(\chi_{R}=0\) if \(R\notin S_{p}^{\Delta}(G\times H)\) and_
\[\chi_{R}((us,vt)\epsilon_{\langle(u,v)\rangle})=\operatorname{Ind}_{C_{Y_{R}\cap Y_{R\langle(u,v)\rangle}}(u,v)}^{C_{Y_{R}}(u,v)}(\operatorname{Res}_{C_{Y_{R}\cap Y_{R\langle(u,v)\rangle}}(u,v)}^{Y_{R\langle(u,v)\rangle}}(\chi_{R\langle(u,v)\rangle}))(s,t)\]
_for all \(R\leq D\times E\), \((u,v)\in N_{D\times E}(R)\), and \((s,t)\in C_{Y_{R}}(u,v)_{p^{\prime}}\), where_
\[\epsilon_{\langle(u,v)\rangle}=\operatorname{tr}_{C_{Y_{R}\cap Y_{R\langle(u,v)\rangle}}(u,v)}^{C_{Y_{R}}(u,v)}(e_{p_{1}(R)\langle u\rangle}\otimes f_{p_{2}(R)\langle v\rangle}^{*}).\]
## 8 Strong isotypies and \(p\)-permutation equivalences
Throughout this section \(G\) and \(H\) are finite groups, \((\mathbb{K},\mathcal{O},F)\) is a \(p\)-modular system large enough for \(G\times H\), \(A\) is a block of \(\mathcal{O}G\), and \(B\) is a block of \(\mathcal{O}H\). The purpose of this section is to introduce a new type of block equivalence that we call a _strong isotypy_ and to compare this definition with that of a \(p\)-permutation equivalence.
**Hypotheses 8.1**.: Let \(A\in\operatorname{Bl}(\mathcal{O}G)\) and \(B\in\operatorname{Bl}(\mathcal{O}H)\). Let \((D,e_{D})\in\mathcal{BP}_{\mathcal{O}}(A)\) and \((E,f_{E})\in\mathcal{BP}_{\mathcal{O}}(B)\) be maximal Brauer pairs for \(A\) and \(B\), respectively. For each subgroup \(P\leq D\) (respectively, \(Q\leq E\)) write \(e_{P}\) (resp. \(f_{Q}\)) for the unique block idempotent of \(\mathcal{O}C_{G}(P)\) (resp. \(\mathcal{O}C_{H}(Q)\)) such that \((P,e_{P})\leq(D,e_{D})\) (resp. \((Q,f_{Q})\leq(E,f_{E})\)). Set \(\mathcal{A}=\mathcal{F}_{(D,e_{D})}(G,A)\), set \(\mathcal{B}=\mathcal{F}_{(E,f_{E})}(H,B)\), and let \(\phi:E\stackrel{{\sim}}{{\to}}D\) be a group isomorphism that induces an isomorphism of fusion systems \(\phi:\mathcal{B}\stackrel{{\sim}}{{\to}}\mathcal{A}\). For each \(Q\leq E\) set
\[Y_{Q} =N_{G\times H}(\Delta(\phi(Q),\phi,Q),e_{\phi(Q)}\otimes f_{Q}^{*}),\] \[J_{Q} =N_{H}(Q,f_{Q})\]
and for each \(P\leq D\) set
\[I_{P}=N_{G}(P,e_{P}).\]
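We also record an elementary containment, immediate from the definitions and used implicitly when inducing characters from \(Y_{Q}\) to \(I_{\phi(Q)}\times J_{Q}\) in Section 10: for each \(Q\leq E\),
\[Y_{Q}=N_{G\times H}(\Delta(\phi(Q),\phi,Q),e_{\phi(Q)}\otimes f_{Q}^{*})\leq I_{\phi(Q)}\times J_{Q},\]
since an element of \(G\times H\) normalizing \(\Delta(\phi(Q),\phi,Q)\) normalizes its projections \(\phi(Q)\) and \(Q\), and stabilizing \(e_{\phi(Q)}\otimes f_{Q}^{*}\) forces the block idempotents \(e_{\phi(Q)}\) and \(f_{Q}\) to be stabilized individually.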
The definition below makes use of the notion of the "extended tensor product" of two characters. See [2, Section 6] for more information.
**Definition 8.2**.: Assume Hypotheses 8.1. Then a _strong isotypy between A and B_ (relative to \((D,e_{D})\), \((E,f_{E})\), and \(\phi:E\stackrel{{\sim}}{{\to}}D\)) is a family of virtual characters
\[\chi_{Q}\in R_{\mathbb{K}}(Y_{Q}/\Delta(\phi(Q),\phi,Q),e_{\phi(Q)}\otimes f_{ Q}^{*}),\qquad Q\leq E\]
that satisfies each of the following conditions:
1. Let \(Q_{1},Q_{2}\leq E\) and set \(P_{i}:=\phi(Q_{i})\), \(i=1,2\). If \(g\in G\), \(h\in H\) are such that \({}^{g}(P_{1},e_{P_{1}})=(P_{2},e_{P_{2}})\), \({}^{h}(Q_{1},f_{Q_{1}})=(Q_{2},f_{Q_{2}})\), and \(c_{g}\phi=\phi c_{h}:Q_{1}\stackrel{{\sim}}{{\to}}P_{2}\), then \({}^{(g,h)}\chi_{Q_{1}}=\chi_{Q_{2}}\).
2. Let \(Q\leq E\) and set \(P:=\phi(Q)\). Let \(((u,v),\epsilon)\in\mathcal{BE}_{\mathcal{O}}(Y_{Q},e_{P}\otimes f_{Q}^{*})\) be such that \[(\langle(u,v)\rangle,\epsilon)\leq(N_{D\times E}(\Delta(P,\phi,Q)),e_{N_{D}(P)}\otimes f_{N_{E}(Q)}^{*})\] and let \((s,t)\in C_{Y_{Q}}(u,v)_{p^{\prime}}\). (a) If \(u=\phi(v)\) then \[\chi_{Q}((us,vt)\epsilon)=\operatorname{Ind}_{C_{Y_{Q}\cap Y_{Q\langle v\rangle}}(u,v)}^{C_{Y_{Q}}(u,v)}(\operatorname{Res}_{C_{Y_{Q}\cap Y_{Q\langle v\rangle}}(u,v)}^{Y_{Q\langle v\rangle}}(\chi_{Q\langle v\rangle}))(s,t).\] (b) If \(\chi_{Q}((us,vt)\epsilon)\neq 0\) then \[(\Delta(P,\phi,Q)\langle(u,v)\rangle,e_{P\langle u\rangle}\otimes f_{Q\langle v\rangle}^{*})\leq_{G\times H}(\Delta(D,\phi,E),e_{D}\otimes f_{E}^{*})\] or equivalently, there exists an \(\mathcal{A}\)-isomorphism \(\alpha:P\langle u\rangle\stackrel{{\sim}}{{\to}}\alpha(P\langle u\rangle)\) and a \(\mathcal{B}\)-isomorphism \(\beta:Q\langle v\rangle\stackrel{{\sim}}{{\to}}\beta(Q\langle v\rangle)\) such that \(\alpha=\phi\beta\phi^{-1}:P\stackrel{{\sim}}{{\to}}\alpha(P)\) and \(\alpha(u)=\phi(\beta(v))\).
3. Let \(Q\leq E\) and set \(P:=\phi(Q)\). Then \[\chi_{Q}\stackrel{{ Y_{Q},Y_{Q}^{\circ}}}{{\otimes}}\chi_{Q}^{ \circ}=[\mathbb{K}C_{G}(P)e_{P}]\in R_{\mathbb{K}}(N_{G\times G}(\Delta(P),e_{ P}\otimes e_{P}^{*})/\Delta(P),e_{P}\otimes e_{P}^{*})\] and \[\chi_{Q}^{\circ}\stackrel{{ Y_{Q}^{\circ},Y_{Q}}}{{\otimes}}\chi_{Q}=[ \mathbb{K}C_{H}(Q)f_{Q}]\in R_{\mathbb{K}}(N_{H\times H}(\Delta(Q),f_{Q} \otimes f_{Q}^{*})/\Delta(Q),f_{Q}\otimes f_{Q}^{*}).\]
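For orientation, we record the smallest instance of Definition 8.2; this is a routine specialization of the conditions above and introduces nothing new. Taking \(Q=\{1\}\) one has \(\Delta(\phi(\{1\}),\phi,\{1\})=\{1\}\) and \(Y_{\{1\}}=G\times H\), and the idempotents \(e_{\{1\}}\) and \(f_{\{1\}}\) are the identities of \(A\) and \(B\). Condition (3) then reads
\[\chi_{\{1\}}\stackrel{{ Y_{\{1\}},Y_{\{1\}}^{\circ}}}{{\otimes}}\chi_{\{1\}}^{\circ}=[\mathbb{K}Ge_{\{1\}}]\qquad\text{and}\qquad\chi_{\{1\}}^{\circ}\stackrel{{ Y_{\{1\}}^{\circ},Y_{\{1\}}}}{{\otimes}}\chi_{\{1\}}=[\mathbb{K}Hf_{\{1\}}],\]
which is closely related to the statement that \(\chi_{\{1\}}\in R_{\mathbb{K}}(G\times H,e_{\{1\}}\otimes f_{\{1\}}^{*})\) induces a perfect isometry between \(A\) and \(B\); see Lemma 10.1 below, where this is proved using the full strength of the definition.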
Before continuing we analyze further the conclusion of condition (2b) in Definition 8.2.
**Lemma 8.3**.: _Let \(\mathcal{A}\) and \(\mathcal{B}\) be saturated fusion systems over \(D\) and \(E\), respectively, and let \(\phi:E\stackrel{{\sim}}{{\to}}D\) be a group isomorphism that induces an isomorphism of fusion systems \(\phi:\mathcal{B}\stackrel{{\sim}}{{\to}}\mathcal{A}\). Let \(Q\leq E\) and set \(P:=\phi(Q)\). Let \(u\in N_{D}(P)\), \(v\in N_{E}(Q)\) and suppose that there exists an \(\mathcal{A}\)-isomorphism \(\alpha:P\langle u\rangle\stackrel{{\sim}}{{\to}}\alpha(P\langle u\rangle)\) and a \(\mathcal{B}\)-isomorphism \(\beta:Q\langle v\rangle\stackrel{{\sim}}{{\to}}\beta(Q\langle v\rangle)\) such that \(\alpha=\phi\beta\phi^{-1}:P\stackrel{{\sim}}{{\to}}\alpha(P)\) and \(\alpha(u)=\phi(\beta(v))\). Then_
1. _There exists an_ \(\mathcal{A}\)_-isomorphism_ \(\psi:P\langle u\rangle\stackrel{{\sim}}{{\to}}P\langle\phi(v)\rangle\) _such that_ \(\psi|_{P}=\mathrm{id}_{P}\) _and_ \(\psi(u)=\phi(v)\)_. In particular,_ \(u\) _and_ \(\phi(v)\) _are_ \(\mathcal{N}_{\mathcal{A}}(P)\)_-conjugate._
2. _There exists a_ \(\mathcal{B}\)_-isomorphism_ \(\omega:Q\langle v\rangle\stackrel{{\sim}}{{\to}}Q\langle\phi^{-1 }(u)\rangle\) _such that_ \(\omega|_{Q}=\mathrm{id}_{Q}\) _and_ \(\omega(v)=\phi^{-1}(u)\)_. In particular,_ \(v\) _and_ \(\phi^{-1}(u)\) _are_ \(\mathcal{N}_{\mathcal{B}}(Q)\)_-conjugate._
Proof.: The assumptions imply that \(\phi\beta^{-1}\phi^{-1}:\alpha(P\langle u\rangle)\stackrel{{\sim} }{{\to}}P\langle\phi(v)\rangle\) is an \(\mathcal{A}\)-isomorphism. Then the \(\mathcal{A}\)-isomorphism \(\psi=\phi\beta^{-1}\phi^{-1}\alpha:P\langle u\rangle\stackrel{{ \sim}}{{\to}}P\langle\phi(v)\rangle\) satisfies \(\psi|_{P}=\mathrm{id}_{P}\) and \(\psi(u)=\phi(v)\). In particular, \(\psi|_{\langle u\rangle}:\langle u\rangle\stackrel{{\sim}}{{\to}} \langle\phi(v)\rangle\) is an \(\mathcal{N}_{\mathcal{A}}(P)\)-isomorphism that maps \(u\) to \(\phi(v)\). This shows that (1) holds. Setting \(\omega=\phi^{-1}\psi^{-1}\phi:Q\langle v\rangle\stackrel{{\sim}}{{ \to}}Q\langle\phi^{-1}(u)\rangle\) gives (2).
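Explicitly, the two equalities asserted for \(\psi\) in the proof above can be checked directly from the definitions (a routine computation, recorded for the reader's convenience):
\[\psi|_{P}=(\phi\beta^{-1}\phi^{-1})\circ(\alpha|_{P})=(\phi\beta\phi^{-1})^{-1}\circ(\phi\beta\phi^{-1})=\operatorname{id}_{P},\qquad\psi(u)=\phi\beta^{-1}\phi^{-1}(\alpha(u))=\phi\beta^{-1}(\beta(v))=\phi(v).\]
The analogous verification for \(\omega=\phi^{-1}\psi^{-1}\phi\) is identical.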
**Corollary 8.4**.: _Assume Hypotheses 8.1 and let \(\chi_{Q}\), \(Q\leq E\), be a strong isotypy between \(A\) and \(B\). Let \(Q\leq E\), \(P:=\phi(Q)\), and let \(((u,v),\epsilon)\in\mathcal{B}\mathcal{E}_{\mathcal{O}}(Y_{Q},e_{P}\otimes f_{Q}^{*})\) be such that_
\[(\langle(u,v)\rangle,\epsilon)\leq(N_{D\times E}(\Delta(P,\phi,Q)),e_{N_{D}(P)} \otimes f_{N_{E}(Q)}^{*}).\]
_If \(u\) and \(\phi(v)\) are not \(\mathcal{N}_{\mathcal{A}}(P)\)-conjugate then \(d_{Y_{Q}}^{(u,v),\epsilon}(\chi_{Q})=0\)._
Proof.: If \(u\) and \(\phi(v)\) are not \(\mathcal{N}_{\mathcal{A}}(P)\)-conjugate then Lemma 8.3 and (2b) of Definition 8.2 imply that \(\chi_{Q}((us,vt)\epsilon)=0\) for all \((s,t)\in C_{Y_{Q}}(u,v)_{p^{\prime}}\), hence \(d_{Y_{Q}}^{(u,v),\epsilon}(\chi_{Q})=0\).
**Lemma 8.5**.: _Let \(A\in\mathrm{Bl}(\mathcal{O}G)\), \(B\in\mathrm{Bl}(\mathcal{O}H)\), and let \(\gamma\in T^{\Delta}(A,B)\) be a \(p\)-permutation equivalence with maximal \(\gamma\)-Brauer pair \((\Delta(D,\phi,E),e_{D}\otimes f_{E}^{*})\). Then \((D,e_{D})\in\mathcal{B}\mathcal{P}_{\mathcal{O}}(G,A)\) and \((E,f_{E})\in\mathcal{B}\mathcal{P}_{\mathcal{O}}(H,B)\) are maximal Brauer pairs. For each \(P\leq D\) (respectively, \(Q\leq E\)) write \(e_{P}\) (resp. \(f_{Q}\)) for the unique block idempotent of \(\mathcal{O}C_{G}(P)\) (resp. \(\mathcal{O}C_{H}(Q)\)) such that \((P,e_{P})\leq(D,e_{D})\) (resp. \((Q,f_{Q})\leq(E,f_{E})\)). Set \(\mathcal{A}=\mathcal{F}_{(D,e_{D})}(G,A)\) and set \(\mathcal{B}=\mathcal{F}_{(E,f_{E})}(H,B)\). Then \(\phi:E\stackrel{{\sim}}{{\to}}D\) induces a fusion system isomorphism \(\phi:\mathcal{B}\stackrel{{\sim}}{{\to}}\mathcal{A}\). For each \(Q\leq E\) set_
\[Y_{Q}:=N_{G\times H}(\Delta(\phi(Q),\phi,Q),e_{\phi(Q)}\otimes f_{Q}^{*})\]
_and set_
\[\chi_{Q}:=\chi_{\gamma(\Delta(\phi(Q),\phi,Q),e_{\phi(Q)}\otimes f_{Q}^{*})}\in R _{\mathbb{K}}(Y_{Q}/\Delta(\phi(Q),\phi,Q),e_{\phi(Q)}\otimes f_{Q}^{*}).\]
_Then the characters \(\chi_{Q}\), \(Q\leq E\), form a strong isotypy between \(A\) and \(B\)._
Proof.: That \((D,e_{D})\) and \((E,f_{E})\) are maximal Brauer pairs for \(A\) and \(B\), respectively, follows from [2, Theorem 10.11(c)], and \(\phi:E\stackrel{{\sim}}{{\to}}D\) induces an isomorphism of fusion systems \(\phi:\mathcal{B}\stackrel{{\sim}}{{\to}}\mathcal{A}\) by [2, Theorem 11.2]. Recall that if \(\mathcal{B}^{*}:=\mathcal{F}_{(E,f_{E}^{*})}(H,B^{*})\) then \(\mathcal{B}=\mathcal{B}^{*}\) and also \(\mathcal{A}\times\mathcal{B}^{*}=\mathcal{F}_{(D\times E,e_{D}\otimes f_{E}^ {*})}(G\times H,A\otimes B^{*})\) by Lemma 2.12. Write \((\theta_{R})_{R\leq D\times E}\) for the image of \(\gamma\) under the map \(\delta\) of Theorem 7.9 and Corollary 7.10. Note that if \(Q\leq E\) then \(\chi_{Q}=\theta_{\Delta(\phi(Q),\phi,Q)}\).
We first verify condition (1) of Definition 8.2. Let \(Q_{1},Q_{2}\leq E\) and set \(P_{i}=\phi(Q_{i})\) for \(i=1,2\). Let \(g\in G\), \(h\in H\) be such that \({}^{g}(P_{1},e_{P_{1}})=(P_{2},e_{P_{2}})\), \({}^{h}(Q_{1},f_{Q_{1}})=(Q_{2},f_{Q_{2}})\), and \(c_{g}\phi=\phi c_{h}:Q_{1}\stackrel{{\sim}}{{\to}}P_{2}\). Note that for \(i=1,2\)
\[(\Delta(P_{i},\phi,Q_{i}),e_{P_{i}}\otimes f_{Q_{i}}^{*})\leq(D\times E,e_{D} \otimes f_{E}^{*})\]
is a containment of \(A\otimes B^{*}\)-Brauer pairs and
\[{}^{(g,h)}(\Delta(P_{1},\phi,Q_{1}),e_{P_{1}}\otimes f_{Q_{1}}^{*})=(\Delta(P _{2},\phi,Q_{2}),e_{P_{2}}\otimes f_{Q_{2}}^{*}),\]
so \(c_{(g,h)}:\Delta(P_{1},\phi,Q_{1})\stackrel{{\sim}}{{\to}}\Delta (P_{2},\phi,Q_{2})\) is an \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphism. Since the character tuple \((\theta_{R})_{R\leq D\times E}\) is \(\mathcal{A}\times\mathcal{B}^{*}\)-fixed it follows that \({}^{(g,h)}\theta_{\Delta(P_{1},\phi,Q_{1})}=\theta_{\Delta(P_{2},\phi,Q_{2})}\). In other words \({}^{(g,h)}\chi_{Q_{1}}=\chi_{Q_{2}}\), as needed.
Next we verify (2). Let \(Q\leq E\) and set \(P=\phi(Q)\). Let \(((u,v),\epsilon)\in\mathcal{BE}_{\mathcal{O}}(Y_{Q},e_{P}\otimes f_{Q}^{*})\) be such that
\[(\langle(u,v)\rangle,\epsilon)\leq(N_{D\times E}(\Delta(P,\phi,Q)),e_{N_{D}(P )}\otimes f_{N_{E}(Q)}^{*})\]
and let \((s,t)\in C_{Y_{Q}}(u,v)_{p^{\prime}}\). First we prove (a). Assume that \(u=\phi(v)\). Lemma 2.10 implies that
\[(\langle(u,v)\rangle,\operatorname{tr}_{C_{Y_{Q}\cap Y_{Q\langle v\rangle}}(u,v)}^{C_{Y_{Q}}(u,v)}(e_{P\langle u\rangle}\otimes f_{Q\langle v\rangle}^{*}))\]
is an \(\mathcal{O}Y_{Q}(e_{P}\otimes f_{Q}^{*})\)-Brauer pair contained in \((N_{D\times E}(\Delta(P,\phi,Q)),e_{N_{D}(P)}\otimes f_{N_{E}(Q)}^{*})\), hence
\[\epsilon=\operatorname{tr}_{C_{Y_{Q}\cap Y_{Q\langle v\rangle}}(u,v)}^{C_{Y_{Q}}(u,v)}(e_{P\langle u\rangle}\otimes f_{Q\langle v\rangle}^{*})\]
by the uniqueness of Brauer pairs contained in a fixed Brauer pair. The equality
\[\chi_{Q}((us,vt)\epsilon)=\operatorname{Ind}_{C_{Y_{Q}\cap Y_{Q\langle v\rangle}}(u,v)}^{C_{Y_{Q}}(u,v)}(\operatorname{Res}_{C_{Y_{Q}\cap Y_{Q\langle v\rangle}}(u,v)}^{Y_{Q\langle v\rangle}}(\chi_{Q\langle v\rangle}))(s,t)\]
then follows from the coherence condition of Corollary 7.10 applied at \(\Delta(P,\phi,Q)\), \((u,v)\), and \((s,t)\). This proves (a) and we move on to (b). Assume that \(\chi_{Q}((us,vt)\epsilon)\neq 0\). Then the coherence condition of Corollary 7.10 implies that \(\theta_{\Delta(P,\phi,Q)\langle(u,v)\rangle}\neq 0\), hence \(\gamma(\Delta(P,\phi,Q)\langle(u,v)\rangle,e_{P\langle u\rangle}\otimes f_{Q\langle v\rangle}^{*})\neq 0\). Thus \((\Delta(P,\phi,Q)\langle(u,v)\rangle,e_{P\langle u\rangle}\otimes f_{Q\langle v\rangle}^{*})\) is a \(\gamma\)-Brauer pair. Since any two maximal \(\gamma\)-Brauer pairs are \(G\times H\)-conjugate (see [2, Theorem 10.11(b)]) we have
\[(\Delta(P,\phi,Q)\langle(u,v)\rangle,e_{P\langle u\rangle}\otimes f_{Q\langle v \rangle}^{*})\leq_{G\times H}(\Delta(D,\phi,E),e_{D}\otimes f_{E}^{*}).\]
This proves (b) and completes the verification of (2).
Finally, the equalities in condition (3) of Definition 8.2 follow from [2, Propositions 11.8, 11.9], noting that
\[Y_{Q}*Y_{Q}^{\circ}=N_{G\times G}(\Delta(P),e_{P}\otimes e_{P}^{*})\]
and
\[Y_{Q}^{\circ}*Y_{Q}=N_{H\times H}(\Delta(Q),f_{Q}\otimes f_{Q}^{*}).\]
**Lemma 8.6**.: _Let \(B\in\operatorname{Bl}(\mathcal{O}G)\) and let \(0\neq\gamma\in T_{\mathcal{O}}(B)\). Set_
\[X(\gamma) :=\left\{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)|\chi_{\gamma(P,e)} \neq 0\right\}\] \[Y(\gamma) :=\left\{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)|\gamma(P,e)\neq 0\right\}\] \[Z(\gamma) :=\left\{(P,e)\in\mathcal{BP}_{\mathcal{O}}(B)|M(P,e)\neq 0\text{ for some indecomposable }M\in\operatorname{supp}(\gamma)\right\}.\]
_(Here \(\operatorname{supp}(\gamma)\) denotes the support of \(\gamma\), which is the set of isomorphism classes of indecomposable trivial source \(B\)-modules appearing in \(\gamma\) with nonzero coefficient when \(\gamma\) is written as a linear combination of the standard basis elements of \(T_{\mathcal{O}}(B)\).) Then_
\[\max X(\gamma)=\max Y(\gamma)=\max Z(\gamma).\]
Proof.: Recall that if \((Q,f)\in\mathcal{BP}_{\mathcal{O}}(B)\) then the Brauer construction \(-(Q,f):T_{\mathcal{O}}(B)\to T_{\mathcal{O}}(N_{G}(Q,f),f)\) restricts to an injective map from the span of the isomorphism classes \([M]\) of indecomposable trivial source \(B\)-modules \(M\) such that \((Q,f)\) is a maximal \(M\)-Brauer pair to the span of the isomorphism classes of projective indecomposable \(\mathcal{O}N_{G}(Q,f)f\)-modules in \(T_{\mathcal{O}}(N_{G}(Q,f),f)\).
It is clear that \(X(\gamma)\subseteq Y(\gamma)\subseteq Z(\gamma)\). Let \((Q,f)\in\max Z(\gamma)\). Let \(M\) be any indecomposable module in the support of \(\gamma\) satisfying \(M(Q,f)\neq 0\). Then \((Q,f)\) is a maximal \(M\)-Brauer pair, for if \((Q,f)\leq(Q^{\prime},f^{\prime})\) with \((Q^{\prime},f^{\prime})\) a maximal \(M\)-Brauer pair then \((Q^{\prime},f^{\prime})\in Z(\gamma)\) and hence \((Q,f)=(Q^{\prime},f^{\prime})\).
Now write \(\gamma=\sum a_{[M]}[M]\) where \([M]\) runs over the standard basis elements of \(T_{\mathcal{O}}(B)\). Then
\[\gamma(Q,f) =\sum_{\begin{subarray}{c}[M]\in\operatorname{supp}(\gamma)\\ M(Q,f)\neq 0\end{subarray}}a_{[M]}[M(Q,f)]=\sum_{\begin{subarray}{c}[M]\in \operatorname{supp}(\gamma)\\ (Q,f)\in\max\mathcal{B}\mathcal{P}_{\mathcal{O}}(M)\end{subarray}}a_{[M]}[M(Q,f)]\] \[=\left(\sum_{\begin{subarray}{c}[M]\in\operatorname{supp}(\gamma) \\ (Q,f)\in\max\mathcal{B}\mathcal{P}_{\mathcal{O}}(M)\end{subarray}}a_{[M]}[M] \right)(Q,f).\]
By the remark in the first paragraph of this proof it follows that \(\gamma(Q,f)\neq 0\) and that \(\chi_{\gamma(Q,f)}\neq 0\). Thus \((Q,f)\in Y(\gamma)\) and \((Q,f)\in X(\gamma)\). Since \(X(\gamma)\) and \(Y(\gamma)\) are both subsets of \(Z(\gamma)\) it follows that \((Q,f)\in\max Y(\gamma)\) and \((Q,f)\in\max X(\gamma)\). Since \((Q,f)\) was an arbitrary maximal element of \(Z(\gamma)\) this gives
\[\max Z(\gamma)\subseteq\max Y(\gamma)\qquad\text{and}\qquad\max Z(\gamma) \subseteq\max X(\gamma).\]
The reverse containments follow easily from the fact that \(X(\gamma)\subseteq Y(\gamma)\subseteq Z(\gamma)\).
**Lemma 8.7**.: _Assume Hypotheses 8.1 and let \(\chi_{Q}\), \(Q\leq E\), be a strong isotypy between \(A\) and \(B\). Then there exists a \(p\)-permutation equivalence \(\gamma\in T^{\Delta}(A,B)\) such that \((\Delta(D,\phi,E),e_{D}\otimes f_{E}^{*})\) is a maximal \(\gamma\)-Brauer pair and_
\[\chi_{Q}=\chi_{\gamma(\Delta(\phi(Q),\phi,Q),e_{\phi(Q)}\otimes f_{Q}^{*})}\]
_for each \(Q\leq E\)._
Proof.: Recall that if \(\mathcal{B}^{*}:=\mathcal{F}_{(E,f_{E}^{*})}(H,B^{*})\) then \(\mathcal{B}=\mathcal{B}^{*}\) and also \(\mathcal{A}\times\mathcal{B}^{*}=\mathcal{F}_{(D\times E,e_{D}\otimes f_{E}^{ *})}(G\times H,A\otimes B^{*})\) by Lemma 2.12. Recall also that if \(R\leq D\times E\) then
\[(R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})\leq(D\times E,e_{D}\otimes f_{E}^{*})\]
is a containment of \(A\otimes B^{*}\)-Brauer pairs, again by Lemma 2.12. For each \(R\leq D\times E\) set \(X_{R}=N_{G\times H}(R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})\). Note that if \(Q\leq E\) then \(X_{\Delta(\phi(Q),\phi,Q)}=Y_{Q}\).
For each \(R\leq D\times E\) we define a (virtual) character
\[\theta_{R}\in R_{\mathbb{K}}(X_{R}/R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})\]
as follows: let \(R\leq D\times E\). If there exists a subgroup \(Q\leq E\) and an \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphism \(\psi:\Delta(\phi(Q),\phi,Q)\stackrel{{\sim}}{{\to}}R\), set \(\theta_{R}:={}^{\psi}\chi_{Q}\); otherwise, set \(\theta_{R}=0\).
Note that in the former case \(\theta_{R}\) does not depend on the subgroup \(Q\) or the isomorphism \(\psi\): indeed, suppose that \(Q^{\prime}\leq E\) and \(\psi^{\prime}:\Delta(\phi(Q^{\prime}),\phi,Q^{\prime})\stackrel{{ \sim}}{{\to}}R\) is another isomorphism in \({\cal A}\times{\cal B}^{*}\). Then \(\psi^{-1}\psi^{\prime}:\Delta(\phi(Q^{\prime}),\phi,Q^{\prime})\stackrel{{ \sim}}{{\to}}\Delta(\phi(Q),\phi,Q)\) is an \({\cal A}\times{\cal B}^{*}\)-isomorphism. Let \((g,h)\in G\times H\) be such that \(\psi^{-1}\psi^{\prime}=c_{(g,h)}\) and
\[{}^{(g,h)}(\Delta(\phi(Q^{\prime}),\phi,Q^{\prime}),e_{\phi(Q^{\prime})}\otimes f _{Q^{\prime}}^{*})=(\Delta(\phi(Q),\phi,Q),e_{\phi(Q)}\otimes f_{Q}^{*}).\]
By definition we have \({}^{\psi^{-1}\psi^{\prime}}\chi_{Q^{\prime}}={}^{(g,h)}\chi_{Q^{\prime}}\). Note that \({}^{g}(\phi(Q^{\prime}),e_{\phi(Q^{\prime})})=(\phi(Q),e_{\phi(Q)})\) and \({}^{h}(Q^{\prime},f_{Q^{\prime}})=(Q,f_{Q})\) by Lemma 2.7. Note also that \(c_{g}\phi=\phi c_{h}:Q^{\prime}\stackrel{{\sim}}{{\to}}\phi(Q)\). So by condition (1) of Definition 8.2 we have \({}^{(g,h)}\chi_{Q^{\prime}}=\chi_{Q}\), i.e., \({}^{\psi^{-1}\psi^{\prime}}\chi_{Q^{\prime}}=\chi_{Q}\). Thus \({}^{\psi^{\prime}}\chi_{Q^{\prime}}={}^{\psi}\chi_{Q}\) and the definition of \(\theta_{R}\) does not depend on the choice of \(Q\) or \(\psi\).
We now have a tuple \((\theta_{R})\in\prod_{R\leq D\times E}R_{\mathbb{K}}(X_{R}/R,e_{p_{1}(R)} \otimes f_{p_{2}(R)}^{*})\). Note that if \(Q\leq E\) then \(\theta_{\Delta(\phi(Q),\phi,Q)}=\chi_{Q}\). Recall that we have an injective homomorphism
\[\delta:T(A,B)\hookrightarrow\left(\prod_{R\leq D\times E}R_{\mathbb{K}}(X_{R} /R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})\right)^{{\cal A}\times{\cal B}^{*}}\]
which maps the isomorphism class of a trivial source \(A\otimes B^{*}\)-module \(M\) to \((\chi_{M(R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})})\), and that by Corollary 7.10 the image of \(T^{\Delta}(A,B)\) under \(\delta\) is the subgroup of tuples \((\theta_{R})\in\left(\prod_{R\leq D\times E}R_{\mathbb{K}}(X_{R}/R,e_{p_{1}(R )}\otimes f_{p_{2}(R)}^{*})\right)^{{\cal A}\times{\cal B}^{*}}\) satisfying \(\theta_{R}=0\) if \(R\notin S_{p}^{\Delta}(G\times H)\) and the "coherence condition"
\[\theta_{R}((us,vt)\epsilon_{\langle(u,v)\rangle})=\operatorname{Ind}_{C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{C_{X_{R}}(u,v)}(\operatorname{Res}_{C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{X_{R\langle(u,v)\rangle}}(\theta_{R\langle(u,v)\rangle}))(s,t) \tag{3}\]
for all \(R\leq D\times E\), \((u,v)\in N_{D\times E}(R)\), and \((s,t)\in C_{X_{R}}(u,v)_{p^{\prime}}\), where
\[\epsilon_{\langle(u,v)\rangle}=\operatorname{tr}_{C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{C_{X_{R}}(u,v)}(e_{p_{1}(R)\langle u\rangle}\otimes f_{p_{2}(R)\langle v\rangle}^{*}).\]
I claim that the tuple \((\theta_{R})\) defined above belongs to \(\delta(T^{\Delta}(A,B))\). It is clear that \(\theta_{R}=0\) if \(R\) is not a twisted diagonal subgroup of \(G\times H\). It is also not hard to see that the tuple \((\theta_{R})\) is \(\mathcal{A}\times\mathcal{B}^{*}\)-fixed: one needs to show that if \(\psi:R\stackrel{{\sim}}{{\to}}S\) is an \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphism then \({}^{\psi}\theta_{R}=\theta_{S}\). If \(R\) is not \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphic to a subgroup of the form \(\Delta(\phi(Q),\phi,Q)\) for any \(Q\leq E\), then neither is \(S\), and hence both \(\theta_{R}\) and \(\theta_{S}\) are \(0\). If, on the other hand, there exists a subgroup \(Q\leq E\) and an \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphism \(\psi^{\prime}:\Delta(\phi(Q),\phi,Q)\stackrel{{\sim}}{{\to}}R\), then \(\psi\psi^{\prime}:\Delta(\phi(Q),\phi,Q)\stackrel{{\sim}}{{\to}}S\) is an \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphism and we have
\[{}^{\psi}\theta_{R}={}^{\psi}({}^{\psi^{\prime}}\chi_{Q})={}^{\psi\psi^{\prime} }\chi_{Q}=\theta_{S}.\]
So to verify the claim it remains to show that the tuple \((\theta_{R})\) satisfies the "coherence condition" (3) given above.
Let \(R\leq D\times E\), \((u,v)\in N_{D\times E}(R)\), and \((s,t)\in C_{X_{R}}(u,v)_{p^{\prime}}\). Set
\[\epsilon_{\langle(u,v)\rangle}=\operatorname{tr}_{C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{C_{X_{R}}(u,v)}(e_{p_{1}(R)\langle u\rangle}\otimes f_{p_{2}(R)\langle v\rangle}^{*}).\]
By Lemma 2.10, \((\langle(u,v)\rangle,\epsilon_{\langle(u,v)\rangle})\) is an \(\mathcal{O}X_{R}(e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})\)-Brauer pair.
Suppose first that \(R\) is not \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphic to any subgroup of \(\Delta(D,\phi,E)\). Then neither is \(R\langle(u,v)\rangle\), so \(\theta_{R}=0\) and \(\theta_{R\langle(u,v)\rangle}=0\). In this case, Equation (3) clearly holds.
Now suppose that \(R\langle(u,v)\rangle\) is \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphic to a subgroup of \(\Delta(D,\phi,E)\). Let \(\psi^{\prime}:\Delta(P^{\prime},\phi,Q^{\prime})\stackrel{{ \sim}}{{\to}}R\langle(u,v)\rangle\) be an \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphism where \(Q^{\prime}\leq E\) and \(P^{\prime}=\phi(Q^{\prime})\). Write \(\Delta(P,\phi,Q)\) for the unique subgroup of \(\Delta(P^{\prime},\phi,Q^{\prime})\) such that \(\psi^{\prime}(\Delta(P,\phi,Q))=R\) and write \(\psi\) for the restriction of \(\psi^{\prime}\) to \(\Delta(P,\phi,Q)\). Then \(\psi:\Delta(P,\phi,Q)\stackrel{{\sim}}{{\to}}R\) is an \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphism. Let us also write \((u^{\prime},v^{\prime})\) for the unique element of \(\Delta(P^{\prime},\phi,Q^{\prime})\) satisfying \(\psi^{\prime}((u^{\prime},v^{\prime}))=(u,v)\). Note that \(u^{\prime}=\phi(v^{\prime})\), that \((u^{\prime},v^{\prime})\in N_{D\times E}(\Delta(P,\phi,Q))\), and that \(\Delta(P^{\prime},\phi,Q^{\prime})=\Delta(P\langle u^{\prime}\rangle,\phi,Q \langle v^{\prime}\rangle)\). Now let \((g,h)\in G\times H\) be such that \((\psi^{\prime})^{-1}=c_{(g,h)}\) and
\[{}^{(g,h)}(R\langle(u,v)\rangle,e_{p_{1}(R)\langle u\rangle}\otimes f_{p_{2}(R)\langle v\rangle}^{*}) =(\Delta(P^{\prime},\phi,Q^{\prime}),e_{P^{\prime}}\otimes f_{Q^{\prime}}^{*})=(\Delta(P\langle u^{\prime}\rangle,\phi,Q\langle v^{\prime}\rangle),e_{P\langle u^{\prime}\rangle}\otimes f_{Q\langle v^{\prime}\rangle}^{*}).\]
Note that \((u^{\prime},v^{\prime})=({}^{g}u,{}^{h}v)\). Also, \(\psi^{-1}=c_{(g,h)}\) and
\[{}^{(g,h)}(R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})=(\Delta(P,\phi,Q),e_{P} \otimes f_{Q}^{*}).\]
It follows that \({}^{(g,h)}(\langle(u,v)\rangle,\epsilon_{\langle(u,v)\rangle})\in\mathcal{BP} _{\mathcal{O}}(Y_{Q},e_{P}\otimes f_{Q}^{*})\). In other words, if we set \(\epsilon^{\prime}={}^{(g,h)}\epsilon_{\langle(u,v)\rangle}\) then \((\langle(u^{\prime},v^{\prime})\rangle,\epsilon^{\prime})\) is an \(\mathcal{O}Y_{Q}(e_{P}\otimes f_{Q}^{*})\)-Brauer pair. Note that
\[\epsilon^{\prime}={}^{(g,h)}\epsilon_{\langle(u,v)\rangle}={}^{(g,h)}\operatorname{tr}_{C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{C_{X_{R}}(u,v)}(e_{p_{1}(R)\langle u\rangle}\otimes f_{p_{2}(R)\langle v\rangle}^{*})=\operatorname{tr}_{C_{Y_{Q}\cap Y_{Q\langle v^{\prime}\rangle}}(u^{\prime},v^{\prime})}^{C_{Y_{Q}}(u^{\prime},v^{\prime})}(e_{P\langle u^{\prime}\rangle}\otimes f_{Q\langle v^{\prime}\rangle}^{*}),\]
so by Lemma 2.10
\[(\langle(u^{\prime},v^{\prime})\rangle,\epsilon^{\prime})\leq(N_{D\times E}( \Delta(P,\phi,Q)),e_{N_{D}(P)}\otimes f_{N_{E}(Q)}^{*})\]
is a containment of \(\mathcal{O}Y_{Q}(e_{P}\otimes f_{Q}^{*})\)-Brauer pairs. Set \(s^{\prime}={}^{g}s\) and \(t^{\prime}={}^{h}t\). Then \((s^{\prime},t^{\prime})\in C_{Y_{Q}}(u^{\prime},v^{\prime})_{p^{\prime}}\). Condition (2a) of Definition 8.2 then implies that
\[\chi_{Q}((u^{\prime}s^{\prime},v^{\prime}t^{\prime})\epsilon^{\prime})=\operatorname{Ind}_{C_{Y_{Q}\cap Y_{Q\langle v^{\prime}\rangle}}(u^{\prime},v^{\prime})}^{C_{Y_{Q}}(u^{\prime},v^{\prime})}(\operatorname{Res}_{C_{Y_{Q}\cap Y_{Q\langle v^{\prime}\rangle}}(u^{\prime},v^{\prime})}^{Y_{Q\langle v^{\prime}\rangle}}(\chi_{Q\langle v^{\prime}\rangle}))(s^{\prime},t^{\prime}).\]
Now \(\theta_{R}={}^{\psi}\chi_{Q}={}^{(g,h)^{-1}}\chi_{Q}\) and \(\theta_{R\langle(u,v)\rangle}={}^{\psi^{\prime}}\chi_{Q^{\prime}}={}^{(g,h)^{-1}}\chi_{Q\langle v^{\prime}\rangle}\), so we can compute:
\[\theta_{R}((us,vt)\epsilon_{\langle(u,v)\rangle}) ={}^{(g,h)^{-1}}\chi_{Q}((us,vt)\epsilon_{\langle(u,v)\rangle})\] \[=\chi_{Q}((u^{\prime}s^{\prime},v^{\prime}t^{\prime})\epsilon^{\prime})\] \[=\operatorname{Ind}_{C_{Y_{Q}\cap Y_{Q\langle v^{\prime}\rangle}}(u^{\prime},v^{\prime})}^{C_{Y_{Q}}(u^{\prime},v^{\prime})}(\operatorname{Res}_{C_{Y_{Q}\cap Y_{Q\langle v^{\prime}\rangle}}(u^{\prime},v^{\prime})}^{Y_{Q\langle v^{\prime}\rangle}}(\chi_{Q\langle v^{\prime}\rangle}))(s^{\prime},t^{\prime})\] \[=\operatorname{Ind}_{{}^{(g,h)}C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{{}^{(g,h)}C_{X_{R}}(u,v)}(\operatorname{Res}_{{}^{(g,h)}C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{{}^{(g,h)}X_{R\langle(u,v)\rangle}}({}^{(g,h)}\theta_{R\langle(u,v)\rangle}))({}^{g}s,{}^{h}t)\] \[={}^{(g,h)}\left[\operatorname{Ind}_{C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{C_{X_{R}}(u,v)}(\operatorname{Res}_{C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{X_{R\langle(u,v)\rangle}}(\theta_{R\langle(u,v)\rangle}))\right]({}^{(g,h)}(s,t))\] \[=\operatorname{Ind}_{C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{C_{X_{R}}(u,v)}(\operatorname{Res}_{C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{X_{R\langle(u,v)\rangle}}(\theta_{R\langle(u,v)\rangle}))(s,t)\]
So Equation (3) holds in this case as well.
The final case to consider is the case in which \(R\) is \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphic to a subgroup of \(\Delta(D,\phi,E)\) but \(R\langle(u,v)\rangle\) is not. In this case there exists a subgroup \(\Delta(P,\phi,Q)\leq\Delta(D,\phi,E)\) and an \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphism \(\psi:\Delta(P,\phi,Q)\stackrel{{\sim}}{{\to}}R\). We may assume without loss of generality that \(Q\) is a fully \(\mathcal{B}\)-normalized subgroup of \(E\), for if \(Q^{\prime}\in Q^{\mathcal{B}}\) is a fully \(\mathcal{B}\)-normalized subgroup of \(E\) and \(\beta:Q^{\prime}\stackrel{{\sim}}{{\to}}Q\) is a \(\mathcal{B}\)-isomorphism then, setting \(P^{\prime}=\phi(Q^{\prime})\), we have that \(\alpha:=\phi\beta\phi^{-1}:P^{\prime}\stackrel{{\sim}}{{\to}}P\) is an \(\mathcal{A}\)-isomorphism, hence that \(\psi^{\prime}:=(\alpha,\beta):\Delta(P^{\prime},\phi,Q^{\prime})\stackrel{{ \sim}}{{\to}}\Delta(P,\phi,Q)\) is an \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphism, and so \(\psi\psi^{\prime}:\Delta(P^{\prime},\phi,Q^{\prime})\stackrel{{ \sim}}{{\to}}R\) is an \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphism.
Now since \(Q\) is fully \(\mathcal{B}\)-normalized and \(\phi:\mathcal{B}\stackrel{{\sim}}{{\to}}\mathcal{A}\) is an isomorphism of fusion systems, Lemma 2.9 implies that \(\Delta(P,\phi,Q)\) is a fully \(\mathcal{A}\times\mathcal{B}^{*}\)-normalized subgroup of \(D\times E\). In particular, we have that \((N_{D\times E}(\Delta(P,\phi,Q)),e_{N_{D}(P)}\otimes f_{N_{E}(Q)}^{*})\) is a maximal \(\mathcal{O}Y_{Q}(e_{P}\otimes f_{Q}^{*})\)-Brauer pair by Lemma 2.10.
Now let \((g,h)\in G\times H\) be such that \(\psi^{-1}=c_{(g,h)}\) and
\[{}^{(g,h)}(R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})=(\Delta(P,\phi,Q),e_{P} \otimes f_{Q}^{*}).\]
Since \((\langle(u,v)\rangle,\epsilon_{\langle(u,v)\rangle})\in\mathcal{BP}_{\mathcal{O }}(X_{R},e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})\) we have that \({}^{(g,h)}(\langle(u,v)\rangle,\epsilon_{\langle(u,v)\rangle})\in\mathcal{BP}_ {\mathcal{O}}(Y_{Q},e_{P}\otimes f_{Q}^{*})\). Because \((N_{D\times E}(\Delta(P,\phi,Q)),e_{N_{D}(P)}\otimes f_{N_{E}(Q)}^{*})\) is a maximal \(\mathcal{O}Y_{Q}(e_{P}\otimes f_{Q}^{*})\)-Brauer pair there exists an element \((x,y)\in Y_{Q}\) such that
\[{}^{(x,y)}({}^{(g,h)}(\langle(u,v)\rangle,\epsilon_{\langle(u,v)\rangle}))\leq( N_{D\times E}(\Delta(P,\phi,Q)),e_{N_{D}(P)}\otimes f_{N_{E}(Q)}^{*}).\]
Set \((g^{\prime},h^{\prime})=(xg,yh)\). Note that \(c_{(x,y)}\in\operatorname{Aut}_{\mathcal{A}\times\mathcal{B}^{*}}(\Delta(P,\phi,Q))\). Set \(\psi^{\prime}=\psi\circ c_{(x,y)}^{-1}:\Delta(P,\phi,Q)\stackrel{{ \sim}}{{\to}}R\). Note that \(\psi^{\prime}\) is an \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphism. Note also that \((\psi^{\prime})^{-1}=c_{(g^{\prime},h^{\prime})}\) and that
\[{}^{(g^{\prime},h^{\prime})}(R,e_{p_{1}(R)}\otimes f_{p_{2}(R)}^{*})=(\Delta(P, \phi,Q),e_{P}\otimes f_{Q}^{*}).\]
Thus, after replacing \(\psi\) with \(\psi^{\prime}\) and \((g,h)\) with \((g^{\prime},h^{\prime})\), we may assume without loss of generality that
\[{}^{(g,h)}(\langle(u,v)\rangle,\epsilon_{\langle(u,v)\rangle})\leq(N_{D\times E }(\Delta(P,\phi,Q)),e_{N_{D}(P)}\otimes f^{*}_{N_{E}(Q)}).\]
Now since \(\Delta(P,\phi,Q)\) is fully \(\mathcal{A}\times\mathcal{B}^{*}\)-normalized, it is also receptive in \(\mathcal{A}\times\mathcal{B}^{*}\). Since \({}^{(g,h)}(u,v)\in N_{D\times E}(\Delta(P,\phi,Q))\) we have that -- in the notation of [7, Part I, Definition 2.2] -- \((u,v)\in N_{\psi^{-1}}\). Therefore \(\psi^{-1}\) extends to an \(\mathcal{A}\times\mathcal{B}^{*}\)-homomorphism defined on \(R\langle(u,v)\rangle\). In other words, there exists an element \((g^{\prime},h^{\prime})\in G\times H\) such that
\[{}^{(g^{\prime},h^{\prime})}(R\langle(u,v)\rangle,e_{p_{1}(R)\langle u\rangle }\otimes f^{*}_{p_{2}(R)\langle v\rangle})\leq(D\times E,e_{D}\otimes f^{*}_{ E})\]
and \(\psi^{-1}=c_{(g^{\prime},h^{\prime})}:R\stackrel{{\sim}}{{ \rightarrow}}\Delta(P,\phi,Q)\). Note that this implies
\[{}^{(g^{\prime},h^{\prime})}(R,e_{p_{1}(R)}\otimes f^{*}_{p_{2}(R)})=(\Delta( P,\phi,Q),e_{P}\otimes f^{*}_{Q})\]
and \({}^{(g^{\prime},h^{\prime})}(u,v)\in N_{D\times E}(\Delta(P,\phi,Q))\). Furthermore, setting \((u^{\prime},v^{\prime})={}^{(g^{\prime},h^{\prime})}(u,v)\) and \((s^{\prime},t^{\prime})=({}^{g^{\prime}}s,{}^{h^{\prime}}t)\), we have
\[\epsilon^{\prime}:={}^{(g^{\prime},h^{\prime})}\epsilon_{\langle(u,v)\rangle}=\operatorname{tr}_{{}^{(g^{\prime},h^{\prime})}C_{X_{R}\cap X_{R\langle(u,v)\rangle}}(u,v)}^{C_{Y_{Q}}(u^{\prime},v^{\prime})}(e_{P\langle u^{\prime}\rangle}\otimes f_{Q\langle v^{\prime}\rangle}^{*}),\]
and by Lemma 2.10
\[(\langle(u^{\prime},v^{\prime})\rangle,\epsilon^{\prime})\leq(N_{D\times E}(\Delta(P,\phi,Q)),e_{N_{D}(P)}\otimes f_{N_{E}(Q)}^{*})\]
is a containment of \(\mathcal{O}Y_{Q}(e_{P}\otimes f_{Q}^{*})\)-Brauer pairs, with \((s^{\prime},t^{\prime})\in C_{Y_{Q}}(u^{\prime},v^{\prime})_{p^{\prime}}\). To verify
Equation (3) in this final case we need to show that \(\theta_{R}((us,vt)\epsilon_{\langle(u,v)\rangle})=0\). We have \(\theta_{R}={}^{\psi}\chi_{Q}={}^{(g,h)^{-1}}\chi_{Q}\), so \(\theta_{R}((us,vt)\epsilon_{\langle(u,v)\rangle})=\chi_{Q}((u^{\prime}s^{ \prime},v^{\prime}t^{\prime})\epsilon^{\prime})\). Suppose \(\chi_{Q}((u^{\prime}s^{\prime},v^{\prime}t^{\prime})\epsilon^{\prime})\neq 0\). Then by condition (2b) of Definition 8.2 we have
\[(\Delta(P,\phi,Q)\langle(u^{\prime},v^{\prime})\rangle,e_{P\langle u^{\prime} \rangle}\otimes f^{*}_{Q\langle v^{\prime}\rangle})\leq_{G\times H}(\Delta(D, \phi,E),e_{D}\otimes f^{*}_{E}).\]
But this implies that
\[(R\langle(u,v)\rangle,e_{p_{1}(R)\langle u\rangle}\otimes f^{*}_{p_{2}(R) \langle v\rangle})\leq_{G\times H}(\Delta(D,\phi,E),e_{D}\otimes f^{*}_{E}),\]
or in other words, that \(R\langle(u,v)\rangle\) is \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphic to a subgroup of \(\Delta(D,\phi,E)\), contrary to the assumption defining this final case. So we must have \(\chi_{Q}((u^{\prime}s^{\prime},v^{\prime}t^{\prime})\epsilon^{\prime})=0\), hence \(\theta_{R}((us,vt)\epsilon_{\langle(u,v)\rangle})=0\) and Equation (3) holds. This completes the proof of the claim that \((\theta_{R})\) belongs to \(\delta(T^{\Delta}(A,B))\).
Let \(\gamma\in T^{\Delta}(A,B)\) be such that \(\delta(\gamma)=(\theta_{R})\). Note that for each \(Q\leq E\) we have
\[\chi_{Q}=\theta_{\Delta(\phi(Q),\phi,Q)}=\chi_{\gamma(\Delta(\phi(Q),\phi,Q), e_{\phi(Q)}\otimes f^{*}_{Q})}\]
by definition of the map \(\delta\). Now Condition (3) of Definition 8.2 implies in particular that \(\chi_{Q}\neq 0\) for all \(Q\leq E\). Therefore \(\gamma(\Delta(D,\phi,E),e_{D}\otimes f^{*}_{E})\neq 0\), i.e., \((\Delta(D,\phi,E),e_{D}\otimes f^{*}_{E})\) is a \(\gamma\)-Brauer pair. In fact, \((\Delta(D,\phi,E),e_{D}\otimes f^{*}_{E})\) is a maximal \(\gamma\)-Brauer pair since any \(\gamma\)-Brauer pair is an \(A\otimes B^{*}\)-Brauer pair and must have a twisted diagonal subgroup in its first entry.
To complete the proof it remains to show that \(\gamma\) is a \(p\)-permutation equivalence. By [2, Theorems 12.2, 12.3] it suffices to show that any maximal \(\gamma\)-Brauer pair is \(G\times H\)-conjugate to \((\Delta(D,\phi,E),e_{D}\otimes f^{*}_{E})\). Let \((\Delta(S,\psi,T),e\otimes f^{*})\in\mathcal{BP}_{\mathcal{O}}(\gamma)\) be maximal. Then
\[(\Delta(S,\psi,T),e\otimes f^{*})\leq_{G\times H}(D\times E,e_{D}\otimes f^{*} _{E})\]
Since \(\mathcal{BP}_{\mathcal{O}}(\gamma)\) is stable under conjugation we may assume without loss of generality that \((\Delta(S,\psi,T),e\otimes f^{*})\) is contained in \((D\times E,e_{D}\otimes f^{*}_{E})\). Then \(S\leq D\), \(e=e_{S}\), \(T\leq E\), and \(f=f_{T}\). By Lemma 8.6 we have
\[\theta_{\Delta(S,\psi,T)}=\chi_{\gamma(\Delta(S,\psi,T),e_{S}\otimes f^{*}_{T} )}\neq 0.\]
It follows that \(\Delta(S,\psi,T)\) is \(\mathcal{A}\times\mathcal{B}^{*}\)-isomorphic to a subgroup of \(\Delta(D,\phi,E)\). In other words, there exists an element \((g,h)\in G\times H\) such that
\[{}^{(g,h)}(\Delta(S,\psi,T),e_{S}\otimes f^{*}_{T})\leq(\Delta(D,\phi,E),e_{D} \otimes f^{*}_{E}).\]
Since \((\Delta(S,\psi,T),e_{S}\otimes f^{*}_{T})\) is a maximal \(\gamma\)-Brauer pair we must have
\[{}^{(g,h)}(\Delta(S,\psi,T),e_{S}\otimes f^{*}_{T})=(\Delta(D,\phi,E),e_{D} \otimes f^{*}_{E}).\]
The proof is complete.
**Theorem 8.8**.: _Assume Hypotheses 8.1. Then the construction of Lemma 8.5 defines a bijection between the set of \(p\)-permutation equivalences \(\gamma\) such that \((\Delta(D,\phi,E),e_{D}\otimes f_{E}^{*})\) is a maximal \(\gamma\)-Brauer pair and the set of strong isotypies between \(A\) and \(B\) relative to \((D,e_{D})\), \((E,f_{E})\), and \(\phi:E\stackrel{{\sim}}{{\to}}D\)._
Proof.: This follows from Lemmas 8.5 and 8.7.
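In slightly more detail, the bijection sends a \(p\)-permutation equivalence \(\gamma\) as above to the family
\[\gamma\longmapsto\bigl(\chi_{\gamma(\Delta(\phi(Q),\phi,Q),e_{\phi(Q)}\otimes f_{Q}^{*})}\bigr)_{Q\leq E}.\]
Lemma 8.7 shows that this map is surjective, and injectivity follows from the injectivity of the map \(\delta\) of Corollary 7.10: for such a \(\gamma\) the tuple \(\delta(\gamma)\) vanishes outside the twisted diagonal subgroups and, by \(\mathcal{A}\times\mathcal{B}^{*}\)-fixedness together with the final paragraph of the proof of Lemma 8.7, it is completely determined by the characters \(\chi_{Q}\), \(Q\leq E\).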
## 9 Connection with isotypies
Let \(G\) and \(H\) be finite groups, \((\mathbb{K},\mathcal{O},F)\) a \(p\)-modular system large enough for \(G\times H\), \(A\) a block of \(\mathcal{O}G\), and let \(B\) be a block of \(\mathcal{O}H\). Assume Hypotheses 8.1, and the notation set there. In this section we show that a strong isotypy between \(A\) and \(B\) restricts to an isotypy between \(A\) and \(B\). The definition of isotypy we will use is the one given in [2, Definition 15.3].
**Definition 9.1**.: Let \(A\in\operatorname{Bl}(\mathcal{O}G)\), \(B\in\operatorname{Bl}(\mathcal{O}H)\), and assume Hypotheses 8.1. An _isotypy_ between \(A\) and \(B\) is a family of perfect isometries
\[\mu_{Q}\in R_{\mathbb{K}}(C_{G}(\phi(Q))\times C_{H}(Q),e_{\phi(Q)}\otimes f_{ Q}^{*}),\qquad Q\leq E\]
such that the following conditions are satisfied:
1. (Equivariance) For every \(Q\leq E\) and \((g,h)\in G\times H\) such that \[{}^{(g,h)}(\Delta(\phi(Q),\phi,Q),e_{\phi(Q)}\otimes f_{Q}^{*})\leq(\Delta(D, \phi,E),e_{D}\otimes f_{E}^{*})\] one has \({}^{(g,h)}\mu_{Q}=\mu_{{}^{h}Q}\).
2. (Compatibility) For every \(Q\leq E\) and \(v\in C_{E}(Q)\) the diagram below commutes \[\begin{CD}CF(C_{H}(Q),f_{Q};\mathbb{K})@>{\mu_{Q}\otimes_{C_{H}(Q)}-}>>CF(C_{G}(P),e_{P};\mathbb{K})\\@V{d_{C_{H}(Q)}^{v,f_{Q\langle v\rangle}}}VV@VV{d_{C_{G}(P)}^{u,e_{P\langle u\rangle}}}V\\CF_{p^{\prime}}(C_{H}(Q\langle v\rangle),f_{Q\langle v\rangle};\mathbb{K})@>{\mu_{Q\langle v\rangle}\otimes_{C_{H}(Q\langle v\rangle)}-}>>CF_{p^{\prime}}(C_{G}(P\langle u\rangle),e_{P\langle u\rangle};\mathbb{K})\end{CD}\tag{4}\] where \(P=\phi(Q)\) and \(u=\phi(v)\).
**Theorem 9.2**.: _Let \(A\in\operatorname{Bl}(\mathcal{O}G)\), \(B\in\operatorname{Bl}(\mathcal{O}H)\), and assume Hypotheses 8.1. Let \(\chi_{Q}\), \(Q\leq E\), be a strong isotypy between \(A\) and \(B\). For each \(Q\leq E\) set_
\[\mu_{Q}=\operatorname{Res}_{C_{G}(\phi(Q))\times C_{H}(Q)}^{Y_{Q}}(\chi_{Q}) \in R_{\mathbb{K}}(C_{G}(\phi(Q))\times C_{H}(Q),e_{\phi(Q)}\otimes f_{Q}^{*}).\]
_Then the characters \(\mu_{Q}\), \(Q\leq E\), form an isotypy between \(A\) and \(B\)._
Proof.: Let \(Q\leq E\) and set \(P=\phi(Q)\). Then \(\mu_{Q}\) is a perfect isometry by Lemma 8.7 and [2, Proposition 11.9]. If \((g,h)\in G\times H\) is such that
\[{}^{(g,h)}(\Delta(P,\phi,Q),e_{P}\otimes f_{Q}^{*})\leq(\Delta(D,\phi,E),e_{D} \otimes f_{E}^{*})\]
then \(\phi({}^{h}Q)={}^{g}P\), \({}^{g}(P,e_{P})=({}^{g}P,e_{{}^{g}P})\), \({}^{h}(Q,f_{Q})=({}^{h}Q,f_{{}^{h}Q})\), and \(c_{g}\phi=\phi c_{h}:Q\stackrel{{\sim}}{{\to}}{}^{g}P\), so by (1) of Definition 8.2 we have \({}^{(g,h)}\chi_{Q}=\chi_{{}^{h}Q}\). It follows that \({}^{(g,h)}\mu_{Q}=\mu_{{}^{h}Q}\). We have thus shown that the characters \(\mu_{Q}\), \(Q\leq E\), satisfy the equivariance condition of Definition 9.1, and all that remains to check is that the compatibility condition also holds.
Continue to let \(Q\leq E\) and \(P=\phi(Q)\). Let \(v\in C_{E}(Q)\) and set \(u=\phi(v)\in C_{D}(P)\). It suffices to check that the compatibility condition holds in the case where \(Q\) is fully \(\mathcal{B}\)-centralized. Indeed, let \(Q^{\prime}\) be a fully \(\mathcal{B}\)-centralized subgroup that is isomorphic in \(\mathcal{B}\) to \(Q\). Then \(Q^{\prime}\) is receptive, so there exists an element \(h\in H\) such that \({}^{h}(Q,f_{Q})=(Q^{\prime},f_{Q^{\prime}})\) and \({}^{h}(Q\langle v\rangle,f_{Q\langle v\rangle})\leq(E,f_{E})\). Notice that \(v^{\prime}:={}^{h}v\in C_{E}(Q^{\prime})\). If we set \(P^{\prime}=\phi(Q^{\prime})\) then since \(\phi:\mathcal{B}\stackrel{{\sim}}{{\to}}\mathcal{A}\) is an isomorphism there exists an element \(g\in G\) such that \({}^{g}(P,e_{P})=(P^{\prime},e_{P^{\prime}})\), \({}^{g}(P\langle u\rangle,e_{P\langle u\rangle})\leq(D,e_{D})\), and \(c_{g}=\phi c_{h}\phi^{-1}:P\langle u\rangle\stackrel{{\sim}}{{\to}}P^{\prime}\langle{}^{g}u\rangle\). Note that \(u^{\prime}:={}^{g}u=\phi({}^{h}v)\in C_{D}(P^{\prime})\). We have
\[{}^{(g,h)}\mu_{Q}=\mu_{Q^{\prime}}\qquad\text{and}\qquad{}^{(g,h)}\mu_{Q \langle v\rangle}=\mu_{Q^{\prime}\langle v^{\prime}\rangle}\]
by equivariance. Now if
\[d_{C_{G}(P^{\prime})}^{u^{\prime},e_{P^{\prime}\langle u^{\prime}\rangle}}\circ(\mu_{Q^{\prime}}\otimes_{C_{H}(Q^{\prime})}-)=(\mu_{Q^{\prime}\langle v^{\prime}\rangle}\otimes_{C_{H}(Q^{\prime}\langle v^{\prime}\rangle)}-)\circ d_{C_{H}(Q^{\prime})}^{v^{\prime},f_{Q^{\prime}\langle v^{\prime}\rangle}}\]
then the formulas given in Lemma 3.3 and Corollary 4.4 show that the diagram (4) commutes. Thus we are reduced to the case where \(Q\) is fully \(\mathcal{B}\)-centralized. In this case \(P\) is fully \(\mathcal{A}\)-centralized, \((C_{D}(P),e_{PC_{D}(P)})\) is a maximal \(\mathcal{O}C_{G}(P)e_{P}\)-Brauer pair, and \((C_{E}(Q),f_{QC_{E}(Q)})\) is a maximal \(\mathcal{O}C_{H}(Q)f_{Q}\)-Brauer pair. If \(U\leq C_{D}(P)\) then \(e_{PU}\) is the unique block idempotent of \(\mathcal{O}C_{C_{G}(P)}(U)=\mathcal{O}C_{G}(PU)\) for which \((U,e_{PU})\leq(C_{D}(P),e_{PC_{D}(P)})\) is a containment of \(\mathcal{O}C_{G}(P)e_{P}\)-Brauer pairs. Likewise if \(V\leq C_{E}(Q)\) then \((V,f_{QV})\leq(C_{E}(Q),f_{QC_{E}(Q)})\) is a containment of \(\mathcal{O}C_{H}(Q)f_{Q}\)-Brauer pairs. By [7, Theorem 3.19(b)] we have
\[\mathcal{C}_{\mathcal{A}}(P)=\mathcal{F}_{(C_{D}(P),e_{PC_{D}(P)})}(C_{G}(P),e _{P})\]
and
\[\mathcal{C}_{\mathcal{B}}(Q)=\mathcal{F}_{(C_{E}(Q),f_{QC_{E}(Q)})}(C_{H}(Q), f_{Q}).\]
In particular, the isomorphism \(\phi:\mathcal{B}\stackrel{{\sim}}{{\to}}\mathcal{A}\) restricts to an isomorphism \(\phi:\mathcal{F}_{(C_{E}(Q),f_{QC_{E}(Q)})}(C_{H}(Q),f_{Q})\to\mathcal{F}_{(C _{D}(P),e_{PC_{D}(P)})}(C_{G}(P),e_{P})\). Let \(\mathcal{V}\) denote a
set of representatives for the \(\mathcal{C}_{\mathcal{B}}(Q)\)-conjugacy classes of elements of \(C_{E}(Q)\) such that \(v\in\mathcal{V}\). Set \(\mathcal{U}=\phi(\mathcal{V})\). Then \(\mathcal{U}\) is a set of representatives for the \(\mathcal{C}_{\mathcal{A}}(P)\)-conjugacy classes of \(C_{D}(P)\) and \(u\in\mathcal{U}\). By Proposition 4.7 the diagram below commutes:
\[\begin{CD}CF(C_{H}(Q),f_{Q};\mathbb{K})@>{\mu_{Q}\otimes_{C_{H}(Q)}-}>>CF(C_{G}(P),e_{P};\mathbb{K})\\@V{(d_{C_{H}(Q)}^{y,f_{Q\langle y\rangle}})_{y\in\mathcal{V}}}VV@VV{(d_{C_{G}(P)}^{x,e_{P\langle x\rangle}})_{x\in\mathcal{U}}}V\\\prod_{y\in\mathcal{V}}CF_{p^{\prime}}(C_{H}(Q\langle y\rangle),f_{Q\langle y\rangle};\mathbb{K})@>>>\prod_{x\in\mathcal{U}}CF_{p^{\prime}}(C_{G}(P\langle x\rangle),e_{P\langle x\rangle};\mathbb{K})\end{CD}\tag{5}\]
where the lower horizontal map is defined by
\[\sum_{y\in\mathcal{V}}\psi_{y}\mapsto\sum_{x\in\mathcal{U}}\sum_{y\in\mathcal{V}}d^{(x,y),e_{P\langle x\rangle}\otimes f^{*}_{Q\langle y\rangle}}_{C_{G}(P)\times C_{H}(Q)}(\mu_{Q})\otimes_{C_{H}(Q\langle y\rangle)}\psi_{y}.\]
Now let \(x\in\mathcal{U}\) and let \(y\in\mathcal{V}\). Let \(\epsilon_{\langle(x,y)\rangle}\) denote the \(C_{Y_{Q}}(x,y)\)-orbit sum of \(e_{P\langle x\rangle}\otimes f^{*}_{Q\langle y\rangle}\). Then by Lemma 2.10, \(((x,y),\epsilon_{\langle(x,y)\rangle})\in\mathcal{BE}_{\mathcal{O}}(Y_{Q},e_{P}\otimes f^{*}_{Q})\) and
\[(\langle(x,y)\rangle,\epsilon_{\langle(x,y)\rangle})\leq(N_{D\times E}(\Delta (P,\phi,Q)),e_{N_{D}(P)}\otimes f^{*}_{N_{E}(Q)}).\]
Therefore, by condition (2a) of Definition 8.2, if \(x=\phi(y)\) then
\[d^{(x,y),\epsilon_{\langle(x,y)\rangle}}_{Y_{Q}}(\chi_{Q})=d_{C_{Y_{Q}}(x,y)}(\mathrm{Ind}^{C_{Y_{Q}}(x,y)}_{C_{Y_{Q}\cap Y_{Q\langle y\rangle}}(x,y)}(\mathrm{Res}^{Y_{Q\langle y\rangle}}_{C_{Y_{Q}\cap Y_{Q\langle y\rangle}}(x,y)}(\chi_{Q\langle y\rangle}))).\]
If \(x\neq\phi(y)\) then by condition (2b) of Definition 8.2 and Lemma 8.3 we have
\[d^{(x,y),\epsilon_{\langle(x,y)\rangle}}_{Y_{Q}}(\chi_{Q})=0.\]
Next we observe that
\[\mathrm{Res}^{C_{Y_{Q}}(x,y)}_{C_{G}(P\langle x\rangle)\times C_{H}(Q\langle y \rangle)}(d^{(x,y),\epsilon_{\langle(x,y)\rangle}}_{Y_{Q}}(\chi_{Q}))\]
is a class function in \(CF_{p^{\prime}}(C_{G}(P\langle x\rangle)\times C_{H}(Q\langle y\rangle), \epsilon_{\langle(x,y)\rangle};\mathbb{K})\) whose evaluation at a \(p^{\prime}\)-element \((s,t)\in[C_{G}(P\langle x\rangle)\times C_{H}(Q\langle y\rangle)]_{p^{\prime}}\) is equal to
\[\chi_{Q}((xs,yt)\epsilon_{\langle(x,y)\rangle}) =\sum_{e\otimes f^{*}\in\mathrm{Orb}_{C_{Y_{Q}}(x,y)}(e_{P\langle x \rangle}\otimes f^{*}_{Q\langle y\rangle})}\chi_{Q}((xs,yt)(e\otimes f^{*}))\] \[=\sum_{e\otimes f^{*}\in\mathrm{Orb}_{C_{Y_{Q}}(x,y)}(e_{P\langle x \rangle}\otimes f^{*}_{Q\langle y\rangle})}d^{(x,y),e\otimes f^{*}}_{C_{G}(P) \times C_{H}(Q)}(\mathrm{Res}^{Y_{Q}}_{C_{G}(P)\times C_{H}(Q)}(\chi_{Q}))(s,t)\] \[=\sum_{e\otimes f^{*}\in\mathrm{Orb}_{C_{Y_{Q}}(x,y)}(e_{P\langle x \rangle}\otimes f^{*}_{Q\langle y\rangle})}d^{(x,y),e\otimes f^{*}}_{C_{G}(P) \times C_{H}(Q)}(\mu_{Q})(s,t).\]
It follows that
\[(e_{P\langle x\rangle}\otimes f^{*}_{Q\langle y\rangle})\operatorname{Res}^{C_{Y_{Q}}(x,y)}_{C_{G}(P\langle x\rangle)\times C_{H}(Q\langle y\rangle)}(d^{(x,y),\epsilon_{\langle(x,y)\rangle}}_{Y_{Q}}(\chi_{Q}))=d^{(x,y),e_{P\langle x\rangle}\otimes f^{*}_{Q\langle y\rangle}}_{C_{G}(P)\times C_{H}(Q)}(\mu_{Q}).\]
In particular, if \(x\neq\phi(y)\) then
\[d^{(x,y),e_{P\langle x\rangle}\otimes f^{*}_{Q\langle y\rangle}}_{C_{G}(P) \times C_{H}(Q)}(\mu_{Q})=0\]
and if \(x=\phi(y)\) then one computes via the Mackey formula that
\[d^{(x,y),e_{P\langle x\rangle}\otimes f^{*}_{Q\langle y\rangle}}_{C_{G}(P) \times C_{H}(Q)}(\mu_{Q})=d_{C_{G}(P\langle x\rangle)\times C_{H}(Q\langle y \rangle)}(\mu_{Q\langle y\rangle})\]
It follows that the lower horizontal map in the diagram (5) maps a class function \(\psi_{y}\in CF_{p^{\prime}}(C_{H}(Q\langle y\rangle),f_{Q\langle y\rangle}; \mathbb{K})\) to
\[\psi_{y} \mapsto\sum_{x\in\mathcal{U}}d^{(x,y),e_{P\langle x\rangle}\otimes f^{*}_{Q\langle y\rangle}}_{C_{G}(P)\times C_{H}(Q)}(\mu_{Q})\otimes_{C_{H}(Q\langle y\rangle)}\psi_{y}\] \[=d_{C_{G}(P\langle\phi(y)\rangle)\times C_{H}(Q\langle y\rangle)}(\mu_{Q\langle y\rangle})\otimes_{C_{H}(Q\langle y\rangle)}\psi_{y}\] \[=\mu_{Q\langle y\rangle}\otimes_{C_{H}(Q\langle y\rangle)}\psi_{y}.\]
Letting \(x=u\) and \(y=v\), the above implies that the diagram (4) of the compatibility condition commutes. The proof is complete.
## 10 Commutative diagrams from a strong isotypy
Continue to let \(G\) and \(H\) be finite groups, \((\mathbb{K},\mathcal{O},F)\) a \(p\)-modular system large enough for \(G\times H\), \(A\) a block of \(\mathcal{O}G\), and let \(B\) be a block of \(\mathcal{O}H\). Assume Hypotheses 8.1 and the notation set there. In this section we describe a commutative diagram that is induced by a strong isotypy and which extends the commutative diagram of the compatibility condition in the definition of "isotypy."
**Lemma 10.1**.: _Let \(A\in\operatorname{Bl}(\mathcal{O}G)\) and \(B\in\operatorname{Bl}(\mathcal{O}H)\). Assume Hypotheses 8.1 and let \(\chi_{Q}\), \(Q\leq E\), be a strong isotypy between \(A\) and \(B\). Let \(v\in E\) and set \(u=\phi(v)\). Then the diagram below commutes:_
[Commutative diagram (6)]
_Moreover, \(\chi_{\{1\}}\) and \(\operatorname{Res}^{Y_{\langle v\rangle}}_{C_{G}(u)\times C_{H}(v)}(\chi_{ \langle v\rangle})\) are perfect isometries._
Proof.: This is a consequence of Theorem 9.2 and an application of the compatibility condition in Definition 9.1 with \(Q=\{1\}\).
**Theorem 10.2**.: _Let \(A\in\operatorname{Bl}(\mathcal{O}G)\) and \(B\in\operatorname{Bl}(\mathcal{O}H)\). Assume Hypotheses 8.1 and let \(\chi_{Q}\), \(Q\leq E\), be a strong isotypy between \(A\) and \(B\). Fix a subgroup \(Q\leq E\) and set \(P=\phi(Q)\). For each subgroup \(U\leq N_{D}(P)\) write \(\epsilon_{U}\) for the unique block idempotent of \(\mathcal{O}C_{I_{P}}(U)\) such that \((U,\epsilon_{U})\leq(N_{D}(P),e_{N_{D}(P)})\) is a containment of \(\mathcal{O}I_{P}e_{P}\)-Brauer pairs, and for each \(V\leq N_{E}(Q)\) write \(\varphi_{V}\) for the unique block idempotent of \(\mathcal{O}C_{J_{Q}}(V)\) such that \((V,\varphi_{V})\leq(N_{E}(Q),f_{N_{E}(Q)})\) is a containment of \(\mathcal{O}J_{Q}f_{Q}\)-Brauer pairs (see Lemma 2.10). Let \(v\in N_{E}(Q)\) and set \(u=\phi(v)\in N_{D}(P)\). Let \(\chi^{\prime}_{Q\langle v\rangle}\) denote the character_
\[\chi^{\prime}_{Q\langle v\rangle}=\operatorname{Ind}_{C_{Y_{Q}\cap Y_{Q\langle v\rangle}}(u,v)}^{C_{I_{P}}(u)\times C_{J_{Q}}(v)}\operatorname{Res}_{C_{Y_{Q}\cap Y_{Q\langle v\rangle}}(u,v)}^{Y_{Q\langle v\rangle}}\chi_{Q\langle v\rangle}.\]
_Then \(\operatorname{Ind}_{Y_{Q}}^{I_{P}\times J_{Q}}(\chi_{Q})\) and \(\chi^{\prime}_{Q\langle v\rangle}\) are perfect isometries and the diagram below commutes:_
[Commutative diagram (7)]
_Moreover, if \(v\in C_{E}(Q)\) then the diagrams (4) and (7) form a commutative cube._
Proof.: We may assume without loss of generality that \(Q\) is fully \(\mathcal{B}\)-normalized. Then \(P\) is fully \(\mathcal{A}\)-normalized. By Lemma 2.10, \((N_{D}(P),e_{N_{D}(P)})\) is a maximal \(\mathcal{O}I_{P}e_{P}\)-Brauer pair and \((N_{E}(Q),f_{N_{E}(Q)})\) is a maximal \(\mathcal{O}J_{Q}f_{Q}\)-Brauer pair. Furthermore,
\[\mathcal{N}_{\mathcal{A}}(P) =\mathcal{F}_{(N_{D}(P),e_{N_{D}(P)})}(I_{P},e_{P})\] \[\mathcal{N}_{\mathcal{B}}(Q) =\mathcal{F}_{(N_{E}(Q),f_{N_{E}(Q)})}(J_{Q},f_{Q}),\]
and \(\phi:\mathcal{B}\stackrel{{\sim}}{{\to}}\mathcal{A}\) restricts to an isomorphism \(\phi:\mathcal{N}_{\mathcal{B}}(Q)\stackrel{{\sim}}{{\to}}\mathcal{N}_ {\mathcal{A}}(P)\).
Let \(\gamma\in T^{\Delta}(A,B)\) be the unique \(p\)-permutation equivalence with maximal \(\gamma\)-Brauer pair \((\Delta(D,\phi,E),e_{D}\otimes f_{E}^{*})\) mapping to \((\chi_{Q})_{Q\leq E}\) under the bijection in Theorem 8.8. Set
\[\gamma_{Q}=\operatorname{Ind}_{Y_{Q}}^{I_{P}\times J_{Q}}(\gamma(\Delta(P,\phi,Q),e_{P}\otimes f_{Q}^{*}))\in T^{\Delta}(\mathcal{O}I_{P}e_{P},\mathcal{O}J_{ Q}f_{Q}).\]
Then \(\gamma_{Q}\) is a \(p\)-permutation equivalence between \(\mathcal{O}I_{P}e_{P}\) and \(\mathcal{O}J_{Q}f_{Q}\) by [2, Theorem 11.4], and \((\Delta(N_{D}(P),\phi,N_{E}(Q)),e_{N_{D}(P)}\otimes f_{N_{E}(Q)}^{*})\) is a maximal \(\gamma_{Q}\)-Brauer pair by [2, Proposition 11.5(b)]. For each \(V\leq N_{E}(Q)\) set
\[Z_{V}=N_{I_{P}\times J_{Q}}(\Delta(\phi(V),\phi,V),\epsilon_{\phi(V)}\otimes \varphi_{V}^{*})\]
and set
\[\theta_{V}=\chi_{\gamma_{Q}(\Delta(\phi(V),\phi,V),\epsilon_{\phi(V)}\otimes \varphi_{V}^{*})}.\]
Then by Lemma 8.5, the characters \(\theta_{V}\), \(V\leq N_{E}(Q)\), form a strong isotypy between \(\mathcal{O}I_{P}e_{P}\) and \(\mathcal{O}J_{Q}f_{Q}\). Therefore, by Lemma 10.1, the diagram below commutes:
The commutativity of the diagram (7) follows after noting that \(\theta_{\{1\}}=\operatorname{Ind}_{Y_{Q}}^{I_{P}\times J_{Q}}(\chi_{Q})\) and \(\operatorname{Res}_{C_{I_{P}}(u)\times C_{J_{Q}}(v)}^{Z_{\langle v\rangle}}(\theta_{\langle v\rangle})=\chi_{Q\langle v\rangle}^{\prime}\). The final statement regarding the "commutative cube" is easy to verify.
|
2308.03308 | Synchronized CTL over One-Counter Automata | We consider the model-checking problem of Synchronized Computation-Tree Logic
(CTL+Sync) over One-Counter Automata (OCAs). CTL+Sync augments CTL with
temporal operators that require several paths to satisfy properties in a
synchronous manner, e.g., the property "all paths should eventually see $p$ at
the same time". The model-checking problem for CTL+Sync over finite-state
Kripke structures was shown to be in $\mathsf{P}^{\mathsf{NP}^{\mathsf{NP}}}$.
OCAs are labelled transition systems equipped with a non-negative counter that
can be zero-tested. Thus, they induce infinite-state systems whose computation
trees are not regular. The model-checking problem for CTL over OCAs was shown
to be $\mathsf{PSPACE}$-complete.
We show that the model-checking problem for CTL+Sync over OCAs is decidable.
However, the upper bound we give is non-elementary. We therefore proceed to
study the problem for a central fragment of CTL+Sync, extending CTL with
operators that require all paths to satisfy properties in a synchronous manner,
and show that it is in $\mathsf{EXP}^\mathsf{NEXP}$ (and in particular in
$\mathsf{EXPSPACE}$), by exhibiting a certain "segmented periodicity" in the
computation trees of OCAs. | Shaull Almagor, Daniel Assa, Udi Boker | 2023-08-07T05:27:52Z | http://arxiv.org/abs/2308.03308v3 | # Synchronized CTL over One-Counter Automata
###### Abstract
We consider the model-checking problem of _Synchronized Computation-Tree Logic_ (CTL+Sync) over One-Counter Automata (OCAs). CTL+Sync augments CTL with temporal operators that require several paths to satisfy properties in a synchronous manner, e.g., the property "all paths should eventually see \(p\) at the same time". The model-checking problem for CTL+Sync over finite-state Kripke structures was shown to be in \(\mathsf{P}^{\mathsf{NP}^{\mathsf{NP}}}\). OCAs are labelled transition systems equipped with a non-negative counter that can be zero-tested. Thus, they induce infinite-state systems whose computation trees are not regular. The model-checking problem for CTL over OCAs was shown to be \(\mathsf{PSPACE}\)-complete.
We show that the model-checking problem for CTL+Sync over OCAs is decidable. However, the upper bound we give is non-elementary. We therefore proceed to study the problem for a central fragment of CTL+Sync, extending CTL with operators that require _all_ paths to satisfy properties in a synchronous manner, and show that it is in \(\mathsf{EXP}^{\mathsf{NEXP}}\) (and in particular in \(\mathsf{EXPSPACE}\)), by exhibiting a certain "segmented periodicity" in the computation trees of OCAs.
**Keywords:** CTL, Synchronization, One Counter Automata, Model Checking. **DOI:** 10.4230/LIPIcs.FSTTCS.2023.15. _Shaull Almagor_: supported by the Israel Science Foundation grant 989/22. _Udi Boker_: supported by the Israel Science Foundation grant 2410/22.
## 1 Introduction
Branching-time model checking is a central avenue in formal verification, as it enables reasoning about multiple computations of the system with both an existential and universal quantification. As systems become richer, the classical paradigm of e.g., CTL model checking over finite-state systems becomes insufficient. To this end, researchers have proposed extensions both of the logics [2, 6, 5, 3, 4, 1] and of the systems [7, 22, 9, 13].
In the systems' frontier, of particular interest are infinite-state models. Typically, such models can quickly lead to undecidability (e.g., two-counter machines [18]). However, some models can retain decidability while still having rich modelling power. One such model that has received a lot of attention in recent years is One Counter Automata (OCAs) [21, 15] - finite state machines equipped with a non-negative counter that can be zero-tested. Model checking CTL over OCAs was studied in [13], where it was shown to be decidable in PSPACE. The main tool used there is the fact that despite the infinite configuration space, the computations of an OCA do admit some periodic behavior, which can be exploited to exhibit a small-model property for the satisfaction of Until formulas.
In the logics' frontier, a useful extension of CTL is that of CTL with Synchronization operators (CTL+Sync), introduced in [4]. CTL+Sync extends CTL with operators that express synchronization properties of computation trees. Specifically, two new operators are introduced: _Until All_ and _Until Exists_. The former, denoted by \(\psi_{1}UA\psi_{2}\), holds in state \(s\) if there is a uniform bound \(k\in\mathbb{N}\) such that \(\psi_{2}\) holds in all paths from \(s\) after exactly \(k\) steps, and \(\psi_{1}\) holds in all paths up to step \(k\). Thus, intuitively, it requires all the computations to synchronize the satisfaction of the Until operator. The latter, somewhat less natural operator, denoted by \(\psi_{1}UE\psi_{2}\), requires that there exists a uniform bound \(k\) such that in every level \(j<k\) of the computation tree, some path satisfies \(\psi_{1}\) and can be continued to satisfy \(\psi_{2}\) at level \(k\).
In comparison, the standard CTL operators \(A\psi_{1}U\psi_{2}\) and \(E\psi_{1}U\psi_{2}\) require that all paths/some path satisfy the Until formula, but there is no requirement that the bounds coincide. We illustrate the differences between the semantics in Figure 1. As discussed in [4], CTL+Sync can describe non \(\omega\)-regular properties of trees, and hence goes beyond MSO, while retaining a decidable model-checking problem over finite Kripke structures.
In this work, we show the decidability of CTL+Sync model checking over OCAs: given an OCA \(\mathcal{A}\) and a CTL+Sync formula \(\varphi\), decide whether \(\mathcal{A}\) satisfies \(\varphi\). We thus combine the expressiveness of CTL+Sync with the rich modeling power of OCAs.
On the technical side, the approach taken in [4] to solve model-checking of CTL+Sync over Kripke structures does not seem to be very useful in our case. The solution there is based on the observation that every two levels of the computation tree that share the same set of Kripke-states must also share the same satisfaction value to every CTL+Sync formula. Hence, in that case, the algorithm can follow the computation tree of the powerset of the given Kripke structure, and terminate when encountering a level that has the same set of states as some previous level. In contrast, for OCAs, the unbounded counter prevents the ability to consider subsets of configurations.
On the other hand, the approach taken in [13] to solve model-checking of OCAs with respect to CTL is indeed useful in our case. Specifically, the algorithm in [13] is based on an analysis of the periodic behavior of the set of counter values that satisfy a given CTL formula in a given state of the OCA. We extend this approach, taking into account the additional complexity that stems from the synchronization requirements; see Section 5.
We start with establishing the decidability of CTL+Sync model checking using Presburger Arithmetic (see Section 4). This, however, yields a procedure with non-elementary runtime.
We then proceed to our main technical contribution (Section 5), providing an algorithm for model checking the central fragment of CTL+Sync that extends CTL with the \(UA\) operator, which requires _all_ paths to satisfy properties in a synchronous manner. Its running time is in \(\mathsf{EXP}^{\mathsf{NEXP}}\) (and in particular in \(\mathsf{EXPSPACE}\)), and for a fixed OCA and formulas of a fixed nesting depth, it is in \(\mathsf{P}^{\mathsf{NP}}\) (and in particular in \(\mathsf{PSPACE}\)).
Since CTL+Sync makes assertions on the behavior of different paths at the same time step (namely the same level of the computation tree), we need to reason about which configurations occur at each level of the tree. More precisely, in order to establish decidability we wish to exhibit a small-model property of the form _if the computation tree from some configuration \((s,v)\), for a state \(s\) and counter value \(v\), satisfies the formula \(\varphi\), then the computation tree from some configuration \((s,v^{\prime})\) for a small \(v^{\prime}\) satisfies \(\varphi\) as well_. Unfortunately, the computation trees of an OCA from two configurations \((s,v)\) and \((s,v^{\prime})\) cannot be easily obtained from one another using simple pumping arguments, due to the zero tests. This is in contrast to the case where one does not care about the length of a path, as in [13]. To overcome this, we show that computation trees of an OCA \(\mathcal{A}\) can be split into several segments, polynomially many in the size of \(\mathcal{A}\), and that within each segment we can find a bounded prefix that is common to all trees after a certain counter threshold, and such that the remainder of the segment is periodic. Using this, we establish the small model property above. The toolbox used for proving this, apart from careful cycle analysis, includes 2TVASS - a variant of 2VASS studied in [17] that allows for one counter of the 2VASS to be zero-tested.
We believe that this structural result (Lemma 18) is of independent interest for reasoning about multiple traces in an OCA computation tree, when the length of paths plays a role.
## 2 Preliminaries
Let \(\mathbb{N}=\{0,1,\ldots\}\) be the natural numbers. For a finite set \(A\subseteq\mathbb{N}\) we denote by \(\operatorname{lcm}(A)\) the least common multiple of the elements in \(A\).
For a finite sequence \(\xi=t_{0}t_{1}\cdots t_{h-1}\) and integers \(x,y\), such that \(0\leq x\leq y\leq h{-}1\), we write \(\xi[x..y]\) for the infix of \(\xi\) between positions \(x\) and \(y\), namely for \(t_{x}t_{x+1}\cdots t_{y}\). We also use the parentheses '(' and ')' for a non-inclusive range, e.g., \([x..y)=[x..y{-}1]\), and abbreviate \(\xi[x..x]\) by \(\xi[x]\). We denote the length of \(\xi\) by \(|\xi|=h\).
### One Counter Automata
A _One Counter Automaton_ (OCA) is a triple \(\mathcal{A}=\langle S,\Delta,L\rangle\), where \(S\) is a finite set of states, \(\Delta\subseteq(S\times\{\texttt{=0,>0}\}\times\{0,+1,-1\}\times S)\) is a transition relation, and \(L:S\to 2^{AP}\), for some finite set \(AP\) of _atomic propositions_, is the state labeling.
A pair \((s,v)\in S\times\mathbb{N}\) is a _configuration_ of \(\mathcal{A}\). We write \((s,v)\rightarrow_{t}(s^{\prime},v^{\prime})\) for a transition \(t\in\Delta\) if one of the following holds:
* \(t=(s,\texttt{=0},e,s^{\prime})\), with \(e\in\{0,+1\}\), \(v=0\) and \(v^{\prime}=e\), or
* \(t=(s,\texttt{>0},e,s^{\prime})\), with \(e\in\{0,+1,-1\}\), \(v>0\) and \(v^{\prime}=v+e\).
We write \((s,v)\rightarrow(s^{\prime},v^{\prime})\) if \((s,v)\rightarrow_{t}(s^{\prime},v^{\prime})\) for some \(t\in\Delta\).
We require that \(\Delta\) is total, in the sense that for every configuration \((s,v)\) we have \((s,v)\rightarrow(s^{\prime},v^{\prime})\) for some configuration \((s^{\prime},v^{\prime})\). Note that this is a syntactic requirement -- every state should have outgoing transitions on both =0 and >0. This corresponds to the standard requirement of Kripke structures that there are no deadlocks. We denote by \(|\mathcal{A}|\) the number of states of \(\mathcal{A}\). Note that the description size of \(\mathcal{A}\) is therefore polynomial in \(|\mathcal{A}|\).
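To make the transition semantics concrete, the following is a minimal Python sketch of the successor computation for a configuration; this is our own illustration, and the tuple encoding of \(\Delta\) and all names are ours, not from the paper.

```python
from typing import Iterable, Set, Tuple

Config = Tuple[str, int]                 # (state, counter value)
Transition = Tuple[str, str, int, str]   # (source, guard in {"=0", ">0"}, effect, target)

def successors(cfg: Config, delta: Set[Transition]) -> Iterable[Config]:
    """Yield every configuration (s', v') with (s, v) -> (s', v')."""
    s, v = cfg
    for (src, guard, e, dst) in delta:
        if src != s:
            continue
        if guard == "=0" and v == 0 and e in (0, +1):
            yield (dst, e)               # from counter 0, the new counter is e
        elif guard == ">0" and v > 0:    # here e may be -1, 0, or +1
            yield (dst, v + e)
```

For instance, with `delta = {("s", "=0", +1, "t"), ("s", ">0", -1, "s")}`, `list(successors(("s", 0), delta))` yields `[("t", 1)]`.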
A _path_ of \(\mathcal{A}\) is a sequence of transitions \(\tau=t_{1},\ldots,t_{k}\) such that there exist states \(s_{0},\ldots,s_{k}\) where \(t_{i}=(s_{i-1},\bowtie_{i},e_{i},s_{i})\) with \(\bowtie_{i}\in\{\texttt{=0,>0}\}\) for all \(1\leq i\leq k\). We say that \(\tau\) is _valid_ from starting counter \(v_{0}\) (or from configuration \((s_{0},v_{0})\)), if there are counters \(v_{0},\ldots,v_{k}\in\mathbb{N}\) such that for all \(1\leq i\leq k\) we have \((s_{i-1},v_{i-1})\rightarrow_{t_{i}}(s_{i},v_{i})\). We abuse notation and refer to the sequence of configurations also as a _path_, starting in \((s_{0},v_{0})\) and ending in \((s_{k},v_{k})\). The _length_ of the path \(\tau\) is \(k\), and we define its effect \(\mathit{effect}(\tau)=\sum_{i=1}^{k}e_{i}\).
We also allow infinite paths, in which case there are no end configurations and the length is \(\infty\). In this case we explicitly mention that the path is infinite. We say that \(\tau\) is _balanced/positive/negative_ if \(\mathit{effect}(\tau)\) is zero/positive/negative, respectively. It is a _cycle_ if \(s_{0}=s_{k}\), and it _has a cycle_\(\beta\), if \(\beta\) is a cycle and is a contiguous infix of \(\tau\).
### CTL+Sync
A CTL+Sync formula \(\varphi\) is given by the following syntax, where \(q\) stands for an atomic proposition from a finite set \(AP\) of atomic propositions.
\[\varphi::=\underbrace{\texttt{true}\mid q\mid\varphi\land\varphi\mid\neg \varphi\mid EX\varphi\mid E\varphi U\varphi\mid A\varphi U\varphi}_{\text{ Standard CTL}}\mid\underbrace{\varphi UA\varphi\mid\varphi UE\varphi}_{\text{Sync operators}}\]
We proceed to the semantics. Consider an OCA \(\mathcal{A}=\langle S,\Delta,L\rangle\), a configuration \((s,v)\), and a CTL+Sync formula \(\varphi\). Then \(\mathcal{A}^{(s,v)}\) satisfies \(\varphi\), denoted by \(\mathcal{A}^{(s,v)}\models\varphi\), as defined below.
**Boolean Operators:**
* \(\mathcal{A}^{(s,v)}\models\texttt{true}\) and \(\mathcal{A}^{(s,v)}\not\models\texttt{false}\).
* \(\mathcal{A}^{(s,v)}\models q\) if \(q\in L(s)\).
* \(\mathcal{A}^{(s,v)}\models\neg\varphi\) if \(\mathcal{A}^{(s,v)}\not\models\varphi\).
* \(\mathcal{A}^{(s,v)}\models\varphi\land\psi\) if \(\mathcal{A}^{(s,v)}\models\varphi\) and \(\mathcal{A}^{(s,v)}\models\psi\).
**CTL temporal operators:**
* \(\mathcal{A}^{(s,v)}\models EX\varphi\) if \((s,v)\rightarrow(s^{\prime},v^{\prime})\) for some configuration \((s^{\prime},v^{\prime})\) such that \(\mathcal{A}^{(s^{\prime},v^{\prime})}\models\varphi\).
* \(\mathcal{A}^{(s,v)}\models E\varphi U\psi\) if there exists a valid path \(\tau\) from \((s,v)\) and \(k\geq 0\), such that \(\mathcal{A}^{\tau[k]}\models\psi\) and for every \(j\in[0..k-1]\), we have \(\mathcal{A}^{\tau[j]}\models\varphi\).
* \(\mathcal{A}^{(s,v)}\models A\varphi U\psi\) if for every valid path \(\tau\) from \((s,v)\) there exists \(k\geq 0\), such that \(\mathcal{A}^{\tau[k]}\models\psi\) and for every \(j\in[0..k-1]\), we have \(\mathcal{A}^{\tau[j]}\models\varphi\).
**Synchronization operators:**
* \(\mathcal{A}^{(s,v)}\models\varphi UA\psi\) if there exists \(k\geq 0\), such that for every valid path \(\tau\) from \((s,v)\) of length \(k\) and for every \(j\in[0..k-1]\) we have \(\mathcal{A}^{\tau[j]}\models\varphi\) and \(\mathcal{A}^{\tau[k]}\models\psi\).
* \(\mathcal{A}^{(s,v)}\models\varphi UE\psi\) if there exists \(k\geq 0\), such that for every \(j\in[0..k-1]\) there exists a valid path \(\tau\) from \((s,v)\) of length \(k\) such that \(\mathcal{A}^{\tau[j]}\models\varphi\) and \(\mathcal{A}^{\tau[k]}\models\psi\).
**Remark 2** (Additional operators).: Standard additional Boolean and CTL operators, e.g., \(\lor,EF,EG\), etc., can be expressed by means of the given syntax. Similar shorthands can be defined for the synchronization operators, e.g., \(FE\) and \(GE\). We remark that one can also consider operators such as \(XE\psi\) with the semantics "in the next step there exists a path satisfying \(\psi\)". However, the semantics of this coincides with the CTL operator \(EX\).
### Presburger Arithmetic
Presburger Arithmetic (PA) [20] is the first-order theory \(\mathrm{Th}(\mathbb{N},0,1,+,<,=)\) of \(\mathbb{N}\) with addition and order. We briefly survey the results we need about PA, and refer the reader to [14] for a detailed survey.
For our purposes, a PA formula \(\varphi(x_{1},\ldots,x_{d})\), where \(x_{1},\ldots,x_{d}\) are free variables, is evaluated over \(\mathbb{N}^{d}\), and _defines_ the set \(\{(a_{1},\ldots,a_{d})\in\mathbb{N}^{d}\mid(a_{1},\ldots,a_{d})\models\varphi (x_{1},\ldots,x_{d})\}\). It is known that PA is decidable in \(2\)-NEXP [8, 11].
A _linear set_ is a set of the form \(\mathrm{Lin}(B,P)=\{\mathbf{b}+\lambda_{1}\mathbf{p_{1}}+\ldots+\lambda_{k}\mathbf{p_{k}}\mid\mathbf{b}\in B,\;\lambda_{1},\ldots,\lambda_{k}\in\mathbb{N}\}\) where \(B\subseteq\mathbb{N}^{d}\) is a finite _basis_ and \(P=\{\mathbf{p_{1}},\ldots,\mathbf{p_{k}}\}\subseteq\mathbb{N}^{d}\) are the _periods_. A _semilinear set_ is then a finite union of linear sets. A fundamental result about PA [12] is that the sets definable in PA are exactly the semilinear sets; moreover, one can effectively obtain from a PA formula \(\varphi\) a description of the semilinear set it defines, and vice versa.
In dimension \(1\), semilinear sets are finite unions of arithmetic progressions. By taking the \(\mathrm{lcm}\) of the periods of the progressions and modifying the basis accordingly, we can assume a uniform period. That is, a semilinear set \(S\subseteq\mathbb{N}\) is \(\mathrm{Lin}(B,\{p\})\) for effectively computable \(B\subseteq\mathbb{N}\) and \(p\in\mathbb{N}\).
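As a toy illustration (our own example, not from the source), consider
\[S=\mathrm{Lin}(\{1\},\{2\})\cup\mathrm{Lin}(\{2\},\{3\})=\{1,3,5,7,\ldots\}\cup\{2,5,8,\ldots\}.\]
Taking the uniform period \(p=\mathrm{lcm}(2,3)=6\) and enlarging the basis accordingly gives
\[S=\mathrm{Lin}(\{1,3,5\},\{6\})\cup\mathrm{Lin}(\{2,5\},\{6\})=\mathrm{Lin}(\{1,2,3,5\},\{6\}).\]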
## 3 Periodicity and Flatness over OCAs
Recall that the configuration space of an OCA is \(S\times\mathbb{N}\). The underlying approach we take to solve CTL+Sync model checking is to show that satisfaction of CTL+Sync formulas over these configurations exhibits some periodicity. Moreover, the run tree of the OCA can be captured, to an extent, using a small number of cycles (a property called _flatness_). These properties will be relied upon in proving our main results.
### Periodicity
In this subsection we formalize our notions of ultimate periodicity, show how they suffice for model-checking, and cite important results about periodicity in CTL.
Consider a CTL+Sync formula \(\varphi\). We say that \(\varphi\) is \((\mathsf{t}(\varphi),\mathsf{p}(\varphi))\)_-periodic_ with respect to an OCA \(\mathcal{A}\) (or just _periodic_, if we do not care about the constants) if for every state \(s\in S\) and counters \(v,v^{\prime}>\mathsf{t}(\varphi)\), if \(v\equiv v^{\prime}\mod\mathsf{p}(\varphi)\) then \((s,v)\models\varphi\iff(s,v^{\prime})\models\varphi\). We think of \(\mathsf{t}(\varphi)\) as its threshold and of \(\mathsf{p}(\varphi)\) as its period. We say that \(\varphi\) is _totally_\((\mathsf{t}(\varphi),\mathsf{p}(\varphi))\)_-periodic_ with respect to \(\mathcal{A}\) if every subformula of \(\varphi\) (including \(\varphi\) itself) is \((\mathsf{t}(\varphi),\mathsf{p}(\varphi))\)-periodic with respect to \(\mathcal{A}\). We usually omit \(\mathcal{A}\), as it is clear from context.
Total-periodicity is tantamount to periodicity for each subformula, in the following sense.
**Proposition 3**.: _A CTL+Sync formula \(\varphi\) is totally \((\mathtt{t}(\varphi),\mathtt{p}(\varphi))\)-periodic if and only if every subformula \(\psi\) of \(\varphi\) is \((\mathtt{t}^{\prime}(\psi),\mathtt{p}^{\prime}(\psi))\)-periodic for some constants \(\mathtt{t}^{\prime}(\psi),\mathtt{p}^{\prime}(\psi)\)._
For a totally periodic formula, model checking over OCAs can be reduced to model checking over finite Kripke structures, as follows. Intuitively, we simply "unfold" the OCA and identify configurations with high counter values according to their counter residue modulo \(\mathtt{p}(\varphi)\).
**Proposition 4**.: _Consider an OCA \(\mathcal{A}\) and a totally \((\mathtt{t}(\varphi),\mathtt{p}(\varphi))\)-periodic CTL+Sync formula \(\varphi\). Then we can effectively construct a Kripke structure \(\mathcal{K}\) of size \(|\mathcal{A}|\cdot(\mathtt{t}(\varphi)+\mathtt{p}(\varphi))\) such that \(\mathcal{A}\models\varphi\) if and only if \(\mathcal{K}\models\varphi\)._
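The construction behind Proposition 4 can be sketched as follows; this is our own minimal Python illustration (it reuses the transition encoding from the sketch in Section 2.1, takes `t` and `p` for \(\mathtt{t}(\varphi)\) and \(\mathtt{p}(\varphi)\), and omits the state labeling, which is inherited from the OCA):

```python
def quotient_kripke(states, delta, t, p):
    """Build the finite structure of Proposition 4: a node (s, v) with
    v in (t, t + p] represents the whole class {v, v + p, v + 2p, ...}."""
    def norm(v):
        # Collapse counter values above the threshold t onto their residue class.
        return v if v <= t else t + 1 + ((v - (t + 1)) % p)

    nodes = {(s, v) for s in states for v in range(t + p + 1)}
    edges = set()
    for (src, guard, e, dst) in delta:
        for (s, v) in nodes:
            if s == src and ((guard == "=0" and v == 0) or (guard == ">0" and v > 0)):
                edges.add(((s, v), (dst, norm(v + e))))
    return nodes, edges  # |nodes| = |states| * (t + p + 1)
```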
In [13, Theorem 1] it is proved that every CTL formula \(\varphi\) over OCAs is periodic. Our goal is to give a similar result for CTL+Sync, which in particular contains CTL. In order to avoid replicating the proof in [13] for CTL, we observe that the proof therein is by structural induction over \(\varphi\), and that, moreover, the inductive assumption requires only periodicity of the subformulas of \(\varphi\). We can thus restate [13, Theorem 1] with the explicit inductive assumption, so that we can directly plug our results about CTL+Sync into it.
Denote by \(k=|S|\) the number of states of \(\mathcal{A}\), and let \(K=\operatorname{lcm}(\{1,\ldots,k\})\).
**Theorem 5** (Theorem 1 in [13], restated).: _Consider a CTL+Sync formula \(\varphi\) whose outermost operator is a CTL operator, and whose subformulas are periodic. Then \(\varphi\) is periodic, and we have the following._
* If \(\varphi=\mathtt{true}\), \(\varphi=\mathtt{false}\), or \(\varphi=p\) for \(p\in AP\), then \(\varphi\) is \((0,1)\)-periodic.
* If \(\varphi=\neg\psi\) then \(\mathtt{t}(\varphi)=\mathtt{t}(\psi)\) and \(\mathtt{p}(\varphi)=\mathtt{p}(\psi)\).
* If \(\varphi=\psi_{1}\wedge\psi_{2}\) then \(\mathtt{t}(\varphi)=\max\{\mathtt{t}(\psi_{1}),\mathtt{t}(\psi_{2})\}\) and \(\mathtt{p}(\varphi)=\operatorname{lcm}(\mathtt{p}(\psi_{1}),\mathtt{p}(\psi_{ 2}))\).
* If \(\varphi=EX\psi\) then \(\mathtt{t}(\varphi)=\mathtt{t}(\psi)+\mathtt{p}(\psi)\) and \(\mathtt{p}(\varphi)=K\cdot\mathtt{p}(\psi)\).
* If \(\varphi=E\psi_{1}U\psi_{2}\) or \(\varphi=A\psi_{1}U\psi_{2}\) then\({}^{1}\) \(\mathtt{t}(\varphi)=\max\{\mathtt{t}(\psi_{1}),\mathtt{t}(\psi_{2})\}+2\cdot k^{2}\cdot\operatorname{lcm}(K\cdot\mathtt{p}(\psi_{1}),\mathtt{p}(\psi_{2}))\) and \(\mathtt{p}(\varphi)=\operatorname{lcm}(K\cdot\mathtt{p}(\psi_{1}),\mathtt{p}(\psi_{2}))\).
Footnote 1: Note that in [13] the case of \(A\psi_{1}U\psi_{2}\) is not stated, but rather the dual Release operator \(E\psi_{1}R\psi_{2}\), which follows the same proof.
### Linear Path Schemes
The runs of an OCA can take intricate shapes. Fortunately, however, we can use the results of [17] about _flatness_ of a variant of 2-VASS with some zero tests, referred to as 2-TVASS, to obtain a simple form that characterizes reachability, namely linear path schemes.
A _linear path scheme_ (LPS) is an expression of the form \(\pi=\alpha_{0}\beta_{1}^{*}\alpha_{1}\cdots\beta_{k}^{*}\alpha_{k}\) where each \(\alpha_{i}\in\Delta^{*}\) is a path in \(\mathcal{A}\) and each \(\beta_{i}\in\Delta^{*}\) is a cycle in \(\mathcal{A}\). The _flat length_ of \(\pi\) is \(|\pi|=|\alpha_{0}\beta_{1}\alpha_{1}\cdots\beta_{k}\alpha_{k}|\), the _size_ of \(\pi\) is \(k\).
A concrete path \(\tau\) in \(\mathcal{A}\) is \(\pi\)_-shaped_ if there exist \(e_{1},\ldots,e_{k}\) such that \(\tau=\alpha_{0}\beta_{1}^{e_{1}}\alpha_{1}\cdots\beta_{k}^{e_{k}}\alpha_{k}\).
Our first step is to use a result of [17] on 2-TVASS to show that paths of a fixed length in \(\mathcal{A}\) admit a short LPS. The idea is to transform the OCA \(\mathcal{A}\) to a 2-TVASS \(\mathcal{A}^{\prime}\) by introducing a length-counting component. That is, in every transition of \(\mathcal{A}^{\prime}\) as a 2-TVASS, the second component increments by 1.
**Lemma 6**.: _Let \((s,v)\) and \((s^{\prime},v^{\prime})\) be configurations of \(\mathcal{A}\). If there exists a path \(\tau\) of length \(\ell\) from \((s,v)\) to \((s^{\prime},v^{\prime})\), then there is also a \(\pi\)-shaped path \(\tau^{\prime}\) of length \(\ell\) from \((s,v)\) to \((s^{\prime},v^{\prime})\), where \(\pi\) is some linear path scheme of flat length \(\text{poly}(|S|)\) and size \(O(|S|^{3})\)._
By Lemma 6, we can focus our attention to \(\pi\)-shaped paths where \(\pi\) is "short". Henceforth, we call a path \(\tau\)_basic_ if it is \(\pi\)-shaped for some LPS \(\pi\) as per Lemma 6.
Using standard acceleration techniques (see e.g., [16, 10, 17]), we also get from Lemma 6 that the reachability relation of an OCA (including path length) is effectively semilinear. More precisely, we have the following.
**Corollary 7**.: _We can effectively compute, for every \(s,s^{\prime}\in S\), a PA formula \(Path_{s,s^{\prime}}(x,y,x^{\prime},y^{\prime})\) such that \((v,\ell,v^{\prime},\ell^{\prime})\models Path_{s,s^{\prime}}(x,y,x^{\prime},y^{\prime})\) if and only if a path of length \(\ell\) ending\({}^{2}\) at \((s,v)\) can be continued to a path of length \(\ell^{\prime}\) ending at \((s^{\prime},v^{\prime})\)._
Footnote 2: It is more natural to assume \(\ell=0\) and simply consider paths starting at \((s,v)\). However, our formulation makes things easier later on.
## 4 Model Checking CTL+Sync via Presburger Arithmetic
In this section we show that model checking CTL+Sync over OCAs is decidable, by reducing it to the satisfiability problem of a PA formula.
We start with a simple observation.

**Lemma 8**.: _Consider a totally \((\mathtt{t}(\varphi),\mathtt{p}(\varphi))\)-periodic CTL+Sync formula \(\varphi\). Then for every state \(s\in S\) we can compute a PA formula \(P_{\varphi,s}(x)\) such that \(v\models P_{\varphi,s}(x)\) if and only if \((s,v)\models\varphi\)._
Next, we show that from PA formulas as above we can obtain a threshold and a period.

**Lemma 9**.: _Consider a CTL+Sync formula \(\varphi\) and PA formulas \(P_{\varphi,s}(x)\), for every state \(s\), such that \(v\models P_{\varphi,s}(x)\) iff \((s,v)\models\varphi\). Then \(\varphi\) is \((\mathtt{t}(\varphi),\mathtt{p}(\varphi))\)-periodic for computable constants \(\mathtt{t}(\varphi)\) and \(\mathtt{p}(\varphi)\)._
Combining Theorem 5 and Lemma 8 we obtain that every CTL formula (without Sync) can be translated to PA formulas. We now turn to include the Sync operators.
Consider a CTL+Sync formula \(\varphi\). We construct, by induction on the structure of \(\varphi\), PA formulas \(P_{s,\varphi}(v)\), for every state \(s\in S\), such that \(P_{s,\varphi}(v)\) holds if and only if \((s,v)\models\varphi\). For the Sync operators, this utilizes the PA formulas of Corollary 7.
* If \(\varphi=p\) for an atomic proposition \(p\), then \(P_{s,\varphi}(v)=\text{True}\) if \(s\) is labeled with \(p\) and False otherwise.
* If \(\varphi=\neg\psi\), then \(P_{s,\varphi}(v)=\neg P_{s,\psi}(v)\).
* If \(\varphi=\psi_{1}\wedge\psi_{2}\), then \(P_{s,\varphi}(v)=P_{s,\psi_{1}}(v)\wedge P_{s,\psi_{2}}(v)\).
* If \(\varphi=\text{EX}\psi\), \(\varphi=\text{A}\psi_{1}\text{U}\psi_{2}\) or \(\varphi=\text{E}\psi_{1}\text{U}\psi_{2}\) then by the induction hypothesis, \(\psi\), \(\psi_{1}\) and \(\psi_{2}\) have corresponding PA formulas, and by Lemma 9 we can compute \(\mathtt{t}(\psi)\) and \(\mathtt{p}(\psi)\) (and similarly for \(\psi_{1}\) and \(\psi_{2}\)) such that \(\psi\) is \((\mathtt{t}(\psi),\mathtt{p}(\psi))\)-periodic. Then, by Theorem 5 we can compute \(\mathtt{t}(\varphi),\mathtt{p}(\varphi)\) such that \(\varphi\) is \((\mathtt{t}(\varphi),\mathtt{p}(\varphi))\)-periodic. Thus, we can apply Lemma 8 to obtain PA formulas for \(\varphi\).
* If \(\varphi=\psi_{1}UE\psi_{2}\), then \[P_{s,\varphi}(v)=\exists\ell.\,\forall\ell^{\prime}<\ell.\bigvee_{s^{\prime}\in S}\exists v^{\prime}.\Big(Path_{s,s^{\prime}}(v,0,v^{\prime},\ell^{\prime})\wedge P_{s^{\prime},\psi_{1}}(v^{\prime})\wedge\bigvee_{s^{\prime\prime}\in S}\exists v^{\prime\prime}.\big(Path_{s^{\prime},s^{\prime\prime}}(v^{\prime},\ell^{\prime},v^{\prime\prime},\ell)\wedge P_{s^{\prime\prime},\psi_{2}}(v^{\prime\prime})\big)\Big)\]
* If \(\varphi=\psi_{1}UA\psi_{2}\), then \[P_{s,\varphi}(v)=\exists\ell.\bigg(\Big(\exists v^{\prime}.\bigvee_{s^{\prime}\in S}\big(Path_{s,s^{\prime}}(v,0,v^{\prime},\ell)\wedge P_{s^{\prime},\psi_{2}}(v^{\prime})\big)\Big)\wedge\Big(\bigwedge_{s^{\prime}\in S}\forall v^{\prime}.\big(Path_{s,s^{\prime}}(v,0,v^{\prime},\ell)\to P_{s^{\prime},\psi_{2}}(v^{\prime})\big)\Big)\wedge\Big(\forall\ell^{\prime}<\ell.\bigwedge_{s^{\prime}\in S}\forall v^{\prime}.\big(Path_{s,s^{\prime}}(v,0,v^{\prime},\ell^{\prime})\to P_{s^{\prime},\psi_{1}}(v^{\prime})\big)\Big)\bigg)\]
The semantics of the obtained PA formulas match the semantics of the respective Sync operators. By Lemma 9 we now have that for every CTL+Sync formula \(\varphi\) we can compute \(\mathtt{t}(\varphi),\mathtt{p}(\varphi)\) such that \(\varphi\) is \((\mathtt{t}(\varphi),\mathtt{p}(\varphi))\)-periodic. By Proposition 3 we can further assume that \(\varphi\) is totally \((\mathtt{t}(\varphi),\mathtt{p}(\varphi))\)-periodic. Finally, by Proposition 4 we can decide whether \(\mathcal{A}\models\varphi\).
**Remark 10** (Complexity).: Observe that the complexity of our decision procedure via PA formulas is non-elementary. Indeed, when using a CTL subformula, we translate it, using Lemma 8, to a PA formula that may be exponential in the size of the formula and of the OCA. Thus, we might incur an exponential blowup in every step of the recursive construction, leading to a tower of exponents.
## 5 Model Checking the CTL+UA Fragment
In this section we consider the fragment of CTL+Sync induced by augmenting CTL with only the Sync operator \(\psi_{1}UA\psi_{2}\). For this fragment, we are able to obtain a much better upper bound for model checking, via careful analysis of the run tree of an OCA.
Throughout this section, we fix an OCA \(\mathcal{A}=\langle S,\Delta,L\rangle\) with \(n=|S|\geq 3\) states, and a CTL+UA formula \(\varphi\). Consider a configuration \((s,v)\) of \(\mathcal{A}\) and a CTL+UA formula \(\varphi=\psi_{1}UA\psi_{2}\) with \(UA\) being the outermost operator. The satisfaction of \(\varphi\) from \((s,v)\) is determined by the computation tree of \(\mathcal{A}\) from \((s,v)\). Specifically, we have that \(\mathcal{A}^{(s,v)}\models\varphi\) if there exists some bound \(k\in\mathbb{N}\) such that \(\psi_{2}\) holds in all configurations of level \(k\) of the computation tree, and \(\psi_{1}\) holds in all configurations of levels \(\ell\) for \(0\leq\ell<k\).
Therefore, in order to reason about the satisfaction of \(\varphi\), it is enough to know which configurations appear in each level of the computation tree. This is in contrast with UE, where we would also need to consider the paths themselves. Fortunately, it means we can use the LPS of Lemma 6 to simplify the proofs.
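The following Python sketch makes this observation concrete; it is our own illustration and is not the algorithm developed in this section: it tracks the set of configurations per level up to a cutoff `max_k` instead of exploiting periodicity. It uses the `successors` function sketched in Section 2.1, and `sat_psi1`, `sat_psi2` are assumed predicates for the subformulas (e.g., obtained recursively).

```python
def check_UA(init, delta, sat_psi1, sat_psi2, max_k):
    """Search for a bound k <= max_k witnessing psi1 UA psi2 from `init`:
    all level-k configurations satisfy psi2, and all configurations at
    levels 0..k-1 satisfy psi1."""
    level = {init}
    for k in range(max_k + 1):
        if all(sat_psi2(c) for c in level):
            return k                     # synchronizing bound found
        if not all(sat_psi1(c) for c in level):
            return None                  # psi1 already violated at this level
        level = {nxt for c in level for nxt in successors(c, delta)}
    return None                          # inconclusive within the cutoff
```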
**Theorem 11**.: _Given an OCA \(\mathcal{A}\) with \(n=|\mathcal{A}|\) and a CTL+UA formula \(\varphi\), we can compute a counter threshold \(\mathtt{cT}\) and a counter period \(\mathtt{P}\), both single exponential in \(n\) and in the nesting depth of \(\varphi\), such that \(\varphi\) is \((\mathtt{cT},\mathtt{P})\)-periodic with respect to \(\mathcal{A}\)._
Before we delve into the proof of Theorem 11, we show how it implies our main result.
**Theorem 12**.: _The model-checking problem for CTL+UA is decidable in \(\mathsf{EXP}^{\mathsf{NEXP}}\)._
Proof.: Consider a CTL+UA formula \(\varphi\) and an OCA \(\mathcal{A}\). By Theorem 11 we can compute \(\mathtt{cT},\mathtt{P}\) single exponential in \(|\varphi|\) and \(|\mathcal{A}|\), such that \(\varphi\) is \((\mathtt{cT},\mathtt{P})\)-periodic. We then apply Proposition 4 to reduce the problem to model checking \(\varphi\) against a Kripke structure \(\mathcal{K}\) of size \(|\mathcal{A}|\cdot(\mathtt{cT}+\mathtt{P})\).
Finally, the proof of [4, Theorem 1] shows that model checking the CTL+UA fragment can be done in \(\mathsf{P}^{\mathsf{NP}}\) in the size of the Kripke structure and the formula, yielding an \(\textsc{EXP}^{\textsc{NEXP}}\) bound in our setting.
**Remark 13** (Lower Bounds).: The \(\mathsf{PSPACE}\)-hardness of model checking CTL over OCAs from [13] immediately implies \(\mathsf{PSPACE}\)-hardness in our setting as well. However, tightening the gap between the lower and upper bounds remains an open problem.
The remainder of the paper is devoted to proving Theorem 11.
### Cycle Manipulation and Slope Manipulation.
A fundamental part of our proof involves delicately pumping and removing cycles to achieve specific counter values and/or lengths of paths. We do this with the following technical tools.
For a path \(\tau\), we define the _slope_ of \(\tau\) as \(\frac{\textit{effect}(\tau)}{|\tau|}\). Recall that a basic path is of the form \(\tau=\alpha_{0}\beta_{1}^{c_{1}}\alpha_{1}\cdots\beta_{k}^{c_{k}}\alpha_{k}\) adhering to some LPS, where \(k=O(|\mathcal{A}|)\) and each \(\alpha_{i}\) and \(\beta_{i}\) is of length \(poly(|\mathcal{A}|)\). We denote by \(b\) the maximum flat length of any LPS for a basic path. In particular, \(b\) bounds the flat length of the LPS, the size of it, and the length of any cycle or path in it.
We call a cycle _basic_ if it is of length at most \(b\). A slope of a path is _basic_ if it may be the slope of a basic cycle, namely if it equals \(\frac{x}{y}\), where \(x\in[-b..b],y\in[1..b]\) and \(|x|\leq y\). We denote the basic slopes by \(\mathbf{s}_{i}\), starting with \(\mathbf{s}_{1}\) for the smallest. For example for \(b=3\), the basic slopes are (\(\mathbf{s}_{1}\)=\(-\)1, \(\mathbf{s}_{2}\)=\(-\frac{2}{3}\), \(\mathbf{s}_{3}\)=\(-\frac{1}{2}\), \(\mathbf{s}_{4}\)=\(-\frac{1}{3}\), \(\mathbf{s}_{5}\)=\(0\), \(\mathbf{s}_{6}\)=\(\frac{1}{3}\), \(\mathbf{s}_{7}\)=\(\frac{1}{2}\), \(\mathbf{s}_{8}\)=\(\frac{2}{3}\), \(\mathbf{s}_{9}\)=\(1\)). Observe that for every \(i\) with \(\mathbf{s}_{i}\neq 0\), we have \(|\mathbf{s}_{i}|\geq\frac{1}{b}\), and for every \(j>i\), we have \(\mathbf{s}_{j}-\mathbf{s}_{i}\geq\frac{1}{b^{2}}\) and when they are both negative, also \(\frac{\mathbf{s}_{j}}{\mathbf{s}_{i}}\leq 1-\frac{1}{b^{2}}\).
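The basic slopes can be enumerated directly; a small sanity-check sketch (our own, not from the paper), which for \(b=3\) reproduces the list above:

```python
from fractions import Fraction

def basic_slopes(b):
    """All values x/y with x in [-b..b], y in [1..b], |x| <= y, sorted increasingly."""
    return sorted({Fraction(x, y) for y in range(1, b + 1) for x in range(-y, y + 1)})

# basic_slopes(3) == [-1, -2/3, -1/2, -1/3, 0, 1/3, 1/2, 2/3, 1]
```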
**Proposition 14**.: _Consider a basic path \(\tau\) and basic cycles \(c_{1},c_{2},c_{3}\) in \(\tau\) with effects \(e_{1},e_{2},e_{3}\in[-b..b]\), respectively, and lengths \(\ell_{1},\ell_{2},\ell_{3}\in[1..b]\), respectively, such that \(\frac{e_{1}}{\ell_{1}}\leq\frac{e_{2}}{\ell_{2}}\leq\frac{e_{3}}{\ell_{3}}\). Then there are numbers \(k_{1},k_{3}\in[0..b^{2}]\), such that the combination of \(k_{1}\) repetitions of \(c_{1}\) and \(k_{3}\) repetitions of \(c_{3}\) yields an effect and length whose ratio is \(\frac{e_{2}}{\ell_{2}}\)._
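A brute-force sanity check of Proposition 14 (our own sketch; it simply searches the stated range for suitable \(k_{1},k_{3}\), returning `None` if none is found):

```python
from fractions import Fraction
from itertools import product

def combine_to_slope(e1, l1, e2, l2, e3, l3, b):
    """Search k1, k3 in [0..b^2], not both zero, with
    (k1*e1 + k3*e3) / (k1*l1 + k3*l3) == e2 / l2."""
    for k1, k3 in product(range(b * b + 1), repeat=2):
        if (k1, k3) != (0, 0) and \
           Fraction(k1 * e1 + k3 * e3, k1 * l1 + k3 * l3) == Fraction(e2, l2):
            return k1, k3
    return None

# e.g., combine_to_slope(-1, 1, 0, 1, 1, 1, 3) returns (1, 1):
# one copy of a (-1)-cycle and one copy of a (+1)-cycle have ratio 0.
```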
**Proposition 15**.: _Consider a path \(\pi\) with cycles \(c_{1},c_{2}\) with effects \(e_{1},e_{2}\in[-b..b]\) and lengths \(\ell_{1},\ell_{2}\in[1..b]\), respectively, such that \(\frac{e_{1}}{\ell_{1}}<\frac{e_{2}}{\ell_{2}}\), and a length \(x\) that is divisible by \(\mathrm{lcm}[1..2b^{2}]\). Then there are numbers \(k_{1},k_{2}\in[0..b\cdot x]\), such that the addition or removal of \(k_{1}\) repetitions of \(c_{1}\) and the addition or removal of \(k_{2}\) repetitions of \(c_{2}\) yield a path of the same effect as \(\pi\) and of a length shorter or longer, as desired, by \(x\) (provided enough cycle repetitions exist)._
In order to prove Theorem 11, we show that every computation tree of \(\mathcal{A}\), starting from a big enough counter value \(v>\mathtt{cT}\), has a 'segmented periodic' structure with respect to \(\varphi\). That is, we can divide its levels into \(poly(n)\) many _segments_, such that only the first \(\mathtt{sT}\in Exp(n,|\varphi|)\) levels in each segment are in the 'core', while every other level \(\ell\) is a sort-of repetition of the level \(\ell-\mathtt{P}\), for a _period_ \(\mathtt{P}\). We further show that there is a similarity between the cores of computation trees starting with counter values \(v\) and \(v+\mathtt{P}\). We depict the segmentation in Figure 2, and formalize it as follows.
Consider a formula \(\varphi=\psi_{1}UA\psi_{2}\), where \(\psi_{1}\) and \(\psi_{2}\) are \((\mathtt{cT}(\psi_{1}),\mathtt{P}(\psi_{1}))\)- and \((\mathtt{cT}(\psi_{2}),\mathtt{P}(\psi_{2}))\)-periodic, respectively. We define several constants to use throughout the proof.
**Constants depending only on the number \(n\) of states in the OCA**
* \(b\in Poly(n)\): the bound on the length of a linear path scheme on \(\mathcal{A}\).
* \(B=\mathrm{lcm}[1..2b^{3}]\).
**Constants depending on \(n\) and the CTL+UA formula \(\varphi\)**
* \(\texttt{P}_{prev}(\varphi)\): the unified period of the subformulas.
* \(\texttt{cT}_{prev}(\varphi)\): the unified counter threshold of the subformulas.
* \(\texttt{P}(\varphi)\): the 'period' of \(\varphi\).
* \(\texttt{sT}(\varphi)\): the 'segment threshold' of \(\varphi\).
* \(\texttt{cT}(\varphi)\): the 'counter threshold' of \(\varphi\).
Eventually, these periodicity constants are plugged into the inductive cases of Theorem 5, as shown in the proof of Theorem 11. Then, all the constants are single-exponential in \(n\) and the nesting depth of \(\varphi\). Notice that the following relationship holds between the constants.
**Proposition 16**.: _\(\mathtt{P}(\varphi)>\mathtt{cT}_{prev}(\varphi)\) for \(b\geq 2\)._
When clear from the context, we omit the parameter \(\varphi\) from \(\mathtt{P},\mathtt{sT},\mathtt{cT},\mathtt{P}_{prev},\mathtt{cT}_{prev}\).
We provide below an intuitive explanation for the choice of constants above.
### Intuition for the period P.
The period has two different roles: _level-periodicity_ within each segment of a computation tree, and _counter-value periodicity_ between two computation trees starting with different counter values.
_Level periodicity within a segment_: For lengthening or shortening a basic path by P, we add and/or remove some copies of its cycles. Adding or removing \(\texttt{P}_{prev}\) copies of the same cycle guarantees that the end counter values of the original and new paths are equivalent modulo \(\texttt{P}_{prev}\). Since the cycles in a basic path are of length in \([1..b]\), setting P to be divisible by \(\texttt{P}_{prev}\cdot\mathrm{lcm}[1..b]\) allows adding or removing \(\frac{\texttt{P}}{|c|}\) copies of a cycle \(c\), where \(\frac{\texttt{P}}{|c|}\) is divisible by \(\texttt{P}_{prev}\), as desired. Yet, we might need to add copies of one cycle and remove copies of another; thus, as per Proposition 15, we need P to be divisible by \(\texttt{P}_{prev}\cdot\mathrm{lcm}[1..2b^{2}]\).
_Counter periodicity between computation trees_: We change a path \(\tau\) that starts with a counter value \(v\) to a path that starts with a counter value \(v+\texttt{P}\), or vice versa, by lengthening or shortening it by \(\frac{\texttt{P}}{\texttt{s}}\), respectively, where \(\texttt{s}\) is a positive basic slope. In some cases, we need to also make sure that the longer or shorter path has a drop bigger or smaller, respectively, than \(\tau\) by exactly P.
As \(\frac{\texttt{P}}{\texttt{s}}\) is bounded by \(b\cdot\texttt{P}\), if there are at least \(b\cdot\texttt{P}\) repetitions of a cycle \(c\) in \(\tau\) whose slope is \(-\texttt{s}\), we can just add or remove \(\frac{b\cdot\texttt{P}}{|c|}\) copies of \(c\), so we need P to be divisible by \(\texttt{P}_{prev}\cdot\mathrm{lcm}[1..b]\), for guaranteeing that the counter values at the end of the original and new paths are equivalent. Yet, in some cases we need to combine two cycles, as per Proposition 14. As the combination of the two cycles might be of length up to \(2b^{3}\), we need P to be divisible by \(\texttt{P}_{prev}\cdot\mathrm{lcm}[1..2b^{3}]\).

Figure 2: A computation tree from \((s,v)\) above and from \((s,v+\texttt{P})\) below, demonstrating the _segmented periodicity_. Each segment \(i\) starts at \(\S_{i}\) and ends at \(\S_{i+1}-1\) (except for the last, which never ends). The _core_ of the computation tree is the union of the first \(\texttt{sT}\) positions from each segment. Following these \(\texttt{sT}\) positions, there is within each segment periodicity of length P, meaning that for every path of length \(\ell\), there exists an equivalent path of length \(\ell+\texttt{P}\), as shown for the three points with the same state \(s\) and equivalent counter values \(v_{1}\equiv v_{2}\equiv v_{3}\). Between the trees, there is equivalence between the core positions, mapped via the shift function, as shown for the two points with the same state \(s^{\prime}\) and the equivalent counter values \(v^{\prime}\equiv v^{\prime\prime}\).
### Intuition for the counter threshold cT and the segment threshold sT.
In order to apply Propositions 14 and 15, we need to have in the handled path many repetitions of two cycles of different slopes. We thus choose cT and sT to be large enough so that paths in which only one (negative) cycle slope is repeated many times must hit zero within a special region called the 'core' of the tree, as defined below.
### The core of a computation tree.
For every counter value \(v>\texttt{cT}\), the 'core' with respect to a fixed formula \(\varphi\), denoted by \(\texttt{core}(v)\subseteq\mathbb{N}\), of a computation tree of \(\mathcal{A}\) that starts with counter value \(v\) consists of \(m+1<b^{2}\) segments, with each segment corresponding to a negative basic slope and containing \(\texttt{sT}\) consecutive numbers. For every \(i\in[0..m]\), the start of Segment \(i\) depends on the initial counter value \(v\) of the computation tree; it is denoted by \(\S_{i}(v)\), and it is defined as follows:
* \(\S_{0}(v)=0\),
* For \(i\in[1..m]\), we set \(\S_{i}(v)=\frac{-1}{\texttt{s}_{i}}(v-\texttt{cT}_{prev})-b^{8}\cdot\texttt{P}\).
* For convenience, we also define \(\S_{m+1}(v)=\infty\).
Then, we define \(\texttt{core}(v)=\bigcup_{i=0}^{m}[\S_{i}(v)..(\S_{i}(v)+\texttt{sT}-1)]\).
Observe that the core of every tree is an ordered list of \(((m+1)\cdot\texttt{sT})\) numbers (levels), while just the starting level of every segment depends on the initial counter value \(v\). We can thus define a bijection \(\texttt{shift}:\texttt{core}(v)\rightarrow\texttt{core}(v+\texttt{P})\) that maps the \(i\)-th number in \(\texttt{core}(v)\) to the \(i\)-th number in \(\texttt{core}(v+\texttt{P})\) (see also Figure 2).
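Unfolding the definitions, the bijection acts segment-wise: consecutive starts satisfy \(\S_{i}(v+\texttt{P})-\S_{i}(v)=\frac{\texttt{P}}{-\texttt{s}_{i}}\) for \(i\geq 1\), so for \(\ell\) in Segment \(i\) of \(\texttt{core}(v)\),
\[\mathsf{shift}(\ell)=\begin{cases}\ell&i=0,\\ \ell+\frac{\texttt{P}}{-\texttt{s}_{i}}&i\geq 1,\end{cases}\]
which is the form used explicitly in the proof of Lemma 18 below.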
Recall that we define sT so that, intuitively, if a path is long enough to reach \(\S_{i}(v)+\texttt{sT}\) without reaching counter value \(\texttt{cT}_{prev}\), then the path must have many cycles with a slope larger than \(\texttt{s}_{i}\), and if the path manages to reach \(\texttt{cT}_{prev}\) before \(\S_{i+1}(v)\) (namely the end of Segment \(i\)), then it must have many cycles with a slope at least as small as \(\texttt{s}_{i}\). This is formalized as follows.
**Lemma 17**.: _Let \(\tau\) be a basic path, or a prefix of it, of length \(\ell\) starting from counter value \(v>\texttt{cT}\), and let \(i\in[0..m]\)._
1. _If \(\ell\geq\S_{i}(v)+\frac{\texttt{sT}}{2}\) and the counter values of \(\tau\) stay above \(\texttt{cT}_{prev}\), then \(\tau\) has a cycle with slope \(\texttt{s}_{j}\) for \(j>i\) that repeats at least \(b^{4}\cdot\texttt{P}\) times._
2. _If \(\ell<\S_{i+1}(v)\) and the counter values of \(\tau\) reach \(\texttt{cT}_{prev}\), then \(\tau\) has a cycle with slope \(\texttt{s}_{j}\) for \(j\leq i\) that repeats at least \(b^{4}\cdot\texttt{P}\) times._
As a sanity check, the lemma states that if a path \(\tau\) reaches \(\texttt{cT}_{prev}\) for the first time at length \(\ell\in[\S_{1}(v)+\frac{\texttt{sT}}{2}\..\ \S_{2}(v))\), then it has many cycles with slope \(\texttt{s}_{1}=-1\) (enough to decrease down to \(\texttt{cT}_{prev}\) before \(\S_{2}(v)\)), as well as many cycles with slope at least \(\texttt{s}_{2}=\frac{-(b-1)}{b}\) (enough to keep above \(\texttt{cT}_{prev}\) through \(\S_{1}(v)+\frac{\texttt{sT}}{2}\)). Observe also that a path cannot reach \(\texttt{cT}_{prev}\) in Segment \(0\), namely before \(\S_{1}(v)\).
### The segment and shift periodicity.
Consider a threshold \(T\) and period \(P\). We say that counter values \(u,v\) are \((T,P)\)_-equivalent_, denoted by \(u\equiv_{T,P}v\) if either \(u,v\geq T\) and \(P\) divides \(|u-v|\), or \(u,v<T\) and \(u=v\). That is, either both \(u,v\) are greater than \(T\), in which case they are equivalent modulo \(P\), or they are both smaller than \(T\) and are equal.
The segment periodicity within a computation tree is then stated as Claim 1 in Lemma 18 below, while the similarity between computation trees starting from counters \(v\) and \(v+\mathtt{P}\) is stated as Claim 2. (By \((s,v)\stackrel{\ell}{\rightsquigarrow}(s^{\prime},v^{\prime})\) we mean that the computation tree starting with state \(s\) and counter value \(v\) has a path of length \(\ell\) ending in state \(s^{\prime}\) and counter value \(v^{\prime}\).)
**Lemma 18**.: _Consider states \(s\) and \(e\), a counter value \(v>\mathtt{cT}\), an arbitrary counter value \(u\), and an arbitrary path length \(\ell\)._
1. _If \(\ell\not\in\mathtt{core}(v)\) then: (a) \((s,v)\stackrel{\ell}{\rightsquigarrow}(e,u)\implies(s,v)\stackrel{\ell-\mathtt{P}}{\rightsquigarrow}(e,u^{\prime})\), and (b) \((s,v)\stackrel{\ell-\mathtt{P}}{\rightsquigarrow}(e,u)\implies(s,v)\stackrel{\ell}{\rightsquigarrow}(e,\tilde{u})\), for some counter values \(u^{\prime}\) and \(\tilde{u}\) such that \(u\equiv_{\mathtt{cT}_{prev},\mathtt{P}_{prev}}u^{\prime}\equiv_{\mathtt{cT}_{prev},\mathtt{P}_{prev}}\tilde{u}\)._
2. _If \(\ell\in\mathtt{core}(v)\) then: (a) \((s,v+\mathtt{P})\stackrel{\mathsf{shift}(\ell)}{\rightsquigarrow}(e,u)\implies(s,v)\stackrel{\ell}{\rightsquigarrow}(e,u^{\prime})\), and (b) \((s,v)\stackrel{\ell}{\rightsquigarrow}(e,u)\implies(s,v+\mathtt{P})\stackrel{\mathsf{shift}(\ell)}{\rightsquigarrow}(e,\tilde{u})\), for some counter values \(u^{\prime}\) and \(\tilde{u}\) such that \(u\equiv_{\mathtt{cT}_{prev},\mathtt{P}_{prev}}u^{\prime}\equiv_{\mathtt{cT}_{prev},\mathtt{P}_{prev}}\tilde{u}\)._
Proof.: Throughout the proof, we abbreviate \(u\equiv_{\mathtt{cT}_{prev},\mathtt{P}_{prev}}u^{\prime}\) by \(u\equiv u^{\prime}\). We split the proof into four parts, each devoted to one of the four stated implications. In each of them, we assume the existence of a path \(\tau\) that witnesses the left side of the implication, say \((s,v)\stackrel{\ell}{\rightsquigarrow}(e,u)\), and show that there exists a path \(\tau^{\prime}\) that witnesses the right side of the implication, say \((s,v)\stackrel{\ell-\mathtt{P}}{\rightsquigarrow}(e,u^{\prime})\), where \(u\equiv u^{\prime}\). By Lemma 6, we assume that \(\tau\) is a basic path. We present some of the cases; the remaining parts are in the appendix.
### Proof of Lemma 18.1a
Let \(\tau\) be a basic path of length \(\ell\not\in\mathtt{core}(v)\) such that \((s,v)\stackrel{\ell}{\rightsquigarrow}(e,u)\) via \(\tau\). We construct from \(\tau\) a path \(\tau^{\prime}\) for \((s,v)\stackrel{\ell-\mathtt{P}}{\rightsquigarrow}(e,u^{\prime})\), such that \(u\equiv u^{\prime}\). The proof is divided into two cases.
#### Case 1a.1: The counter values in \(\tau\) stay above \(\mathtt{cT}_{prev}\)
If there is no position in \(\tau\) with counter value \(\mathtt{cT}_{prev}\), then in particular \(\tau\) has no zero-transitions. Since \(\ell\notin\mathtt{core}(v)\), we have in particular \(\ell\geq\mathtt{sT}>3b^{5}\mathtt{P}\). Thus, there are at least \(3b^{4}\mathtt{P}\) cycle repetitions in \(\tau\).
If there is a non-positive cycle \(c\) that is repeated at least \(\mathtt{P}\) times, we can obtain \(\tau^{\prime}\) by removing \(\frac{\mathtt{P}}{|c|}\) copies of \(c\), as the counter values along \(\tau^{\prime}\) are at least as high as the corresponding ones in \(\tau\). Observe that \(\tau^{\prime}\) is of length \(\ell-\mathtt{P}\) from \((s,v)\) to \((e,u^{\prime})\) with \(u^{\prime}=u-\mathit{effect}(c)\frac{\mathtt{P}}{|c|}\). Since \(u^{\prime}\geq u\geq\mathtt{cT}_{prev}\), we get \(u\equiv u^{\prime}\).
Otherwise, each non-positive cycle in \(\tau\) is taken at most \(\mathtt{P}\) times. Thus, the positive cycles are repeated at least \(3b^{4}\mathtt{P}-b\mathtt{P}\geq 3b^{3}\mathtt{P}\) times. In particular, there exists a positive cycle \(c\) that repeats at least \(3b^{2}\mathtt{P}\) times. By removing \(\frac{\mathtt{P}}{|c|}\) occurrences of it, we obtain a path \(\tau^{\prime}\) of length \(\ell-\mathtt{P}\). Notice first that this path is valid. Indeed, up until the cycle \(c\) is taken, the path \(\tau^{\prime}\) coincides with \(\tau\), so the counter remains above \(\mathtt{cT}_{prev}\). Since \(c\) is a positive cycle, after completing its iterations, the counter value becomes at least \(3b^{2}\mathtt{P}-\mathtt{P}+\mathtt{cT}_{prev}\). Then, even if all remaining transitions in the negative cycles have effect \(-1\), the counter value is reduced by at most \(b^{2}\mathtt{P}\) (as there are at most \((b-1)\mathtt{P}\) remaining cycles, each of effect at least \(-b\), and the simple paths in \(\tau\) can reduce by another \(b\) at most). Thus, the value of the counter remains at least \(3b^{2}\mathtt{P}-\mathtt{P}+\mathtt{cT}_{prev}-b^{2}\mathtt{P}>\mathtt{cT}_{ prev}\). Finally, let \((e,u^{\prime})\) be the configuration reached at the end of \(\tau^{\prime}\), then \(u-u^{\prime}=\mathit{effect}(c)\frac{\mathtt{P}}{|c|}\), so \(u\equiv u^{\prime}\).
#### Case 1a.2: \(\tau\) reaches counter value \(\mathtt{cT}_{prev}\)
Let \(0\leq z_{f}\leq z_{u}\leq\ell\) be the first and ultimate positions in \(\tau\) where the counter value is exactly \(\mathtt{cT}_{prev}\). We split \(\tau\) into three parts: \(\tau_{1}=\tau[0\..\ z_{f}),\tau_{2}=\tau[z_{f}\..\ z_{u}),\tau_{3}=\tau[z_{u}\..\ \ell]\) (it could be that \(z_{f}=z_{u}\), in which case the middle part is empty). Since \(\tau\) is of length \(\ell\geq\mathtt{sT}\geq b^{9}\cdot\mathtt{P}\), at least one of the parts above is of length at least \(b^{8}\cdot\mathtt{P}\) (recall \(b\geq 3\)). We split according to which part that is. For simplicity, we start with the cases that \(\tau_{2}\) or \(\tau_{3}\) are long, and only then handle the case of a long \(\tau_{1}\).
1. _The middle part \(\tau_{2}=\tau[z_{f}\..\ z_{u}]\) is of length at least \(b^{8}\mathtt{P}\)._ As \(\tau_{2}\) is of length at least \(b^{8}\cdot\mathtt{P}\), some cycle \(c\) in it must repeat at least \(b^{6}\cdot\mathtt{P}\) times. If \(c\) is balanced, we can obtain \(\tau^{\prime}\) by removing \(\frac{\mathtt{P}}{|c|}\) of its repetitions. If \(c\) is positive, starting at position \(x\) with counter value \(v_{x}\), then the counter value at position \(y\) where \(c\)'s repetitions end is at least \(v_{x}+b^{6}\cdot\mathtt{P}\). As \(\tau_{2}\) eventually gets down to \(\mathtt{cT}_{prev}<\mathtt{P}\), there must be a negative cycle \(c_{-}\) that repeats at least \(b\cdot\mathtt{P}\) times between position \(y\) and the first position after \(y\) that has the counter value \(v_{x}+\frac{b^{6}\cdot\mathtt{P}}{2}\). Hence, we can obtain \(\tau^{\prime}\) by removing repetitions of \(c\) and \(c_{-}\), as per Proposition 15, ensuring that the only affected values are above \(v_{x}\). If \(c\) is negative, starting at position \(x\) with counter value \(v_{x}\), then \(v_{x}\geq b^{6}\cdot\mathtt{P}\). As \(\tau_{2}\) starts with counter value \(\mathtt{cT}_{prev}\), and \(\mathtt{cT}_{prev}<\mathtt{P}\) (Proposition 16), there must be a positive cycle \(c_{+}\) that repeats at least \(b\cdot\mathtt{P}\) times between the last position with counter value \(\frac{b^{6}\cdot\mathtt{P}}{2}\) and \(x\). Hence, we can obtain \(\tau^{\prime}\) by removing repetitions of \(c\) and \(c_{+}\), as per Proposition 15, ensuring that the only affected values are above \(b^{5}\cdot\mathtt{P}\).
2. _The last part \(\tau_{3}=\tau[z_{u}\..\ \ell]\) is of length at least \(b^{8}\mathtt{P}\)._ As in the previous case, a cycle \(c\) must repeat in \(\tau_{3}\) at least \(b^{6}\cdot\mathtt{P}\) times. If \(c\) is balanced, we can remove \(\frac{\mathtt{P}}{|c|}\) of its repetitions, getting the desired path \(\tau^{\prime}\). Otherwise, it must be that \(\tau_{3}\) stays above \(\mathtt{cT}_{prev}\), and reaches a value at least \(b^{6}\cdot\mathtt{P}\). Indeed, if \(c\) is positive then its repetitions end at some position \(x\) with a counter value at least that high, and if it is negative it starts at some position \(x\) with a counter value at least that high. If the counter value also drops to \(\frac{b^{6}\cdot\mathtt{P}}{2}\) after position \(x\), then we can remove positive and negative cycles exactly as in the previous case. Otherwise, we can just remove \(\frac{\mathtt{P}}{|c|}\) repetitions of \(c\), guaranteeing that the counter value at the end of \(\tau_{3}\) is above \(\mathtt{cT}_{prev}\).
3. _Only the first part \(\tau_{1}=\tau[0\..\ z_{f})\) is of length at least \(b^{8}\mathtt{P}\)._ If any of the other parts is long, we shorten it as in the previous cases. Otherwise, their combined length is less than \(2b^{8}\cdot\mathtt{P}<\frac{\mathtt{sT}}{2}\), implying that the first part \(\tau_{1}\) is longer than \(\S_{i}(v)+\frac{\mathtt{sT}}{2}\). Hence, by Lemma 17, there are 'fast' and 'slow' cycles \(c_{f}\) and \(c_{s}\), respectively, of slopes \(\mathbf{s}_{f}<\mathbf{s}_{s}\), such that each of them repeats at least \(b^{4}\cdot\mathtt{P}\) times in \(\tau_{1}\). Thus, by Proposition 15, we can add and/or remove some repetitions of \(c_{f}\) and \(c_{s}\), such that \(\tau_{1}\) is shortened by exactly \(\mathtt{P}\). Yet, we should ensure that the resulting path \(\tau^{\prime}\) is valid, in the sense that its corresponding first part \(\tau^{\prime}_{1}\) cannot get the counter value to \(0\). We show it by cases: * If \(c_{f}\) or \(c_{s}\) is a balanced cycle, then we can remove the balanced cycle only, without changing the remaining counter values. * If there is a positive cycle \(c_{+}\) that repeats at least \(2b^{2}\cdot\mathtt{P}\) times, then the counter value climbs by at least \(2b^{2}\cdot\mathtt{P}\) between its value \(v_{x}\) at position \(x\), where \(c_{+}\) starts, and the position \(y\), where its repetitions end. As the counter gets down to \(\mathtt{cT}_{prev}\) at the end of \(\tau_{1}\), and \(\mathtt{cT}_{prev}<\mathtt{P}\) (Proposition 16), there must be a negative cycle \(c_{-}\) that repeats at least \(b\cdot\mathtt{P}\) times between position \(y\) and the first position after \(y\) that has
the counter value \(v_{x}+b^{2}\cdot\mathtt{P}\). Hence, we can remove repetitions of \(c_{+}\) and \(c_{-}\), as per Proposition 15, ensuring that the only affected values are above \(v_{x}\).
* Otherwise, both \(c_{f}\) and \(c_{s}\) are negative, implying that we add some repetitions of \(c_{f}\) and remove some repetitions of \(c_{s}\). We further split into two subcases:
* If \(c_{s}\) appears before \(c_{f}\) then there is no problem, as the only change of values will be their increase, and all the values were nonzero to begin with (as we are before \(z_{f}\)).
* If \(c_{f}\) appears first, ending at some position \(x\), while \(c_{s}\) starts at some later position \(y\), then a-priori it might be that repeating \(c_{f}\) up to \(b\cdot\mathtt{P}\) more times, as per Proposition 15, will take the counter value to \(0\). However, observe that since there are at most \(b-2\) positive cycles, and each of them can repeat at most \(2b^{2}\cdot\mathtt{P}-1\) times, the counter value \(v_{x}\) at position \(x\), and along the way until position \(y\), is at least \(v_{y}-(b-2)2b^{2}\cdot\mathtt{P}\), where \(v_{y}\) is the counter value at position \(y\). As \(c_{s}\) repeats at least \(b^{4}\cdot\mathtt{P}\) times, we have \(v_{y}\geq b^{4}\cdot\mathtt{P}\). Thus \(v_{x}\geq b^{4}\cdot\mathtt{P}-2(b-2)b^{2}\cdot\mathtt{P}>b^{2}\mathtt{P}\). Hence, repeating \(c_{f}\) up to \(b\cdot\mathtt{P}\) more times at position \(x\) cannot take the counter value to \(0\), until position \(y\), as required.
### Proof of Lemma 18.2a
#### The case of Segment \(\mathbb{S}_{0}\).
For a path of length \(\ell\), we have in Segment \(0\) that \(\mathsf{shift}(\ell)=\ell\), and indeed a path that is valid from \(v\) is also valid from \(v+\mathtt{P}\) and vice versa, as neither hits \(\mathtt{cT}_{prev}\): their maximal drop is \(\mathtt{sT}\), while \(v\geq\mathtt{cT}>\mathtt{P}+\mathtt{sT}>\mathtt{cT}_{prev}+\mathtt{sT}\).
We turn to the \(i\)th segment, for \(i\geq 1\). Consider a basic path \(\tau\) for \((s,v+\mathtt{P})\stackrel{{\mathsf{shift}(\ell)}}{{\leadsto}}(e,u)\). Recall that \(\mathsf{shift}(\ell)=\ell+\frac{\mathtt{P}}{-\mathtt{s}_{i}}\in[\mathbb{S}_{i}(v+\mathtt{P})\..\ \mathbb{S}_{i}(v+\mathtt{P})+\mathtt{sT})\). We construct from \(\tau\) a path \(\tau^{\prime}\) for \((s,v)\stackrel{{\ell}}{{\leadsto}}(e,u^{\prime})\), such that \(u\equiv u^{\prime}\), along the following cases.
#### Case 2a.1: The counter values in \(\tau\) stay above \(\mathtt{cT}_{prev}\)
As there is no position in \(\tau\) with counter value \(\mathtt{cT}_{prev}\), in particular \(\tau\) has no zero-transitions. We further split into two subcases:
1. If \(\tau\) does not have \(b\cdot\mathtt{P}\) repetitions of a 'relatively fast' cycle with slope \(\mathtt{s}_{j}\) for \(j\leq i\), then the drop of \(\tau\), and of every prefix of it, is at most \(X+Y\), where \(X\) stands for the drop outside 'slow' cycles of slope \(\mathtt{s}_{h}\) for \(h>i\), and \(Y\) for the rest of the drop. We have \(X<b^{3}\cdot\mathtt{P}\) and \(Y<(\ell+\frac{\mathtt{P}}{-\mathtt{s}_{i}})(-\mathtt{s}_{i+1})\). We claim that we can obtain \(\tau^{\prime}\) by removing \(\frac{\mathtt{P}}{-\mathtt{s}_{i}\cdot|c|}\) repetitions of any cycle \(c\) which repeats enough in \(\tau\), having that the drop \(D\) of \(\tau^{\prime}\) is less than \(v-\mathtt{cT}_{prev}\). Indeed, the maximal such drop \(D\) might be the result of removing only cycles of slope \((+1)\), whose total effect is \(\frac{\mathtt{P}}{-\mathtt{s}_{i}}\), having \(D\leq\frac{\mathtt{P}}{-\mathtt{s}_{i}}+X+Y\leq b\cdot\mathtt{P}+b^{3}\cdot\mathtt{P}+(\ell+\frac{\mathtt{P}}{-\mathtt{s}_{i}})(-\mathtt{s}_{i+1})\). Since \(\ell<\mathbb{S}_{i+1}(v+\mathtt{P})=\frac{1}{-\mathtt{s}_{i+1}}(v+\mathtt{P}-\mathtt{cT}_{prev})-b^{8}\cdot\mathtt{P}\), we have \[D\leq(b^{3}+b)\cdot\mathtt{P}+\left(\tfrac{1}{-\mathtt{s}_{i+1}}(v+\mathtt{P}-\mathtt{cT}_{prev})-b^{8}\cdot\mathtt{P}+\tfrac{\mathtt{P}}{-\mathtt{s}_{i}}\right)(-\mathtt{s}_{i+1})=\left(b^{3}+b+1+\tfrac{-\mathtt{s}_{i+1}}{-\mathtt{s}_{i}}\right)\cdot\mathtt{P}+v-\mathtt{cT}_{prev}-(-\mathtt{s}_{i+1})\cdot b^{8}\cdot\mathtt{P}<b^{4}\cdot\mathtt{P}-b^{7}\cdot\mathtt{P}+v-\mathtt{cT}_{prev}.\] It is thus left to show that \(b^{4}\cdot\mathtt{P}<b^{7}\cdot\mathtt{P}\), which obviously holds.
2. Otherwise, namely when \(\tau\) does have \(b\cdot\mathtt{P}\) repetitions of a 'relatively fast' cycle with slope \(\mathtt{s}_{j}\) for \(j\leq i\), let \(c\) be the first such cycle in \(\tau\). We can obtain \(\tau^{\prime}\) by removing \(\frac{\mathtt{P}}{-\mathtt{s}_{i}\cdot|c|}\) repetitions of \(c\): The counter value in \(\tau^{\prime}\), which starts with counter value \(v\), at the position after the repetitions of \(c\) will be at least as high as the counter value in \(\tau\), which starts
with counter value \(v+\mathtt{P}\), after the repetitions of \(c\). Notice that the counter value cannot hit \(\mathtt{cT}_{prev}\) before arriving at the repetitions of \(c\), by the argument of the previous subcase.
#### Case 2a.2: \(\tau\) reaches counter value \(\mathtt{cT}_{prev}\)
Again let \(\tau_{1}=\tau[0\..\ z_{f}),\tau_{2}=\tau[z_{f}\..\ z_{u}),\tau_{3}=\tau[z_{u}\..\ \mathtt{ shift}(\ell)]\) as in 1a.
In order to handle possible zero transitions, we shorten \(\tau_{1}\), such that the resulting first part \(\tau_{1}^{\prime}\) of \(\tau^{\prime}\), which starts with counter value \(v\), also ends with counter value exactly \(\mathtt{cT}_{prev}\). Since \(\tau_{1}\) reaches \(\mathtt{cT}_{prev}\) and is shorter than \(\mathbb{S}_{i+1}(v+\mathtt{P})\), it has by Lemma 17.2 at least \(b^{4}\cdot\mathtt{P}\) repetitions of a 'fast' cycle of slope \(\mathtt{s}_{f}\leq\mathtt{s}_{i}\). Let \(c_{f}\) be the first such cycle. We split into cases.
1. If \(\mathtt{s}_{f}=\mathtt{s}_{i}\), or \(\tau_{2}\) or \(\tau_{3}\) is of length at least \(b^{5}\cdot\mathtt{P}\), we can remove \(\frac{\mathtt{P}}{-\mathtt{s}_{f}\cdot|c_{f}|}\) repetitions of \(c_{f}\) in \(\tau_{1}\). Note that the resulting first part \(\tau_{1}^{\prime}\) of \(\tau^{\prime}\) indeed ends with counter value \(\mathtt{cT}_{prev}\). When \(\mathtt{s}_{f}=\mathtt{s}_{i}\), the resulting length of \(\tau^{\prime}\) is \(\ell\), as required; when \(\mathtt{s}_{f}<\mathtt{s}_{i}\), however, \(\tau^{\prime}\) will be longer than \(\ell\). Nevertheless, in this case, as \(\tau_{2}\) or \(\tau_{3}\) is of length at least \(b^{5}\cdot\mathtt{P}\), we can further shorten \(\tau_{2}\) or \(\tau_{3}\) without changing their effect, as per Proposition 15, analogously to 1 or 2, respectively, in the proof of Lemma 18.1a.2.
2. Otherwise, we are in the case that \(\tau_{1}\) has a 'really fast' cycle of slope \(\mathtt{s}_{f}<\mathtt{s}_{i}\) that repeats at least \(b^{4}\cdot\mathtt{P}\) times, and both \(\tau_{2}\) and \(\tau_{3}\) are of length less than \(b^{5}\cdot\mathtt{P}\). We claim that in this case \(\tau_{1}\) must also have \(b^{4}\cdot\mathtt{P}\) repetitions of a 'relatively slow' cycle \(c_{s}\) of slope \(\mathtt{s}_{s}\geq\mathtt{s}_{i}\). Indeed, assume toward contradiction that \(\tau_{1}\) has less than \(b^{4}\cdot\mathtt{P}\) repetitions of every cycle with slope \(\mathtt{s}_{j}\) for \(j\geq i\). Then the longest such path has less than \(b\) transitions of \((+1)\) out of cycles, \(b^{6}\cdot\mathtt{P}\) such transitions in cycles, and the rest of it consists of 'fast' cycles with slope indexed lower than \(i\). Thus its length is at most \(X+L\), where \(X=b^{6}\cdot\mathtt{P}\) bounds the \((+1)\) transitions, and \(L\) is the longest length to drop from counter value \(v+\mathtt{P}+X\) to \(\mathtt{cT}_{prev}\) with 'fast' cycles. Thus, \(L\leq\frac{1}{-\mathtt{s}_{i-1}}(v+\mathtt{P}+X-\mathtt{cT}_{prev})\). Now, we have that the length of \(\tau\) is at least \(\mathbb{S}_{i}(v+\mathtt{P})=\frac{1}{-\mathtt{s}_{i}}(v-\mathtt{cT}_{prev})-b^{8}\cdot\mathtt{P}\). Thus, parts \(\tau_{2}\) and \(\tau_{3}\) of \(\tau\) are of length at least \(Z=\frac{1}{-\mathtt{s}_{i}}(v-\mathtt{cT}_{prev})-b^{8}\cdot\mathtt{P}-\frac{1}{-\mathtt{s}_{i-1}}(v+\mathtt{P}+X-\mathtt{cT}_{prev})-X\geq(\frac{1}{-\mathtt{s}_{i}}-\frac{1}{-\mathtt{s}_{i-1}})(v-\mathtt{cT}_{prev})-b^{8}\cdot\mathtt{P}-(1+\frac{1}{-\mathtt{s}_{i-1}})b^{6}\cdot\mathtt{P}\). Since \(\frac{1}{-\mathtt{s}_{i}}-\frac{1}{-\mathtt{s}_{i-1}}\geq\frac{1}{b^{2}}\), \(1+\frac{1}{-\mathtt{s}_{i-1}}\leq b+1\), and \(v-\mathtt{cT}_{prev}\geq\mathtt{cT}-\mathtt{cT}_{prev}>\mathtt{cT}-\mathtt{P}>3b^{10}\mathtt{P}\), we have \(Z\geq\frac{1}{b^{2}}(3b^{10}\cdot\mathtt{P})-b^{8}\cdot\mathtt{P}-(b+1)b^{6}\cdot\mathtt{P}=2b^{8}\cdot\mathtt{P}-(b+1)b^{6}\cdot\mathtt{P}\). Therefore, at least one of \(\tau_{2}\) and \(\tau_{3}\) is of length at least \(b^{7}\cdot\mathtt{P}\), leading to a contradiction. So, we are in the case that \(\tau_{1}\) has at least \(b^{4}\cdot\mathtt{P}\) repetitions of a 'really fast' cycle \(c_{f}\) of slope \(\mathtt{s}_{f}<\mathtt{s}_{i}\) as well as \(b^{4}\cdot\mathtt{P}\) repetitions of a 'relatively slow' cycle \(c_{s}\) of slope \(\mathtt{s}_{s}\geq\mathtt{s}_{i}\). By analyzing the different possible orders of \(c_{s}\) and \(c_{f}\), we can cut and repeat the cycles far enough from \(0\) so as to construct valid paths. See the Appendix for details.
### Proof of Theorem 11
The proof is by induction over the structure of \(\varphi\), where Theorem 5 already provides the periodicity for all CTL operators.
It remains to plug \(UA\) into the induction by showing (1) the (\(\mathtt{cT},\mathtt{P}\))-periodicity of a formula \(\varphi=\psi_{1}UA\psi_{2}\) with respect to an OCA \(\mathcal{A}\), provided that its subformulas are (\(\mathtt{cT}_{prev},\mathtt{p}_{prev}\))-periodic; and (2) by showing that the period \(\mathtt{P}\) and threshold \(\mathtt{cT}\) are single-exponential in \(n=|\mathcal{A}|\) and in the nesting depth of \(\varphi\).
1. We show that for every state \(s\in S\) and counters \(v,v^{\prime}>\mathtt{cT}\), if \(v\equiv v^{\prime}\mod\mathtt{P}\) then \((s,v)\models\varphi\iff(s,v^{\prime})\models\varphi\). Without loss of generality, write \(v^{\prime}=v+z\cdot\mathtt{P}\), for some \(z\in\mathbb{N}\).
If \((s,v)\models\varphi\) then by the definition of the \(AU\) operator and the completeness of \(\mathcal{A}\) we have i) there is a level \(\ell\), such that for every state \(e\) and counter value \(u\), if \((s,v)\stackrel{{\ell}}{{\rightsquigarrow}}(e,u)\) then \((e,u)\models\psi_{2}\), and ii) for every level \(m<\ell\), state \(h\) and counter value \(x\), if \((s,v)\stackrel{{ m}}{{\rightsquigarrow}}(h,x)\) then \((h,x)\models\psi_{1}\). Observe first that if \(\ell\not\in\mathsf{core}(v)\), then there also exists a level \(\hat{\ell}<\ell\) witnessing \((s,v)\models\varphi\), such that \(\hat{\ell}\in\mathsf{core}(v)\). Indeed, we obtain \(\hat{\ell}\), by choosing the largest level \(\hat{\ell}\) in the core of \(\ell\)'s segment, such that \(\ell\equiv\hat{\ell}\mod\mathsf{P}\). As \(\hat{\ell}<\ell\), it directly follows that for every level \(\hat{m}<\hat{\ell}\), state \(\hat{h}\) and counter value \(\hat{x}\), if \((s,v)\stackrel{{\hat{m}}}{{\rightsquigarrow}}(\hat{h},\hat{x})\) then \((\hat{h},\hat{x})\models\psi_{1}\). Now, assume toward contradiction that there is a state \(\hat{e}\) and a counter value \(\hat{u}\), such that \((s,v)\stackrel{{\hat{\ell}}}{{\rightsquigarrow}}(\hat{e},\hat{u})\) and \((\hat{e},\hat{u})\not\models\psi_{2}\). Then by (possibly several applications of) Lemma 18.1b, there is also a counter value \(\hat{\hat{u}}\equiv_{\mathsf{ct}_{prev},\mathsf{p}_{prev}}\hat{u}\), such that \((s,v)\stackrel{{\ell}}{{\rightsquigarrow}}(\hat{e},\hat{\hat{u}})\). As \(\psi_{2}\) is \((\mathsf{ct}_{prev},\mathsf{p}_{prev})\)-periodic, we have \((\hat{e},\hat{\hat{u}})\not\models\psi_{2}\), leading to a contradiction. Next, we claim that the level \(\ell^{\prime}=\mathsf{shift}^{z}(\hat{\ell})\), obtained by \(z\) (recall that \(z=\frac{v^{\prime}-v}{\mathsf{P}}\)) applications of the shift function on \(\hat{\ell}\), witnesses \((s,v^{\prime})\models\varphi\), namely that i) for every state \(e^{\prime}\) and counter value \(u^{\prime}\), if \((s,v^{\prime})\stackrel{{\ell^{\prime}}}{{\rightsquigarrow}}(e^{ \prime},u^{\prime})\) then \((e^{\prime},u^{\prime})\models\psi_{2}\), and ii) for every level \(m^{\prime}<\ell^{\prime}\), state \(h^{\prime}\) and counter value \(x^{\prime}\), if \((s,v^{\prime})\stackrel{{ m^{\prime}}}{{\rightsquigarrow}}(h^{ \prime},x^{\prime})\) then \((h^{\prime},x^{\prime})\models\psi_{1}\). 
Indeed, i) were it the case that \((e^{\prime},u^{\prime})\not\models\psi_{2}\) then by (\(z\) applications of) Lemma 18.2a, there was also a counter value \(u^{\prime\prime}\equiv_{\mathsf{ct}_{prev},\mathsf{p}_{prev}}u^{\prime}\), such that \((s,v)\stackrel{{\hat{\ell}}}{{\rightsquigarrow}}(e^{\prime},u^{\prime\prime})\) and therefore \((e^{\prime},u^{\prime\prime})\not\models\psi_{2}\), leading to a contradiction; and ii) were it the case that \((s,v^{\prime})\stackrel{{ m^{\prime}}}{{\rightsquigarrow}}(h^{\prime},x^{\prime})\) and \((h^{\prime},x^{\prime})\not\models\psi_{1}\) then a) by Lemma 18.1a, as in the argument above, there is also a level \(\tilde{m}\leq m^{\prime}\), such that \(\tilde{m}\) is in the core of \(m^{\prime}\)'s segment and \((s,v^{\prime})\stackrel{{\tilde{m}}}{{\rightsquigarrow}}(h^{\prime},\tilde{x})\) where \(\tilde{x}\equiv_{\mathsf{ct}_{prev},\mathsf{p}_{prev}}x^{\prime}\), and b) by Lemma 18.2a there is a level \(\hat{m}<\hat{\ell}\), such that \((s,v)\stackrel{{\hat{m}}}{{\rightsquigarrow}}(h^{\prime},\hat{x})\) where \(\hat{x}\equiv_{\mathsf{ct}_{prev},\mathsf{p}_{prev}}x^{\prime}\) and therefore \((h^{\prime},\hat{x})\not\models\psi_{1}\), leading to a contradiction.

For the other direction, if \((s,v^{\prime})\models\varphi\) then we have \((s,v)\models\varphi\) by an argument analogous to the above, while using Lemma 18.2b instead of Lemma 18.2a.

2. The threshold \(\mathtt{cT}\) and period \(\mathtt{P}\) are calculated along the induction on the structure of the formula \(\varphi\). They start with threshold \(0\) and period \(1\), and their increase in each step of the induction depends on the outermost operator. Observe first that we can take as the worst case the same increase in every step, that of the \(UA\) case, since it guarantees the others. Namely, its required threshold, based on the threshold and period of the subformulas, is bigger than the threshold required in the other cases, and its required period is divisible by the periods required for the other cases. Next, notice that both the threshold and period in the \(UA\) case only depend on the periods of the subformulas (i.e., not on their thresholds), so it is enough to show that the period is singly exponential in \(n\) and the nesting depth of \(\varphi\). The period in the \(UA\) case is defined to be \(\mathsf{P}(\varphi)=B\cdot\mathrm{lcm}(\mathsf{P}(\psi_{1}),\mathsf{P}(\psi_{2}))\), where \(B=\mathrm{lcm}[1..2b^{3}]\), and \(b\) is the bound on the length of a linear path scheme for \(\mathcal{A}\). By [17], \(b\) is polynomial in \(n\), and as \(\mathrm{lcm}[1..2b^{3}]<4^{2b^{3}}\), we get that \(B\) is singly exponential in \(n\). Considering \(\mathrm{lcm}(\mathsf{P}(\psi_{1}),\mathsf{P}(\psi_{2}))\), while in general \(\mathrm{lcm}(x,y)\) of two numbers \(x\) and \(y\) might be equal to their multiplication, in our case, as both \(\mathsf{P}(\psi_{1})\) and \(\mathsf{P}(\psi_{2})\) are calculated along the induction via the same scheme above, they are both powers of \(B\). Hence, \(\mathrm{lcm}(\mathsf{P}(\psi_{1}),\mathsf{P}(\psi_{2}))=\mathrm{max}(\mathsf{P}(\psi_{1}),\mathsf{P}(\psi_{2}))\). Thus, we get that \(\mathsf{P}(\varphi)\leq B^{x}\), where \(x\) is bounded by the nesting depth of \(\varphi\). |
2310.17137 | Large-Scale Gaussian Processes via Alternating Projection | Training and inference in Gaussian processes (GPs) require solving linear
systems with $n\times n$ kernel matrices. To address the prohibitive
$\mathcal{O}(n^3)$ time complexity, recent work has employed fast iterative
methods, like conjugate gradients (CG). However, as datasets increase in
magnitude, the kernel matrices become increasingly ill-conditioned and still
require $\mathcal{O}(n^2)$ space without partitioning. Thus, while CG increases
the size of datasets GPs can be trained on, modern datasets reach scales beyond
its applicability. In this work, we propose an iterative method which only
accesses subblocks of the kernel matrix, effectively enabling mini-batching.
Our algorithm, based on alternating projection, has $\mathcal{O}(n)$
per-iteration time and space complexity, solving many of the practical
challenges of scaling GPs to very large datasets. Theoretically, we prove the
method enjoys linear convergence. Empirically, we demonstrate its fast
convergence in practice and robustness to ill-conditioning. On large-scale
benchmark datasets with up to four million data points, our approach
accelerates GP training and inference by speed-up factors up to $27\times$ and
$72 \times$, respectively, compared to CG. | Kaiwen Wu, Jonathan Wenger, Haydn Jones, Geoff Pleiss, Jacob R. Gardner | 2023-10-26T04:20:36Z | http://arxiv.org/abs/2310.17137v2 | # Large-Scale Gaussian Processes via Alternating Projection
###### Abstract
Gaussian process (GP) hyperparameter optimization requires repeatedly solving linear systems with \(n\times n\) kernel matrices. To address the prohibitive \(\mathcal{O}(n^{3})\) time complexity, recent work has employed fast iterative numerical methods, like conjugate gradients (CG). However, as datasets increase in magnitude, the corresponding kernel matrices become increasingly ill-conditioned and still require \(\mathcal{O}(n^{2})\) space without partitioning. Thus, while CG increases the size of datasets GPs can be trained on, modern datasets reach scales beyond its applicability. In this work, we propose an iterative method which only accesses subblocks of the kernel matrix, effectively enabling _mini-batching_. Our algorithm, based on alternating projection, has \(\mathcal{O}(n)\) per-iteration time and space complexity, solving many of the practical challenges of scaling GPs to very large datasets. Theoretically, we prove our method enjoys linear convergence, and empirically we demonstrate its robustness to ill-conditioning. On large-scale benchmark datasets with up to four million datapoints, our approach accelerates training by a factor of \(2\times\) to \(27\times\) compared to CG.
## 1 Introduction
Scaling Gaussian process (GP) models to large datasets has been a central research topic in probabilistic machine learning for nearly two decades. The primary challenge is the cubic complexity of computing both the marginal log likelihood (MLL) during training and the predictive distribution at test time. Over the years, this problem has been addressed both from a modeling perspective (_e.g._, Hensman et al., 2013, 2015; Titsias, 2009; Snelson and Ghahramani, 2005; Salimbeni et al., 2018; Jankowiak et al., 2020; Katzfuss and Guinness, 2021) and from a numerical methods perspective (_e.g._, Cutajar et al., 2016; Pleiss et al., 2018; Gardner et al., 2018; Wang et al., 2019; Maddox et al., 2022), and contemporary work even unifies these perspectives to a degree (Artemev et al., 2021; Wenger et al., 2022). In recent years, numerical methods have increasingly relied on matrix-free iterative methods, which access the kernel matrix through matrix-vector multiplications. These iterations are suitable for GPU acceleration (Gardner et al., 2018) and have shown success on medium to moderately large datasets (Wang et al., 2019), outperforming modeling-based approaches such as stochastic variational GPs (SVGP) (Hensman et al., 2013).
Most GP training and inference approaches based on iterative methods use classic general-purpose
Figure 1: Comparison of the convergence of alternating projection and (preconditioned) conjugate gradient. Both algorithms are _initialized at zero_, but CG increases the residual after the first iteration. **Left:** While the asymptotic convergence rate of CG can be faster than that of alternating projection, CG does not find a better solution than alternating projection in the first 1000 iterations. **Right:** CG struggles with convergence due to ill-conditioning and does not reach the tolerance \(\delta\). In contrast, alternating projection converges. See §4 for more details.
algorithms for matrix solves, such as conjugate gradients (CG) (Cutajar et al., 2016; Gardner et al., 2018; Wang et al., 2019), MINRES (Pleiss et al., 2020), or (stochastic) gradient descent (Lin et al., 2023). There is reason to believe that such algorithms are sub-optimal for modern hardware-accelerated Gaussian processes. For example, CG was purpose-built for sparse linear systems that require high-precision solutions. Neither of these properties applies to GP regression: the necessary solves involve dense covariance matrices, and tasks such as hyperparameter optimization can be performed with extremely coarse-grained solves (Wang et al., 2019; Maddox et al., 2022). These characteristics of large-scale dense operations and low-precision amenability are in line with existing trends in machine learning (Courbariaux et al., 2015; Micikevicius et al., 2018), but ultimately place Gaussian processes at odds with much of the literature on numerical methods.
Much in the way that deep learning has been revolutionized by purpose-built optimizers that exploit properties of neural networks (Kingma and Ba, 2015; Loshchilov and Hutter, 2019), this paper aims to accelerate GPs with a purpose-built method leveraging (coarse-grained) covariance matrix solves on modern hardware. We introduce an iterative method to compute gradients of the marginal log-likelihood (MLL) and the posterior mean, that improves over CG in the following ways: 1) It requires \(\mathcal{O}(n)\) computation per iteration (rather than CG's \(\mathcal{O}(n^{2})\)); 2) It converges rapidly and monotonically in its early stages (but does not necessarily obtain higher precision than CG); and 3) It demonstrates improved numerical stability in floating point arithmetic.
In summary, we make the following contributions:
* We propose an iterative method to train Gaussian processes, which computes the MLL derivatives and posterior mean via _alternating projection_. Each update accesses only subblocks of the kernel matrix, has linear complexity, and decreases the residual near-monotonically.
* We prove that our algorithm converges linearly at a rate no slower than gradient descent, despite never operating on the full kernel matrix. Empirically, our method achieves a 2-27\(\times\) speed-up over CG on a wide range of datasets.
* As a demonstration of its scalability and robustness to ill-conditioning, we are able to train a GP on 4 million data points--to the best of our knowledge, the largest dataset reported in the literature to date without using inducing points or similar modeling approximations. We find that our method outperforms SVGP by a significant margin at this scale.
## 2 Setup and Background
**Notation.** Let \((\mathbf{X},\mathbf{y})\) be a training set of \(n\) training inputs \(\mathbf{X}=(\mathbf{x}_{1}\quad\cdots\quad\mathbf{x}_{n})^{\top}\in\mathcal{X }\subseteq\mathbb{R}^{n\times d}\) and labels \(\mathbf{y}=(y_{1}\quad\cdots\quad y_{n})^{\top}\in\mathbb{R}^{n}\). Let the set \(\{1,2,\ldots,n\}\) be denoted by \([n]\). Given a matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\) and an index set \(I\subseteq[n]\), \(\mathbf{A}_{I}=\mathbf{A}_{I,:}\) is the \(|I|\times n\) row-indexed submatrix, \(\mathbf{A}_{\cdot,I}\) the \(n\times|I|\) column-indexed submatrix, and \(\mathbf{A}_{I,I}\) is the \(|I|\times|I|\) principal submatrix. We use similar indexing notations for vectors.
Now, let \(f:\mathcal{X}\rightarrow\mathbb{R}\) be a latent function, and let \(k_{\mathbf{\theta}}:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\) be a (known) positive definite kernel function with hyperparameters \(\mathbf{\theta}\). We write \(\mathbf{f}=f(\mathbf{X})=(f(\mathbf{x}_{1})\quad\cdots\quad f(\mathbf{x}_{n}))^{\top}\in\mathbb{R}^{n}\). Similarly, \(k_{\mathbf{\theta}}(\mathbf{X},\cdot):\mathcal{X}\rightarrow\mathbb{R}^{n}\) denotes the vector-valued function given by \((k(\mathbf{x}_{1},\cdot)\quad\cdots\quad k(\mathbf{x}_{n},\cdot))^{\top}\), and \(\mathbf{K}_{\mathbf{\theta}}\in\mathbb{R}^{n\times n}\) is the Gram matrix with \([\mathbf{K}_{\mathbf{\theta}}]_{ij}=k_{\mathbf{\theta}}(\mathbf{x}_{i},\mathbf{x}_{j})\). We omit the subscript \(\mathbf{\theta}\) unless the context needs it.
**Gaussian Process Regression.** In supervised GP regression, we assume a response-generating function \(f\) that is Gaussian process distributed a priori--_i.e._\(f\sim\mathcal{GP}\big{(}\mu,k_{\mathbf{\theta}}\big{)}\). For simplicity of presentation, we assume without loss of generality an exact observation model--_i.e._\(\mathbf{y}=f(\mathbf{X})\).1 Given a finite test dataset \(\mathbf{x}_{1}^{*},\ldots,\mathbf{x}_{M}^{*}\), we can obtain a posterior distribution over \(f(\mathbf{x}_{1}^{*}),\ldots,f(\mathbf{x}_{M}^{*})\) using standard Gaussian conditioning rules with the posterior mean and covariance:
Footnote 1: Note that we can easily recover an observational noise model by setting \(k_{\mathbf{\theta}}(\mathbf{x},\mathbf{x}^{\prime})=k_{\rm base}(\mathbf{x}, \mathbf{x}^{\prime})+\sigma^{2}\mathbb{I}[\mathbf{x}=\mathbf{x}^{\prime}, \mathbf{x}\in\mathbf{X}]\) for some \(k_{\rm base}\) and \(\sigma>0\).
\[\mathbb{E}[\mathbf{f}^{*}\mid\mathbf{f}]=\boldsymbol{\mu}+\mathbf{K}_{*\mathbf{f}}\mathbf{K}^{-1}(\mathbf{y}-\boldsymbol{\mu}),\qquad\mathbb{C}[\mathbf{f}^{*}\mid\mathbf{f}]=\mathbf{K}_{**}-\mathbf{K}_{*\mathbf{f}}\mathbf{K}^{-1}\mathbf{K}_{\mathbf{f}*}.\]
We refer the reader to Rasmussen and Williams (2006, Ch. 2) for more details.
**Hyperparameter Training.** The hyperparameters \(\mathbf{\theta}\) of the GP are learned by minimizing the negative marginal log likelihood (MLL) \(\ell(\mathbf{\theta}):=-\log p(\mathbf{y};\mathbf{\theta})\). With a Gaussian process prior on \(f\), we have \(p(\mathbf{y};\mathbf{\theta})=\mathcal{N}(\mathbf{y};\mathbf{\mu},\mathbf{K}_{\mathbf{ \theta}})\), yielding the following minimization:
\[\operatorname*{minimize}_{\mathbf{\theta}}\ell(\mathbf{\theta})\stackrel{{ \mathrm{c}}}{{=}}\tfrac{1}{2}\left(\mathbf{y}^{\top}\mathbf{K}_{\mathbf{ \theta}}^{-1}\mathbf{y}+\log\det(\mathbf{K}_{\mathbf{\theta}})\right) \tag{1}\]
Equation (1) is commonly optimized with first-order methods, which require an (unbiased) estimate of \(\frac{\partial\ell(\mathbf{\theta})}{\partial\theta}\). Unfortunately, as (1) cannot be written in the usual \(\sum_{i=1}^{n}\ell(\mathbf{x}_{i},y_{i})\) form common to many machine learning algorithms, standard minibatching strategies are not readily applicable. Following prior work (_e.g._ Cutajar et al., 2016; Gardner et al., 2018; Wenger et al., 2022), we use the following unbiased estimate:
\[-\tfrac{1}{2}\mathbf{y}^{\top}\mathbf{K}_{\mathbf{\theta}}^{-1}\frac{\partial\mathbf{K}_{\mathbf{\theta}}}{\partial\theta}\mathbf{K}_{\mathbf{\theta}}^{-1}\mathbf{y}+\tfrac{1}{2l}\sum_{i=1}^{l}\left(\mathbf{z}_{i}^{\top}\mathbf{K}_{\mathbf{\theta}}^{-1}\right)\frac{\partial\mathbf{K}_{\mathbf{\theta}}}{\partial\theta}\mathbf{z}_{i}, \tag{2}\]
where \(\mathbf{z}_{i}\) are i.i.d. random vectors with \(\mathbb{E}\left[\mathbf{z}_{i}\right]=\mathbf{0}\) and \(\mathbb{E}\left[\mathbf{z}_{i}\mathbf{z}_{i}^{\top}\right]=\mathbf{I}\). Note that the second term is an unbiased approximation of \(\tfrac{1}{2}\mathbf{tr}\left(\mathbf{K}_{\mathbf{\theta}}^{-1}\frac{\partial\mathbf{K}_{\mathbf{\theta}}}{\partial\theta}\right)\). Crucially, computing (2) primarily involves computing solves with \(\mathbf{K}_{\mathbf{\theta}}\).
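The following is a minimal NumPy sketch of the estimator in (2) for a single hyperparameter, assuming a generic `solve(K, B)` routine (e.g., CG, or the alternating-projection solver of §3) and a precomputed derivative matrix `dK`; all names are illustrative rather than any library's API.

```python
import numpy as np

def mll_grad_estimate(K, dK, y, solve, l=15, rng=None):
    """Unbiased estimate of the MLL derivative in Eq. (2).

    K: (n, n) kernel matrix; dK: (n, n) derivative of K w.r.t. one
    hyperparameter; y: (n,) labels; solve: callable returning K^{-1} B.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(y)
    Z = rng.standard_normal((n, l))           # probes with E[z z^T] = I
    B = np.column_stack([y, Z])               # batched right-hand sides
    W = solve(K, B)                           # K^{-1} [y, z_1, ..., z_l]
    Kinv_y, Kinv_Z = W[:, 0], W[:, 1:]
    quad = -0.5 * Kinv_y @ dK @ Kinv_y        # first term of Eq. (2)
    trace = np.sum(Kinv_Z * (dK @ Z)) / (2 * l)  # Hutchinson estimate of
    return quad + trace                       # (1/2) tr(K^{-1} dK)
```

Note that the same batched solve \(\mathbf{K}\mathbf{W}=\mathbf{B}\) can be reused across all hyperparameters, since only \(\frac{\partial\mathbf{K}_{\mathbf{\theta}}}{\partial\theta}\) changes between derivatives.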
**Linear Solves via Iterative Methods.** When \(\mathbf{K}\) is large, direct methods for solving \(\mathbf{K}\mathbf{w}=\mathbf{b}\) are prohibitively slow. Iterative methods, such as conjugate gradients (CG), offer reduced asymptotic complexity (Cutajar et al., 2016), significant GPU acceleration (Gardner et al., 2018), and memory savings if \(\mathbf{K}\) is accessed in a map-reduce fashion (Wang et al., 2019; Charlier et al., 2021).
CG minimizes the quadratic objective \(\frac{1}{2}\mathbf{w}^{\top}\mathbf{K}\mathbf{w}-\mathbf{b}^{\top}\mathbf{w}\) by iteratively searching along conjugate directions. Each iteration requires a \(\mathcal{O}(n^{2})\) matrix-vector multiplication with \(\mathbf{K}\). In exact arithmetic, CG returns an exact solution after \(n\) iterations. In practice, for ill-conditioned problems, CG is terminated once the residual \(\mathbf{r}=\mathbf{b}-\mathbf{K}\mathbf{w}\) is small enough, _e.g._, \(\|\mathbf{r}\|\leq\delta\|\mathbf{b}\|\) for some predefined tolerance parameter \(\delta\).
For GP hyperparameter learning, large values of the tolerance \(\delta\) are often used despite the potential for overfitting (Potapczynski et al., 2021); for example, \(\delta=1\) is used in practice (Wang et al., 2019; Maddox et al., 2022) and has been the default setting of CG during training in popular GP software packages (_e.g._, GPyTorch2 and GPflow3).
Footnote 2: GPyTorch setting [https://rb.gy/qi8er](https://rb.gy/qi8er)
Footnote 3: GPflow setting [https://rb.gy/mozif](https://rb.gy/mozif)
For hyperparameter training, each MLL derivative evaluation requires a batched linear solve \(\mathbf{K}\mathbf{W}=\mathbf{B}\), where \(\mathbf{B}=(\mathbf{y}\quad\mathbf{z}_{1}\quad\ldots\quad\mathbf{z}_{l})\) and the \(\mathbf{z}_{i}\) are random samples for stochastic MLL derivative estimation in (2).
**RKHS.** Every kernel \(k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) induces a space of functions \(\mathcal{H}:=\overline{\text{span}}\{k(\mathbf{x},\cdot):\mathbf{x}\in\mathcal{X}\}\subset\mathbb{R}^{\mathcal{X}}\), known as a reproducing kernel Hilbert space (RKHS), where the inner product \(\langle\cdot,\cdot\rangle_{\mathcal{H}}\) is defined as \(\langle k(\mathbf{x},\cdot),k(\mathbf{x}^{\prime},\cdot)\rangle_{\mathcal{H}}=k(\mathbf{x},\mathbf{x}^{\prime})\) for all \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}\).
**RKHS Projection.** Define the following finite dimensional linear subspaces of \(\mathcal{H}\) for indices \(I\subseteq[n]\):
\[\begin{split} V_{[n]}&=\text{span}\{k(\mathbf{x}_{i}, \cdot):i=1,2,\cdots,n\}\subset\mathcal{H},\\ V_{I}&=\text{span}\{k(\mathbf{x}_{i},\cdot):i\in I \}\subseteq V_{[n]},\end{split} \tag{3}\]
By definition these subspaces contain functions of the form \(f(\cdot)=\sum_{i=1}^{n}\alpha_{i}k(\mathbf{x}_{i},\cdot)\) and \(f(\cdot)=\sum_{i\in I}\alpha_{i}k(\mathbf{x}_{i},\cdot)\) respectively. We can map any \(f\in\mathcal{H}\) onto these subspaces using the projection operator.
**Definition 1** (Projection Operator).: _Let \(V\subseteq\mathcal{H}\) be a closed linear subspace. The projection of any \(f\in\mathcal{H}\) onto \(V\) is given by the projection operator_
\[\operatorname{proj}_{V}(f)=\operatorname*{argmin}_{g\in V}\quad\tfrac{1}{2}\|f -g\|_{\mathcal{H}}^{2},\]
_which is well-defined, i.e. the unique minimizer exists._
Intuitively, the projection operator finds the best approximation of \(f\) in \(V\), where approximation error is measured by the norm \(\|\cdot\|_{\mathcal{H}}\). For \(V=V_{[n]}\) and \(V=V_{I}\), the projection operator has a simple form:
\[\begin{split}\operatorname{proj}_{V_{[n]}}(f)&=f( \mathbf{X})^{\top}\mathbf{K}^{-1}k(\mathbf{X},\cdot),\\ \operatorname{proj}_{V_{I}}(f)&=f(\mathbf{X})^{\top }\mathbf{E}_{I}^{\top}\mathbf{K}_{I,I}^{-1}\mathbf{E}_{I}k(\mathbf{X},\cdot). \end{split} \tag{4}\]
Here, \(\mathbf{E}_{I}\) denotes the rows of the identity matrix corresponding to \(I\). Importantly, these projections only evaluate \(f\) and the kernel \(k\) on the data \(\mathbf{X}\) (or the subset \(\mathbf{X}_{I}\)). In other words, it is unnecessary to evaluate \(f\) or \(k\) outside of \(\mathbf{X}\) (or \(\mathbf{X}_{I}\)). The complexity of computing the projection \(\operatorname{proj}_{V}(f)\) depends on the dimension of \(V\): \(\operatorname{proj}_{V_{[n]}}(f)\) takes \(\mathcal{O}(n^{3})\) time and \(\operatorname{proj}_{V_{I}}(f)\) takes \(\mathcal{O}(|I|^{3})\) time.
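As a quick sanity check of (4): the coefficients of \(\operatorname{proj}_{V_{I}}(f)\) on the index set \(I\) are \(\mathbf{K}_{I,I}^{-1}f(\mathbf{X})_{I}\), and the projection therefore interpolates \(f\) on \(\mathbf{X}_{I}\). A small NumPy sketch (the kernel and data below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, I = 6, np.array([1, 3, 4])                       # index set I, a subset of [n]
X = rng.standard_normal((n, 2))
K = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, -1))  # RBF Gram matrix
fX = rng.standard_normal(n)                          # evaluations f(X) of some f

alpha_I = np.linalg.solve(K[np.ix_(I, I)], fX[I])    # K_{I,I}^{-1} [f(X)]_I
proj_on_X = K[:, I] @ alpha_I                        # proj_{V_I}(f) evaluated on X
assert np.allclose(proj_on_X[I], fX[I])              # interpolates f on X_I
```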
## 3 Method
In this section, we develop an iterative method for computing solves \(\mathbf{K}^{-1}\mathbf{b}\) by alternating projection. The method supports batched linear solves with multiple right-hand sides, as required for estimating the MLL derivative (2), and is amenable to GPU parallelism. We cast the linear solve as a projection in the RKHS \(\mathcal{H}\) and decompose the projection into a sequence of small-scale subproblems. Each subproblem is solved in \(\mathcal{O}(n)\) time, allowing frequent updates. Alternating projection typically makes rapid progress in the early stage and finds a medium-precision solution quickly.
**High Level Approach.** Assume \(k\) is strictly positive definite and that there are no duplicate data points; then there exists \(g\in\mathcal{H}\) interpolating \(\mathbf{b}\), _i.e._, \(g(\mathbf{X})=\mathbf{b}\). The exact
Figure 2: **Left:** Illustration of alternating projection. \(s_{1}\) is the projection of \(g=r_{0}\) onto the subspace spanned by \(k(\mathbf{x}_{1},\cdot)\) and \(k(\mathbf{x}_{2},\cdot)\). The residual \(r_{1}=g-s_{1}\) will be projected to other coordinates in the next iteration. **Right:** Gauss-Southwell block selection results in faster convergence than random/cyclic.
form of \(g\) is not important (or unique for that matter); rather, we are interested in its projection onto the subspace \(V_{[n]}\), which by (4) is
\[\operatorname{proj}_{V_{[n]}}(g)=\mathbf{b}^{\top}\mathbf{K}^{-1}k(\mathbf{X}, \cdot),\]
Thus the linear solve can be obtained from the coefficients of the projection \(\operatorname{proj}_{V_{[n]}}(g)\).
Directly projecting \(g\) onto \(V_{[n]}\) is computationally infeasible, as the time complexity is cubic in \(n\). Instead, we partition \([n]\) into subsets \(\mathcal{P}=\{I_{1},I_{2},\cdots,I_{m}\}\). For each subset \(I\in\mathcal{P}\), the projection to the linear subspace \(V_{I}\subseteq V_{[n]}\) is cheap, provided that \(|I|\) is small. Thus, we construct the (full) projection \(\operatorname{proj}_{V_{[n]}}(g)\) by iteratively computing the projection onto the linear subspaces \(V_{I}\) where \(I\in\mathcal{P}\).
Starting from \(r_{0}=g\) and \(s_{0}=0\), the \(j\)-th iteration selects an index set \(I\subseteq[n]\) and updates as follows
\[s_{j+1} =s_{j}+\operatorname{proj}_{V_{I}}(r_{j}) \tag{5}\] \[r_{j+1} =r_{j}-\operatorname{proj}_{V_{I}}(r_{j}) \tag{6}\]
Intuitively, \(s_{j}\) progressively approximates the true projection \(\operatorname{proj}_{V_{[n]}}(g)\), since (5) iteratively adds the projection onto subspaces \(V_{I}\) to the current approximation \(s_{j}\). Meanwhile, (6) consistently updates the residual. As \(j\to\infty\), \(s_{j}\) converges to the true projection \(\mathbf{b}^{\top}\mathbf{K}^{-1}k(\mathbf{X},\cdot)\)(Wendland, 2004). See Figure 2 (left panel) for an illustration of alternating projection.
**Implicit Representation of \(r_{j}(\cdot)\).** Crucially, in the updates (5) and (6), the function \(r_{j}\) is only ever accessed through its evaluation on \(\mathbf{X}\) (recall the projection formula (4)). Therefore, we only need to maintain the vector \(\mathbf{r}_{j}=r_{j}(\mathbf{X})\in\mathbb{R}^{n}\) instead of the entire function. The update (6) thus reduces to
\[\mathbf{r}_{j+1} :=\mathbf{r}_{j}-\operatorname{proj}_{V_{I}}(r_{j})(\mathbf{X})\] \[=\mathbf{r}_{j}-\mathbf{KE}_{I}^{\top}\mathbf{K}_{I,I}^{-1} \mathbf{E}_{I}\mathbf{r}_{j} \tag{7}\] \[=\mathbf{r}_{j}-\mathbf{K}_{\cdot,I}\mathbf{K}_{I,I}^{-1}[ \mathbf{r}_{j}]_{I}, \tag{8}\]
where \(\mathbf{E}_{I}\) denotes the rows of the identity matrix corresponding to \(I\). The final line follows because \(\mathbf{K}\mathbf{E}_{I}^{\top}=\mathbf{K}_{\cdot,I}\) and \(\mathbf{E}_{I}\mathbf{r}_{j}=[\mathbf{r}_{j}]_{I}\).
**Representing \(s_{j}(\cdot)\) via Kernel Functions.** Every \(s_{j}\) is in \(V_{[n]}\) and can thus be written as a linear combination \(\mathbf{w}_{j}^{\top}k(\mathbf{X},\cdot)\) for some \(\mathbf{w}_{j}\in\mathbb{R}^{n}\), as we show by induction. At the \(0\)-th iteration, \(s_{0}(\cdot)\) is the zero function, which can be written as \(\mathbf{0}^{\top}k(\mathbf{X},\cdot)\). For the \(j\)-th iteration, assuming \(I\subseteq[n]\) is selected and \(s_{j}=\mathbf{w}_{j}^{\top}k(\mathbf{X},\cdot)\), we have
\[s_{j+1} =s_{j}+\operatorname{proj}_{V_{I}}(r_{j})\] \[=\underbrace{\left(\mathbf{w}_{j}^{\top}+r_{j}(\mathbf{X})^{\top }\mathbf{E}_{I}^{\top}\mathbf{K}_{I,I}^{-1}\mathbf{E}_{I}\right)}_{\mathbf{w }_{j+1}}k(\mathbf{X},\cdot),\]
where the last line gives an explicit update on \(\mathbf{w}_{j}\):
\[\mathbf{w}_{j+1}=\mathbf{w}_{j}+\mathbf{E}_{I}^{\top}\mathbf{K}_{I,I}^{-1} \mathbf{E}_{I}\mathbf{r}_{j}. \tag{9}\]
Recall that \(\mathbf{E}_{I}\) simply selects rows/columns. Only entries in \(\mathbf{w}_{j}\) indexed by \(I\) need to be updated, while keeping the entries outside \(I\) unchanged:
\[\begin{split}[\mathbf{w}_{j+1}]_{I}&=[\mathbf{w}_{ j}]_{I}+\mathbf{K}_{I,I}^{-1}[\mathbf{r}_{j}]_{I},\\ [\mathbf{w}_{j+1}]_{[n]\setminus I}&=[\mathbf{w}_{ j}]_{[n]\setminus I}.\end{split} \tag{10}\]
**Summary.** (8) and (10) yield an iteration on \(s_{j}(\cdot)\!=\!\mathbf{w}_{j}^{\top}k(\mathbf{X},\cdot)\) where the \(\mathbf{w}_{j}\) are obtained through simple matrix operations. Since the \(s_{j}\) are produced by alternating projections, we have \(s_{j}\to\operatorname{proj}_{V_{[n]}}(g)\) and thus \(\mathbf{w}_{j}\to\mathbf{K}^{-1}\mathbf{b}\). We summarize this approach in Algorithm 1. Note that the algorithm can be adapted to perform multiple right-hand-side solves in parallel by replacing the vectors \(\mathbf{w}_{j},\mathbf{r}_{j},\mathbf{b}\) with matrices \(\mathbf{W},\mathbf{R},\mathbf{B}\).
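Algorithm 1 itself is not reproduced in this excerpt; as a rough stand-in, the sketch below implements its core, namely the residual update (8) and the weight update (10), with a pluggable block-selection rule (discussed next) and per-block Cholesky factors cached up front. This is a schematic NumPy implementation under those assumptions, not the paper's reference code.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def alternating_projection_solve(K, B, blocks, select, epochs=11):
    """Approximate W = K^{-1} B via alternating projection (cf. Algorithm 1).

    K: (n, n) SPD kernel matrix; B: (n, t) right-hand sides;
    blocks: partition of range(n) into index arrays;
    select: callable (R, blocks) -> chosen index array.
    """
    n, t = B.shape
    W, R = np.zeros((n, t)), B.copy()                      # weights and residual
    chol = {tuple(I): cho_factor(K[np.ix_(I, I)]) for I in blocks}  # cached factors
    for _ in range(epochs * len(blocks)):                  # m inner loops per epoch
        I = select(R, blocks)
        S = cho_solve(chol[tuple(I)], R[I])                # K_{I,I}^{-1} [R]_I
        W[I] += S                                          # weight update (10)
        R -= K[:, I] @ S                                   # residual update (8)
    return W
```

Each inner update touches only \(\mathbf{K}_{\cdot,I}\), so with a lazy, map-reduce kernel representation (as with KeOps, used in the experiments) the full \(n\times n\) matrix never needs to be materialized.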
**Block Selection.** Selecting which block to update is crucial for fast convergence. The simplest block selection rules are random selection (sample \(I\) uniformly from \(\mathcal{P}\)) and cyclic selection (\(I=I_{j}\)), which usually converge slowly (see Figure 2). Instead, we select the block \(I\) with the largest residual norm
\[I=\operatorname*{argmax}_{I\in\mathcal{P}}\;\|\mathbf{R}_{I,:}\|_{\mathrm{F}}^ {2}. \tag{11}\]
In the special case that \(\mathbf{R}\) is an \(n\times 1\) vector, (11) reduces to the Gauss-Southwell (GS) rule (Nutini et al., 2015). (11) is a modification adapted to our setting.
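A direct sketch of the modified GS rule (11), compatible with the solver sketch above:

```python
import numpy as np

def gauss_southwell(R, blocks):
    """Modified Gauss-Southwell rule (11): pick the block whose rows of the
    residual R have the largest squared Frobenius norm."""
    scores = [np.sum(R[I] ** 2) for I in blocks]
    return blocks[int(np.argmax(scores))]
```

Because the iteration already maintains \(\mathbf{R}\), this selection costs only \(\mathcal{O}(n)\) per step.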
**Cached Cholesky.** Line 4 requires solving a linear system with the submatrix \(\mathbf{K}_{I,I}\). To avoid repeatedly inverting the same matrices, we compute and cache the Cholesky factors of all principal submatrices \(\{\mathbf{K}_{I,I}:I\in\mathcal{P}\}\) once whenever the GP hyperparameters are updated (e.g., once per gradient computation). To facilitate parallelism, we partition the blocks evenly so that every block has the same size \(|I|=b\) and factorize
all matrices in a single batch Cholesky call, which takes \(\mathcal{O}(nb^{2})\) time and \(\mathcal{O}(nb)\) memory.
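With an even partition, the factorizations can indeed be issued as a single batched call on GPU. A minimal PyTorch-style sketch, assuming contiguous blocks of equal size \(b\) (function names are illustrative):

```python
import torch

def cache_block_cholesky(K, b):
    """Factor all principal submatrices {K_{I,I} : I in P} in one batched
    Cholesky call: O(n b^2) time, O(n b) memory for the factors."""
    n = K.shape[0]
    m = n // b
    diag_blocks = torch.stack([K[j*b:(j+1)*b, j*b:(j+1)*b] for j in range(m)])
    return torch.linalg.cholesky(diag_blocks)      # (m, b, b) lower factors

def block_solve(L, j, rhs):
    """Solve K_{I_j, I_j} X = rhs with the cached factor; rhs has shape (b, t)."""
    return torch.cholesky_solve(rhs, L[j])
```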
**Complexity.** The block selection takes \(\mathcal{O}(n)\) time. Updating the weights \(\mathbf{W}\) takes \(\mathcal{O}(b^{2})\) time. Updating the residual \(\mathbf{R}\) takes \(\mathcal{O}(nb)\) time. Each epoch runs \(m=n/b\) inner loops and thus takes \(\mathcal{O}(nb+n^{2})\) time in total; that is, each epoch has the same quadratic complexity as a single CG iteration. A more fine-grained analysis in Appendix F shows that each epoch requires \((2+\frac{3}{b})n^{2}+(2b+1)n\) FLOPs. Thus, for typical batch sizes \(1\ll b\ll n\), each epoch requires roughly \(2n^{2}\) FLOPs, the same number as a single CG iteration. We note that every update in Algorithm 1 has linear (in terms of \(n\)) time and memory complexity.
**Connection with Coordinate Descent.** Interestingly, we can show that Algorithm 1 produces iterates equivalent to coordinate descent on the quadratic form (see §A for details). We will exploit this connection to prove the rate of convergence of Algorithm 1. We introduce this algorithm as alternating projection for two reasons: (a) unlike in coordinate descent, the update rules based on alternating projection maintain the residual \(\mathbf{R}\), which enables efficient block selection strategies like the GS rule without re-evaluating the residual; (b) alternating projection can be easily extended to different settings. For instance, a parallel coordinate descent algorithm was discovered via the connection with (Dykstra's) alternating projection (Boyle and Dykstra, 1986; Tibshirani, 2017) in the setting of regularized least-squares, which hints that Algorithm 1 may be distributed.
## 4 Convergence
Let \(\lambda_{\max}\) and \(\lambda_{\min}\) be the largest and smallest eigenvalues of \(\mathbf{K}\), \(\kappa=\lambda_{\max}/\lambda_{\min}\) its condition number, and define \(\lambda^{\prime}_{\max}=\max_{I\in\mathcal{P}}\lambda_{\max}(\mathbf{K}_{I,I})\) as the maximum of the largest eigenvalues of the principal submatrices \(\{K_{I,I}:I\in\mathcal{P}\}\). By leveraging the connection with coordinate descent (Nutini et al., 2022), we can prove an explicit convergence rate for Algorithm 1 when applied to a linear system with multiple right-hand sides.
**Theorem 1**.: _Let \(\mathbf{W}^{*}\) be the (unique) solution of the linear system \(\mathbf{K}\mathbf{W}=\mathbf{B}\) and \(\mathbf{W}^{(t)}\) its approximation after \(t\) epochs of Algorithm 1 using the modified GS rule (11). Then it holds that_
\[\|\mathbf{W}^{(t)}-\mathbf{W}^{*}\|_{\mathbf{K}}^{2}\leq\exp\big{(}-t/\kappa^ {\prime}\big{)}\|\mathbf{W}^{(0)}-\mathbf{W}^{*}\|_{\mathbf{K}}^{2},\]
_where \(\kappa^{\prime}=\lambda^{\prime}_{\max}/\lambda_{\min}\leq\kappa\)._
The rate in Theorem 1 improves over that of gradient descent, for which the above holds with \(\exp(-t/\kappa)\), since generally \(\kappa^{\prime}\leq\kappa\); notably, this holds despite Algorithm 1 only ever accessing submatrices. For comparison, the convergence rate of (batched) CG is \(4\big{(}(\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)\big{)}^{2t}\approx 4\exp\big{(}-4t/\sqrt{\kappa}\big{)}\) for a sufficiently large condition number \(\kappa\gg 1\). The convergence rate of alternating projection is asymptotically faster than that of CG if \(\kappa^{\prime}\leq\frac{1}{4}\sqrt{\kappa}\). In general, we do not expect this condition to hold. However, alternating projection has practical advantages despite a slower asymptotic convergence rate. First, alternating projection performs \(m\) times more updates than CG with the same number of FLOPs. Second, alternating projection generally decreases the residual in every epoch, while the CG residual is not monotonic. We empirically observe that CG often increases the residual dramatically in the early stage, and it takes time for CG to enter the "linear convergence phase". In addition, the dependency on \(\kappa^{\prime}\) suggests that alternating projection implicitly works on better-conditioned matrices, which may imply robustness against ill-conditioning.
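To make the comparison of the two bounds concrete, the following sketch evaluates both contraction bounds for illustrative, assumed values of \(\kappa\) and \(\kappa^{\prime}\); the crossover condition \(\kappa^{\prime}\leq\tfrac{1}{4}\sqrt{\kappa}\) follows from equating the two exponents.

```python
import numpy as np

kappa, kappa_prime = 1e6, 2e2    # illustrative values; kappa' <= kappa always
t = 100                          # epochs (alternating projection) / iterations (CG)
ap_bound = np.exp(-t / kappa_prime)              # Theorem 1
cg_bound = 4 * np.exp(-4 * t / np.sqrt(kappa))   # classical CG bound
print(ap_bound, cg_bound)  # here kappa' < sqrt(kappa)/4 = 250, so the AP bound
                           # decays faster; the CG bound is still above 1 at t=100
```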
Figure 1 shows the above two points in practice. The figure is plotted from two checkpoints of the 50-epoch GP training runs on the 3droad and house electric datasets, respectively. The (batched) linear system \(\mathbf{K}^{-1}\mathbf{B}\) has 16 right-hand sides, where \(\mathbf{b}_{0}=\mathbf{y}\) is the vector of training labels and \(\{\mathbf{b}_{i}\}_{i=1}^{15}\) are _i.i.d._ samples from a Gaussian. We can prove that the random selection strategy in Figure 2 (right panel) achieves a rate similar to that of Theorem 1, but only in expectation. In practice, the GS rule converges faster than random selection.
The batch size \(b\) affects the rate in Theorem 1 through the condition number \(\kappa^{\prime}=\lambda^{\prime}_{\max}/\lambda_{\min}\). Note that the largest eigenvalue of the principal submatrix is bounded by its trace \(\lambda_{\max}(\mathbf{K}_{I,I})\leq\mathbf{tr}\left(\mathbf{K}_{I,I}\right)\), where the trace grows linearly in \(|I|\). A small batch size \(b=|I|\) is likely to have a small \(\lambda^{\prime}_{\max}\) and a faster convergence rate. We compare the convergence of different batch sizes in Figure 3. Although small batch sizes lead to faster convergence, they generally have a longer running time due to more sequential updates. Therefore, in practice, we recommend using the largest batch
Figure 3: Convergence of alternating projection with different batch sizes \(b\) on 3droad. **Left:** Smaller batch sizes converge faster within the same epochs. **Right:** However, smaller batch sizes result in more sequential updates on the GPU and thus longer wall-clock time.
size possible subject to memory constraints. In addition, we note that the convergence rate in Theorem 1 is loose for large batch sizes \(b\). In the extreme case where \(b=n\), Algorithm 1 is equivalent to the Cholesky decomposition on the entire matrix \(\mathbf{K}\) and thus converges to the exact solution in one update. However, Theorem 1 does not reflect that. The convergence rate in practice may be much faster than the theory predicts.
## 5 Experiments
We evaluate the efficacy of our alternating projections solver in a GP regression task. Our evaluation includes a training dataset of \(n=4M\), which, to the best of our knowledge, is considerably larger than any other dataset where a GP has been applied without inducing points or employing modeling approximations.
All experiments are performed on a single 24 GB NVIDIA RTX A5000 GPU with single-precision floating point, and all numerical algorithms/GP models are implemented in PyTorch/GPyTorch (Gardner et al., 2018). We use the KeOps library (Charlier et al., 2021) to implement all matrix-free numerical algorithms in a map-reduce fashion, thus eliminating the need to store large \(n\times n\) kernel matrices in memory.
### Main Result: GP Regression
We first evaluate our method on large-scale GP training tasks. We compare against GPs trained with CG, which is the predominant matrix-free GP training approach (Gardner et al., 2018; Wang et al., 2019; Maddox et al., 2022).
**Metrics.** Our primary desiderata for GPs are 1) low computational costs for training and 2) generalization. Therefore, we compare the different training methods using the following metrics: 1) the total number of floating point operations (**FLOPs**) normalized by \(2n^{2}\) (the FLOPs of a single matmul), 2) the wall clock **training time**, and 3/4) the trained model's **RMSE** and **NLL** measured on the test set.
**Datasets and Models.** We conduct experiments on UCI regression datasets, whose statistics are shown in Table 4. Each dataset is split into 80% training and 20% test. The labels are normalized so that they have zero mean and unit variance. Almost all experiments are averaged over 5 runs. Because of resource constraints, we limit the two largest datasets--House Electric and Gas Sensors--to 3 runs and 1 run, respectively.
We train GP regression models with \(\nu=2.5\) Matern kernels and a constant prior mean. We optimize the following hyperparameters: a scalar constant for the prior mean, a \(d\)-dimensional kernel lengthscale, a scalar outputscale, and a scalar observational noise parameter \(\sigma^{2}\). We include experiments with \(\nu=1.5\) Matern kernels in Appendix E.
**MLL Optimization.** To compute the stochastic MLL gradient (2), we use \(l=15\) random samples \(\mathbf{z}_{i}\). Thus, all matrix-free methods solve a batched linear system with 16 right-hand sides \(\mathbf{y}\) and \(\{\mathbf{z}_{i}\}_{i=1}^{15}\) in each training iteration. On the first five datasets, the GPs are trained with 50 iterations of Adam with a step size of 0.1. On house electric and gas sensors, the GPs are trained with 100 iterations of Adam with a step size of 0.1.
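A schematic end-to-end training loop tying the pieces together, reusing `mll_grad_estimate` from the sketch in §2; the RBF kernel, the single log-lengthscale hyperparameter, and plain gradient descent (standing in for Adam) are illustrative simplifications, not the experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
D2 = np.sum((X[:, None] - X[None]) ** 2, -1)        # squared distances

def kernel_and_grad(log_ls):
    """RBF kernel (with jitter) and its derivative w.r.t. the log-lengthscale."""
    base = np.exp(-0.5 * D2 * np.exp(-2 * log_ls))
    return base + 1e-4 * np.eye(len(y)), base * D2 * np.exp(-2 * log_ls)

solve = lambda K, B: np.linalg.solve(K, B)          # stand-in for an iterative solver
log_ls, lr = 0.0, 0.1
for step in range(50):                              # 50 optimizer iterations
    K, dK = kernel_and_grad(log_ls)
    log_ls -= lr * mll_grad_estimate(K, dK, y, solve, l=15, rng=rng)
```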
**Alternating Projection Details.** As discussed in §4, a large batch size is preferred empirically. We use the largest batch size that we can fit on a 24 GB GPU. The batch sizes \(b\) are set as: 6000 on SGEMM, air quality and 3droad; 4000 on song and buzz; 1000 on house electric; 500 on gas sensors. We use the sequential partition \(\mathcal{P}\): the data points from \((j-1)b+1\) to \(jb\) belong to the \(j\)-th block \(I_{j}\) for \(j=1,2,\cdots,n/b\).
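The sequential partition described above, as a short helper (the last block is smaller when \(b\) does not divide \(n\)):

```python
import numpy as np

def sequential_partition(n, b):
    """Contiguous blocks I_j covering range(n), each of size at most b."""
    return [np.arange(j * b, min((j + 1) * b, n))
            for j in range((n + b - 1) // b)]
```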
The maximum number of CG iterations and the maximum number of alternating projection epochs are both set to 1000. Following GPyTorch's CG stopping criteria, we terminate the alternating projection solves after (a) the average relative residual norm is strictly smaller than the tolerance \(\delta=1\) or (b) 1000 total epochs, whichever comes first. However, we ensure that at least 11 epochs of alternating projection have been run before termination (again following GPyTorch). We define the average relative residual norm as \(\frac{1}{l+1}\sum_{i=0}^{l}\lVert\mathbf{r}_{i}\rVert/\lVert\mathbf{b}_{i}\rVert\) when there are \(l+1\) right-hand sides \((\mathbf{b}_{0}\quad\mathbf{b}_{1}\quad\cdots\quad\mathbf{b}_{l})\).
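The stopping criterion as code, a direct transcription of the average relative residual norm defined above:

```python
import numpy as np

def avg_relative_residual(R, B):
    """(1/(l+1)) * sum_i ||r_i|| / ||b_i|| over the l+1 right-hand sides."""
    return np.mean(np.linalg.norm(R, axis=0) / np.linalg.norm(B, axis=0))

def should_stop(R, B, epoch, delta=1.0, min_epochs=11, max_epochs=1000):
    """Stop once the average relative residual is strictly below delta or the
    epoch budget is exhausted, but only after min_epochs epochs have run."""
    done = avg_relative_residual(R, B) < delta or epoch >= max_epochs
    return done and epoch >= min_epochs
```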
**CG Details.** We use GPyTorch's implementation of CG, which uses the same stopping criteria as our alternating projections implementation. Following Wang et al. (2019); Wenger et al. (2022), we use a pivoted Cholesky preconditioner of size 500 on all datasets except: house electric uses a size 300 and gas sensors uses a size 150 due to GPU memory overflow.
**Prediction.** At test time, the predictive mean is computed by the same iterative method used for training (e.g., CG for the CG trained GP, alternating projection for the AP trained GP). A limitation of our method is that it does not easily result in a cache for variances. Therefore, we use 1000 Lanczos iterations as in Pleiss et al. (2018); Wang et al. (2019).
**Results on \(10^{5}<n<10^{6}\) datasets.** Table 1 compares the predictive performance and the training speed of CG-based versus alternating projection-based GPs. Both training procedures produce GPs with similar RMSE and NLL. We conjecture that this similarity occurs because both approaches solve linear systems up to the same tolerance, and thus find similar hyperparameters.
One exception is the buzz dataset: CG struggles to converge while training on this dataset, resulting in considerably worse RMSE and NLL.
The primary difference between the two methods is training time. Alternating projection-based training is up to \(27\times\) faster than CG. The only exception is SGEMM GPU, which seems to be a well-conditioned dataset since CG converges quickly.
For reference, we also report the training/test performance of stochastic variational Gaussian processes (SVGP) (see Appendix E for experimental design details). GPs trained by alternating projection achieve substantially lower RMSE and comparable NLL compared with SVGP. We do note that SVGPs have lower NLL on 3droad and house electric, which we suspect is a limitation of the Lanczos predictive variance estimates used on the alternating projections models. (Note that SVGP's predictive variances can be computed exactly and do not make use of the Lanczos estimator.) Indeed, in Appendix E we find that the NLL gap shrinks as we increase the rank of the Lanczos variance estimator, suggesting that this gap is not a fundamental limitation of the alternating projections training methodology.
**Results on \(n\geq 10^{6}\) datasets.** Previous attempts to train GPs using iterative methods on datasets with \(n\geq 10^{6}\) examples have used a large noise constraint \(\sigma^{2}\geq 0.1\) to improve the conditioning of the kernel matrix (_e.g._, Wang et al., 2019; Maddox et al., 2022). Since alternating projection is much less conditioning-sensitive than CG (see §5.2), for the first time, we are able to train the model with a much smaller noise constraint \(\sigma^{2}\geq 10^{-4}\), _i.e._ the default in GPyTorch for the
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Dataset & Method & RMSE & NLL & FLOPs/\(2n^{2}\) & Training time & Speed up \\ \hline SGEMM & CG & \(0.048\pm 0.000\) & \(\mathbf{-1.037\pm 0.001}\) & \(551\pm 1\) & \(9.1\)m \(\pm 0.0\) & \\ \(n=241,600\) & Alt. Proj. & \(\mathbf{0.046\pm 0.000}\) & \(-0.999\pm 0.001\) & \(550\pm 0\) & \(12.2\)m \(\pm 0.2\) & \(0.7\times\) \\ \(d=14\) & SVGP & \(0.086\pm 0.000\) & \(-0.934\pm 0.003\) & NA & \(14.8\)m \(\pm 0.1\) & \\ \hline air quality & CG & \(\mathbf{0.261\pm 0.001}\) & \(0.143\pm 0.004\) & \(2965\pm 19\) & \(33.5\)m \(\pm 1.5\) & \\ \(n=382,168\) & Alt. Proj. & \(\mathbf{0.262\pm 0.001}\) & \(\mathbf{0.137\pm 0.003}\) & \(550\pm 0\) & \(16.9\)m \(\pm 0.5\) & \(2.0\times\) \\ \(d=13\) & SVGP & \(0.363\pm 0.003\) & \(0.399\pm 0.006\) & NA & \(23.4\)m \(\pm 0.1\) & \\ \hline
3droad & CG & \(\mathbf{0.069\pm 0.000}\) & \(1.324\pm 0.002\) & \(5128\pm 114\) & \(53.2\)m \(\pm 2.8\) & \\ \(n=434,874\) & Alt. Proj. & \(0.076\pm 0.000\) & \(1.203\pm 0.001\) & \(676\pm 1\) & \(21.1\)m \(\pm 0.5\) & \(2.5\times\) \\ \(d=3\) & SVGP & \(0.327\pm 0.002\) & \(\mathbf{0.320\pm 0.005}\) & NA & \(26.1\)m \(\pm 0.1\) & \\ \hline song & CG & \(\mathbf{0.747\pm 0.002}\) & \(1.140\pm 0.003\) & \(4431\pm 110\) & \(13.8\)m \(\pm 0.8\) & \\ \(n=515,345\) & Alt. Proj. & \(\mathbf{0.749\pm 0.002}\) & \(\mathbf{1.132\pm 0.002}\) & \(550\pm 0\) & \(2.7\)m \(\pm 0.1\) & \(5.1\times\) \\ \(d=90\) & SVGP & \(0.790\pm 0.002\) & \(1.184\pm 0.002\) & NA & \(0.5\)m \(\pm 0.0\) & \\ \hline buzz & CG & \(0.321^{*}\pm 0.144\) & \(0.669^{*}\pm 1.152\) & \(16726\pm 2724\) & \(31.1\)m \(\pm 5.4\) & \\ \(n=583,250\) & Alt. Proj. & \(\mathbf{0.239\pm 0.001}\) & \(\mathbf{0.018\pm 0.003}\) & \(550\pm 0\) & \(2.0\)m \(\pm 0.1\) & \(15.6\times\) \\ \(d=77\) & SVGP & \(0.259\pm 0.002\) & \(0.066\pm 0.006\) & NA & \(0.6\)m \(\pm 0.0\) & \\ \hline house electric & CG & - & - & \(\geq 50441\) & \(\geq 11d\) & \\ \(n=2,049,280\) & Alt. Proj. & \(\mathbf{0.030\pm 0.000}\) & \(-1.148\pm 0.001\) & \(1100\pm 0\) & \(9.8\)m \(\pm 0.4\) & \(\geq 26.9\times\) \\ \(d=11\) & SVGP & \(0.050\pm 0.000\) & \(\mathbf{-1.549\pm 0.001}\) & NA & \(2.1\)m \(\pm 0.0\) & \\ \hline gas sensors & CG & - & - & - & - & - \\ \(n=4,178,504\) & Alt. Proj. & \(\mathbf{0.203}\) & \(\mathbf{0.070^{\dagger}}\) & \(1100\) & \(84.5\)h & \\ \(d=17\) & SVGP & \(0.330\pm 0.001\) & \(0.339\pm 0.003\) & NA & \(8.7\)m \(\pm 0.03\) & \\ \hline \hline \end{tabular}
* *: At test time, CG does not reach the tolerance \(\delta=0.01\) after 4000 iterations on some checkpoints.
* -: CG does not finish GP training.
* †: This predictive variance is calculated using only 500 Lanczos iterations to save time and avoid numerical instability.
\end{table}
Table 1: _Gaussian process training on UCI benchmark datasets._ Metrics are computed across multiple runs and reported with \(\pm\) one standard deviation.
Figure 4: GP training on air quality dataset. **Left:** Because the likelihood noise \(\sigma^{2}\) decreases during training, the matrix \(\mathbf{K}\) gets more ill-conditioned. **Right:** CG is sensitive to this increased ill-conditioning, while alternating projections is robust.
Gaussian likelihood.4 Removing the noise constraint yields much better predictive performance: the RMSE 0.030 is significantly lower than what can be achieved with high-noise constraint models (see Appendix E).
Footnote 4: GPyTorch likelihood setting [https://rb.gy/fv4iw](https://rb.gy/fv4iw)
We additionally train a GP on the gas sensors dataset with 4 million data points. To the best of our knowledge, this is the largest dataset a GP has been trained on without the use of inducing points or other modeling approximations. CG-based training appears to be intractable on such a large dataset, requiring over a week to train. In contrast, the alternating projections method required 84.5 hours.
### Effect of Kernel Matrix Conditioning
As implied by our theoretical dependence on \(\lambda^{\prime}_{\text{max}}\) rather than \(\lambda_{\text{max}}\), we observe that our alternating projections method is less sensitive to ill-conditioning than CG. We demonstrate this phenomenon in Figure 4, which depicts training on the \(n\approx 400K\) air quality dataset. Over the course of training, the noise parameter \(\sigma^{2}\) decreases for both methods, resulting in an increasingly ill-conditioned kernel matrix (as \(\lambda_{\text{min}}\approx\sigma^{2}\)). At the end of training, when \(\sigma^{2}\approx 0.01\), CG requires over 120 iterations to converge--\(10\times\) as many iterations as at the beginning of training. In contrast, alternating projection consistently converges in 11 iterations despite the decreasing noise and increasing condition number. See more datasets in Appendix E.
### Alternating Projection at Test Time
Any linear solver for \(\mathbf{K}^{-1}\mathbf{b}\) can be used to compute the posterior mean on the test data. We explore alternating projection at test time, as shown in Figure 5 and Table 3 in Appendix E. With a test-time tolerance \(\delta=0.01\), the posterior mean computed by alternating projection is practically the same as CG's: the RMSE of both methods agree up to the 3rd digit after the decimal point. While alternating projection is slightly slower on medium-size datasets such as air quality and 3droad, we observe strong speed-ups on large datasets such as buzz and house electric. Our method computes the posterior mean \(17.2\times\) faster in wall-clock time than CG on buzz, and requires only 5 min to compute the posterior mean on house electric.
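To make this concrete, the sketch below shows one sequential sweep of a block-update scheme of the alternating-projection flavor for the SPD system \(\mathbf{K}\mathbf{x}=\mathbf{b}\), iterated until the relative residual drops below \(\delta\); this is only our schematic reading of the idea, and the block sizes, ordering, and parallelization of the actual method differ:

```python
import numpy as np

def block_sweep(K, b, x, block_size=500):
    """One sequential sweep of block updates for the SPD system K x = b:
    each block B is updated so the residual vanishes on B, at O(n^2) cost
    per full sweep."""
    n = len(b)
    for start in range(0, n, block_size):
        B = slice(start, min(start + block_size, n))
        r_B = b[B] - K[B, :] @ x              # residual restricted to the block
        x[B] += np.linalg.solve(K[B, B], r_B)
    return x

def solve(K, b, delta=0.01, block_size=500):
    x = np.zeros_like(b)
    while np.linalg.norm(K @ x - b) > delta * np.linalg.norm(b):
        x = block_sweep(K, b, x, block_size)
    return x
```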
## 6 Related Work
The early usage of conjugate gradients in GPs dates back at least to Yang et al. (2004); Shen et al. (2005), who proposed methods for speeding up CG via approximate matrix-vector multiplications. More recently, CG has been revisited by Davies (2015); Cutajar et al. (2016). Then, a series of works by Gardner et al. (2018); Wang et al. (2019); Artemev et al. (2021), together with software such as GPyTorch (Gardner et al., 2018) and GPflow (Matthews et al., 2017), popularized CG for GPs.
Alternating projection (Von Neumann, 1949) is a general algorithm for finding a point in the intersection of convex sets, enjoying applications in convex optimization (Agmon, 1954) and scattered data approximation (Wendland, 2004). An early work applying coordinate descent with greedy block selection to GP inference is that of Bo and Sminchisescu (2008). However, that algorithm is not parallelizable on modern hardware like GPUs, due to the inherently sequential nature of the greedy selection, and it lacks an explicit convergence rate with explicit constants. Lin et al. (2023) have recently applied stochastic gradient descent to approximate GP posterior sampling. They also observe that CG struggles with convergence in ill-conditioned settings.
## 7 Conclusion
In this work we proposed an alternating projection method with provable linear convergence for solving dense kernel linear systems and applied it to GP training and inference. Our method reaches commonly used tolerances faster than CG, requires only linear time per iteration, and is highly robust to ill-conditioning. Experiments on several large-scale benchmark datasets show that we achieve a 2-27\(\times\) speed-up over CG-based training and a 2-17\(\times\) speed-up over CG-based inference with an _increase in predictive performance_. This includes results on datasets as large as 4 million data points, which is state-of-the-art for GPs trained with iterative methods without artificially inflating the observation noise for stability.
Figure 5: Running CG and alternating projection on test-time solves \(\mathbf{K}^{-1}(\mathbf{y}-\boldsymbol{\mu})\). For alternating projection, the x-axis is the number of epochs. **Left:** CG has a faster asymptotic convergence rate, but does not reach the test-time tolerance \(\delta=0.01\) much sooner. **Right:** Alternating projection reaches the tolerance \(\delta=0.01\) faster despite its slower asymptotic rate.
## Acknowledgements
JW was supported by the Gatsby Charitable Foundation (GAT3708), the Simons Foundation (542963) and the Kavli Foundation.
|
2302.10449 | Efficient phase-space generation for hadron collider event simulation | We present a simple yet efficient algorithm for phase-space integration at
hadron colliders. Individual mappings consist of a single t-channel combined
with any number of s-channel decays, and are constructed using diagrammatic
information. The factorial growth in the number of channels is tamed by
providing an option to limit the number of s-channel topologies. We provide a
publicly available, parallelized code in C++ and test its performance in
typical LHC scenarios. | Enrico Bothmann, Taylor Childers, Walter Giele, Florian Herren, Stefan Hoeche, Joshua Isaacson, Max Knobbe, Rui Wang | 2023-02-21T05:26:59Z | http://arxiv.org/abs/2302.10449v2 | # Efficient phase-space generation for hadron collider event simulation
###### Abstract
We present a simple yet efficient algorithm for phase-space integration at hadron colliders. Individual mappings consist of a single t-channel combined with any number of s-channel decays, and are constructed using diagrammatic information. The factorial growth in the number of channels is tamed by providing an option to limit the number of s-channel topologies. We provide a publicly available, parallelized code in C++ and test its performance in typical LHC scenarios.
+
Footnote †: preprint: FERMILAB-PUB-23-032-T, MCnet-23-02
## I Introduction
The problem of phase-space integration is omnipresent in particle physics. Efficient methods to evaluate phase-space integrals are needed in order to predict cross sections and decay rates for a variety of experiments, and they are required for both theoretical calculations and event simulation. In many cases, the integrand to be evaluated features a number of narrow peaks, corresponding to the resonant production of unstable massive particles. In other cases, the integrand has intricate discontinuities, arising from cuts to avoid the singular regions of scattering matrix elements in theories with massless force carriers, such as QED and QCD. In most interesting scenarios, the phase space is high dimensional, such that analytic integration is ruled out, and Monte-Carlo (MC) integration becomes the only viable option.
Many techniques have been devised to deal with this problem [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Among the most successful ones are factorization based approaches [1; 2; 3] and multi-channel integration techniques [11]. They allow one to map the structure of the integral to the diagrammatic structure of the integrand. For scalar theories, and ignoring the effect of phase-space cuts, this corresponds to an ideal variable transformation. Realistic multi-particle production processes are much more complex, both because of the non-scalar nature of most of the elementary particles, and because of phase-space restrictions. Adaptive Monte-Carlo methods [12; 13; 14; 15; 16; 17] are therefore used by most theoretical calculations and event generators to map out structures of the integrand which are difficult to predict. More recently, neural networks have emerged as a promising tool for this particular task [18; 19; 20; 21; 22; 23; 24; 25].
In this letter, we introduce a novel phase-space integrator which combines several desirable features of different existing approaches. In particular, we address the computational challenges discussed in a number of reports of the HEP Software Foundation [26; 27; 28] and the recent Snowmass community study [29]. Our algorithm is based on the highly successful integration techniques employed in MCFM [30; 31; 32; 33], combined with a standard recursive approach for s-channel topologies as used in many modern simulation programs. We provide a stand-alone implementation, which we call Chili (Common High-energy Integration LIbrary)1, which includes the Vegas algorithm [12] and MPI parallelization. We also implement Python bindings via nanobind [34] and to Tensorflow [35], providing an interface to the normalizing-flow based neural network integration frameworks iFlow [20] and MadNIS [22]. To assess the performance of our new code, we combine it with the matrix-element generators in the general-purpose event generator Sherpa[8; 36] and devise a proof of concept for the computation of real-emission next-to-leading order corrections by adding a forward branching generator which makes use of the phase-space mappings of the Catani-Seymour dipole subtraction formalism [37; 38].
Footnote 1: The source code can be found at [https://gitlab.com/spice-mc/chili](https://gitlab.com/spice-mc/chili).
The outline of the paper is as follows: Section II discusses the algorithms used in our new generator. Section III presents performance measures obtained in combination with Comix[8] and Amegic [39], and Sec. IV includes a summary and outlook.
## II The algorithm
One of the most versatile approaches to phase-space integration for high-energy collider experiments is to employ the factorization properties of the \(n\)-particle phase-space integral [3]. Consider a \(2\to n\) scattering process, where we
label the incoming particles by \(a\) and \(b\) and outgoing particles by \(1\ldots n\). The corresponding \(n\)-particle differential phase-space element reads
\[\mathrm{d}\Phi_{n}(a,b;1,\ldots,n)=\Bigg{[}\prod_{i=1}^{n}\frac{\mathrm{d}^{3} \vec{p}_{i}}{(2\pi)^{3}\,2E_{i}}\,\Bigg{]}\;(2\pi)^{4}\delta^{(4)}\bigg{(}p_{a} +p_{b}-\sum_{i=1}^{n}p_{i}\bigg{)}\;. \tag{1}\]
Following Ref. [1], the full differential phase-space element can be reduced to lower-multiplicity differential phase-space elements as follows:
\[\mathrm{d}\Phi_{n}(a,b;1,\ldots,n)=\mathrm{d}\Phi_{n-m+1}(a,b;\pi,m+1,\ldots,n )\,\frac{\mathrm{d}s_{\pi}}{2\pi}\,\mathrm{d}\Phi_{m}(\pi;1,\ldots,m)\;, \tag{2}\]
where \(\pi\) indicates an intermediate pseudo-particle of virtuality \(s_{\pi}=p_{\pi}^{2}\). Equation (2) allows one to compose the full differential phase-space element from building blocks which correspond to a single t-channel production process and a number of s-channel decays, as depicted in Fig. 1. By repeated application of Eq. (2), all decays can be reduced to two-particle decays, with differential phase-space elements \(\mathrm{d}\Phi_{2}\). This allows one to match the structure of the phase-space integral onto the structure of the Feynman diagrams in the integrand at hand, a technique that is known as diagram-based integration.
### The t- and s-channel building blocks
In this subsection, we first describe the techniques to perform the integration using a pure t-channel differential phase-space element, \(\mathrm{d}\Phi_{n}(a,b;1,\ldots,n)\). The final-state momenta \(p_{1}\) through \(p_{n}\) can be associated with on-shell particles, or they can correspond to intermediate pseudo-particles whose virtuality is an additional integration variable. We start with the single-particle differential phase-space element in Eq. (1). It can be written in the form
\[\frac{\mathrm{d}^{3}\vec{p}_{i}}{(2\pi)^{3}\,2E_{i}}=\frac{1}{16\pi^{2}}\, \mathrm{d}p_{i,\perp}^{2}\,\mathrm{d}y_{i}\,\frac{\mathrm{d}\phi_{\mathrm{i} }}{2\pi}\;, \tag{3}\]
where \(p_{i,\perp}\), \(y_{i}\) and \(\phi_{i}\) are the transverse momentum, rapidity and azimuthal angle of momentum \(i\) in the laboratory frame, respectively. Many experimental analyses at hadron colliders require cuts on the transverse momentum and rapidity of jets and other analysis objects, which are easily implemented in this parametrization, leading to an excellent efficiency of the integration algorithm.
The remaining task is to implement the delta function in Eq. (1). This is achieved by combining the integral over one of the momenta, say \(p_{n}\), with the integration over the light-cone momentum fractions used to convolute the partonic cross section with the PDFs. We obtain
\[\begin{split}\mathrm{d}x_{a}\mathrm{d}x_{b}\,\mathrm{d}\Phi_{n}(a,b;1,\ldots,n)=&\;\frac{\mathrm{d}P_{+}\mathrm{d}P_{-}}{s}\,\Bigg{[}\prod_{i=1}^{n-1}\frac{1}{16\pi^{2}}\,\mathrm{d}p_{i,\perp}^{2}\,\mathrm{d}y_{i}\,\frac{\mathrm{d}\phi_{i}}{2\pi}\,\Bigg{]}\\ &\times\frac{\mathrm{d}^{4}p_{n}}{(2\pi)^{3}}\,\delta(p_{n}^{2}-s_{n})\Theta(E_{n})\;(2\pi)^{4}\delta^{(4)}\bigg{(}p_{a}+p_{b}-\sum_{i=1}^{n-1}p_{i}-p_{n}\bigg{)}\;,\end{split} \tag{4}\]
Figure 1: Example application of the phase-space factorization formula, Eq. (2). Particles 1 through 7 are produced in the collision of particles \(a\) and \(b\). Figure (a) represents a pure t-channel configuration, cf. Sec. II.1. In Fig. (b), the differential 7-particle phase-space element is factorized into the production of four particles, two of which are the pseudo-particles \(\{1,2\}\) and \(\{3,4,5\}\), which subsequently decay. In Fig. (c), the decay of \(\{3,4,5\}\) is again factorized into two consecutive decays.
where \(s\) is the hadronic center-of-mass energy, and \(P_{\pm}=P_{0}\pm P_{z}\) is defined using \(P=\sum_{i=1}^{n-1}p_{i}\). Changing the integration variables from \(P_{+}\) and \(P_{-}\) to \(s_{n}\) and \(y_{n}\), it is straightforward to evaluate the delta functions, and we obtain the final expression
\[\mathrm{d}x_{a}\mathrm{d}x_{b}\,\mathrm{d}\Phi_{n}(a,b;1,\ldots,n)=\frac{2\pi}{s}\left[\prod_{i=1}^{n-1}\frac{1}{16\pi^{2}}\,\mathrm{d}p_{i,\perp}^{2}\,\mathrm{d}y_{i}\,\frac{\mathrm{d}\phi_{i}}{2\pi}\right]\,\mathrm{d}y_{n}\;. \tag{5}\]
This form of the differential phase-space element is particularly suited for the production of electroweak vector bosons (\(W\), \(Z\) and \(\gamma\)) in association with any number of jets. However, it may not be optimal for phase-space generation when there are strong hierarchies in the transverse momenta of the jets, which may be better described by phase-space mappings similar to Fig. 1 (c).
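A schematic implementation of this generation step (our illustration, not the Chili source) builds the first \(n-1\) momenta, taken massless here, from sampled \((p_{i,\perp},y_{i},\phi_{i})\), closes the event with particle \(n\) of mass \(m_{n}\) at rapidity \(y_{n}\) via transverse-momentum balance, and reads off the light-cone fractions of Eq. (5):

```python
import numpy as np

def t_channel_point(pt, y, phi, y_n, m_n, sqrt_s):
    """Momenta (E, px, py, pz) for massless particles 1..n-1 plus particle n
    of mass m_n, and the light-cone fractions x_a, x_b."""
    momenta = [np.array([p * np.cosh(yi), p * np.cos(ph),
                         p * np.sin(ph), p * np.sinh(yi)])
               for p, yi, ph in zip(pt, y, phi)]
    pxy_n = -sum(m[1:3] for m in momenta)       # transverse-momentum balance
    mt_n = np.sqrt(m_n**2 + pxy_n @ pxy_n)      # transverse mass of particle n
    momenta.append(np.array([mt_n * np.cosh(y_n), *pxy_n, mt_n * np.sinh(y_n)]))
    P = sum(momenta)
    x_a = (P[0] + P[3]) / sqrt_s                # P_+ / sqrt(s)
    x_b = (P[0] - P[3]) / sqrt_s                # P_- / sqrt(s)
    return momenta, x_a, x_b
```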
The differential decay phase-space elements occurring in Fig. 1 (b) and (c) are easily composed from the corresponding expressions for two-body decays. In the frame of a time-like momentum \(P\), this differential phase-space element can be written as
\[\mathrm{d}\Phi_{2}(\{1,2\};1,2)=\frac{1}{16\pi^{2}}\frac{\sqrt{(p_{1}P)^{2}-p _{1}^{2}P^{2}}}{(p_{1}+p_{2})P}\,\mathrm{d}\cos\theta_{1}^{(P)}\mathrm{d}\phi_ {1}^{(P)}\;. \tag{6}\]
Typically, this is evaluated in the center-of-mass frame of the combined momentum, \(p_{1}+p_{2}\), where it simplifies to
\[\mathrm{d}\Phi_{2}(\{1,2\};1,2)=\frac{1}{16\pi^{2}}\frac{\sqrt{(p_{1}p_{2})^{ 2}-p_{1}^{2}p_{2}^{2}}}{(p_{1}+p_{2})^{2}}\,\mathrm{d}\cos\theta_{1}^{\{1,2\}} \,\mathrm{d}\phi_{1}^{\{1,2\}}\;. \tag{7}\]
Equations (5) and (7) form the basic building blocks of our algorithm.
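For illustration, a minimal sketch of the corresponding two-body decay step (assuming \(\cos\theta\) and \(\phi\) are sampled flat upstream) reads:

```python
import numpy as np

def boost_from_rest_frame(p, P):
    """Boost p, given in the rest frame of P, into the frame where P is given."""
    m = np.sqrt(P[0]**2 - P[1:] @ P[1:])
    bp = P[1:] @ p[1:] / m
    E = P[0] * p[0] / m + bp
    pvec = p[1:] + (bp / (P[0] + m) + p[0] / m) * P[1:]
    return np.array([E, *pvec])

def two_body_decay(P, m1, m2, cos_theta, phi):
    """Decay the time-like momentum P into masses m1 and m2, cf. Eq. (7)."""
    s = P[0]**2 - P[1:] @ P[1:]
    # decay momentum from the Kallen function lambda(s, m1^2, m2^2)
    pstar = np.sqrt((s - (m1 + m2)**2) * (s - (m1 - m2)**2)) / (2 * np.sqrt(s))
    sin_theta = np.sqrt(1 - cos_theta**2)
    p1 = np.array([np.sqrt(pstar**2 + m1**2),
                   pstar * sin_theta * np.cos(phi),
                   pstar * sin_theta * np.sin(phi),
                   pstar * cos_theta])
    p1 = boost_from_rest_frame(p1, P)
    return p1, P - p1
```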
### The multi-channel
An optimal integrator for a particular squared Feynman diagram would be composed of a combination of the t-channel map in Eq. (5) and potentially a number of s-channel maps in Eq. (7), as sketched for various configurations in Fig. 1. The complete integrand will almost never consist of a single Feynman diagram squared, and it is therefore more appropriate to combine various such integrators in order to map out different structures in the full integrand.2 Each of those mappings is conventionally called a phase-space "channel", and each channel is a valid phase-space integrator in its own right. They can be combined using the multi-channel technique, which was introduced in [11]. We refer the reader to the original publication for the details of this method. Here we will briefly describe how the individual channels are constructed in our integrator.
Footnote 2: An alternative option is to partition the integrand into terms which exhibit the structure of an individual diagram [6].
We begin by extracting the three-particle vertices from the interaction model. Given a set of external flavors, we can use the vertex information to construct all possible topologies of Feynman diagrams with the maximum number of propagators. For each topology, we apply the following algorithm: If an s-channel propagator is found, we use the factorization formula, Eq. (2), to split the differential phase-space element into a production and a decay part. This procedure starts with the external states and is repeated until no more factorization is possible. As the number of possible s-channel topologies grows factorially in many cases, our algorithm provides an option to limit the maximum number of s-channels that are implemented. This helps to tailor the integrator to the problem at hand and allows one to control the computational complexity.
Following standard practice, we generate the virtuality of the intermediate s-channel pseudo-particles using a Breit-Wigner distribution if the particle has a mass and width, or following a \(\mathrm{d}s/s^{\alpha}\) distribution (\(\alpha<1\)), if the particle is massless. The transverse momenta in Eq. (5) are generated according to \(\mathrm{d}p_{\perp}^{2}/(2p_{\perp,c}+p_{\perp})^{2}\), where \(p_{\perp,c}\) is an adjustable parameter that can be used to maximize efficiency, e.g. by setting it to the jet transverse momentum cut. The rapidities in Eq. (5) and the angles in Eq. (7) are generated using a flat prior distribution.
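For the two virtuality priors, the corresponding inverse-transform maps from a uniform random number \(u\in[0,1]\) read (a sketch under the stated distributions, not the library code):

```python
import numpy as np

def sample_breit_wigner(u, M, Gamma, s_min, s_max):
    """Virtuality s distributed as 1/((s - M^2)^2 + M^2 Gamma^2) on [s_min, s_max]."""
    r_min = np.arctan((s_min - M**2) / (M * Gamma))
    r_max = np.arctan((s_max - M**2) / (M * Gamma))
    return M**2 + M * Gamma * np.tan(r_min + u * (r_max - r_min))

def sample_massless(u, alpha, s_min, s_max):
    """Virtuality s distributed as ds/s^alpha (alpha < 1) on [s_min, s_max]."""
    a = 1.0 - alpha
    return (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)
```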
### Next-to-leading order calculations and dipole mappings
The integration of real-emission corrections in next-to-leading order QCD or QED calculations poses additional challenges for a phase-space integration algorithm. In order to achieve a local cancellation of singularities, subtraction
methods are typically employed in these calculations [37; 40]. This makes the behavior of the integrand less predictable than at leading order, and therefore complicates the construction of integration channels. Various approaches have been devised to deal with the problem. We adopt a solution that is based on the on-shell momentum mapping technique used in the Catani-Seymour dipole subtraction scheme [37; 38] and that has long been used in generators such as MCFM [32; 33; 41] and MUNICH [42].3
Footnote 3: We make this feature available only for use within Sherpa, but a future version of our stand-alone code will support it as well.
Following Ref. [37], there are four different types of local infrared subtraction terms that are used to make real-emission corrections and virtual corrections in NLO calculations separately infrared finite. They are classified according to the type of collinear divergence (initial state or final state) and the type of color spectator parton (initial state or final state). The massless on-shell phase-space mapping for the final-final configuration (FF) reads
\[\mathrm{d}\Phi_{n}^{(\mathrm{FF})}(a,b;1,\ldots,n)=\mathrm{d}\Phi_{n-1}(a,b;1,\ldots,\widetilde{ij},\ldots,\tilde{k},\ldots,n)\,\frac{2\tilde{p}_{ij}\tilde{p}_{k}}{16\pi^{2}}\,\mathrm{d}y_{ij,k}\mathrm{d}\tilde{z}_{i}\,\frac{\mathrm{d}\phi}{2\pi}\,(1-y_{ij,k})\;. \tag{8}\]
where
\[p_{i}^{\mu}=\tilde{z}_{i}\,\tilde{p}_{ij}^{\mu}+(1-\tilde{z}_{i})\,y_{ij,k} \,\tilde{p}_{k}^{\mu}+k_{\perp}^{\mu}\;,\qquad p_{k}^{\mu}=(1-y_{ij,k})\,\tilde {p}_{k}^{\mu}\;,\qquad p_{j}^{\mu}=\tilde{p}_{ij}+\tilde{p}_{k}-p_{i}-p_{k}\;, \tag{9}\]
and where \(k_{\perp}^{2}=-\tilde{z}_{i}(1-\tilde{z}_{i})y_{ij,k}\,2\tilde{p}_{ij}\tilde{p }_{k}\) is determined by the on-shell conditions.
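The inverse (clustering) direction of this map, which recovers the dipole variables and tilde momenta from a massless configuration \((p_{i},p_{j},p_{k})\), is compact enough to sketch directly:

```python
import numpy as np

def mdot(p, q):
    """Minkowski product with signature (+,-,-,-)."""
    return p[0] * q[0] - p[1:] @ q[1:]

def cluster_ff(pi, pj, pk):
    """Invert the FF mapping of Eqs. (8)-(9) for massless momenta."""
    pipj, pipk, pjpk = mdot(pi, pj), mdot(pi, pk), mdot(pj, pk)
    y = pipj / (pipj + pipk + pjpk)           # y_{ij,k}
    z = pipk / (pipk + pjpk)                  # \tilde{z}_i
    pk_t = pk / (1.0 - y)                     # \tilde{p}_k
    pij_t = pi + pj - y / (1.0 - y) * pk      # \tilde{p}_ij, massless by construction
    return y, z, pij_t, pk_t
```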
The massless on-shell phase-space mapping for the final-initial and initial-final configurations (FI/IF) reads
\[\mathrm{d}\Phi_{n}^{(\mathrm{FI/IF})}(a,b;1,\ldots,n)=\mathrm{d}\Phi_{n-1}(\tilde{a},b;1,\ldots,\widetilde{ij},\ldots,n)\,\frac{2\tilde{p}_{ij}p_{a}}{16\pi^{2}}\,\mathrm{d}\tilde{z}_{i}\mathrm{d}x_{ij,a}\,\frac{\mathrm{d}\phi}{2\pi}\;. \tag{10}\]
where
\[p_{i}^{\mu}=\tilde{z}_{i}\,\tilde{p}_{ij}^{\mu}+(1-\tilde{z}_{i})\,\frac{1-x_{ij,a}}{x_{ij,a}}\;\tilde{p}_{a}^{\mu}+k_{\perp}^{\mu}\;,\qquad p_{a}^{\mu}=\frac{1}{x_{ij,a}}\,\tilde{p}_{a}^{\mu}\;,\qquad p_{j}^{\mu}=\tilde{p}_{ij}-\tilde{p}_{a}+p_{a}-p_{i}\;, \tag{11}\]
and where \(k_{\perp}^{2}=-\tilde{z}_{i}(1-\tilde{z}_{i})(1-x_{ij,a})/x_{ij,a}\,2\tilde{p} _{ij}\tilde{p}_{a}\).
The massless on-shell phase-space mapping for the initial-initial configurations (II) reads
\[\mathrm{d}\Phi_{n}^{(\mathrm{II})}(a,b;1,\ldots,n)=\mathrm{d}\Phi_{n-1}(\widetilde{a},b;\tilde{1},\ldots,\tilde{n})\,\frac{2p_{a}p_{b}}{16\pi^{2}}\,\mathrm{d}\tilde{v}_{i}\mathrm{d}x_{i,ab}\,\frac{\mathrm{d}\phi}{2\pi}\;. \tag{12}\]
where
\[p_{i}^{\mu}=\frac{1-x_{i,ab}-\tilde{v}_{i}}{x_{i,ab}}\,\tilde{p}_{a}^{\mu}+ \tilde{v}_{i}\,p_{b}^{\mu}+k_{\perp}^{\mu}\;,\qquad p_{a}^{\mu}=\frac{1}{x_{i, ab}}\,\tilde{p}_{ai}^{\mu}\;,\qquad p_{j}^{\mu}=\Lambda_{\,\nu}^{\mu}(K,\tilde{K}) \tilde{p}_{j}^{\nu}\quad\forall j\in\{1,\ldots,n\},j\neq i\;, \tag{13}\]
and where \(k_{\perp}^{2}=-(1-x_{i,ab}-\tilde{v}_{i})/x_{i,ab}\,\tilde{v}_{i}\,2\tilde{p}_{ai}p_{b}\). The transformation, \(\Lambda_{\,\nu}^{\mu}(K,\tilde{K})\), is defined in Sec. 5.5 of Ref. [37]. The three above mappings are sufficient to treat any real-emission correction in massless QCD. We infer the possible dipole configurations from the flavor structure of the process and combine all possible mappings into a multi-channel integrator [11].
### Combination with normalizing-flow based integrators
With the development of modern machine learning methods, new techniques for adaptive Monte-Carlo integration have emerged, which are based on the extension [43; 44] of a nonlinear independent components estimation technique [45; 46], also known as a normalizing flow. They have been used to develop integration algorithms based on existing multi-channel approaches [19; 21; 22; 25]. One of the main obstacles to scaling such approaches to high multiplicity has been the fact that the underlying phase-space mappings are indeed multi-channels, which induces hyperparameters that increase the dimensionality of the optimization problem. Here we propose a different strategy. We observe that the basic t-channel integration algorithm implementing Eq. (5) requires the minimal amount of random numbers, and shows a good efficiency (cf. Sec. III). It is therefore ideally suited to provide a basic mapping of the \(n\)-particle phase space at hadron colliders into a \((3n-4+2)\)-dimensional unit hypercube, required for combination with normalizing-flow based integrators. We provide Python bindings in Chili via nanobind [34] and a dedicated Tensorflow [35] interface. This allows the use of the iFlow [20] and MadNIS [22] frameworks to test this idea, and to evaluate the performance of this novel algorithm.
## III Performance benchmarks
In this section we present first numerical results obtained with our new integrator, Chili. We have interfaced the new framework with the general-purpose event generator Sherpa[36, 47, 48], which is used to compute the partonic matrix elements and the parton luminosity with the help of Comix[8] and Amegic [39]. To allow performance tests from low to high particle multiplicity, we perform color sampling. This affects the convergence rate, and we note that better MC uncertainties could in principle be obtained for color-summed computations, but at the cost of much larger computing time at high multiplicity. The performance comparison between Sherpa and Chili would, however, be unaffected. We use the NNPDF 3.0 PDF set [49] at NNLO precision, and the corresponding settings of the strong coupling, i.e. \(\alpha_{s}(m_{z})=0.118\) and running to 3-loop order. Light quarks, charm and bottom quarks are assumed to be massless, and we set \(m_{t}=173.21\) GeV. The electroweak parameters are determined in the complex mass scheme using the inputs \(\alpha(m_{Z})=1/128.8\), \(m_{W}=80.385\) GeV, \(m_{Z}=91.1876\) GeV, \(m_{h}=125\) GeV and \(\Gamma_{W}=2.085\) GeV, \(\Gamma_{Z}=2.4952\) GeV. We assume incoming proton beams at a hadronic center-of-mass energy of \(\sqrt{s}=14\) TeV. To implement basic phase-space cuts, we reconstruct jets using the anti-\(k_{T}\) jet algorithm [50] with \(R=0.4\) in the implementation of FastJet [51] and require \(p_{\perp,j}\geq 30\) GeV and \(|y_{j}|\leq 6\). Photons are isolated from QCD activity based on Ref. [52] with \(\delta_{0}\)=0.4, \(n\)=2 and \(\epsilon_{\gamma}\)=2.5% and are required to have \(p_{\perp,\gamma}\geq 30\) GeV. All results presented in this section are obtained with a scalable version of our new integrator using parallel execution on CPUs with the help of MPI.
Table 1 shows a comparison between MC uncertainties and event generation efficiencies in leading-order calculations, obtained with the recursive phase-space generator in Comix and with Chili. To improve the convergence of the integrals we use the Vegas [12] algorithm, which is implemented independently in both Sherpa and Chili. The MC uncertainties are given after optimizing the adaptive integrator with 1.2 million non-zero phase-space points and evaluation of the integral with 6 million non-zero phase-space points. We employ the definition of event generation efficiency in Ref. [21], and we evaluate it using 100 replicas of datasets leading to 100 unweighted events each. We test the production of \(W^{+}\) and \(Z\) bosons with leptonic decay, on-shell Higgs boson production, top-quark pair production, direct photon production and pure jet production. These processes are omnipresent in background simulations at the Large Hadron Collider (LHC), and are typically associated with additional light jet activity due to the large phase space. Accordingly, we test the basic process with up to four additional light jets. In single boson production we do not include the trivial process without any light jets. We observe that the performance of our new integrator is comparable to, and in many cases slightly better than, the performance of the recursive phase-space generator in Sherpa.
\begin{table}
[Table 1 (tabulated values garbled in extraction): relative Monte-Carlo uncertainties, \(\Delta\sigma/\sigma\) (6M points), and event generation efficiencies, \(\eta\) (100 events), for Sherpa, Chili and Chili (basic), for \(W^{+}\), \(Z\), \(h\), \(t\bar{t}\), \(\gamma\) and pure-jet production with additional light jets.]
\end{table}
This is both encouraging and somewhat surprising, given the relative simplicity of our new approach, which does not make use of repeated t-channel factorization. Due to the uniform jet cuts, we even obtain similar performance when using the minimal number of s-channel parametrizations. This setup is labeled as Chili (basic) in Tab. 1. The results suggest that a single phase-space parametrization may in many cases be sufficient to compute cross sections and generate events at high precision, which is advantageous in terms of computing time and helps to scale the computation to higher multiplicity processes. Moreover, it circumvents the problems related to multi-channel integration discussed in [21; 22] when combining our integrator with neural network based adaptive random number mapping techniques. We note that this configuration is also used by MCFM [30].
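For reference, the efficiency figure of merit quoted in these tables can be sketched as follows (schematic only; Ref. [21] defines the reference weight through a bootstrapped maximum, which we do not reproduce in detail):

```python
import numpy as np

def unweighting_efficiency(weights, quantile=1.0):
    """eta = <w> / w_ref for a sample of event weights; quantile=1.0 uses
    the strict maximum, values slightly below 1 a 'reduced' maximum."""
    w = np.asarray(weights)
    return w.mean() / np.quantile(w, quantile)
```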
Table 2 shows a comparison similar to Tab. 1, but in addition we apply a cut on the leading jet, requiring \(p_{\perp,j1}>300\) GeV.
\begin{table}
[Tables 2 and 3 (tabulated values garbled in extraction): Table 2 repeats the comparison of Table 1 for a boosted selection with \(p_{\perp,j1}>300\) GeV and, in the lower right sub-table, a VBF-like selection with \(m_{j1,j2}\geq 600\) GeV; Table 3 compares MC uncertainties and cut efficiencies for the B-like and R-like contributions of next-to-leading order QCD calculations.]
\end{table}
This configuration tests the regime where the hard system receives a large boost, and there is usually a strong hierarchy between the jet transverse momenta. In these scenarios we expect the complete Chili integrator to outperform the basic configuration with a t-channel only, which is confirmed by the comparison in Tab. 2. The lower right sub-table shows a configuration where we do not apply the additional transverse momentum cut, but instead use a large di-jet invariant mass cut, typical for VBF searches and measurements, \(m_{j1,j2}\geq 600\) GeV.
Table 3 shows a comparison of MC uncertainties and cut efficiencies for various next-to-leading order QCD computations. The shorthand B-like stands for the Born plus virtual plus integrated IR counterterms in the Catani-Seymour dipole subtraction method. The shorthand R-like stands for the IR-subtracted real-emission corrections using Catani-Seymour dipole subtraction. These calculations exhibit slightly different structures than at leading order in QCD, cf. [41]. The real-emission integrals in particular test the efficiency of the dipole mapping described in Sec. II.3. It can be seen that our new algorithm has a much better cut efficiency than the recursive phase-space generator in Sherpa, which is again advantageous in terms of overall computing time. The MC uncertainty for a given number of phase-space points is reduced at low jet multiplicity, and generally comparable to the recursive phase-space generator. Given the simplicity of the Chili approach, this is a very encouraging result for the development of NLO simulations on simpler computing architectures. If a speedup of the matrix-element calculation is obtained, for example through analytic expressions [53], accelerated numerical evaluation [54; 55; 56; 57] or the usage of surrogate methods [58; 59], then the linear scaling of the basic Chili generator at leading order, and the polynomial scaling of the dipole-based generator, will become an important feature.
Table 4 shows a comparison of the Vegas-based Chili integrator and the neural-network assisted integrator for color summed matrix elements. We use the single channel configuration of MadNIS [22] (which is consistent with iFlow [20]) in combination with Chili. The network is set up with 6 rational quadratic spline coupling layers [60] with random permutations, each consisting of a neural network with 2 layers with 16 nodes each using a leaky ReLU activation function. The network is trained using 20 epochs of training with 100 batches of 1000 events per epoch with the variance as the loss term as in Ref. [22]. The learning rate starts at 0.001 and decays each epoch as \(l_{0}/(1+l_{d}s/d_{s})\), where \(l_{0}\) is the initial learning rate, \(l_{d}=0.01\) is the decay rate, \(s\) is the number of steps, and \(d_{s}=100\) is the number of steps before applying the decay. Optimizing these parameters to achieve peak performance is beyond the scope of this project and can be done in a similar fashion as in Ref. [21].
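Schematically, the two training ingredients quoted above are the inverse-decay learning-rate schedule and the variance loss; a minimal sketch (ours, not the MadNIS code) reads:

```python
import numpy as np

def learning_rate(step, l0=1e-3, l_decay=0.01, decay_steps=100):
    """l_0 / (1 + l_d * s / d_s) with the hyperparameters quoted in the text."""
    return l0 / (1.0 + l_decay * step / decay_steps)

def variance_loss(f_vals, q_vals):
    """Sample estimate of Var(f/q) for phase-space points drawn from the flow
    density q; it vanishes when q is proportional to the integrand f."""
    w = f_vals / q_vals
    return np.mean(w**2) - np.mean(w)**2
```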
Figures 2 and 3 show the weight distributions from 6 million phase-space points after training for the simplest and next-to-simplest of our test processes. We compare the recursive integrator of Comix, Chili with Vegas and Chili in combination with MadNIS. All results have been computed using color summed matrix elements. It can be seen that the normalizing flow based integrator yields a very narrow weight distribution in most cases, leading to the excellent unweighting efficiencies shown in Tab. 4. However, the default Comix integrator leads to a sharper upper edge of the weight distribution in the more complex scenarios of Fig. 3, which is favorable for unweighting. This indicates that the multi-channel approach with additional s-channels is favorable at high multiplicities. We will investigate this effect further using the technology developed in Ref. [22]. Furthermore, while the variance loss is optimal for achieving a narrow weight distribution, larger weight terms are not significantly penalized. This in turn leads to a less sharp upper edge in the weight distribution. Additionally, the number of points required to reach optimal performance for the normalizing flow is significantly higher than for the Vegas based approaches, as demonstrated in Ref. [20].
\begin{table}
\begin{tabular}{l|c|c|c|c}
Process & \multicolumn{2}{c|}{Chili} & \multicolumn{2}{c}{Chili (Basic)+NF} \\
(color sum) & \(\Delta\sigma/\sigma\) (6M pts) & \(\eta\) (100 evts) & \(\Delta\sigma/\sigma\) (6M pts) & \(\eta\) (100 evts) \\
\hline
\(W^{+}\)+1j & 0.4\% & \(2\times 10^{-1}\) & 0.2\% & \(4\times 10^{-1}\) \\
\(W^{+}\)+2j & 0.7\% & \(4\times 10^{-2}\) & 0.7\% & \(5\times 10^{-2}\) \\
\(h\)+1j & 0.2\% & \(5\times 10^{-1}\) & 0.05\% & \(8\times 10^{-1}\) \\
\(h\)+2j & 0.3\% & \(1\times 10^{-1}\) & 0.3\% & \(2\times 10^{-1}\) \\
\(\gamma\)+1j & 0.6\% & \(2\times 10^{-1}\) & 0.1\% & \(5\times 10^{-1}\) \\
\(\gamma\)+2j & 1.8\% & \(5\times 10^{-3}\) & 1.4\% & \(9\times 10^{-2}\) \\
\(t\bar{t}\)+0j & 0.1\% & \(6\times 10^{-1}\) & 0.05\% & \(7\times 10^{-1}\) \\
\(t\bar{t}\)+1j & 0.2\% & \(3\times 10^{-1}\) & 0.3\% & \(2\times 10^{-1}\) \\
2 jets & 0.2\% & \(4\times 10^{-1}\) & 0.08\% & \(6\times 10^{-1}\) \\
3 jets & 0.5\% & \(6\times 10^{-2}\) & 0.7\% & \(3\times 10^{-2}\) \\
\end{tabular}
\end{table}
Table 4: Relative Monte-Carlo uncertainties, \(\Delta\sigma/\sigma\), and unweighting efficiencies, \(\eta\), in leading-order calculations with color summed matrix elements. The center-of-mass energy is \(\sqrt{s}=14\) TeV, and jets are defined using the anti-\(k_{T}\) algorithm with \(p_{\perp,j}\geq 30\) GeV and \(|y_{j}|\leq 6\). For details see the main text.
Figure 2: Weight distributions for the lowest multiplicity processes found in Tab. 4. Each curve contains 6 million events. The Comix integrator is shown in red, Chili with Vegas is shown in blue, and Chili with normalizing flows is shown in green. The results for \(W+1j\) are shown in the upper right, \(Z+1j\) in the upper left, the middle row consists of \(h+1j\) and \(t\bar{t}+0j\), and the bottom row shows \(\gamma+1j\) and dijets, respectively.
Figure 3: Same as Fig. 2, but with an additional jet for each process.
A study of the effect of the choice of loss function and other hyper-parameters involved in the normalizing-flow approach is left to future work, with the aim of improving the unweighting efficiency at higher multiplicities and the convergence of the integrator.
## IV Outlook
We have presented a new phase-space generator that combines various existing techniques for hadron collider phase-space integration into a simple and efficient algorithm. We have implemented these techniques in a scalable framework for CPU computing. Several extensions of this framework are in order: It should be ported to allow the usage of GPUs. Computing platforms other than CPUs and GPUs could be enabled with the help of Kokkos [61] or similar computing models. This becomes particularly relevant in light of recent advances in computing matrix elements on GPUs using portable programming models [54; 55; 56; 57]. In addition, the techniques for real-emission corrections should be extended beyond Sherpa, in order to make our generator applicable to a wider range of problems. Lastly, we plan to further explore the combination of our new techniques with existing neural-network based integration methods.
###### Acknowledgements.
We thank John Campbell for many stimulating discussions and his support of the project. This research was supported by the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. The work of F.H., S.H. and J.I. was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program, grant "HPC framework for event generation at colliders". F.H. acknowledges support by the Alexander von Humboldt foundation. E.B. and M.K. acknowledge support from BMBF (contract 05H21MGCAB). Their research is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 456104544; 510810461.
|
2301.09448 | Quantum Configuration and Phase Spaces: Finsler and Hamilton Geometries | In this paper, we review two approaches that can describe, in a geometrical
way, the kinematics of particles that are affected by Planck-scale departures,
named Finsler and Hamilton geometries. By relying on maps that connect the
spaces of velocities and momenta, we discuss the properties of configuration
and phase spaces induced by these two distinct geometries. In particular, we
exemplify this approach by considering the so-called $q$-de Sitter-inspired
modified dispersion relation as a laboratory for this study. We finalize with
some points that we consider as positive and negative ones of each approach for
the description of quantum configuration and phases spaces. | Saulo Albuquerque, Valdir B. Bezerra, Iarley P. Lobo, Gabriel Macedo, Pedro H. Morais, Ernesto Rodrigues, Luis C. N. Santos, Gislaine Varão | 2023-01-23T14:16:50Z | http://arxiv.org/abs/2301.09448v1 | # Quantum Configuration and Phase Spaces: Finsler and Hamilton Geometries
###### Abstract
In this paper, we review two approaches that can describe, in a geometrical way, the kinematics of particles that are affected by Planck-scale departures, named Finsler and Hamilton geometries. By relying on maps that connect the spaces of velocities and momenta, we discuss the properties of configuration and phase spaces induced by these two distinct geometries. In particular, we exemplify this approach by considering the so-called \(q\)-de Sitter-inspired modified dispersion relation as a laboratory for this study. We finalize with some points that we consider as positive and negative aspects of each approach for the description of quantum configuration and phase spaces.
## I Introduction
Since the original works by Bronstein [1] that demonstrated uncertainty in the localization of events when geometrical degrees of freedom are quantized, it has been argued that attempts to formulate quantum gravity in a differentiable manifold endowed with smooth geometric quantities would not be an interesting path to follow if one aims to pursue a fundamental approach to this problem. Attempts in this direction have accumulated over the years, having prominent representatives such as loop quantum gravity (LQG) [2] and causal dynamical triangulation [3]. These approaches to quantum gravity predict or describe several effects that should be manifest at the Planckian regime of length and energy, such as the discretization of geometry, which requires a language that obviously departs from the usual Riemannian construction of general relativity. Despite the elegance of such approaches, with current technology we are far from being able to concretely address the regime in which such discretization would become evident. Nevertheless, the notion that spacetime could effectively behave like a medium formed by "atoms of space" has led to a rich phenomenological approach to quantum gravity, which by encoding generic departures from relativistic equations, can describe common predictions expected to be present at an intermediate stage between classical and quantum gravity. Such an approach is encompassed in the area of quantum gravity phenomenology, which addresses a myriad of effects beyond the one described in this paragraph, as can be seen in Ref. [4], and in particular, has found in multimessenger astronomy a fruitful environment to be explored [5].
Usually, this idea is considered in the regime where the test-particle approximation is valid, that is, where gravitational and quantum effects are simultaneously faint, described by the limits of the gravitational constant, \(G\to 0\), and of the reduced Planck constant, \(\hbar\to 0\), while the Planck energy, \(E_{P}=\sqrt{c^{5}\hbar/G}\), remains finite, with \(c\) the speed of light. This deformed "Minkowski limit," which presents departures from Minkowski spacetime's structure, has been suggested by various quantum gravity proposals, such as the linearization of the hypersurface deformation algebra inspired by LQG [6; 7; 8] and noncommutative geometry [9; 10; 11; 12] (for more details on this Minkowski limit, see Section 3.1.1 of Ref. [4], and for more references on other theoretical approaches in which such a limit emerges, please refer to Section 2.2 of Ref. [5]). It is expected that the path between the differentiable Riemannian description of special (and general) relativity and the complete quantum gravity theory should pass through an intermediate regime, in which one has departures from the Riemannian character of spacetime but still has geometric features that could describe a bottom-up phenomenology.
Furthermore, geometry plays an important role in the description of principles that have guided the developments of relativistic theories; for example, the principle of covariance is manifest through the use of tensorial equations of motion, the local relativity principle is a physical manifestation of having local equations of motion invariant under the Poincare group (which is the group of isometries of Minkowski space), the equivalence principle of general relativity is manifest in the fact that the motion of free particles is realized through geodesics, and the clock postulate can be expressed by stating that an observer measures its proper time by the arc-length of its own trajectory.
An important part of quantum gravity phenomenology is devoted to the question of whether, in the aforementioned Minkowski limit, the Lorentz invariance, and consequently, the local relativity principle, is preserved or broken due to Planck-scale effects [13]. As is known, a length/energy scale is not invariant under Lorentz transformations, which implies that either a quantum gravity scale breaks the equivalence of inertial frames in the aforementioned Minkowski limit, or the Lorentz or Poincare group only describes a low energy/large distance approximation of a deeper transformation between inertial frames. The former possibility is known as a Lorentz invariance violation (LIV) scenario [14; 15], and the latter is known as doubly (or deformed) special relativity (DSR) [16; 17]. As the geometrization of special relativity, due to Minkowski, paved the way to more fundamental descriptions of nature, we shall follow a similar path, but of geometrizing DSR.
Geometric descriptions of DSR through non-commutative geometry are known [9; 10; 11; 12], but here we review some continuous, differentiable ways of exploring non-Riemannian degrees of freedom and the possibilities for preserving the aforementioned principles. This way, we critically analyze two extensions of Riemannian geometry that are capable of describing aspects of an emergent "quantum configuration and phase spaces" that preserve the intuition of those principles: they are Finsler and Hamilton geometries. Finsler geometry is originally related to the space of events and velocities (for this reason we refer to a quantum configuration space), and Hamilton geometry originally described the space of events and momenta (for this reason, we call it a quantum phase space). In this paper, we review the phenomenological opportunities that emerge from these approaches and the interplay between them. We also summarize the utility of each of these geometries and their limitations in the current scenario.
We should also stress that the approaches described in this review refer to configuration and phase spaces probed by a single particle. The geometry probed by a multi-particle system and its interplay with the Finsler and Hamilton languages (or even geometries that go beyond them) should still be further explored; in this endeavor, the intuition gained from the relative locality framework [18] would possibly play a prominent role.
The paper is organized as follows. Section II revisits the origin of the idea of describing the effective spacetime probed by a particle that propagates through a modified dispersion relation (MDR) by the proposal of rainbow metrics.
Section III revisits how this general idea is realized by the use of Finsler geometry in the tangent bundle, whose dual version in the cotangent bundle is discussed in Section IV, which is illustrated by considering the curved non-trivial case of \(q\)-de Sitter-inspired Finsler geometry. Section V considers the situation of deriving the geometry of the cotangent bundle, and, in Section VI, its dual tangent bundle formalization defined by Hamilton geometry is considered, which is illustrated by the \(q\)-de Sitter case. In Section VII, we comparatively discuss these two approaches and highlight points that we consider as useful as well as their limitations. Finally, some important remarks are drawn in Section VIII. Throughout the paper, a system of units with \(c=\hbar=1\) is used, so that the Planck length is the inverse of the Planck energy: \(\sqrt{G}=\ell=E_{\mathrm{P}}^{-1}\).
## II Preliminaries on rainbow geometries
As described above, over the years, the intuition that spacetime would behave like material media, where instead of atoms of matter, one would have atoms of spacetime, has been solidified through some approaches of quantum gravity. Just as occurs in matter, in which one does not need to know the specific details of the granular structure of a given medium to study the propagation of particles through it, in spacetime, one can build phenomenology-inspired ways of modeling how elementary particles interact with discrete gravitational degrees of freedom while traveling through space, a so-called "in-vacuum dispersion." One could say that the most popular way of doing this is through the assumption that particles would obey a modified dispersion relation, whose corrections are given perturbatively by powers of the quantum gravity scale, which we could assume as being of the order of Planck units. The dispersion relation furnishes the group velocity of waves and defines the trajectory that on-shell particles follow from the Hamilton equations. Actually, when the interplay between the presence of amplifiers of observables and the uncertainties of observations allows us to constrain this parameter at a level close to its Planckian version, we say that we are at Planck-scale sensitivity [4].
Such behavior also happens in meta-materials [19], in which it is possible to describe the motion of particles through them by geodesics in a given geometry; it also appears in the motion of a charged particle in a pre-metric formulation of electromagnetism [20], in the description of seismic waves [21], etc.; for a review, see Ref. [22]. Additionally, one could wonder if the motion of particles, determined by Planck-scale modified dispersion relations, could also be described by geodesics of a non-Riemannian geometry. Besides, the dispersion relation itself is usually determined by the norm of the 4-momentum measured by a Riemannian metric, which also determines the symmetries observed by measurements in that spacetime.
This intuition was realized early on by the so-called "rainbow geometries" [23], idealized by Joao Magueijo and Lee Smolin, who aimed to extend the DSR formulation proposed by them in Ref. [17] to curved spacetimes. In that case, the way found to express local modified dispersion relations through a norm consisted in absorbing functions
of the particle's energy divided by Planck energy, \(\epsilon=E/E_{\rm P}\), such as \(f(\epsilon)\) and \(g(\epsilon)\), which would appear in the MDR that follows:
\[m^{2}=f^{2}(\epsilon)E^{2}-g^{2}(\epsilon)|\vec{p}|^{2}\,, \tag{1}\]
(with the three-momentum \(\vec{p}\)) into the definition of new spacetime tetrads, \(\tilde{e}_{(0)}^{\ \ \mu}=f^{-1}(\epsilon)\,e_{(0)}^{\ \ \mu}\) and \(\tilde{e}_{(k)}^{\ \ \mu}=g^{-1}(\epsilon)\,e_{(k)}^{\ \ \mu}\), in terms of which the energy-dependent metric \(\tilde{g}^{\mu\nu}(\epsilon)=\eta^{ab}\tilde{e}_{(a)}^{\ \ \mu}\tilde{e}_{(b)}^{\ \ \nu}\) casts the MDR (1) in the quadratic form \(m^{2}=\tilde{g}^{\mu\nu}(\epsilon)p_{\mu}p_{\nu}\).
of Finsler spaces (for a historical perspective on Finsler geometry, we refer the reader to the Preface of Ref. [38] and references therein). The case of pseudo-Finsler geometries, as an arena for describing spacetime, has been recently discussed [39; 40], where, for instance, different definitions are presented and important theorems regarding their causal structure, among other issues, are being derived [41].
In Section II, a glimpse of the non-Riemannian nature of spacetime was seen to emerge as a manifestation of the quantization of gravitational degrees of freedom. Actually, as one can anticipate, the non-quadratic, i.e., non-Pythagorean nature of a dispersion relation is connected to a possible Finslerian nature of spacetime through an intermediate step that connects the kinematics of particles in a Hamiltonian to a Lagrangian formulation. Indeed, the MDR corresponds to a Hamiltonian constraint that physical particles supposedly obey, in such a way that the trajectories of free particles induced by the deformed Hamiltonian capture the propagation of a particle through a quantized spacetime. For this reason, the Helmholtz action associated with such a particle is naturally given by the functional,
\[S[x,p,\lambda]=\int d\mu(\dot{x}^{\alpha}p_{\alpha}-\lambda f(H(x,p),m))\,, \tag{4}\]
where the dot denotes differentiation with respect to the parameter \(\mu\), \(p_{\mu}\) is the particle's momenta, \(f\) is a function that is null if the dispersion relation is satisfied, namely, \(H(x,p)=m\), and \(\lambda\) is a Lagrange multiplier. This is a premetric formulation that is actually defined in the space \(T^{*}M\times\mathbb{R}\), where \(T^{*}M\) is the phase space of analytical mechanics or cotangent bundle. In order to find an arc-length, and consequently, a geometric structure, one needs to calculate an equivalent Lagrangian defined in the configuration space or the tangent bundle \(TM\) described by points and velocities (such an observation was firstly presented in Ref. [42]). The algorithm for doing so is as follows [43]:
1. variation with respect to \(\lambda\) enforces the dispersion relation;
2. variation with respect to \(p_{\mu}\) yields an equation \(\dot{x}^{\mu}=\dot{x}^{\mu}(p,\lambda)\), which must be inverted to obtain \(p_{\mu}(x,\dot{x},\lambda)\) to eliminate the momenta \(p_{\mu}\) from the action;
3. using \(p_{\mu}(x,\dot{x},\lambda)\) in the dispersion relation, one can solve for \(\lambda(x,\dot{x})\); and
4. finally, the desired length measure is obtained as \(S[x]=S[x,p(x,\dot{x},\lambda(x,\dot{x})),\lambda(x,\dot{x})]_{H}\).
This is a Legendre transformation, whose conditions of existence and capability of providing a physical framework are discussed in Refs. [44; 45]. These formal conditions are always guaranteed when one considers deformations at the perturbative level. This is crucial because the above algorithm cannot be applied in practice if it is not possible to invert the velocity function to find the momenta as functions of the other variables. In general, this cannot be done, especially for complicated dispersion relations, such as those that depend on sums of hyperbolic functions [46]. Anyway, since quantum gravity phenomenology is usually concerned with first order effects, which are those attainable by experiments nowadays, we shall concentrate on the perturbative level in order to derive our conclusions.
For example, if this algorithm is applied to a Hamiltonian of the form,
\[H(x,p)=g(p,p)+\varepsilon h(x,p)\,, \tag{5}\]
where \(g(p,p)=g^{ab}(x)p_{a}p_{b}\) is an undeformed dispersion relation, \(h(x,p)\) is a function of spacetime points and momenta that depends on the model under consideration, and \(\varepsilon\) is the perturbation parameter, usually a function of the energy scale of the deformation (such as the Planck or quantum gravity length scale). As shown in Ref. [43], after the Legendre transformation, the equivalent action takes the form,
\[S[x]=m\int d\mu\sqrt{g(\dot{x},\dot{x})}\left(1-\varepsilon\frac{h(x,\bar{p}( x,\dot{x}))}{2m^{2}}\right)\,, \tag{6}\]
where \(\bar{p}_{a}(x,\dot{x})=m\dot{x}_{a}/\sqrt{g(\dot{x},\dot{x})}\). In particular, when \(h\) is a polynomial function of the momenta of the form (the index is shifted, \(n\to n+2\), in comparison with Ref. [43], such that \(n\) now corresponds to the power of the Planck length in the MDR),
\[h(x,p)=h^{\mu_{1}\mu_{2}....\mu_{n+2}}(x)p_{\mu_{1}}p_{\mu_{2}}...p_{\mu_{n+2 }}\,, \tag{7}\]
and \(\varepsilon=\ell^{n}\), one finds an action of the form,
\[S[x]=m\int d\mu\sqrt{g(\dot{x},\dot{x})}\left(1-(\ell m)^{n}\frac{h_{\mu_{1} \mu_{2}....\mu_{n+2}}(x)\dot{x}^{\mu_{1}}\dot{x}^{\mu_{2}}...\dot{x}^{\mu_{n+2 }}}{2g(\dot{x},\dot{x})^{\frac{n+2}{2}}}\right)\,, \tag{8}\]
where we lowered the indices of \(h\) with the components of \(g\). The connection between the mechanics of a free particle and geometry takes place when the above expression is identified with the arc-length functional, \(s[x]\), of a given geometry, i.e., \(s[x]=S[x]/m\). Such an identification makes sense if we want to state that the trajectories of free particles are extremizing curves or geodesics of a given geometry; it is related to the preservation of the equivalence principle even in this Planck-scale-deformed scenario.
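To make the procedure concrete, the following sympy sketch applies the four-step algorithm above to a hypothetical cubic deformation, \(H(p)=p_{0}^{2}-p_{1}^{2}+\varepsilon p_{0}p_{1}^{2}\), in 1+1D Minkowski space (a toy choice of ours, not taken from the cited references), and checks the outcome against the general first-order result (6):

```python
import sympy as sp

# Sketch: the 4-step algorithm applied to a hypothetical cubic deformation,
# H(p) = p0^2 - p1^2 + eps*p0*p1^2, in 1+1D Minkowski space, to first order in eps.
eps, m = sp.symbols('epsilon m', positive=True)
lam, a, b, l1 = sp.symbols('lambda a b lambda_1')
xd0, xd1 = sp.symbols('xdot0 xdot1', positive=True)
P0, P1 = sp.symbols('P0 P1')

H = P0**2 - P1**2 + eps*P0*P1**2

# Step 2: invert xdot^a = lambda*dH/dp_a perturbatively, p = p^(0) + eps*p^(1)
sub = {P0: xd0/(2*lam) + eps*a, P1: -xd1/(2*lam) + eps*b}
eqs = [sp.expand((lam*sp.diff(H, P).subs(sub) - xd).series(eps, 0, 2).removeO()).coeff(eps)
       for P, xd in [(P0, xd0), (P1, xd1)]]
ab = sp.solve(eqs, [a, b])

# Step 3: solve the mass-shell condition H = m^2 for lambda = lambda_0 + eps*lambda_1
u = sp.sqrt(xd0**2 - xd1**2)                 # sqrt(g(xdot, xdot)) for Minkowski g
Hos = H.subs(sub).subs(ab).subs(lam, u/(2*m) + eps*l1)
lam1 = sp.solve(sp.expand(Hos.series(eps, 0, 2).removeO()).coeff(eps), l1)[0]

# Step 4: on the constraint surface the action density reduces to xdot^a p_a
p0 = sub[P0].subs(ab).subs(lam, u/(2*m) + eps*lam1)
p1 = sub[P1].subs(ab).subs(lam, u/(2*m) + eps*lam1)
Ldens = (xd0*p0 + xd1*p1).series(eps, 0, 2).removeO()

# compare with the general first-order result (6), with pbar_a = m*xdot_a/u
pbar0, pbar1 = m*xd0/u, -m*xd1/u
target = m*u*(1 - eps*pbar0*pbar1**2/(2*m**2))
print(sp.simplify(sp.expand(Ldens - target)))  # -> 0
```

The printed difference vanishes, confirming that, for this toy choice of \(h=p_{0}p_{1}^{2}\), the algorithm reproduces Equation (6).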
In this case, the spacetime in which a particle subject to a MDR propagates is described by an arc-length functional that generalizes the one of Riemannian geometry and is given by a function \(F(x,\dot{x})\) that is 1-homogeneous in the velocity \(\dot{x}\), such that the arc-length is indeed parametrization-invariant, as it must be:
\[s[x]=\int F(x,\dot{x})d\mu\,. \tag{9}\]
This is the kind of scenario envisaged by Riemann in his dissertation, and explored by Finsler, that emerges here quite naturally. There are several definitions of a pseudo-Finsler spacetime in the literature, but we rely on the one given in Ref. [39] (the differences in comparison to other definitions are discussed in Ref. [39]). First of all, we work with a smooth manifold, \(M\), endowed with a real-valued function \(L\) that takes values on the tangent bundle \(TM\), described by coordinates \((x,y)\), where \(\{x^{\mu}\}\) are spacetime coordinates and \(\{y^{\mu}\}\) refer to vector or velocity coordinates. Actually, we shall need the slit tangent bundle \(\widetilde{TM}=TM/\{0\}\), in which the zero section is removed, and we also need the projection \(\pi:TM\to M\). A conic subbundle is a submanifold \(\mathcal{D}\subset\widetilde{TM}\) such that \(\pi(\mathcal{D})=M\), with the conic property: \((x,y)\in\mathcal{D}\Rightarrow(x,\lambda y)\in\mathcal{D}\), \(\forall\lambda>0\).
In a nutshell, a Finsler spacetime is a triple \((M,\mathcal{D},L)\), where \(L:\mathcal{D}\rightarrow\mathbb{R}\) is a smooth function satisfying the conditions:
1. positive 2-homogeneity: \(L(x,\alpha y)=\alpha^{2}L(x,y)\), \(\forall\alpha>0\);
2. at any \((x,y)\in\mathcal{D}\) and in any chart of \(\widetilde{TM}\), the following Hessian (metric) is non-degenerate: \[g_{\mu\nu}(x,y)=\frac{1}{2}\frac{\partial^{2}}{\partial y^{\mu}\partial y^{ \nu}}L(x,y)\,;\] (10)
3. the metric \(g_{\mu\nu}\) has a Lorentzian signature.
The function \(L\) is actually the square of the Finsler function, \(L(x,y)=F^{2}(x,y)\), and from it the Finsler arc-length is defined as given in Equation (9). Condition 1 above guarantees that Equation (9) does not depend on the parametrization used to describe the curve and that, using Euler's theorem for homogeneous functions, this expression can be cast as
\[s[x]=\int\sqrt{g_{\mu\nu}(x,\dot{x})\dot{x}^{\mu}\dot{x}^{\nu}}d\mu\,. \tag{11}\]
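As a quick illustration of conditions 1 and 2 and of Equation (11), the following sympy sketch takes a toy 1-homogeneous Finsler function (the skeleton of Equation (46) at \(H=0\), with \(\varepsilon\) absorbing \(\ell m/2\); our own illustrative choice) and checks that the Hessian metric (10) reproduces \(L=F^{2}\) through Euler's theorem:

```python
import sympy as sp

# Sketch: Hessian metric of Eq. (10) for a toy 1-homogeneous Finsler function;
# Euler's theorem for the 2-homogeneous L = F^2 then gives Eq. (11) squared.
eps = sp.symbols('epsilon')
y0, y1 = sp.symbols('y0 y1', positive=True)
y = [y0, y1]

F = sp.sqrt(y0**2 - y1**2) + eps*y0*y1**2/(y0**2 - y1**2)   # 1-homogeneous in y
L = F**2
g = sp.Matrix(2, 2, lambda i, j: sp.diff(L, y[i], y[j])/2)  # Eq. (10)

quad = sum(g[i, j]*y[i]*y[j] for i in range(2) for j in range(2))
print(sp.simplify(quad - L))   # -> 0, a consequence of the 2-homogeneity of L
```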
From a coordinate transformation,
\[\tilde{x}^{\mu}=\tilde{x}^{\mu}(x)\,, \tag{12}\] \[\tilde{y}^{\mu}=\frac{\partial\tilde{x}^{\mu}}{\partial x^{\nu}} y^{\nu}\,, \tag{13}\]
the functions \(g_{\mu\nu}\) transform according to
\[\tilde{g}_{\mu\nu}(\tilde{x},\tilde{y})=\frac{\partial x^{\alpha}}{\partial \tilde{x}^{\mu}}\frac{\partial x^{\beta}}{\partial\tilde{x}^{\nu}}g_{\alpha \beta}(x,y)\,. \tag{14}\]
Due to the property (14), \(g_{\mu\nu}\) is referred to here as the components of a distinguished tensor field (or \(d\)-tensor field) on the manifold \(\widetilde{TM}\), following the notation adopted in Ref. [47]. The extremization of the arc-length functional (9) gives the following geodesic equation,
\[\frac{d^{2}x^{\mu}}{d\mu^{2}}+2G^{\mu}(x,\dot{x})=2\frac{dF}{d\mu}\frac{ \partial F}{\partial\dot{x}^{\mu}}\,, \tag{15}\]
where \(G^{\mu}=G^{\mu}(x,\dot{x})\) are the spray coefficients [48] and are given in terms of the Christoffel symbols, \(\gamma^{\alpha}_{\mu\nu}\), of the metric \(g_{\mu\nu}\):
\[G^{\alpha}(x,\dot{x})=\frac{1}{2}\gamma^{\alpha}_{\mu\nu}(x, \dot{x})\dot{x}^{\mu}\dot{x}^{\nu}\,, \tag{16}\] \[\gamma^{\alpha}_{\mu\nu}(x,\dot{x})=\frac{1}{2}g^{\alpha\beta} \left(\frac{\partial g_{\mu\beta}}{\partial x^{\nu}}+\frac{\partial g_{\nu \beta}}{\partial x^{\mu}}-\frac{\partial g_{\mu\nu}}{\partial x^{\beta}} \right)\,. \tag{17}\]
If we choose the arc-length parametrization, i.e., the one in which \(F=1\), we have a sourceless geodesic equation. This expression means that the trajectories generated by a MDR of the form \(H(x,\dot{x})=m^{2}\) are, actually, geodesics of a Finsler metric. The presence of spray coefficients allows us to construct another quite useful quantity, the so-called Cartan non-linear connection, given by (in this paper, we interchange the notation \(\dot{x}\leftrightarrow y\) freely)
\[N^{\mu}{}_{\nu}(x,y)=\frac{\partial}{\partial y^{\nu}}G^{\mu}(x,y)\,, \tag{18}\]
that transforms according to
\[\tilde{N}^{\mu}{}_{\nu}=\frac{\partial\tilde{x}^{\mu}}{\partial x^{\alpha}} \frac{\partial x^{\beta}}{\partial\tilde{x}^{\nu}}N^{\alpha}{}_{\beta}-\frac{ \partial^{2}\tilde{x}^{\mu}}{\partial x^{\alpha}\partial x^{\beta}}\frac{ \partial x^{\beta}}{\partial\tilde{x}^{\nu}}y^{\alpha}\,. \tag{19}\]
The introduction of this quantity allows us to define a useful basis of the tangent space of the tangent bundle at each point. In fact, according to the coordinate transformations (12) and (13), the usual coordinate basis transforms as
\[\frac{\partial}{\partial\tilde{x}^{\mu}}=\frac{\partial x^{\nu}} {\partial\tilde{x}^{\mu}}\frac{\partial}{\partial x^{\nu}}+\frac{\partial^{ 2}x^{\nu}}{\partial\tilde{x}^{\mu}\partial\tilde{x}^{\alpha}}\frac{\partial \tilde{x}^{\alpha}}{\partial x^{\beta}}y^{\beta}\frac{\partial}{\partial y^{ \nu}}\,, \tag{20}\] \[\frac{\partial}{\partial\tilde{y}^{\mu}}=\frac{\partial y^{\nu}} {\partial\tilde{y}^{\mu}}\frac{\partial}{\partial y^{\nu}}\,. \tag{21}\]
In addition, a non-linear connection allows us to define the following frame:
\[\frac{\delta}{\delta x^{\mu}}=\delta_{\mu}=\frac{\partial}{ \partial x^{\mu}}-N^{\nu}{}_{\mu}\frac{\partial}{\partial y^{\nu}}\,, \tag{22}\] \[\dot{\partial}_{\mu}=\frac{\partial}{\partial y^{\mu}}\,. \tag{23}\]
Due to the transformation properties of the non-linear connection, this basis transforms as
\[\tilde{\delta}_{\mu}=\frac{\partial x^{\nu}}{\partial\tilde{x}^{ \mu}}\delta_{\nu}\,, \tag{24}\] \[\tilde{\partial}_{\mu}=\frac{\partial x^{\nu}}{\partial\tilde{x}^ {\mu}}\dot{\partial}_{\nu}\,. \tag{25}\]
This means that one is able to split the tangent space of the tangent bundle into horizontal, \(HTM=\text{span}\{\delta_{\mu}\}\), and vertical, \(VTM=\text{span}\{\dot{\partial}_{\mu}\}\), spaces, such that \(T\widetilde{TM}=HTM\oplus VTM\) at each point \((x,y)\). Similarly, the same reasoning applies to the cotangent space; i.e., we split \(T^{*}\widetilde{TM}=H^{*}TM\oplus V^{*}TM\), spanned as \(H^{*}TM=\text{span}\{dx^{\mu}\}\) and \(V^{*}TM=\text{span}\{\delta y^{\mu}\}\), where
\[\delta y^{\mu}=dy^{\mu}+N^{\mu}{}_{\nu}dx^{\nu}\,, \tag{26}\]
which transforms as
\[d\tilde{x}^{\mu}=\frac{\partial\tilde{x}^{\mu}}{\partial x^{\nu} }dx^{\nu}\,, \tag{27}\] \[\delta\tilde{y}^{\mu}=\frac{\partial\tilde{x}^{\mu}}{\partial x^ {\nu}}\delta y^{\nu}\,. \tag{28}\]
Such a decomposition of the tangent and cotangent vector spaces implies that a vector \(X\) and a 1-form \(\omega\) with horizontal and vertical terms can read as
\[X=X^{\mu}\delta_{\mu}+\dot{X}^{\mu}\dot{\partial}_{\mu}=X^{H}+X ^{V}\,, \tag{29}\] \[\omega=\omega_{\mu}dx^{\mu}+\dot{\omega}_{\mu}\delta y^{\mu}= \omega^{H}+\omega^{V}\,. \tag{30}\]
Endowed with this basis, the metric \(\mathbb{G}(x,y)\) of the configuration space is described by the so-called Sasaki-Matsumoto lift of the metric \(g_{\mu\nu}\):
\[\mathbb{G}(x,y)=g_{\mu\nu}(x,y)dx^{\mu}\otimes dx^{\nu}+g_{\mu\nu}(x,y)\delta y ^{\mu}\otimes\delta y^{\nu}\,. \tag{31}\]
**Definition III.1**: _A tensor field \(T\) of type \((m+n,p+q)\) on the manifold \(\widetilde{TM}\) is called a distinguished tensor field (or \(d\)-tensor field) if it has the property_
\[T\left(\overset{1}{\omega},...,\overset{m}{\omega},\overset{1}{\tau},...,\overset{n}{\tau},X_{1},...,X_{p},Y_{1},...,Y_{q}\right)=T\left(\overset{1}{\omega}{}^{H},...,\overset{m}{\omega}{}^{H},\overset{1}{\tau}{}^{V},...,\overset{n}{\tau}{}^{V},X_{1}^{H},...,X_{p}^{H},Y_{1}^{V},...,Y_{q}^{V}\right)\,. \tag{32}\]
This definition implies that one can write a \(d\)-tensor \(T\) in the preferred frame as
\[T=T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{ p}\beta_{1}...\beta_{q}}\frac{\delta}{\delta x^{\mu_{1}}}\otimes...\otimes \frac{\delta}{\delta x^{\mu_{m}}}\otimes\frac{\partial}{\partial y^{\nu_{1}}} \otimes...\otimes\frac{\partial}{\partial y^{\nu_{n}}}\] \[\otimes dx^{\alpha_{1}}\otimes...\otimes dx^{\alpha_{p}}\otimes \delta y^{\beta_{1}}\otimes...\otimes\delta y^{\beta_{q}}\,, \tag{33}\]
and that it transforms according to the rule,
\[\widetilde{T}^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}}=\frac{\partial\tilde{x}^{\mu_{1}}}{\partial x^{\epsilon_{1}}}\cdots\frac{\partial\tilde{x}^{\mu_{m}}}{\partial x^{\epsilon_{m}}}\frac{\partial\tilde{x}^{\nu_{1}}}{\partial x^{\lambda_{1}}}\cdots\frac{\partial\tilde{x}^{\nu_{n}}}{\partial x^{\lambda_{n}}}\frac{\partial x^{\gamma_{1}}}{\partial\tilde{x}^{\alpha_{1}}}\cdots\frac{\partial x^{\gamma_{p}}}{\partial\tilde{x}^{\alpha_{p}}}\frac{\partial x^{\rho_{1}}}{\partial\tilde{x}^{\beta_{1}}}\cdots\frac{\partial x^{\rho_{q}}}{\partial\tilde{x}^{\beta_{q}}}T^{\epsilon_{1}...\epsilon_{m}\lambda_{1}...\lambda_{n}}{}_{\gamma_{1}...\gamma_{p}\rho_{1}...\rho_{q}}\,. \tag{34}\]
An example of \(d\)-tensor field is the metric whose components are given by Equation (14).
### \(N\)-Linear Connection
Given a linear connection, \(D\), on the manifold \(\widetilde{TM}\), if it preserves the parallelism of the horizontal and vertical spaces, i.e., if it can be written as
\[D_{\delta_{\nu}}\delta_{\mu}=L^{\alpha}_{\mu\nu}\delta_{\alpha}\,,\qquad D_{ \delta_{\nu}}\dot{\partial}_{\alpha}=L^{\mu}_{\alpha\nu}\dot{\partial}_{\mu}\,, \tag{35}\]
\[D_{\dot{\partial}_{\nu}}\delta_{\mu}=C^{\alpha}_{\mu\nu}\delta_{ \alpha}\,,\qquad D_{\dot{\partial}_{\nu}}\dot{\partial}_{\mu}=C^{\alpha}_{\mu \nu}\dot{\partial}_{\alpha}\,, \tag{36}\]
then it is called an \(N\)-linear connection. Under a coordinate change, the coefficients (35) and (36) transform as
\[\tilde{L}^{\alpha}_{\mu\nu}=\frac{\partial\tilde{x}^{\alpha}}{\partial x^{\beta}}\frac{\partial x^{\lambda}}{\partial\tilde{x}^{\mu}}\frac{\partial x^{\epsilon}}{\partial\tilde{x}^{\nu}}L^{\beta}_{\lambda\epsilon}+\frac{\partial^{2}x^{\beta}}{\partial\tilde{x}^{\mu}\partial\tilde{x}^{\nu}}\frac{\partial\tilde{x}^{\alpha}}{\partial x^{\beta}}\,, \tag{37}\] \[\tilde{C}^{\alpha}_{\mu\nu}=\frac{\partial\tilde{x}^{\alpha}}{\partial x^{\beta}}\frac{\partial x^{\lambda}}{\partial\tilde{x}^{\mu}}\frac{\partial x^{\epsilon}}{\partial\tilde{x}^{\nu}}C^{\beta}_{\lambda\epsilon}\,. \tag{38}\]
Endowed with these coefficients, the derivative of a \(d\)-tensor can be decomposed into horizontal and vertical parts, such that the covariant derivative of a tensor \(T\) of type \((m+n,p+q)\) in the direction of a vector \(X\) reads
\[D_{X}T=D_{X^{H}}T+D_{X^{V}}T\] \[=\left(T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}... \alpha_{p}\beta_{1}...\beta_{q}|\epsilon}X^{\epsilon}+T^{\mu_{1}...\mu_{m}\nu_ {1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}||\epsilon}\dot{X }^{\epsilon}\right)\frac{\delta}{\delta x^{\mu_{1}}}\otimes...\otimes\frac{ \delta}{\delta x^{\mu_{m}}}\] \[\otimes\frac{\partial}{\partial y^{\nu_{1}}}\otimes...\otimes \frac{\partial}{\partial y^{\nu_{n}}}\otimes dx^{\alpha_{1}}\otimes...\otimes dx ^{\alpha_{p}}\otimes\delta y^{\beta_{1}}\otimes...\otimes\delta y^{\beta_{q}}\,, \tag{39}\]
where
\[T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p} \beta_{1}...\beta_{q}|\epsilon} \tag{40}\] \[=\frac{\delta}{\delta x^{\epsilon}}T^{\mu_{1}...\mu_{m}\nu_{1}... \nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}}+L^{\mu_{1}}_{\gamma \epsilon}T^{\gamma...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_ {1}...\beta_{q}}+...-L^{\gamma}_{\alpha_{1}\epsilon}T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\gamma...\alpha_{p}\beta_{1}...\beta_{q}}\,,\] \[T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p} \beta_{1}...\beta_{q}||\epsilon}\] (41) \[=\frac{\partial}{\partial y^{\epsilon}}T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}}+C^{\mu_{1}}_{\gamma \epsilon}T^{\gamma...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_ {1}...\beta_{q}}+...-C^{\gamma}_{\alpha_{1}\epsilon}T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\gamma...\alpha_{p}\beta_{1}...\beta_{q}}\,,\]
and the property that the covariant derivative is linear in the direction \(X\) is used. The triple \(D\Gamma(N,L,C)\) describes the parallel transport and decomposition of the tangent and cotangent spaces of the tangent bundle into horizontal and vertical spaces. At this point, we need to comment on some remarkable \(N\)-linear connections that are considered in the literature.
The first connection is the metrical Cartan connection, \(C\Gamma(N^{\mu}{}_{\nu},L^{\alpha}_{\mu\nu},C^{\alpha}_{\mu\nu})\). In this case, \(N^{\mu}{}_{\nu}\) is given by the canonical Cartan non-linear connection, defined by the spray coefficients (18). The coefficients \(L^{\alpha}_{\mu\nu}\) and \(C^{\alpha}_{\mu\nu}\) are given, respectively, by
\[L^{\alpha}_{\mu\nu} =\frac{1}{2}g^{\alpha\beta}\left(\frac{\delta g_{\mu\beta}}{\delta x^{\nu}}+\frac{\delta g_{\nu\beta}}{\delta x^{\mu}}-\frac{\delta g_{\mu\nu}}{\delta x^{\beta}}\right)\,, \tag{42}\] \[C^{\alpha}_{\mu\nu} =\frac{1}{2}g^{\alpha\beta}\left(\frac{\partial g_{\mu\beta}}{\partial y^{\nu}}+\frac{\partial g_{\nu\beta}}{\partial y^{\mu}}-\frac{\partial g_{\mu\nu}}{\partial y^{\beta}}\right)\,. \tag{43}\]
This connection is metrical (i.e., without non-metricity tensors) considering both horizontal and vertical covariant derivatives of the Finsler metric.
Besides, the Berwald connection is given by the triple \(B\Gamma(N^{\mu}{}_{\nu},\partial N^{\alpha}{}_{\mu}/\partial y^{\nu},0)\) and presents horizontal and vertical non-metricities. The Chern-Rund connection, \(R\Gamma(N^{\mu}{}_{\nu},L^{\alpha}_{\mu\nu},0)\), is horizontally metrical, but represents vertical non-metricity. Additionally, the Hashiguchi connection, \(H\Gamma(N^{\mu}{}_{\nu},\partial N^{\alpha}{}_{\mu}/\partial y^{\nu},C^{\alpha }_{\mu\nu})\), represents horizontal non-metricity, but it is vertically metrical. In these expressions, \(N\) is the canonical Cartan non-linear connection (18), \(L\) is given by Equation (42), and \(C\) is given by Equation (43).
### Symmetries
Geometrical language naturally realizes the concept of symmetry of physical equations. General relativity, given in terms of Riemannian geometry, encompasses the invariance under general coordinate transformations, and the isometries of Minkowski space describe the Poincare transformations (actually, one can further apply this technique to maximally symmetric spaces, including the de Sitter and anti-de Sitter ones). Finsler geometry, as we have been using it, allows us to go beyond this scope and to define deformed Lorentz/Poincare transformations that present Planck-scale corrections even in the presence of a local modified dispersion relation. One can see how this naturally emerges, since the invariance of the arc-length (9) is compatible with the invariance of the action in the Hamiltonian formulation (4), from which such an arc-length was derived. This idea was first noticed in Ref. [42] and later explicitly explored in Refs. [49; 50]. The master equation for this purpose is the one that follows from the invariance of the Finslerian interval \(ds^{2}\), as done in Appendix A of Ref. [49]. From this invariance, the Finslerian Killing equation for the Killing vector, with components \(\xi^{\alpha}\), was found, which should be solved in order to derive the deformed symmetries in the DSR context:
\[\xi^{\alpha}\partial_{\alpha}g_{\mu\nu}+g_{\alpha\nu}\partial_{\mu}\xi^{ \alpha}+g_{\mu\alpha}\partial_{\nu}\xi^{\alpha}+y^{\alpha}\partial_{\alpha} \xi^{\beta}\partial_{\beta}g_{\mu\nu}=0\,. \tag{44}\]
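As a sanity check of Equation (44): in the undeformed limit the metric does not depend on the velocities, the last term vanishes, and (44) reduces to the usual Killing equation. The sympy sketch below (our own minimal example) verifies that the 1+1D Minkowski boost solves it:

```python
import sympy as sp

# Sketch: Eq. (44) for a y-independent metric reduces to the ordinary Killing
# equation; the 1+1D Minkowski boost, xi = (x^1, x^0), solves it.
x = list(sp.symbols('x0 x1'))
y = list(sp.symbols('y0 y1'))

g = sp.diag(1, -1)              # undeformed, velocity-independent metric
xi = [x[1], x[0]]               # boost Killing vector

K = sp.Matrix(2, 2, lambda mu, nu:
    sum(xi[al]*sp.diff(g[mu, nu], x[al]) + g[al, nu]*sp.diff(xi[al], x[mu])
        + g[mu, al]*sp.diff(xi[al], x[nu]) for al in range(2))
    + sum(y[al]*sp.diff(xi[be], x[al])*sp.diff(g[mu, nu], y[be])
          for al in range(2) for be in range(2)))
print(K)                        # -> Matrix([[0, 0], [0, 0]])
```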
### Finsler-q-de Sitter (Tangent Bundle Case)
As an example that presents a non-trivial non-linear connection, we shall consider the case of a Finsler geometry inspired by the so-called \(q\)-de Sitter deformed relativity. This case has been previously studied in the literature, e.g., in Refs. [50; 51; 52; 53], and can be described by an algebra that deforms the one of Poincare in a way that gives the de Sitter symmetry when a quantum gravity parameter goes to zero, and on the other hand, gives the so-called \(\kappa\)-Poincare algebra (that deforms the Poincare one by an energy scale parameter, supposedly the Planck energy) when the de Sitter curvature parameter goes to zero. Therefore, it corresponds to an authentic realization of a deformed relativity scenario, even in the presence of what can be interpreted as spacetime curvature. In this subsection, we initially consider results that were originally presented in Ref. [52] in \(1+1\) dimensions.
The MDR related to this algebra (in a given basis) can be expanded to first order in the Planck length and de Sitter curvature parameters, \(\ell\) and \(H\), respectively, as
\[\mathcal{H}(x,p)=p_{0}^{2}-p_{1}^{2}(1+\ell p_{0})(1-2Hx^{0})\,. \tag{45}\]
By using the action given by Equation (4) and the algorithm that follows it, the following Finsler function can be obtained:
\[F(x,\dot{x})=\sqrt{(\dot{x}^{0})^{2}-(1-2Hx^{0})(\dot{x}^{1})^{2}}+\ell\frac{ m}{2}\frac{(1-2Hx^{0})\dot{x}^{0}(\dot{x}^{1})^{2}}{(\dot{x}^{0})^{2}-(1-2Hx^{0})( \dot{x}^{1})^{2}}\,, \tag{46}\]
from which the Finsler metric can be found from Equation (10):
\[g^{F}_{\mu\nu}(x,\dot{x})=\begin{pmatrix}1+\frac{3\ell ma^{4}\dot{x}^{0}(\dot{x}^{1})^{4}}{2[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{5/2}}&\frac{\ell ma^{4}(\dot{x}^{1})^{3}[a^{2}(\dot{x}^{1})^{2}-4(\dot{x}^{0})^{2}]}{2[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{5/2}}\\ \frac{\ell ma^{4}(\dot{x}^{1})^{3}[a^{2}(\dot{x}^{1})^{2}-4(\dot{x}^{0})^{2}]}{2[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{5/2}}&-a^{2}+\frac{\ell ma^{2}(\dot{x}^{0})^{3}[2(\dot{x}^{0})^{2}+a^{2}(\dot{x}^{1})^{2}]}{2[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{5/2}}\end{pmatrix}\,, \tag{47}\]
where \(a=a(t)=e^{Ht}=1+Ht+\mathcal{O}(H^{2})\) (in this paper, the terms that grow with higher orders of \(H\) and \(\ell\) are discarded). The geodesic equation is found from the extremization of the Finsler arc-length defined by \(F\), from which Christoffel symbols and spray coefficients can be calculated. Actually, the \(\gamma^{\alpha}_{\mu\nu}(x,\dot{x})\) are given, for an arbitrary parametrization, by the set of Equations (44) of Ref. [52], from which the spray coefficients are given by
\[G^{0}(x,\dot{x})=\frac{1}{8}a^{2}H(\dot{x}^{1})^{2}\left[4-\frac{\ell m\dot{x}^{0}}{[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{7/2}}\left(-28a^{6}(\dot{x}^{1})^{6}+12a^{2}(\dot{x}^{0})^{4}(\dot{x}^{1})^{2}+a^{2}\left(17a^{2}+28\right)(\dot{x}^{0})^{2}(\dot{x}^{1})^{4}+16(\dot{x}^{0})^{6}\right)\right]\,, \tag{48}\] \[G^{1}(x,\dot{x})=H\dot{x}^{0}\dot{x}^{1}+\ell\left[\frac{a^{2}Hm(\dot{x}^{1})^{3}\left(a^{6}(\dot{x}^{1})^{6}-6a^{4}(\dot{x}^{0})^{2}(\dot{x}^{1})^{4}+3a^{2}(\dot{x}^{0})^{4}(\dot{x}^{1})^{2}-28(\dot{x}^{0})^{6}\right)}{4\left((\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}\right)^{7/2}}\right]\,. \tag{49}\]
As can be seen, these coefficients are 2-homogeneous in the velocities, as expected. The Cartan non-linear connection coefficients read:
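As a cross-check, the metric (47) follows mechanically from the Hessian definition (10). The sympy sketch below does this to first order in \(\ell\), writing the Finsler function in the \(a(t)\) form used in (47) (i.e., Equation (46) with \(1-2Hx^{0}\) traded for \(a^{2}\), an assumption that matches the zeroth-order entries of (47)):

```python
import sympy as sp

# Sketch: recover the metric (47) as the Hessian (10) of L = F^2, with the
# Finsler function written using a = a(t), to first order in ell.
ell, m = sp.symbols('ell m', positive=True)
a, y0, y1 = sp.symbols('a y0 y1', positive=True)
y = [y0, y1]

u2 = y0**2 - a**2*y1**2
F = sp.sqrt(u2) + ell*(m/2)*a**2*y0*y1**2/u2
L = sp.expand(F**2).series(ell, 0, 2).removeO()          # drop O(ell^2)
g = sp.Matrix(2, 2, lambda i, j: sp.diff(L, y[i], y[j])/2)

u5 = u2**sp.Rational(5, 2)
print(sp.simplify(g[0, 0] - (1 + 3*ell*m*a**4*y0*y1**4/(2*u5))))              # -> 0
print(sp.simplify(g[0, 1] - ell*m*a**4*y1**3*(a**2*y1**2 - 4*y0**2)/(2*u5)))  # -> 0
print(sp.simplify(g[1, 1] - (-a**2
      + ell*m*a**2*y0**3*(2*y0**2 + a**2*y1**2)/(2*u5))))                     # -> 0
```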
\[{N^{0}}_{0}(x,\dot{x})= \frac{H\ell m(\dot{x}^{1})^{4}\left(-28(\dot{x}^{1})^{6}-33(\dot{x}^{1})^{4}(\dot{x}^{0})^{2}+240(\dot{x}^{1})^{2}(\dot{x}^{0})^{4}+136(\dot{x}^{0})^{6}\right)}{8\left((\dot{x}^{0})^{2}-(\dot{x}^{1})^{2}\right)^{9/2}}\,, \tag{50}\] \[{N^{0}}_{1}(x,\dot{x})= H\dot{x}^{1}-\frac{H\ell m\dot{x}^{1}\dot{x}^{0}}{8\left((\dot{x}^{0})^{2}-(\dot{x}^{1})^{2}\right)^{9/2}}\left(28(\dot{x}^{1})^{8}-179(\dot{x}^{1})^{6}(\dot{x}^{0})^{2}+306(\dot{x}^{1})^{4}(\dot{x}^{0})^{4}\right.\] \[\left.+128(\dot{x}^{1})^{2}(\dot{x}^{0})^{6}+32(\dot{x}^{0})^{8}\right),\] \[{N^{1}}_{0}(x,\dot{x})= H\dot{x}^{1}+\frac{H\ell m(\dot{x}^{1})^{3}\dot{x}^{0}\left(5(\dot{x}^{1})^{6}+18(\dot{x}^{1})^{4}(\dot{x}^{0})^{2}+159(\dot{x}^{1})^{2}(\dot{x}^{0})^{4}+28(\dot{x}^{0})^{6}\right)}{4\left((\dot{x}^{0})^{2}-(\dot{x}^{1})^{2}\right)^{9/2}}\,,\] (51) \[{N^{1}}_{1}(x,\dot{x})= H\dot{x}^{0}-\frac{H\ell m(\dot{x}^{1})^{2}\left(2(\dot{x}^{1})^{8}-9(\dot{x}^{1})^{6}(\dot{x}^{0})^{2}+36(\dot{x}^{1})^{4}(\dot{x}^{0})^{4}+97(\dot{x}^{1})^{2}(\dot{x}^{0})^{6}+84(\dot{x}^{0})^{8}\right)}{4\left((\dot{x}^{0})^{2}-(\dot{x}^{1})^{2}\right)^{9/2}}\,, \tag{52}\]
where the worldlines are autoparallel curves of this non-linear connection. Let us note that some terms of the connection are only present due to the coupling between the spacetime curvature parameter, \(H\), and the one that gives a non-trivial velocity space, \(\ell\). Some curvature-triggered effects in quantum gravity have been recently analyzed [54].
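The machinery of Equations (16)-(18) can also be checked in the Riemannian limit with a few lines of sympy: setting \(\ell=0\) and assuming \(a(x^{0})=1+Hx^{0}\) (consistent with the first-order expansion of \(a=e^{Ht}\)), one recovers the \(\ell\to 0\) limit of the coefficients (50)-(52):

```python
import sympy as sp

# Sketch: spray coefficients (16) and Cartan non-linear connection (18) for the
# ell = 0 metric diag(1, -a^2), with a = 1 + H*x0, at first order in H.
Hc = sp.symbols('H')
x0, x1 = sp.symbols('x0 x1')
y0, y1 = sp.symbols('y0 y1', positive=True)
x, y = [x0, x1], [y0, y1]
a = 1 + Hc*x0

g = sp.diag(1, -a**2)
ginv = g.inv()

# Christoffel symbols of Eq. (17) (here the metric is y-independent)
gam = [[[sum(ginv[al, be]*(sp.diff(g[mu, be], x[nu]) + sp.diff(g[nu, be], x[mu])
             - sp.diff(g[mu, nu], x[be]))/2 for be in range(2))
         for nu in range(2)] for mu in range(2)] for al in range(2)]

G = [sum(gam[al][mu][nu]*y[mu]*y[nu] for mu in range(2) for nu in range(2))/2
     for al in range(2)]                                   # Eq. (16)
N = sp.Matrix(2, 2, lambda mu, nu: sp.diff(G[mu], y[nu]))  # Eq. (18)
N1 = N.applyfunc(lambda e: e.series(Hc, 0, 2).removeO())
print(N1)   # -> Matrix([[0, H*y1], [H*y1, H*y0]]), the ell = 0 limit of (50)-(52)
```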
Endowed with these coefficients, the preferred frames that induce the horizontal and vertical decomposition can be immediately found, in addition to the \(N\)-linear connection coefficients \(L^{\alpha}_{\mu\nu}\) and \(C^{\alpha}_{\mu\nu}\) discussed above. Until now, only kinematical properties were discussed, but the choice of a given connection should be dictated either by physical conditions imposed on the dynamics of the spacetime or by possible effective gravitational field equations for a quantum configuration space.
To finalize this Section, let us discuss the symmetries of the spacetime. A deep analysis of the Killing vectors of the \(H\to 0\) limit of this Finsler framework was carried out in Ref. [51]. Even in that simplified scenario, the equations are quite lengthy, and we omit them here. However, some properties should be mentioned. Firstly, the transformations generated by the Killing vectors do not exactly preserve the line element, but contribute with a term that is a total derivative in the action parameter; therefore, the kinematical results of these two line elements coincide. Secondly, the results found are compatible with the \(\kappa\)-Poincare scenario that inspired this approach. From the Finsler perspective, it is possible to derive more general results, but they reduce to those of the bicrossproduct basis of \(\kappa\)-Poincare by an appropriate choice of free functions and parameters. The third point is that a finite version of the transformations that preserve the \(\kappa\)-Poincare dispersion relation was recently constructed in Ref. [55] through an alternative approach, which does not rely on the Killing vectors but is determined by the Finsler function and the definition of momentum (explored in Section IV below); however, a complete integration of the finite isometry and a comparison between these approaches is still missing in the literature. Finally, the case of \(H\neq 0\) was investigated in Ref. [50], but in conformal coordinates (which are not the ones considered in this application) and not in as much detail as the flat case; nevertheless, a generator of the corresponding curved boost transformation was made explicit in Equation (25) of Ref. [50].
## IV The cotangent bundle version of Finsler geometry
As was discussed in Ref. [42], by mapping the velocity of the particle to its momentum, it is possible to find the version of the Finsler metric defined in the cotangent bundle or phase space. Already from the definition of the 4-momentum,
\[p_{\mu}=m\frac{\partial F}{\partial y^{\mu}}\,, \tag{53}\]
when it is possible to invert this expression to find \(y=y(p)\), one can substitute this result into the Finsler metric as \(h^{F}_{\mu\nu}(x,p)=g^{F}_{\mu\nu}(x,y(p))\). This metric is defined on the slit cotangent bundle, \(\widetilde{T^{*}M}=T^{*}M/\{0\}\), where we also remove the zero section at each spacetime point for the same technical reasons as discussed in Section III above. Since the quantities are now defined in the cotangent bundle, we need to also address some of the issues that were raised in Section III concerning the tangent bundle. The notation of this Section follows Ref. [47]. For instance, under a change of coordinates, the spacetime and momentum variables transform according to
\[\tilde{x}^{\mu} =\tilde{x}^{\mu}(x)\,, \tag{54}\] \[\tilde{p}_{\mu} =\frac{\partial x^{\nu}}{\partial\tilde{x}^{\mu}}p_{\nu}\,, \tag{55}\]
which means that the frame \((\partial/\partial x^{\mu},\partial/\partial p_{\nu})\) transforms as
\[\frac{\partial}{\partial\tilde{x}^{\mu}} =\frac{\partial x^{\nu}}{\partial\tilde{x}^{\mu}}\frac{\partial}{\partial x^{\nu}}+\frac{\partial p_{\nu}}{\partial\tilde{x}^{\mu}}\frac{\partial}{\partial p_{\nu}}\,, \tag{56}\] \[\frac{\partial}{\partial\tilde{p}_{\mu}} =\frac{\partial\tilde{x}^{\mu}}{\partial x^{\nu}}\frac{\partial}{\partial p_{\nu}}\,. \tag{57}\]
On the other hand, the natural coframe \((dx^{\mu},dp_{\nu})\) changes as
\[d\tilde{x}^{\mu} =\frac{\partial\tilde{x}^{\mu}}{\partial x^{\nu}}dx^{\nu}\,, \tag{58}\] \[d\tilde{p}_{\mu} =\frac{\partial x^{\nu}}{\partial\tilde{x}^{\mu}}dp_{\nu}+\frac{ \partial^{2}x^{\nu}}{\partial\tilde{x}^{\mu}\partial\tilde{x}^{\lambda}}p_{\nu }d\tilde{x}^{\lambda}\,. \tag{59}\]
Similarly to Section III, the presence of a nonlinear connection, \(O_{\mu\nu}\), allows one to split the cotangent bundle into a horizontal and a vertical subbundle. Inspired by the Hamilton case considered in Ref. [56] (discussed below), we propose the following dual non-linear connection (constructed in Appendix A):
\[O_{\mu\nu}(x,p)=-m\left[N^{\alpha}{}_{\mu}\frac{(g_{\alpha\nu}-p_{\alpha}p_{ \nu}/m^{2})}{F}-\partial_{\mu}\dot{\partial}_{\nu}F\right]\Bigg{|}_{(x,y(p))}\,, \tag{60}\]
where \(p=p(y)\) is the kinematical map defined by Equation (53). By construction, these symbols have the transformation properties of a nonlinear connection,
\[\tilde{O}_{\mu\nu}=\frac{\partial x^{\lambda}}{\partial\tilde{x}^{\mu}} \frac{\partial x^{\varepsilon}}{\partial\tilde{x}^{\nu}}O_{\lambda\epsilon}+ \frac{\partial^{2}x^{\beta}}{\partial\tilde{x}^{\mu}\partial\tilde{x}^{\nu}}p _{\beta}\,. \tag{61}\]
Endowed with a nonlinear connection \(O_{\mu\nu}\), one can decompose the tangent bundle of the cotangent bundle by the Whitney sum in each point \(T_{u}\widehat{T^{*}M}=O_{u}\oplus V_{u},\,\forall u\in\widehat{T^{*}M}\). The subbundle \(O_{u}\) is called horizontal space and is spanned by the frame,
\[\frac{\delta}{\delta x^{\mu}}=\delta_{\mu}=\frac{\partial}{\partial x^{\mu}}+ O_{\mu\nu}\frac{\partial}{\partial p_{\nu}}\,, \tag{62}\]
and the subbundle \(V_{u}\) is called vertical space and is spanned by the frame in each point of \(\widehat{T^{*}M}\):
\[\bar{\partial}^{\mu}=\frac{\partial}{\partial p_{\mu}}\,, \tag{63}\]
such that \(T_{u}\widetilde{T^{*}M}=\text{span}\{\delta_{\mu},\bar{\partial}^{\nu}\}\). The transformation properties of the nonlinear connection are implied in the following rule for transforming this basis:
\[\frac{\delta}{\delta\tilde{x}^{\mu}}=\tilde{\delta}_{\mu}=\frac{\partial x^{\nu}}{\partial\tilde{x}^{\mu}}\frac{\delta}{\delta x^{\nu}}=\frac{\partial x^{\nu}}{\partial\tilde{x}^{\mu}}\delta_{\nu}\,, \tag{64}\] \[\frac{\partial}{\partial\tilde{p}_{\mu}}=\tilde{\bar{\partial}}^{\mu}=\frac{\partial\tilde{x}^{\mu}}{\partial x^{\nu}}\frac{\partial}{\partial p_{\nu}}=\frac{\partial\tilde{x}^{\mu}}{\partial x^{\nu}}\bar{\partial}^{\nu}\,. \tag{65}\]
Equivalently, with the nonlinear connection, we can decompose the cotangent space \(T_{u}^{*}\widetilde{T^{*}M}=\text{span}\{dx^{\mu},\delta p_{\nu}\}\), where
\[\delta p_{\mu}=dp_{\mu}-O_{\nu\mu}dx^{\nu}\,. \tag{66}\]
Therefore, the dual basis transforms as
\[d\tilde{x}^{\mu}=\frac{\partial\tilde{x}^{\mu}}{\partial x^{\nu }}dx^{\nu}\,, \tag{67}\] \[\delta\tilde{p}_{\mu}=\frac{\partial x^{\nu}}{\partial\tilde{x}^ {\mu}}\delta p_{\nu}\,. \tag{68}\]
Similarly to what has been done for the tangent bundle case, such a decomposition allows us to express a vector and a 1-form via horizontal and vertical components, where now, the vertical component is considered along momenta instead of velocities,
\[X=X^{\mu}\delta_{\mu}+\bar{X}_{\mu}\bar{\partial}^{\mu}=X^{H}+X^ {V}\,, \tag{69}\] \[\omega=\omega_{\mu}dx^{\mu}+\bar{\omega}^{\mu}\delta p_{\mu}= \omega^{H}+\omega^{V}\,. \tag{70}\]
Besides, the metric \(\mathbb{H}(x,p)\) of the phase space is defined as follows. Given a metric \(h^{\mu\nu}(x,p)\) and the nonlinear connection \(O_{\mu\nu}(x,p)\), the quantum phase space presents metrical properties given by the tensor,
\[\mathbb{H}(x,p)=h_{\mu\nu}(x,p)dx^{\mu}\otimes dx^{\nu}+h^{\mu\nu}(x,p)\delta p _{\mu}\otimes\delta p_{\nu}\,. \tag{71}\]
We refer to the tensor \(\mathbb{H}\) as the \(N\)-lift to \(\widetilde{T^{*}M}\) of the metric \(h_{\mu\nu}\). The map between \(y\) and \(p\) cannot, in general, involve quantities that are parametrization-dependent, because \(p\) itself is parametrization-invariant, whereas \(y\) is not. That is why one can only assume \(y(p)\) in the definition of the metric \(h^{F}_{\mu\nu}\).
Endowed with these quantities, one can simply extend the definition III.1 of \(d\)-tensors to the cotangent case, in which one only needs to use the nonlinear connection \(O_{\mu\nu}\) and the adapted basis defined in this Section.
The above implies that a \(d\)-tensor \(T\) of type \((m+q,n+p)\) can be rewritten in the preferred basis as
\[T=T^{\mu_{1}...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha _{p}}{}^{\beta_{1}...\beta_{n}}\frac{\delta}{\delta x^{\mu_{1}}}\otimes... \otimes\frac{\delta}{\delta x^{\mu_{m}}}\otimes\frac{\partial}{\partial p_{ \nu_{1}}}\otimes...\otimes\frac{\partial}{\partial p_{\nu_{n}}}\] \[\otimes dx^{\alpha_{1}}\otimes...\otimes dx^{\alpha_{p}}\otimes \delta p_{\beta_{1}}\otimes...\otimes\delta p_{\beta_{q}}\,, \tag{72}\]
whose components transform according to usual linear transformation rules, as the one of Equation (34).
### N-Linear Connection
Equivalently, the notion of differentiation can be defined in the cotangent bundle through the \(N\)-linear connection \(D\), which has the following coefficients in the frame \((\delta_{\mu},\bar{\partial}^{\nu})\) (see Theorem 4.9.1 in Ref. [47]):
\[D_{\delta_{\nu}}\delta_{\mu}=H^{\alpha}_{\mu\nu}\delta_{\alpha}\,,\qquad D_{\delta_{\nu}}\bar{\partial}^{\mu}=-H^{\mu}_{\alpha\nu}\bar{\partial}^{\alpha}\,, \tag{73}\] \[D_{\bar{\partial}^{\nu}}\delta_{\mu}=C^{\alpha\nu}_{\mu}\delta_{\alpha}\,,\qquad D_{\bar{\partial}^{\nu}}\bar{\partial}^{\mu}=-C^{\mu\nu}_{\alpha}\bar{\partial}^{\alpha}\,. \tag{74}\]
Otherwise, in the frame \((dx^{\mu},\delta p_{\nu})\) one has (see Proposition 4.9.1 in Ref. [47])
\[D_{\delta_{\nu}}dx^{\mu}=-H^{\mu}_{\alpha\nu}dx^{\alpha}\,,\qquad D_{\delta_{\nu}}\delta p_{\mu}=H^{\alpha}_{\mu\nu}\delta p_{\alpha}\,, \tag{75}\] \[D_{\bar{\partial}^{\nu}}dx^{\mu}=-C^{\mu\nu}_{\alpha}dx^{\alpha}\,,\qquad D_{\bar{\partial}^{\nu}}\delta p_{\mu}=C^{\alpha\nu}_{\mu}\delta p_{\alpha}\,. \tag{76}\]
Considering an \(N\)-linear connection \(D\) with the set of coefficients \(D\Gamma(N)=(H^{\alpha}_{\mu\nu},C^{\mu\nu}_{\alpha})\), one can add to it a nonlinear connection, \(N_{\mu\nu}\), that is in general independent of the coefficients of \(D\), such that the new set is \(D\Gamma=(N_{\mu\nu},H^{\alpha}_{\mu\nu},C^{\mu\nu}_{\alpha})\). The derivative of a \(d\)-tensor in the cotangent bundle then follows the usual rules for dealing with upper and lower indices:
\[T^{\mu_{1}...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}{}_{|\epsilon}=\frac{\delta}{\delta x^{\epsilon}}T^{\mu_{1}...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}+H^{\mu_{1}}_{\gamma\epsilon}T^{\gamma...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}+...-H^{\gamma}_{\nu_{1}\epsilon}T^{\mu_{1}...\mu_{m}}{}_{\gamma...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}\,, \tag{77}\] \[T^{\mu_{1}...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}{}_{||}{}^{\epsilon}=\frac{\partial}{\partial p_{\epsilon}}T^{\mu_{1}...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}+C^{\mu_{1}\epsilon}_{\gamma}T^{\gamma...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}+...-C^{\gamma\epsilon}_{\nu_{1}}T^{\mu_{1}...\mu_{m}}{}_{\gamma...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}\,. \tag{78}\]
Let us note that, from the kinematical map relating velocities and momenta, the coefficients \(H^{\alpha}_{\mu\nu}(x,y(p))\) and \(C^{\mu\nu}_{\alpha}(x,y(p))\) can be found to be parametrization-invariant.
### Finsler-q-de Sitter (Cotangent Bundle Case)
Here, we again consider the \(q\)-de Sitter-inspired case. Using the Finsler function (46), the momenta are given by Equation (53):
\[p_{0} =\frac{m\dot{x}^{0}}{\sqrt{(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}}}-\ell\frac{m^{2}a^{2}(\dot{x}^{1})^{2}(a^{2}(\dot{x}^{1})^{2}+(\dot{x}^{0})^{2})}{2[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{2}}\,, \tag{79}\] \[p_{1} =-\frac{ma^{2}\dot{x}^{1}}{\sqrt{(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}}}+\ell\frac{m^{2}a^{2}(\dot{x}^{0})^{3}\dot{x}^{1}}{[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{2}}\,, \tag{80}\]
which furnishes a helpful expression that is used throughout this Section and is a common trick when trying to find momentum-dependent quantities from the Finsler approach:
\[\frac{m\dot{x}^{0}}{\sqrt{(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2 }}} =p_{0}+\ell\frac{a^{-2}(p_{1})^{2}(a^{-2}(p_{1})^{2}+(p_{0})^{2})}{2m^{2}}\,, \tag{81}\] \[\frac{ma\dot{x}^{1}}{\sqrt{(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{ 2}}} =-a^{-1}p_{1}\left(1+\ell\frac{(p_{0})^{3}}{m^{2}}\right)\,. \tag{82}\]
The above expressions allow us to express the Finsler metric through its momentum dependence:
\[g^{F}_{\mu\nu}(x,\dot{x}(p))=h^{F}_{\mu\nu}(x,p)=\begin{pmatrix}1+\frac{3\ell p_{0}(p_{1})^{4}}{2m^{4}}&-\frac{\ell a(p_{1})^{3}[(p_{1})^{2}-4(p_{0})^{2}]}{2m^{4}}\\ -\frac{\ell a(p_{1})^{3}[(p_{1})^{2}-4(p_{0})^{2}]}{2m^{4}}&-a^{2}+\frac{\ell a^{2}(p_{0})^{3}[2(p_{0})^{2}+(p_{1})^{2}]}{2m^{4}}\end{pmatrix}\,, \tag{83}\]
which can be called a "Finsler-rainbow metric."
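As a consistency check, the inversions (81) and (82) can be verified directly against the momenta (79) and (80); a short sympy sketch, to first order in \(\ell\):

```python
import sympy as sp

# Sketch: substitute the momenta (79)-(80) into the right-hand sides of
# Eqs. (81)-(82) and verify the identities at O(ell).
ell, m, a = sp.symbols('ell m a', positive=True)
y0, y1 = sp.symbols('y0 y1', positive=True)
u = sp.sqrt(y0**2 - a**2*y1**2)

p0 = m*y0/u - ell*m**2*a**2*y1**2*(a**2*y1**2 + y0**2)/(2*u**4)   # Eq. (79)
p1 = -m*a**2*y1/u + ell*m**2*a**2*y0**3*y1/u**4                   # Eq. (80)

rhs81 = p0 + ell*(p1**2/a**2)*(p1**2/a**2 + p0**2)/(2*m**2)       # rhs of Eq. (81)
rhs82 = -(p1/a)*(1 + ell*p0**3/m**2)                              # rhs of Eq. (82)
print(sp.simplify((rhs81 - m*y0/u).series(ell, 0, 2).removeO()))   # -> 0
print(sp.simplify((rhs82 - m*a*y1/u).series(ell, 0, 2).removeO())) # -> 0
```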
One can also find the induced non-linear connection in the cotangent bundle through the definition (60) to read as
\[O_{00}(x,p)= -\frac{H\ell(p_{1})^{2}}{8m^{10}}\left[4(p_{0})^{10}+44(p_{0})^{8 }(p_{1})^{2}+190(p_{0})^{6}(p_{1})^{4}-196(p_{0})^{4}(p_{1})^{6}\right.\] \[\left.+31(p_{0})^{2}(p_{1})^{8}+32(p_{1})^{10}\right]\,, \tag{84}\] \[O_{01}(x,p)= Hp_{1}-\frac{\ell Hp_{0}p_{1}}{8m^{10}}\left[-4m^{8}(p_{0})^{2}+8(p_{ 0})^{10}+32(p_{0})^{8}(p_{1})^{2}+206(p_{0})^{6}(p_{1})^{4}\right.\] \[\left.-212(p_{0})^{4}(p_{1})^{6}+43(p_{0})^{2}(p_{1})^{8}+28(p_{1} )^{10}\right]\,,\] (85) \[O_{10}(x,p)= Hp_{1}-\frac{H\ell p_{0}p_{1}}{8m^{10}}\left(-4m^{8}(p_{0})^{2}+4(p_{ 0})^{10}+140(p_{0})^{8}(p_{1})^{2}+2(p_{0})^{6}(p_{1})^{4}\right.\] \[\left.-106(p_{0})^{4}(p_{1})^{6}+61(p_{0})^{2}(p_{1})^{8}+4(p_{1} )^{10}\right)\,,\] (86) \[O_{11}(x,p)= Hp_{0}+\frac{H\ell}{8m^{10}}\left(4(p_{0})^{2}(p_{1})^{2}\left(m^{8}+3(p_{ 1})^{8}\right)+8m^{8}(p_{1})^{4}-8(p_{0})^{12}-124(p_{0})^{10}(p_{1})^{2}\right.\] \[\left.-30(p_{0})^{8}(p_{1})^{4}+138(p_{0})^{6}(p_{1})^{6}-89(p_{0 })^{4}(p_{1})^{8}-4(p_{1})^{12}\right)\,. \tag{87}\]
From these expressions, one can construct the decomposition of the tangent and cotangent spaces of the cotangent bundle into horizontal and vertical parts, accordingly.
## V Geometry of the cotangent bundle: Hamilton geometry
Besides the Finsler geometry, another interesting proposal for building a natural geometry for the propagation of particles that probe a modified dispersion relation consists of the so-called Hamilton geometry. In this case, differently from the Finsler geometry, we start with a geometric structure defined in the cotangent bundle (the definitions used in this Section follow those in the book [47] and in the papers [56; 57; 58; 59]).
A Hamilton space is a pair, \((M,H(x,p))\), where \(M\) is a smooth manifold and \(H:T^{*}M\to\mathbb{R}\) is a continuous function on the cotangent bundle that satisfies the following properties:
1. \(H\) is smooth on the manifold \(\widehat{T^{*}M}\);
2. the Hamilton metric, \(h_{H}\), with components, \[h_{H}^{\mu\nu}(x,p)=\frac{1}{2}\frac{\partial}{\partial p_{\mu}}\frac{\partial }{\partial p_{\nu}}H(x,p)\,,\] (88) is nondegenerate.
Since one does not have an arc-length functional, worldlines as extremizing curves are an absent concept in this approach. Instead, the equations of motion of a particle that obeys a given Hamiltonian are given by the Hamilton equations of motion:
\[\dot{x}^{\mu} =\frac{\partial H}{\partial p_{\mu}}\,, \tag{89}\] \[\dot{p}_{\mu} =-\frac{\partial H}{\partial x^{\mu}}\,. \tag{90}\]
Since this is just another metric structure defined in the cotangent bundle, the same tools for coordinate transformations, given by Equations (54) and (55), are applicable here. As in the case of Hamiltonian mechanics, the definition of Poisson brackets is sufficient for our purposes. For two real-valued functions \(F(x,p)\) and \(G(x,p)\), their Poisson bracket is given by [56] (the geometry of the cotangent bundle with a deformed Hamiltonian can also be described in the language of symplectic geometry, which is reviewed in Ref. [60]):
\[\{F(x,p),G(x,p)\}=\partial_{\mu}F\bar{\partial}^{\mu}G-\partial_{\mu}G\bar{ \partial}^{\mu}F\,. \tag{91}\]
As above, in order to divide the tangent and cotangent spaces of the cotangent bundle into horizontal and vertical spaces, a non-linear connection is necessary, and the canonical choice is given in Theorem 5.5.1 of Ref. [47] and Definition 2 of Ref. [56] as
\[O_{\mu\nu}(x,p)=\frac{1}{4}(\{h_{\mu\nu}^{H},H\}+h_{\mu\alpha}^{H}\partial_{ \nu}\bar{\partial}^{\alpha}H+h_{\nu\alpha}^{H}\partial_{\mu}\bar{\partial}^{ \alpha}H)\,, \tag{92}\]
where \(h_{\mu\nu}^{H}\) is the inverse of the metric \(h_{H}^{\mu\nu}\). This non-linear connection allows us to use the basis \(\delta_{\mu}=\partial_{\mu}+O_{\mu\nu}\bar{\partial}^{\nu}\) and \(\bar{\partial}^{\mu}\) as a special basis of \(T_{(x,p)}\widetilde{T^{*}M}\), and the basis \(dx^{\mu}\) and \(\delta p_{\mu}=dp_{\mu}-O_{\nu\mu}dx^{\nu}\) as a special basis of \(T^{*}_{(x,p)}\widetilde{T^{*}M}\), which transform according to Equations (64), (65) and (67), (68).
Endowed with these coefficients, following Theorem 5.6.1 of Ref. [47], there exists a unique \(N\)-linear connection \(D\Gamma(O)=(H_{\mu\nu}^{\alpha},C_{\alpha}^{\mu\nu})\) such that:
1. \(O_{\mu\nu}\) is the canonical non-linear connection;
2. the metric \(h_{H}^{\mu\nu}\) is \(h\)-covariant constant (no horizontal non-metricity): \[D_{\delta_{\alpha}}h_{H}^{\mu\nu}=0\,;\] (93)
3. the metric \(h_{H}^{\mu\nu}\) is \(v\)-covariant constant (no vertical non-metricity): \[D_{\bar{\partial}^{\alpha}}h_{H}^{\mu\nu}=0\,;\] (94)
4. \(D\Gamma(N)\) is horizontally torsion free: \[T^{\alpha}_{\ \mu\nu}=H^{\alpha}_{\mu\nu}-H^{\alpha}_{\nu\mu}=0\,;\] (95)
5. \(D\Gamma(N)\) is vertically torsion free: \[S_{\alpha}^{\ \mu\nu}=C^{\mu\nu}_{\alpha}-C^{\nu\mu}_{\alpha}=0\,;\] (96)
6. the triple \((O_{\mu\nu},H^{\alpha}_{\mu\nu},C^{\mu\nu}_{\alpha})\) has coefficients given by \[O_{\mu\nu}(x,p)=\frac{1}{4}(\{h^{H}_{\mu\nu},H\}+h^{H}_{\mu\alpha}\partial_{\nu}\bar{\partial}^{\alpha}H+h^{H}_{\nu\alpha}\partial_{\mu}\bar{\partial}^{\alpha}H)\,,\] (97) \[H^{\alpha}_{\mu\nu}=\frac{1}{2}h^{\alpha\beta}_{H}(\delta_{\mu}h^{H}_{\beta\nu}+\delta_{\nu}h^{H}_{\beta\mu}-\delta_{\beta}h^{H}_{\mu\nu})\,,\] (98) \[C^{\mu\nu}_{\alpha}=-\frac{1}{2}h^{H}_{\alpha\beta}\bar{\partial}^{\mu}h^{\beta\nu}_{H}\,.\] (99)
This is called a Cartan \(N\)-linear covariant derivative. Equivalently, the notion of \(d\)-tensors and their derivatives discussed in Section IV.1 are applicable.
### Symmetries
Hamilton geometry also allows one to encompass a DSR language, as was the case for Finsler geometry discussed in Section III.2. However, its realization does not come from the invariance of an interval \(ds^{2}\), since one does not have it, but from the invariance of the Hamiltonian function \(H(x,p)\). The approach we highlight here starts from Definition 4 of Section II-D of Ref. [56]. In a Hamilton space \((M,H)\) with manifold \(M\) and Hamiltonian \(H\), let \(X=\xi^{\mu}\partial_{\mu}\) be a vector field on the base manifold \(M\) and \(X^{C}=\xi^{\mu}\partial_{\mu}-p_{\nu}\partial_{\mu}\xi^{\nu}\bar{\partial}^{\mu}\) be the so-called complete lift of \(X\) to \(\widetilde{T^{*}M}\). A symmetry of the Hamiltonian is a transformation generated by \(X^{C}\), whose components satisfy
\[X^{C}(H)=\xi^{\mu}\partial_{\mu}H-p_{\nu}\partial_{\mu}\xi^{\nu}\bar{\partial}^{\mu}H=0\,. \tag{100}\]
Differentiating this expression twice with respect to the momenta, one gets the following result:
\[0=\frac{1}{2}\bar{\partial}^{\mu}\bar{\partial}^{\nu}X^{C}(H)=\xi^{\alpha} \partial_{\alpha}h^{\mu\nu}_{H}-h^{\mu\alpha}_{H}\partial_{\alpha}\xi^{\nu}- h^{\nu\alpha}_{H}\partial_{\alpha}\xi^{\mu}-p_{\beta}\partial_{\alpha}\xi^{ \beta}\bar{\partial}^{\alpha}h^{\mu\nu}_{H}\,. \tag{101}\]
This is just the generalization of the Killing equation to a general Hamilton space; if \(h_{H}\) does not depend on the momenta, it reduces to the standard Riemannian case. Besides, from the expression of the Poisson brackets (91), it can be verified that such symmetries give rise to conserved charges \(\xi^{\mu}p_{\mu}\), i.e., quantities that Poisson-commute with the Hamiltonian:
\[\{\xi^{\mu}p_{\mu},H\}=0. \tag{102}\]
These are the charges that, at an algebraic level, can generate translations, boosts, and rotations, for instance.
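As a simple illustration of Equation (102), the sketch below (our own minimal example, in the \(\ell=H=0\) limit) checks that the undeformed boost charge of 1+1D Minkowski space Poisson-commutes with the undeformed Hamiltonian:

```python
import sympy as sp

# Sketch: the boost charge xi^mu p_mu of undeformed 1+1D Minkowski space
# Poisson-commutes with H = p0^2 - p1^2, illustrating Eq. (102).
x = list(sp.symbols('x0 x1'))
p = list(sp.symbols('p0 p1'))

Ham = p[0]**2 - p[1]**2
xi = [x[1], x[0]]                        # boost Killing vector
charge = sum(xi[m]*p[m] for m in range(2))

def pb(F, G):
    # Poisson bracket of Eq. (91)
    return sum(sp.diff(F, x[m])*sp.diff(G, p[m]) - sp.diff(G, x[m])*sp.diff(F, p[m])
               for m in range(2))

print(sp.simplify(pb(charge, Ham)))      # -> 0
```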
### Hamilton-q-de Sitter (Cotangent Bundle Case)
As an example, we rely on the results presented in Ref. [56], which are likewise inspired by the \(q\)-de Sitter Hamiltonian (45) (note that the results below correspond to the opposite sign convention for the \(Hx^{0}\) term in comparison with Equation (45)). In this case, the Hamilton metric, defined by Equation (88), reads:
\[h^{\mu\nu}_{H}(x,p)=\begin{pmatrix}1&-\ell p_{1}(1+2Hx^{0})\\ -\ell p_{1}(1+2Hx^{0})&-(1+2Hx^{0})(1+\ell p_{0})\end{pmatrix}\,, \tag{103}\]
which, as can be seen, acquires a much simpler form than the rainbow-Finsler one (83), due to the more direct way in which it is calculated.
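Indeed, the metric (103) can be reproduced in a few lines. The sympy sketch below takes the momentum Hessian (88) of the \(q\)-de Sitter MDR, written in the \((1+2Hx^{0})\) sign convention used in this Section:

```python
import sympy as sp

# Sketch: the Hamilton metric (103) as the momentum Hessian (88) of the
# q-de Sitter MDR, in the (1 + 2Hx^0) sign convention of this Section.
ell, Hc, x0 = sp.symbols('ell H x0')
p = list(sp.symbols('p0 p1'))

Ham = p[0]**2 - p[1]**2*(1 + ell*p[0])*(1 + 2*Hc*x0)
h = sp.Matrix(2, 2, lambda i, j: sp.diff(Ham, p[i], p[j])/2)
print(sp.simplify(h))
# -> Matrix([[1,                     -ell*p1*(2*H*x0 + 1)],
#            [-ell*p1*(2*H*x0 + 1), -(ell*p0 + 1)*(2*H*x0 + 1)]])
```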
The non-linear connection can be read from Equation (92) and can be cast in a matrix form due to its simplicity:
\[O_{\mu\nu}(x,p)=\begin{pmatrix}H\ell p_{1}^{2}&Hp_{1}\\ Hp_{1}&Hp_{0}(1-\ell p_{0})\end{pmatrix}\,. \tag{104}\]
As expected, it coincides with the Finslerian coefficients (84)-(87) in the Riemannian case, i.e., when \(\ell=0\).
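The connection (104) also follows mechanically from Equations (88), (91) and (92); the sketch below assembles it and truncates at first order in \(\ell\) and \(H\) (mixed \(\ell H\) terms kept), again assuming the \((1+2Hx^{0})\) convention:

```python
import sympy as sp

# Sketch: canonical non-linear connection (92) for the q-de Sitter MDR,
# truncated at first order in ell and H; reproduces Eq. (104).
ell, Hc = sp.symbols('ell H')
x = list(sp.symbols('x0 x1'))
p = list(sp.symbols('p0 p1'))

Ham = p[0]**2 - p[1]**2*(1 + ell*p[0])*(1 + 2*Hc*x[0])
hup = sp.Matrix(2, 2, lambda i, j: sp.diff(Ham, p[i], p[j])/2)   # Eq. (88)
hdn = hup.inv()                                                  # its inverse

def pb(F, G):
    # Poisson bracket of Eq. (91)
    return sum(sp.diff(F, x[m])*sp.diff(G, p[m]) - sp.diff(G, x[m])*sp.diff(F, p[m])
               for m in range(2))

def trunc(e):
    # keep terms up to first order in ell and in H
    e = sp.expand(e.series(ell, 0, 2).removeO())
    return sp.expand(e.series(Hc, 0, 2).removeO())

O = sp.Matrix(2, 2, lambda mu, nu: trunc(
    (pb(hdn[mu, nu], Ham)
     + sum(hdn[mu, al]*sp.diff(Ham, x[nu], p[al]) for al in range(2))
     + sum(hdn[nu, al]*sp.diff(Ham, x[mu], p[al]) for al in range(2)))/4))

target = sp.Matrix([[Hc*ell*p[1]**2, Hc*p[1]],
                    [Hc*p[1], Hc*p[0]*(1 - ell*p[0])]])
print(sp.simplify(O - target))   # -> Matrix([[0, 0], [0, 0]])
```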
The Hamilton equations of motion can be found from Equations (89) and (90) and read:
\[\dot{x}^{0}-2p_{0}+\ell p_{1}^{2}(1+2Hx^{0})=0\,, \tag{105}\] \[\dot{x}^{1}+2p_{1}(1+Hx^{0})+2\ell p_{0}p_{1}(1+2Hx^{0})=0\,,\] (106) \[\dot{p}_{0}-2Hp_{1}^{2}-2H\ell p_{0}p_{1}^{2}=0\,,\] (107) \[\dot{p}_{1}=0\,. \tag{108}\]
The autoparallel (horizontal) curves of the non-linear connection satisfy (see Equation (8.2) in Ref. [47])
\[\dot{p}_{\mu}-O_{\nu\mu}\dot{x}^{\nu}=0\,, \tag{109}\]
and, as can be seen from Equation (104) for \(O_{\mu\nu}\), the worldlines, defined from the Hamilton equations of motion, are not autoparallels of the non-linear connection.
The symmetries have also been analyzed in Ref. [56], where it has been noticed that the conserved charges that generate translations and the boost coincide with the results from Ref. [51] that do not rely on the geometrical approach used in this paper.
## VI The tangent-bundle version of Hamilton geometry
Endowed with the Hamilton equations of motion (89), one has a map between momenta and velocities, \(\dot{x}^{\mu}=y^{\mu}=\partial H/\partial p_{\mu}\). When it is possible to invert this map to find \(p_{\mu}=p_{\mu}(y)\) (as done in Appendix B of Ref. [58]), one derives an interesting map between the cotangent and tangent space versions of Hamilton geometry. Indeed, using this map, a Hamilton metric defined in the tangent bundle reads:
\[g_{H}^{\mu\nu}(x,y)\doteq h_{H}^{\mu\nu}(x,p(y))\,. \tag{110}\]
The dual non-linear connection in this case has been discussed in Appendix C of Ref. [56], and is given by
\[N(x,y)^{\mu}{}_{\nu}=2O(x,p(y))_{\nu\alpha}h_{H}^{\alpha\mu}(x,p(y))-(\partial _{\nu}\bar{\partial}^{\mu}H)|_{p=p(y)}\,. \tag{111}\]
Its main property is the preservation of the horizontal tangent spaces of the cotangent and tangent bundle connected through the kinematical map \(y^{\mu}=\partial H/\partial p_{\mu}\).
With this map, it is possible to define the dual non-linear and \(N\)-linear connections, now defined in the tangent bundle. It should be stressed that although this gives geometrical quantities defined in the tangent bundle, this does not represent a Finsler geometry, since there is no arc-length functional and the Hamilton metric is not, in general, \(0\)-homogeneous to start with.
### Hamilton-q-de Sitter (Tangent Bundle Case)
The kinematical map \(p=p(y)\) is found by inverting the relation \(y^{\mu}=\partial H/\partial p_{\mu}\) for the \(q\)-de Sitter Hamiltonian, and reads
\[p_{0} =\frac{y^{0}}{2}+\ell\frac{(y^{1})^{2}}{8}\,, \tag{112}\] \[p_{1} =-\frac{y^{1}}{2}+H\frac{x^{0}y^{1}}{2}+\ell\frac{y^{0}y^{1}}{4}\,. \tag{113}\]
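These expressions can be recovered by perturbatively inverting the Hamilton equations (105) and (106); a minimal sympy sketch, dropping mixed \(\ell H\) terms as elsewhere in this paper:

```python
import sympy as sp

# Sketch: invert the Hamilton equations (105)-(106) perturbatively to recover
# the kinematical map (112)-(113), at first order in ell and H.
ell, Hc, x0 = sp.symbols('ell H x0')
y0, y1 = sp.symbols('y0 y1')

p0, p1 = y0/2, -y1/2                     # zeroth-order seed
for _ in range(2):                       # fixed-point iterations of (105)-(106)
    p0, p1 = ((y0 + ell*p1**2*(1 + 2*Hc*x0))/2,
              -(y1/2 + ell*p0*p1*(1 + 2*Hc*x0))/(1 + Hc*x0))

def lin(e):
    # keep first order in ell and H separately, drop mixed ell*H terms
    e = sp.expand(e.series(ell, 0, 2).removeO())
    e = sp.expand(e.series(Hc, 0, 2).removeO())
    return sp.expand(e - ell*Hc*e.coeff(ell).coeff(Hc))

print(sp.simplify(lin(p0) - (y0/2 + ell*y1**2/8)))                # -> 0
print(sp.simplify(lin(p1) - (-y1/2 + Hc*x0*y1/2 + ell*y0*y1/4)))  # -> 0
```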
The metric in the tangent bundle reads:
\[g_{H}^{\mu\nu}(x,y)=\begin{pmatrix}1&\ell(Hx^{0}y^{1}+y^{1})/2\\ \ell(Hx^{0}y^{1}+y^{1})/2&-(1+2Hx^{0})(1+\ell y^{0}/2)\end{pmatrix}\,. \tag{114}\]
The dual non-linear connection reads
\[N^{\mu}{}_{\nu}(x,y)=\begin{pmatrix}-H\ell(y^{1})^{2}/2&\ell Hy^{0}y^{1}+Hy^{1}\\ Hy^{1}&-Hy^{0}-3\ell H(y^{1})^{2}/4\end{pmatrix}\,. \tag{115}\]
In Section VII below, some key points of each approach are discussed while comparing the descriptions of configuration and phase spaces.
## VII Advantages and Difficulties of Each Formalism
The approaches considered, Finsler and Hamilton spaces, present points that can be considered positive or negative. In this Section, we highlight some of those points which seem most important from the theoretical and phenomenological points of view.
### Finsler Geometry
Let us emphasize that we do not give a complete list of positive and negative points here; certainly, the points listed represent just our view on the subject under scrutiny, and some points that we classify one way or another can be seen by others completely differently.
#### vii.1.1 Advantages
**Preservation of the equivalence principle**. Due to the presence of an arc-length functional, the extremizing geodesics of the Finsler function are the same worldlines of the Hamiltonian from which the arc-length was derived. This means that, in the Finslerian language, the equivalence principle is satisfied, since the worldlines are trajectories of free particles in this spacetime. There is a fundamental difference in comparison to the special or general relativity formulation: these trajectories are now mass-dependent, because the Finsler function and the metric carry the mass of the particle due to Planck-scale effects. Intriguingly, although the metric does not present a massless limit (which is discussed below), it is possible to find trajectories of massless particles, compatible with the Hamiltonian formulation, by taking the limit \(m\to 0\) in the geodesic equation [49; 50]. This finding leads to some effects due to modifications of the trajectories of particles. For instance, one of the most explored avenues of quantum gravity phenomenology (maybe competing with threshold effects) is the time delay with which particles with different energies arrive at a detector after an (almost) simultaneous emission [61; 62] (for reviews, see [4; 5]). This kind of experimental investigation is not exhausted, and novelties have arrived in the analysis of sets of gamma-ray bursts and candidate neutrinos emitted from them in the multimessenger astronomy approach [63; 64].
**Preservation of the relativity principle**. This formalism allows one to derive and solve the Killing equation, which furnishes infinitesimal symmetry transformations of the metric. It has been shown in Ref. [49] that generators of these transformations can be constructed and identified with the transformations generally depicted in doubly special relativity. The latter implies a preservation of the relativity principle: inertial frames should assign the same MDR to a given particle, which, in turn, implies that the deformation scale of quantum gravity is observer-independent, i.e., two observers would not assign different values, in the same system of units, to the quantum gravity scale. This preservation has important phenomenological consequences, such as the point that threshold constraints on the quantum gravity parameter do not apply in the DSR scenario. The reason is that, accompanying the deformation of the Lorentz (Poincare) symmetries, there is a deformation of the composition law of the momenta of particles (for instance, \(p\) and \(q\)), such that the nature of interaction vertices does not get modified when transforming from one frame to another:
\[\Lambda(p\oplus q)=\Lambda(p)\oplus\Lambda(q)\,, \tag{116}\]
where \(\Lambda\) is a deformed Lorentz transformation induced by the Killing vectors and \(\oplus\) represents a modified composition of the components of the involved momenta (this covariance condition usually needs a back-reaction on the boost parameter, but we do not dwell on that here; for more details, see [65; 55] and references therein). Threshold constraints, such as the one placed in Ref. [66], assume that the composition of momenta is undeformed, although the dispersion relation is modified, in a Lorentz invariance violation (LIV) scenario. When this is the case, processes that are forbidden in special relativity, such as the decay of the photon into an electron-positron pair, become kinematically allowed above a given threshold energy. The non-observation of such decays allows one to place constraints on the quantum gravity parameter. When the composition law is modified as well, these kinds of processes generally remain forbidden, or the modifications in the threshold energies are so minute that they are unobservable for a quantum gravity parameter of the order of the Planck energy [55]. This is an important feature of "deforming" instead of "violating" the Lorentz symmetry.
**Preservation of the clock postulate**. The availability of an arc-length functional makes it possible to analyze the consequences of having the proper time of a given particle defined by it. If this is the case, then the worldlines or geodesics are just paths that extremize the proper time an observer measures in spacetime, as in special relativity. One of the consequences of this feature is the possibility of connecting the time elapsed in the comoving frame of a particle during its lifetime (which is its lifetime at rest) and the coordinate time, which is the one assigned to this phenomenon in the laboratory coordinates. Using this expression, one can investigate the relativistic time dilation (responsible for the "twin paradox"), or the so-called first clock effect (for further details on the first and also on the second clock effect, which can appear in theories with a non-metricity tensor, see Ref. [67]), in which, for instance, the lifetime of a particle is dilated in comparison to the one assigned in the laboratory. Due to Finslerian corrections, the lifetime of a particle in the laboratory would receive Planckian corrections, which is a novel avenue of phenomenological investigation currently being carried out [43; 55] through the search for signatures in particle accelerators and cosmic rays.
#### iv.1.2 Difficulties
**Absence of a massless rainbow Finsler metric**. The Finsler approach emerged as an opportunity to describe, in a consistent way, the intuition that the quantum spacetime probed by a high-energy particle would present corrections that depend on the energy-momentum of the particle itself, which is justified by different approaches to quantum gravity [24; 25]. Since then, proposals of rainbow metrics have considered a smooth transition from the massive to the massless case, not only from the point of view of the trajectories, but of the metric itself. This is not the case for the Finsler approach presented here. Although the trajectories and symmetries are defined for both massive and massless cases by considering the \(m\to 0\) limit, the rainbow metric of Finsler geometry, given by Equation (83), is certainly not defined for massless particles. The reason for this is that, when passing from the Hamiltonian to the Lagrangian formalism, we defined an arc-length functional, which is not a legitimate action functional for massless particles. In other words, a crucial step in deriving the Finsler function is the handling of the Lagrange multiplier \(\lambda\) of the action (4), whose equation can only be solved if the particle is massive, as can be seen in Refs. [43; 49; 50; 53]. A possibility that has been explored consists of not solving the equation for \(\lambda\) and defining a metric that depends on \(\lambda\) and on the velocities from a Polyakov-like action for free particles (instead of the Nambu-Goto-like one given by the arc-length), which turns out to be outside the scope of Finsler geometry [50; 53]. However, this possibility has not been explored beyond preliminary investigations. The issue of the absence of a massless rainbow-Finsler metric could be circumvented by proposing a different kind of geometry that starts from the momentum formulation from the very beginning, like the other possibility described in this paper, namely, the Hamilton geometry.
**Definition only through perturbations**. Finsler geometry has been considered in this context at most perturbatively in the quantum-gravity length scale (or inverse energy scale), which may be considered a negative point if one aims at a more fundamental or theoretical treatment. Nevertheless, from the pragmatic perspective of phenomenology, since such effects, if they exist, are minute, the perturbative approach is enough for proposing new effects that could serve as avenues of experimental investigation.
**The handling of finite symmetries**. Another issue that can be problematic is the handling of finite symmetries in the Finsler context. To date, the connection between Finsler geometry and quantum gravity phenomenology has not faced the issue of integrating the symmetries and finding finite versions of the deformed Lorentz transformations. Some initial investigations were carried out in Ref. [55] from the momentum space perspective, but further issues are currently being faced by some authors of the present paper.
### Hamilton Geometry
The descriptions of the points in this Section will be a bit shorter than the previous ones because some points are common to both approaches and were already described above; we refer the reader to the corresponding discussion when that is the case.
#### vii.2.1 Advantages
**Presence of a massless rainbow Hamilton metric**. Unlike the Finsler case, the Hamilton geometry does not need an arc-length functional; instead, it only needs a given Hamiltonian, from which the metric, non-linear connection, and symmetries are derived. This means that from the very beginning, the massless limit of the geometrical quantities exists.
**Does not require perturbative methods**. Another positive point about the Hamilton geometry is that one can work with the exact form of the proposed Hamiltonian, without needing to consider perturbations around a certain scale. Instead, independently of the form of the (smooth) dispersion relation that arises from de facto approaches to quantum gravity, the geometry can be handled exactly, as has been done, e.g., in Refs. [57; 58].
**Preservation of the relativity principle and the handling of symmetries**. Due to the proximity of this approach to the way that the DSR formalism generally handles Planck-scale corrections, i.e., from the point of view of momentum space and Hamilton equations, the handling of symmetries is facilitated in this approach. For instance, it is straightforward to find the conserved charges from the Killing vectors, which generate finite transformations that are momenta-dependent, without tedious terms in the denominator of the equations when one is working in velocity space, as Finsler geometry is initially formulated (or without mass terms in the denominator in the Finsler version of the phase space).
**Generalization to curved spacetimes**. This approach has been considered in more general curved-space cases, beyond the \(q\)-de Sitter one exemplified in this paper; for instance, its spherically symmetric and cosmological versions were explored, giving rise to interesting phenomenological opportunities from the point of view of time delays and gravitational redshift, among others (for some applications of Hamilton geometry in this context, see [59] and references therein).
#### vii.2.2 Difficulties
**Non-geodesic trajectory**. An issue that may be considered problematic is that the worldlines of particles, given by the Hamilton equations, are not geodesics of the non-linear connection; that is, there exists a force term in the geodesic equation, in contrast with the Finsler case. This is a property of the Hamilton geometry, as has been shown in Ref. [56], and is not specific to the \(q\)-de Sitter case analyzed here.
**Absence of the arc-length**. The Hamilton geometry does not come with an arc-length functional, which means that the only geodesics present are those of the non-linear or of the \(N\)-linear connections, and there are no extremizing ones. The absence of a function that allows one to measure distances in spacetime can be seen as a difficulty of this geometry; if distances cannot be calculated, one could wonder what such a metric means. Even if the norm of a tangent vector can be integrated, this integral would not be, in general, parametrization-independent, which is also a drawback of this approach. Besides, the absence of an arc-length limits the phenomenology of the preservation of the clock postulate that was discussed in the Finsler case.
## VIII Final remarks
We revised two proposals that have been considered as candidates for describing the quantum configuration and phase spaces probed by particles whose kinematics are modified by a length scale identified as the quantum gravity scale.
Finsler geometry starts from a configuration space framework that has applications of its own in biology, thermodynamics, and modified gravity; and it finds a natural environment in quantum gravity phenomenology due to its power to describe a scenario in which important principles that guided physics in the XXth century, such as the relativity principle, are preserved even in a Planckian regime. Besides its traditional description in terms of the couple spacetime and velocity space (configuration space), we also explored its development in terms of the induced couple spacetime and momentum space (phase space), which is actually more appropriate for a pure quantum description than the configuration space. Some points that we consider positive and negative, which are consequences of the requirement of the Finsler language, namely the derivation of an arc-length functional defined on the slit tangent bundle, are discussed in Section VII.
The second case of the present study is the Hamilton geometry, whose properties are derived directly from the Hamiltonian itself, without the need to go through the definition of an arc-length. Actually, in general, the Hamilton metric does not even define a curve-parametrization-invariant length measure, which brings some limitations to phenomenological investigations of this subject in quantum gravity. On the other hand, this feature circumvents some intrinsic difficulties of Finsler geometry, which were also discussed in Section VII.
The goal of this paper was to review some topics of these two important geometries by using kinematical descriptions of particles whose behavior might present departures from special relativity results due to the effective quantum gravity influence. We also aimed to highlight some points that we consider under-explored perspectives on the subject, by explicitly presenting some geometric quantities that are dual to those in which the formalisms were originally presented, such as the dual metrics and non-linear connections (the Finslerian one being proposed in this paper, inspired by definitions in the Hamilton geometry literature) of Finsler and Hamilton geometries in the cotangent and tangent bundles, respectively.
At least two global points could be considered insufficiently explored or unexplored in this subject. One is the geometry probed by a (non-)interacting multi-particle system. Some challenges of this problem can be found, for instance, in Ref. [68], but the relations between the approaches described there and Finsler/Hamilton geometries remain unclear. Another point that remains unexplored consists of the dynamics of the configuration/phase space in a way that is compatible with quantum gravity phenomenology-inspired approaches. For instance, one could wonder if there exists a gravitational field theory defined in Finsler or Hamilton spaces that has \(q\)-de Sitter or other proposals as solutions, and how matter would interact in this scenario. The exploration of this topic might shed light on the one regarding a multi-particle system. These are further challenges that might be subjects of future research in this area and which may help to build a bridge between quantum and modified gravities.
## Acknowledgements
I.P.L. was partially supported by the National Council for Scientific and Technological Development--CNPq grant 306414/2020-1, and by the grant 3197/2021, Paraíba State Research Foundation (FAPESQ). I.P.L. would like to acknowledge the contribution of the COST Action CA18108. L.C.N.S. would like to thank Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for partial financial support through the research project no. 164762/2020-5. V.B.B. is partially supported by CNPq through the research project no. 307211/2020-7. P.H.M. and S.A. thank Coordenação de Aperfeiçoamento de Pessoal de Nível Superior--Brazil (CAPES)--Finance Code 001, for financial support. G.V.S. thanks Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for financial support. E.R. and G.M. were supported by the PIBIC program of the Federal University of Paraíba.
## Appendix A Dual Finsler Nonlinear Connection
The momenta of a particle in Finsler geometry, given by the following expression,
\[p_{\mu}=m\frac{\partial F}{\partial y^{\mu}}\equiv m\dot{\partial}_{\mu}F\,, \tag{100}\]
defines a kinematical map between velocity and momenta variables at each given point in the base manifold \(M\). We refer to such a map as
\[\flat:\quad\widetilde{TM}\to\widetilde{T^{*}}M \tag{101}\] \[(x,y)\mapsto\flat(x,y)=(x,m\dot{\partial}F(x,y))=(x,p(x,y))\,. \tag{102}\]
Inspired by the construction of Ref. [56], the condition that a nonlinear connection in the tangent bundle is dual to one in the cotangent bundle by a kinematical map, \(\flat\), is that such an application maps the tangent space of the tangent bundle onto the tangent space of the cotangent bundle. This means that the differential of such a map carries the preferred basis of one tangent space, \(\delta_{\mu}=\partial_{\mu}-N^{\nu}{}_{\mu}\dot{\partial}_{\nu}\), onto the other, \(d\flat(\delta_{\mu})=\delta^{\prime}_{\mu}=\partial_{\mu}+O_{\mu\nu}\bar{\partial}^{\nu}\). In particular, the action of this differential on a vector \(X=X^{\mu}\partial_{\mu}+\dot{X}^{\mu}\dot{\partial}_{\mu}\) is given by
\[d\flat_{(x,y)}:\quad T_{(x,y)}\widetilde{TM} \to T_{\flat(x,y)}\widetilde{T^{*}}\widetilde{M}\,, \tag{103}\] \[X=X^{\mu}\partial_{\mu}+\dot{X}^{\mu}\dot{\partial}_{\mu} \mapsto d\flat_{(x,y)}(X)=X^{\mu}d\flat_{(x,y)}(\partial_{\mu})+ \dot{X}^{\mu}d\flat_{(x,y)}(\dot{\partial}_{\mu})\] (104) \[=X^{\mu}(\partial_{\mu}+m\partial_{\mu}\dot{\partial}_{\nu}F \bar{\partial}^{\nu})+m\dot{X}^{\mu}\dot{\partial}_{\mu}\dot{\partial}_{\nu} F\bar{\partial}^{\nu}\,. \tag{105}\]
By acting on the basis vectors \(\delta_{\mu}=\partial_{\mu}-N^{\nu}{}_{\mu}\dot{\partial}_{\nu}\), one finds:
\[d\flat_{(x,y)}(\delta_{\mu})=d\flat_{(x,y)}(\partial_{\mu})-N^{\nu}{}_{\mu}d\flat_{(x,y)}(\dot{\partial}_{\nu})=\partial_{\mu}+m\partial_{\mu}\dot{\partial}_{\nu}F\bar{\partial}^{\nu}-mN^{\nu}{}_{\mu}\dot{\partial}_{\nu}\dot{\partial}_{\alpha}F\bar{\partial}^{\alpha}\,. \tag{106}\]
In order to simplify this expression, the relation \(2g_{\nu\alpha}=\dot{\partial}_{\nu}\dot{\partial}_{\alpha}F^{2}=\dot{\partial} _{\nu}(2F\dot{\partial}_{\alpha}F)\) is used that leads to
\[\dot{\partial}_{\nu}\dot{\partial}_{\alpha}F=\frac{g_{\nu\alpha}-p_{\nu}p_{ \alpha}/m^{2}}{F}\,. \tag{107}\]
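For the reader's convenience, we spell out the product-rule step behind this identity (our added computation, using \(p_{\mu}=m\dot{\partial}_{\mu}F\)):

\[2g_{\nu\alpha}=\dot{\partial}_{\nu}\big{(}2F\dot{\partial}_{\alpha}F\big{)}=2\,\dot{\partial}_{\nu}F\,\dot{\partial}_{\alpha}F+2F\,\dot{\partial}_{\nu}\dot{\partial}_{\alpha}F=\frac{2}{m^{2}}\,p_{\nu}p_{\alpha}+2F\,\dot{\partial}_{\nu}\dot{\partial}_{\alpha}F\,,\]

which rearranges to the expression above.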
From this expression, one finds that
\[d\flat_{(x,y)}(\delta_{\mu})=\partial_{\mu}-m\left[N^{\alpha}{}_{\mu}\frac{(g _{\alpha\nu}-p_{\alpha}p_{\nu}/m^{2})}{F}-\partial_{\mu}\dot{\partial}_{\nu} F\right]\bar{\partial}^{\nu}=\partial_{\mu}+O_{\mu\nu}\bar{\partial}^{\nu}\,, \tag{108}\]
which leads to the dual nonlinear connection,
\[O_{\mu\nu}(x,p)=-m\left[N^{\alpha}{}_{\mu}\frac{(g_{\alpha\nu}-p_{\alpha}p_{\nu}/m^{2})}{F}-\partial_{\mu}\dot{\partial}_{\nu}F\right]\Bigg{|}_{(x,y(p))}\,. \tag{109}\]
|
2303.10760 | Geometric linearisation for optimal transport with strongly p-convex
cost | We prove a geometric linearisation result for minimisers of optimal transport
problems where the cost-function is strongly p-convex and of p-growth. Initial
and target measures are allowed to be rough, but are assumed to be close to
Lebesgue measure. | Lukas Koch | 2023-03-19T20:26:46Z | http://arxiv.org/abs/2303.10760v3 | # Geometric linearisation for optimal transport with strongly \(p\)-convex cost
###### Abstract
We prove a geometric linearisation result for minimisers of optimal transport problems where the cost function is strongly \(p\)-convex and of \(p\)-growth. Initial and target measures are allowed to be rough, but are assumed to be close to Lebesgue measure.
## 1 Introduction
The study of the optimal transport problem:
\[\min_{\pi\in\Pi(\lambda,\mu)}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}c(x-y)\, \mathrm{d}\pi \tag{1.1}\]
is well established. We refer the reader to [18] and [17] for an introduction and overview of the literature. When solutions take the form of a transport map \(\pi=(\mathrm{Id}\times T)_{\#}\lambda\), under mild assumptions, minimisers are characterised by satisfying the Euler-Lagrange equation
\[\lambda(x)\mathrm{det}\left(\mathrm{D}T(x)\right)=\mu(T(x)) \tag{1.2}\]
as well as the additional structure condition
\[T(x)=x+\nabla c^{*}(\mathrm{D}\phi), \tag{1.3}\]
where \(\phi\) is a \(c\)-concave function and \(c^{*}\) denotes the convex conjugate of \(c\). Assuming \(\mu\sim\lambda\sim 1\) and linearising the geometric nonlinearity in (1.2), that is formally expanding \(\mathrm{det}(\mathrm{Id}+A)=1+\mathrm{tr}\,A+\ldots\), we find that
\[\mathrm{div}\,\nabla c^{*}(\mathrm{D}\phi)=\mu-\lambda. \tag{1.4}\]
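(As a quick symbolic sanity check of the expansion \(\mathrm{det}(\mathrm{Id}+A)=1+\mathrm{tr}\,A+\ldots\) used in this step, the following snippet — our illustrative addition, assuming the Python package sympy — verifies that the first-order coefficient is exactly the trace.)

```python
import sympy as sp

# Symbolic 3x3 matrix A and a small parameter epsilon.
eps = sp.symbols("epsilon")
A = sp.Matrix(3, 3, lambda i, j: sp.symbols(f"a{i}{j}"))

# det(Id + eps*A) is a polynomial in eps; its linear coefficient is tr A.
det_poly = sp.expand(sp.det(sp.eye(3) + eps * A))
print(sp.simplify(det_poly.coeff(eps, 1) - A.trace()))  # prints 0
```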
Thus, at least formally, we expect solutions of (1.1) to be well approximated by solutions of (1.4). Note that in general (1.4) is a nonlinear equation. Thus we refer to the process of moving from (1.1) to (1.4) as geometric linearisation. The aim of this paper is to make this connection rigorous. We show the following:
**Theorem 1.1**.: _Let \(1<p<\infty\). Suppose \(c\colon\mathbb{R}^{d}\to\mathbb{R}\) is a strongly \(p\)-convex cost function of controlled-duality \(p\)-growth. Let \(\pi\) be a minimiser of (1.1) for some measures \(\lambda\), \(\mu\). Denote_
\[E(R):=\frac{1}{|B_{R}|}\int_{(B_{R}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_{R})}c(x-y)\,\mathrm{d}\pi\] \[D(R):=\frac{1}{|B_{R}|}W_{p}^{p}(\lambda\llcorner B_{R},\kappa_{\lambda,R}\,\mathrm{d}x\llcorner B_{R})+\frac{R^{p}}{\kappa_{\lambda,R}^{p-1}}(\kappa_{\lambda,R}-1)^{p}+\frac{1}{|B_{R}|}W_{p}^{p}(\mu\llcorner B_{R},\kappa_{\mu,R}\,\mathrm{d}x\llcorner B_{R})+\frac{R^{p}}{\kappa_{\mu,R}^{p-1}}(\kappa_{\mu,R}-1)^{p}.\]
_Here \(\kappa_{\lambda,R}=\frac{\lambda(B_{R})}{|B_{R}|}\) and \(\kappa_{\mu,R}=\frac{\mu(B_{R})}{|B_{R}|}\). Then, for every \(\tau>0\), there exists \(\varepsilon(\tau)>0\) such that if \(E(4)+D(4)\leq\varepsilon\), then there exists a radius \(R\in(2,3)\), \(c\in\mathbb{R}\) and \(\phi\) satisfying_
\[-\mathrm{div}\,\nabla c^{*}(\mathrm{D}\phi)=c\text{ in }B_{R}\]
_such that_
\[\int_{(B_{1}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_{1})}c(x-y- \nabla c^{*}(\mathrm{D}\phi))\,\mathrm{d}\pi\lesssim\tau E(4)+D(4).\]
_Moreover_
\[\sup_{B_{1}}|\mathrm{D}\phi|^{p^{\prime}}+\int_{B_{R}}|\mathrm{D}\phi|^{p^{ \prime}}\,\mathrm{d}x\lesssim E(4)+D(4).\]
We remark that we explain our assumptions on the cost function in detail in Section 1.1.
Traditionally, (1.1) has been approached via the study of (1.2) using the theory of fully nonlinear elliptic equations developed by Caffarelli, see e.g. [3, 2] and the references therein. Recently, an alternative approach using variational techniques has been developed by Goldman and Otto in [8]. There, partial \(C^{1,\alpha}\)-regularity for solutions to (1.1) in the case of Hölder-regular densities \(\lambda\), \(\mu\) and quadratic cost function \(c(x-y)=\frac{1}{2}|x-y|^{2}\) was proven. The key tool in the proof was a version of Theorem 1.1 in this setting. In later papers, continuous densities [5], rougher measures [7], more general cost functions (albeit still close to the quadratic cost functional) [15], as well as almost-minimisers of the quadratic cost functional [15] were considered. The quadratic version of Theorem 1.1 was also used to provide a more refined linearisation result of (1.2) in the quadratic set-up in [7] and of a similar statement in the context of optimal matching in [6]. Finally, quadratic versions of Theorem 1.1 played a key role in disproving the existence of a stationary cyclically monotone Poisson matching in 2-d [11].
We remark that very little information is available about the regularity of minimisers of (1.1) already in the simplest degenerate/singular case \(c(x-y)=\frac{|x-y|^{p}}{p}\). In order to attempt to extend the techniques of [8] to this setting, an essential first step is Theorem 1.1. This result will also play a key role in extending the results of [11] to \(p\)-costs with \(p\neq 2\).
The strategy of proof is similar to that used in [7], although with a number of simplifications. Further, we point the reader to [12] where a detailed account of the
proof of Theorem 1.1 and the motivations behind the strategy are given in the quadratic case.
Here we only comment on the steps where additional effort is required. An essential part of the proof is to obtain a \(L^{\infty}\)-bound for minimisers of (1.1) in the small-energy regime, see Section 3. In the quadratic case, this relies on the monotonicity (in the classical sense) property of solutions. In the non-quadratic case, \(c\)-monotonicity needs to be used directly. Our proof relies on the same intuition but highlights how the \(L^{\infty}\)-bound is a direct consequence of convexity and growth bounds of the cost function. We remark that \(L^{\infty}\)-bounds for \(p\)-homogeneous cost functions with \(p\geq 2\) in all energy regimes were obtained in [9].
A further key step is the so-called quasi-orthogonality property, see Section 7. In the quadratic case, this relied on expanding squares. Again, if \(p\neq 2\), this tool is not available and needs to be replaced by exploiting inequalities expressing the convexity of \(c\). Finally, regularity properties of solutions to (1.4) play a key role in the proof. In the quadratic case, such solutions are harmonic and hence very regular- the proof in [7] requires \(C^{3}\)-regularity of solutions! Already in the case \(c(x-y)=\frac{|x-y|^{p}}{p}\) with \(p\neq 2\), the best regularity that is known for solutions to (1.4) in general is \(C^{1,\beta}\)-regularity for some \(\beta>0\). Thus, at various places in the proof, more careful estimates are needed.
Finally, we comment on why we restrict our attention to cost functions of the form \(c(x-y)\). This is due to the fact that our proof relies on the availability of a dynamical formulation. We want to identify points \((x,y)\in\operatorname{spt}\pi\) with the trajectory \(X(t)=tx+(1-t)y\). This is related to the Benamou-Brenier formulation of optimal transport, cf. [4], which in our case states that (1.1) can be alternatively characterised as
\[\min_{(j,\rho)}\left\{\int c\left(\frac{\mathrm{d}j}{\mathrm{d}\rho}\right) \,\mathrm{d}\rho\colon\partial_{t}\rho+\operatorname{div}j=0,\,\rho(0)= \lambda,\,\rho(1)=\mu\right\} \tag{1.5}\]
Here \(\frac{\mathrm{d}j}{\mathrm{d}\rho}\) denotes the Radon-Nikodym derivative. This alternative, dynamical formulation of optimal transport is only available for costs of the form \(c(x-y)\) where \(c\) is convex.
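To indicate heuristically why (1.5) is consistent with (1.1) for convex \(c\) (this is our added sketch, not an argument from the paper): given a coupling \(\pi\) and the straight-line trajectories \(X_{t}(x,y)=tx+(1-t)y\) used throughout, the pair

\[\rho_{t}=(X_{t})_{\#}\pi,\qquad j_{t}=(X_{t})_{\#}\big((x-y)\,\pi\big)\]

solves \(\partial_{t}\rho_{t}+\operatorname{div}j_{t}=0\) with \(\frac{\mathrm{d}j_{t}}{\mathrm{d}\rho_{t}}=\mathbb{E}_{\pi}[x-y\,|\,X_{t}]\), and Jensen's inequality for the convex function \(c\) gives

\[\int c\Big{(}\frac{\mathrm{d}j_{t}}{\mathrm{d}\rho_{t}}\Big{)}\,\mathrm{d}\rho_{t}\leq\int c(x-y)\,\mathrm{d}\pi\qquad\text{for each }t\in[0,1],\]

so that, up to the labelling of the endpoints, straight-line interpolation of a plan is admissible for (1.5) and costs no more than the static formulation.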
### Assumptions on the cost function and its dual
In this section, we explain our assumptions on the cost function \(c\). The convex theory we quote can be found in [16] and [10].
Let \(p\in(1,\infty)\). We consider a \(C^{1}\)-cost function \(c\colon\mathbb{R}^{d}\to\mathbb{R}\) satisfying the following properties: There is \(\Lambda\geq 1\) such that
* \(c\) is strongly \(p\)-convex: for any \(x,y\in\mathbb{R}^{d}\) and \(\tau\in[0,1]\), \[\Lambda^{-1}\tau(1-\tau)V_{p}(x,y)+c(\tau x+(1-\tau)y)\leq\tau c(x)+(1-\tau)c (y).\] (1.6) where \[V_{p}(x,y)=\begin{cases}(|x|^{2}+|y|^{2})^{\frac{p-2}{2}}|x-y|^{2}&\text{if }p \leq 2\\ |x-y|^{p}&\text{if }p\geq 2.\end{cases}\]
2. \(c\) has \(p\)-growth: for any \(x\in\mathbb{R}^{d}\), \[\Lambda^{-1}|x|^{p}\leq c(x)\leq\Lambda|x|^{p}.\] (1.7)
3. for any \(x,y\in\mathbb{R}^{d}\), \[|c(x)-c(y)|\leq\Lambda U_{p}(x,y)\] (1.8) where \[U_{p}(x,y)=(|x|+|y|)^{p-1}|x-y|.\]
If the choice of \(p\) is clear from the context, we will write \(V=V_{p}\) and \(U=U_{p}\). We further note the following elementary inequality, valid for any \(z_{1},z_{2}\in\mathbb{R}^{d}\) and with implicit constants depending only on \(p\) and \(d\),
\[|V_{p}(z_{1})-V_{p}(z_{2})|\lesssim(|z_{1}|+|z_{2}|)^{p-1}|z_{1}-z_{2}|. \tag{1.9}\]
Note that (1.6) and the fact that \(c\) is \(C^{1}\) imply that for any \(x,y\in\mathbb{R}^{d}\),
\[c(x)\geq c(y)+\langle\nabla c(y),x-y\rangle+\lambda V(x,y) \tag{1.10}\] \[\langle\nabla c(x)-\nabla c(y),x-y\rangle\geq\lambda V(x,y). \tag{1.11}\]
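As an illustration (our addition, not part of the paper), the following short numerical experiment probes the strong \(p\)-convexity condition (1.6) for the model cost \(c(z)=|z|^{p}/p\) with \(p=3\); the constant it reports is an empirical lower bound for \(\Lambda^{-1}\) over random samples, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
p, d = 3.0, 2  # model parameters: p >= 2, dimension 2

def c(z):
    # model cost c(z) = |z|^p / p
    return np.linalg.norm(z) ** p / p

def V(x, y):
    # V_p(x, y) = |x - y|^p in the regime p >= 2
    return np.linalg.norm(x - y) ** p

ratios = []
for _ in range(10_000):
    x, y = rng.normal(size=d), rng.normal(size=d)
    t = rng.uniform(0.01, 0.99)
    gap = t * c(x) + (1 - t) * c(y) - c(t * x + (1 - t) * y)
    ratios.append(gap / (t * (1 - t) * V(x, y)))

# Empirical modulus of convexity: stays bounded away from zero.
print("empirical inf of gap / (t(1-t) V_p):", min(ratios))
```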
Introduce the convex conjugate \(c^{*}\) defined on \(\mathbb{R}^{d}\) via
\[c^{*}(y)=\sup_{x}\langle y,x\rangle-c(x).\]
Note that due to (1.7), we have
\[|z|^{p^{\prime}}\lesssim c^{*}(z)\lesssim|z|^{p^{\prime}}. \tag{1.12}\]
Due to strict \(p\)-convexity of \(c\), \(c^{*}\) satisfies
\[|\nabla c^{*}(x)-\nabla c^{*}(y)|\lesssim\begin{cases}(|x|+|y|)^{p^{\prime}-2} |x-y|&\quad\text{if }p\leq 2\\ |x-y|^{p^{\prime}-1}&\quad\text{if }p\geq 2.\end{cases} \tag{1.13}\]
Finally, it follows from (1.12) and (1.13) that for \(x,y\in\mathbb{R}^{d}\),
\[|c^{*}(x)-c^{*}(y)|\lesssim U_{p^{\prime}}(x,y). \tag{1.14}\]
In addition to (1.6)-(1.8), which are standard assumptions quantifying the convexity, smoothness and growth of \(c\), we need to ensure that \(c^{*}\) is also \(p^{\prime}\)-convex. This is ensured by requiring a slightly non-standard growth assumption on the Lipschitz constant of \(c\). To be precise, we assume controlled duality \(p\)-growth on \(c\), that is
\[|\nabla c(x)-\nabla c(y)|\leq\begin{cases}\Lambda(|\nabla c(x)|+|\nabla c(y)| )^{\frac{p-2}{p-1}}|x-y|&\quad\text{if }p\geq 2\\ \Lambda|x-y|^{p-1}&\quad\text{if }p\leq 2.\end{cases} \tag{1.15}\]
Assuming (1.15), \(c^{*}\) is \(p^{\prime}\)-convex, that is for some \(c=c(p,\Lambda)>0\),
\[c\tau(1-\tau)V_{p^{\prime}}(x,y)+c^{*}(\tau x+(1-\tau)y)\leq\tau c^{*}(x)+(1- \tau)c^{*}(y). \tag{1.16}\]
We remark that (1.15) implies the more standard controlled \(p\)-growth condition
\[|\nabla c(x)-\nabla c(y)|\lesssim\begin{cases}\Lambda(|x|+|y|)^{p-2}|x-y|&\quad \text{if }p\geq 2\\ \Lambda|x-y|^{p-1}&\quad\text{if }p\leq 2.\end{cases}\]
Further, we point out that (1.15) is satisfied by polynomial cost functions as well as \(p\)-cost [1].
To close this section, we note that \(\nabla c^{*}=(\nabla c)^{-1}\) and recall the Fenchel-Young inequality in the form
\[c(\xi)+c^{*}(x)\geq\langle\xi,x\rangle\quad\forall\xi,x\in\mathbb{R}^{d}, \tag{1.17}\]
with equality if and only if \(\xi=\nabla c^{*}(x)\).
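For the model cost \(c(z)=|z|^{p}/p\) one has \(c^{*}(w)=|w|^{p^{\prime}}/p^{\prime}\) and \(\nabla c^{*}(w)=|w|^{p^{\prime}-2}w\); the snippet below (an illustrative sketch we add here) checks the equality case of (1.17) numerically:

```python
import numpy as np

p = 3.0
q = p / (p - 1.0)  # the dual exponent p'

def c(z):
    return np.linalg.norm(z) ** p / p          # c(z) = |z|^p / p

def c_star(w):
    return np.linalg.norm(w) ** q / q          # c*(w) = |w|^{p'} / p'

def grad_c_star(w):
    return np.linalg.norm(w) ** (q - 2.0) * w  # nabla c*(w)

x = np.array([0.7, -1.2])
xi = grad_c_star(x)
# Fenchel-Young: c(xi) + c*(x) >= <xi, x>, with equality at xi = nabla c*(x).
print(c(xi) + c_star(x) - xi @ x)  # ~ 0 up to floating-point error
```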
### Regularity assumptions on the dual system
In this section, we state the regularity assumptions we make on distributional solutions \(\phi\in W^{1,p^{\prime}}(B)\) of the equation
\[-\text{div}\,\nabla c^{*}(\text{D}\phi) =c_{g}\quad\text{ on }B \tag{1.18}\] \[\nabla c^{*}(\text{D}\phi)\cdot\nu =g\quad\text{ on }\partial B, \tag{1.19}\]
where \(g\in L^{p}(\partial B)\) and \(c_{g}\) satisfies the compatibility condition \(|B|c_{g}=\int_{\partial B}g\). \(\nu\) denotes the outward pointing normal vector on \(\partial B\). Note that solutions are only defined up to a constant. Hence, we usually normalise solutions by requiring that \(\int_{B}\phi=0\).
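As a concrete illustration of (1.18)-(1.19), the following minimal sketch solves the analogous one-dimensional Neumann problem on \(B=(-1,1)\) for the model conjugate \(c^{*}(w)=|w|^{p^{\prime}}/p^{\prime}\) by minimising the associated convex energy. The discretisation, the sign conventions and the sample data \(g_{L},g_{R}\) (chosen compatible with \(c_{g}=0\)) are our own assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

q = 1.5               # p' for p = 3
n = 50                # grid cells on B = (-1, 1)
h = 2.0 / n
g_L, g_R = -1.0, 1.0  # Neumann data; compatibility: g_L + g_R + 2*c_g = 0

def energy(phi):
    # J(phi) = int_B c*(phi') dx - boundary terms; minimisers solve the
    # Euler-Lagrange equation (1.18) with c_g = 0 and flux data g.
    dphi = np.diff(phi) / h
    return (np.abs(dphi) ** q / q).sum() * h - g_R * phi[-1] - g_L * phi[0]

res = minimize(lambda v: energy(v - v.mean()), np.zeros(n + 1),
               method="L-BFGS-B")
phi = res.x - res.x.mean()                      # normalise to mean zero
dphi = np.diff(phi) / h
flux = np.sign(dphi) * np.abs(dphi) ** (q - 1)  # c*'(phi') = |phi'|^{q-2} phi'
print(flux[:3], flux[-3:])                      # ~ 1 across the interval
```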
We assume that solutions to (1.18) exist, are unique up to constant and moreover satisfy the energy estimate
\[\int_{B_{R}}|\text{D}\phi|^{p^{\prime}}\,\mathrm{d}x\lesssim\int_{\partial B_ {R}}|g|^{p}. \tag{1.20}\]
Using (1.17), (1.12) and Young's inequality, (1.20) implies that also
\[\int_{B_{R}}c(\nabla c^{*}(\text{D}\phi))\,\mathrm{d}x\lesssim\int_{\partial B _{R}}|g|^{p}. \tag{1.21}\]
We further assume interior regularity of solutions: for \(r<R\),
\[\sup_{x\in B_{r}}|\text{D}\phi|^{p^{\prime}}\lesssim_{R-r}\int_{\partial B_{R }}|g|^{p}. \tag{1.22}\]
Suppose \(\phi^{r}\) solves (1.18) with data \(g^{r}\), where \(g^{r}\) denotes convolution with a smooth convolution kernel on \(\partial B\) at scale \(r\). Then we require
\[\int_{B}|\text{D}\phi-\text{D}\phi^{r}|^{p^{\prime}}\lesssim\begin{cases}r\int _{\partial B}|g|^{p}&\quad\text{if }p\leq 2\\ r^{\frac{p^{\prime}}{2}}\int_{\partial B}|g|^{p}&\quad\text{if }p\geq 2\end{cases} \tag{1.23}\]
Denote by \(g^{r}\) the convolution of \(g\) with a smooth convolution kernel at scale \(r\) on \(\partial B\). Denote by \(\phi^{r}\) the solution of (1.18) with data \(g^{r}\). Then we require that for some \(\beta\in(0,1)\),
\[r^{\beta}[\text{D}\phi^{r}]^{p^{\prime}}_{C^{0,\beta}(B)}+\sup_{B}|\text{D} \phi^{r}|^{p^{\prime}}\lesssim\frac{1}{r^{d-1}}\int_{\partial B}|g|^{p}. \tag{1.24}\]
Moreover, we note by direct calculation using (1.8), (1.14) and (1.13) that
\[[c^{*}(\mathrm{D}\phi)+c(\nabla c^{*}(\mathrm{D}\phi))]_{C^{0,\beta}(B)}\lesssim\|\mathrm{D}\phi\|_{L^{\infty}(B)}^{p^{\prime}-1}[\mathrm{D}\phi]_{C^{0,\beta}(B)}.\]
We note that, if \(c\) satisfies controlled-duality \(p\)-growth (1.15), (or alternatively, making the slightly weaker assumption that \(c^{*}\) is strictly \(p^{\prime}\)-convex, that is (1.16)) our assumptions are satisfied.
**Lemma 1.2**.: _If \(c^{*}\) satisfies (1.16) and (1.6)-(1.8) hold, then solutions to (1.18) exist and moreover (1.20)-(1.24) are satisfied._
Proof.: As \(c^{*}\) is \(p^{\prime}\)-convex, by the direct method, solutions in \(\phi\in W^{1,p^{\prime}}(B)\) exist and are unique up to constant. (1.20) follows immediately from testing the equation with \(\phi\) and using Young's inequality, the trace estimate and Poincaré's inequality. In order to see (1.23), test the equations for \(\phi\) and \(\phi^{r}\) against \(\phi-\phi^{r}\) and use the \(p^{\prime}\)-convexity, duality, trace estimate and Poincaré's inequality to see
\[\int_{B}V(\mathrm{D}\phi,\mathrm{D}\phi^{r})\leq \int_{B}\langle\nabla c^{*}(\mathrm{D}\phi)-\nabla c^{*}(\mathrm{D}\phi^{r}),\mathrm{D}\phi-\mathrm{D}\phi^{r}\rangle\] \[= \int_{\partial B}(g-g^{r})(\phi-\phi^{r})\] \[\leq \|g-g^{r}\|_{W^{-\frac{1}{p},p}(\partial B)}\|\phi-\phi^{r}\|_{W^{\frac{1}{p},p^{\prime}}(\partial B)}\] \[\lesssim r^{\frac{1}{p}}\|g\|_{L^{p}(\partial B)}\|\phi-\phi^{r}\|_{W^{1,p^{\prime}}(B)}.\]
If \(p^{\prime}\geq 2\), this gives the result immediately after re-arranging. If \(p^{\prime}\leq 2\), we note
\[\int_{B}V(\mathrm{D}\phi,\mathrm{D}\phi^{r})\geq\left(\int_{B}|\mathrm{D}\phi-\mathrm{D}\phi^{r}|^{p^{\prime}}\right)^{\frac{2}{p^{\prime}}}\left(\int_{B}|\mathrm{D}\phi|^{p^{\prime}}+|\mathrm{D}\phi^{r}|^{p^{\prime}}\right)^{\frac{p^{\prime}-2}{p^{\prime}}},\]
so that the claimed inequality follows after re-arranging and using (1.20).
(1.22) and (1.24) follow from [14] and [13].
## 2 Preliminaries
### General notation
Throughout, we let \(1<p<\infty\). \(B_{r}(x)\) will denote a ball of radius \(r>0\) centered at \(x\in\mathbb{R}^{d}\). We further write \(B_{r}=B_{r}(0)\) and \(B=B_{1}(0)\). \(c\) denotes a generic constant that may change from line to line. Relevant dependencies on \(\Lambda\), say, will be denoted \(c(\Lambda)\). We say \(a\lesssim b\), if there exists a constant \(c>0\) depending only on \(d\), \(p\) and \(\Lambda\) such that \(a\leq cb\).
Given \(\Omega\subset\mathbb{R}^{d}\), we denote by \([\cdot]_{C^{0,\alpha}}\) the \(\alpha\)-Hölder seminorm. Given \(\alpha\in(0,\infty)\), \(L^{p}(\Omega)\) and \(W^{\alpha,p}(\Omega)\) denote the usual Lebesgue and (fractional) Sobolev spaces. If \(\mu\) is a measure on \(\mathbb{R}^{d}\), \(\mu\llcorner\Omega\) denotes its restriction to \(\Omega\).
Given \(R>0\), we let \(\Pi_{R}(x)=R\frac{x}{|x|}\) be the projection onto \(\partial B_{R}\) and define for every measure \(\rho\) on \(\mathbb{R}^{d}\) the projected measure on \(\partial B_{R}\), \(\hat{\rho}=\Pi_{R}\#\rho\), i.e.
\[\int\xi\,\mathrm{d}\hat{\rho}=\int\xi\left(R\frac{x}{|x|}\right)\,\mathrm{d} \rho(x).\]
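In code, the projected measure \(\hat{\rho}=\Pi_{R}\#\rho\) of an empirical measure amounts to reprojecting its sample points; a tiny helper we add for illustration (assuming nonzero samples):

```python
import numpy as np

def project_to_sphere(samples, R):
    """Pushforward of an empirical measure under Pi_R(x) = R x / |x|.

    samples: (N, d) array of points (assumed nonzero); returns (N, d)
    points on the sphere of radius R carrying the same weights.
    """
    norms = np.linalg.norm(samples, axis=1, keepdims=True)
    return R * samples / norms
```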
A set \(\Omega\subset\mathbb{R}^{d}\times\mathbb{R}^{d}\) is said to be \(c\)-cyclically monotone if for any \(N\in\mathbb{N}\) and any points \((x_{1},y_{1}),\ldots,(x_{N},y_{N})\in\Omega\), there holds
\[\sum_{i=1}^{N}c(x_{i}-y_{i})\leq\sum_{i=1}^{N}c(x_{i}-y_{i+1}),\]
where we identify \(y_{N+1}=y_{1}\).
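On a finite sample, \(c\)-cyclical monotonicity can be checked by brute force; the helper below (our illustrative addition) does this by comparing against all permutations, which suffices because every permutation factors into disjoint cycles:

```python
import itertools
import numpy as np

def is_c_cyclically_monotone(pairs, c, tol=1e-12):
    """Brute-force test of c-cyclical monotonicity on a finite set of pairs.

    pairs: list of (x, y) tuples of numpy arrays; c: cost on displacements.
    """
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    base = sum(c(x - y) for x, y in pairs)
    return all(
        base <= sum(c(x - ys[s]) for x, s in zip(xs, sigma)) + tol
        for sigma in itertools.permutations(range(len(pairs)))
    )

# Example: monotone pairs on the line for c(z) = |z|^3.
pairs = [(np.array([0.0]), np.array([0.1])), (np.array([1.0]), np.array([1.2]))]
print(is_c_cyclically_monotone(pairs, lambda z: np.linalg.norm(z) ** 3))
```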
A function \(f\colon\mathbb{R}^{d}\to\mathbb{R}\) is called \(c\)-concave if there exists a function \(g\colon\mathbb{R}^{d}\to\mathbb{R}\) such that
\[f(x)=\inf_{y\in\mathbb{R}^{d}}c(x-y)-g(y)\]
for all \(x\in\mathbb{R}^{d}\).
### Optimal transportation
We recall some definitions and facts about optimal transportation; see [18] for more details. Given a measure \(\pi\) on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) we denote its marginals by \(\pi_{1}\) and \(\pi_{2}\) respectively. The set of measures on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) with marginals \(\pi_{1}\) and \(\pi_{2}\) is denoted \(\Pi(\pi_{1},\pi_{2})\). Given two positive measures with compact support and equal mass \(\lambda\) and \(\mu\) we define
\[W_{c}(\lambda,\mu)=\min_{\pi_{1}=\lambda,\pi_{2}=\mu}\int c(x-y)\,\mathrm{d}\pi.\]
While our notation is reminiscent of the Wasserstein distance, and in fact gives the Wasserstein \(p\)-distance in the case \(c(x-y)=|x-y|^{p}\), in general it is not a distance on measures. Under our hypothesis, an optimal coupling always exists and moreover a coupling \(\pi\) is optimal if and only if its support is \(c\)-cyclically monotone.
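For intuition, \(W_{c}\) between two discrete measures is a finite linear program; the sketch below (our addition; any LP solver would do, here scipy) computes it for \(c(x-y)=|x-y|^{p}\) on the line:

```python
import numpy as np
from scipy.optimize import linprog

p = 3.0
x = np.array([0.0, 1.0, 2.0])      # support of lambda
y = np.array([0.5, 1.5, 2.5])      # support of mu
a = np.full(3, 1 / 3)              # weights of lambda
b = np.full(3, 1 / 3)              # weights of mu

C = np.abs(x[:, None] - y[None, :]) ** p   # cost matrix c(x_i - y_j)

# Kantorovich LP: minimise <C, pi> over couplings pi >= 0 with the
# prescribed row sums a and column sums b.
A_eq = np.zeros((6, 9))
for i in range(3):
    A_eq[i, 3 * i:3 * i + 3] = 1   # row marginals
    A_eq[3 + i, i::3] = 1          # column marginals
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
              bounds=(0, None))
print("W_c(lambda, mu) =", res.fun)  # here the monotone matching gives 0.125
```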
Moreover, we note the following triangle-type inequality:
**Lemma 2.1**.: _Let \(\varepsilon\in(0,1)\). There is \(c(\varepsilon)>0\) such that for any admissible measures \(\mu_{1},\mu_{2},\mu_{3}\) it holds that_
\[W_{c}(\mu_{1},\mu_{3})\leq(1+\varepsilon)W_{c}(\mu_{1},\mu_{2})+c(\varepsilon) W_{c}(\mu_{2},\mu_{3}).\]
Proof.: Due to the gluing lemma, see e.g. [17, Lemma 5.5.], there exists \(\sigma\), a positive measure on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathbb{R}^{d}\) with marginal \(\pi_{1}\) on the first two variables and marginal \(\pi_{2}\) on the last two variables. Here \(\pi_{1}\) and \(\pi_{2}\) are the optimal couplings between \(\mu_{1}\) and \(\mu_{2}\) and \(\mu_{2}\) and \(\mu_{3}\), respectively, with respect to \(W_{c}\). Set \(\gamma\) to be the marginal of \(\sigma\) with respect to the first and third variable. Then \(\gamma\in\Pi(\mu_{1},\mu_{3})\). It follows using the convexity of \(c\) and the triangle inequality in \(L^{p}(\gamma)\) that for any \(t\in(0,1)\),
\[W_{c}(\mu_{1},\mu_{3})\leq \left(\int c(x-z)\,\mathrm{d}\gamma\right)^{\frac{1}{p}}\leq\left(\int tc\left(\frac{x-y}{t}\right)+(1-t)c\left(\frac{y-z}{1-t}\right)\,\mathrm{d}\gamma\right)^{\frac{1}{p}}\] \[\leq \left(\int\left(\left(tc\left(\frac{x-y}{t}\right)\right)^{\frac{1}{p}}+\left((1-t)c\left(\frac{y-z}{1-t}\right)\right)^{\frac{1}{p}}\right)^{p}\,\mathrm{d}\gamma\right)^{\frac{1}{p}}\] \[\leq \left(\int tc\left(\frac{x-y}{t}\right)\,\mathrm{d}\gamma\right)^{\frac{1}{p}}+\left(\int(1-t)c\left(\frac{y-z}{1-t}\right)\,\mathrm{d}\gamma\right)^{\frac{1}{p}}.\]
Using (1.7) and recalling the definition of \(\gamma\), we deduce
\[W_{c}(\mu_{1},\mu_{3})\leq\left(\Lambda^{2}t^{1-p}\right)^{\frac{1}{p}}W_{c}(\mu_ {1},\mu_{2})+\left(\Lambda^{2}(1-t)^{1-p}\right)^{\frac{1}{p}}W_{c}(\mu_{2},\mu _{3}).\]
Choosing \(t\) sufficiently close to \(1\), this gives the desired estimate.
We require also the following consequence of Lemma 2.1.
**Corollary 2.2**.: _Let \(\mu_{1},\mu_{2}\) be measures. Then_
\[W_{c}(\mu_{1},\mu_{2})\lesssim W_{c}(\mu_{1}+\mu_{2},2\mu_{2}).\]
Proof.: Using Lemma 2.1 and sub-additivity of \(W_{c}\), we note for any \(\delta>0\),
\[W_{c}(\mu_{1},\mu_{2})\leq (1+\delta)W_{c}\left(\mu_{1},\frac{1}{2}(\mu_{1}+\mu_{2})\right)+ c(\delta)W_{c}\left(\frac{1}{2}(\mu_{1}+\mu_{2}),\mu_{2}\right)\] \[= (1+\delta)W_{c}\left(\frac{1}{2}\mu_{1},\frac{1}{2}\mu_{2}\right) +c(\delta)W_{c}\left(\frac{1}{2}(\mu_{1}+\mu_{2}),\mu_{2}\right)\] \[\leq \frac{1+\delta}{2}W_{c}\left(\mu_{1},\mu_{2}\right)+c(\delta)W_{ c}\left(\mu_{1}+\mu_{2},2\mu_{2}\right).\]
Re-arranging gives the result.
Given \(O\subset\mathbb{R}^{d}\) set \(\kappa_{\mu,O}\) to be the generic constant such that \(W_{c}(\mu\llcorner O,\kappa_{\mu,O}\,\mathrm{d}x\llcorner O)\) is well-defined, that is \(\kappa_{\mu,O}=\frac{\mu(O)}{|O|}\). If \(O=B_{R}\), we write \(\kappa_{\mu,R}=\kappa_{\mu,B_{R}}\).
It will be convenient to denote \(\#_{R}=(B_{R}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_{R})\). We recall the definition of the quantities that we use to measure smallness:
\[E(R):=\frac{1}{|B_{R}|}\int_{\#_{R}}c(x-y)\,\mathrm{d}\pi\] \[D(R):=\frac{1}{|B_{R}|}W_{p}^{p}(\lambda\llcorner B_{R},\kappa_{\lambda,R}\,\mathrm{d}x\llcorner B_{R})+\frac{R^{p}}{\kappa_{\lambda,R}^{p-1}}(\kappa_{\lambda,R}-1)^{p}+\frac{1}{|B_{R}|}W_{p}^{p}(\mu\llcorner B_{R},\kappa_{\mu,R}\,\mathrm{d}x\llcorner B_{R})+\frac{R^{p}}{\kappa_{\mu,R}^{p-1}}(\kappa_{\mu,R}-1)^{p}.\]
We will find it convenient to work with trajectories \(X(t)=tx+(1-t)y\). In this context, it is useful to work on the domain
\[\Omega_{R}=\{(x,y)\in\#_{3}\colon\exists t\in[0,1]\text{ s.t. }X(t)\in\overline{B}_{R}\}.\]
To every trajectory \(X\in\Omega\), we associate entering and exiting times of \(B_{R}\):
\[\sigma_{R}:=\min\{t\in[0,1]\colon X(t)\in\overline{B}_{R}\}\] \[\tau_{R}:=\max\{t\in[0,1]\colon X(t)\in\overline{B}_{R}\}.\]
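Computationally, \(\sigma_{R}\) and \(\tau_{R}\) solve a scalar quadratic along the segment; a small helper (our illustrative addition, following the convention \(X(t)=tx+(1-t)y\)):

```python
import numpy as np

def enter_exit_times(x, y, R):
    """Return (sigma_R, tau_R) for X(t) = t*x + (1-t)*y, or None if the
    trajectory never meets the closed ball of radius R on [0, 1]."""
    yv = np.asarray(y, dtype=float)
    v = np.asarray(x, dtype=float) - yv
    a, b, c0 = v @ v, 2 * yv @ v, yv @ yv - R * R
    if a == 0:  # degenerate trajectory x == y
        return (0.0, 1.0) if c0 <= 0 else None
    disc = b * b - 4 * a * c0   # |X(t)|^2 - R^2 = a t^2 + b t + c0
    if disc < 0:
        return None
    t0, t1 = (-b - disc ** 0.5) / (2 * a), (-b + disc ** 0.5) / (2 * a)
    lo, hi = max(t0, 0.0), min(t1, 1.0)
    return (lo, hi) if lo <= hi else None

print(enter_exit_times([3.0, 0.0], [-3.0, 0.0], 1.0))  # (1/3, 2/3)
```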
Often, we will drop the subscripts and denote \(\Omega=\Omega_{R}\), \(\sigma=\sigma_{R}\) and \(\tau=\tau_{R}\). Further, we will need to track trajectories entering and leaving \(B_{R}\). This is achieved through the non-negative measures \(f_{R}\) and \(g_{R}\) concentrated on \(\partial B_{R}\) and defined by the relations
\[\int\zeta\,\mathrm{d}f_{R}=\int_{\Omega\cap\{X(\sigma)\in\partial B_{R}\}} \zeta(X(\sigma))\,\mathrm{d}\pi,\]
\[\int\zeta\,\mathrm{d}g_{R}=\int_{\Omega\cap\{X(\tau)\in\partial B_{R}\}}\zeta(X(\tau))\,\mathrm{d}\pi. \tag{2.1}\]
Note that the set of trajectories \(\Omega\cap\{X(\sigma)\in\partial B_{R}\}\) implicitly defines a Borel measurable subset of \(\mathbb{R}^{d}\times\mathbb{R}^{d}\), namely the pre-image under the mapping \((x,y)\to X\), which is continuous from \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) into \(C^{0}([0,1])\). Thus, the integrals in (2.1) are well-defined. We will often use similar observations without further justification.
### Estimating radial projections
We record a technical estimate concerning radial projections we will require.
**Lemma 2.3**.: _For \(R>0\), there exists \(1\geq\varepsilon(d)>0\) such that for every \(g\geq 0\) with \(\operatorname{Spt}g\subset B_{(1+\varepsilon)R}\setminus B_{(1-\varepsilon)R}\) we have_
\[R^{1-d}\left(\int g\right)^{p}\lesssim\int_{\partial B_{R}}\hat{g}^{p} \lesssim\sup g^{p-1}\int|R-|x||^{p-1}g\]
**Proof.** By scaling we may assume \(R=\sup g=1\). The first inequality is then a direct consequence of Jensen's inequality.
For the second inequality, note that if \(\varepsilon\ll 1\), \(\sup_{\partial B_{1}}|\hat{g}|\ll 1\), since we assume \(\operatorname{Spt}g\subset B_{1+\varepsilon}\setminus B_{1-\varepsilon}\). Fix \(\omega\in\partial B_{1}\) and set \(\psi(r)=r^{d-1}g(r\omega)\) for \(r>0\). Then we have \(0\leq\psi\leq(1+\varepsilon)^{d-1}\leq 2\) and
\[\int_{0}^{\infty}\psi=\hat{g}(\omega).\]
We conclude that for \(\omega\in\partial B_{1}\),
\[\int_{0}^{\infty}|1-r|^{p-1}r^{d-1}g(r\omega)\geq\min_{0\leq\tilde{\psi}\leq 2,\int\tilde{\psi}=\hat{g}(\omega)}\int_{0}^{\infty}|1-r|^{p-1}\tilde{\psi}(r) \gtrsim\hat{g}(\omega).\]
The last inequality holds, since the minimiser of
\[\min_{0\leq\tilde{\psi}\leq 2,\int\tilde{\psi}=\hat{g}(\omega)}\int_{0}^{ \infty}|1-r|^{p-1}\tilde{\psi}(r)\]
is given by \(2I\left(|r-1|\leq\frac{1}{4}\hat{g}(\omega)\right)\).
## 3 An \(L^{\infty}\)-bound on the displacement
A key point in our proof will be that trajectories do not move very much. Since we assume \(E(4)\ll 1\), this is evidently true on average. However, we need to control the length of trajectories not just on average, but in a pointwise sense. We establish this result in this section. In the quadratic case, the proof in [5] relies on the fact that \(2\)-monotonicity is equivalent to standard monotonicity. In our setting this is not available and we hence provide a different proof.
**Lemma 3.1**.: _Let \(1<p<\infty\). Let \(\pi\) be a coupling between two measures \(\lambda\) and \(\mu\). Assume that \(\operatorname{Spt}\pi\) is cyclically monotone with respect to \(c\)-cost and that \(E(4)+D(4)\ll 1\). Then for every \((x,y)\in\operatorname{Spt}\pi\cap\#_{3}\), we have_
\[|x-y|\lesssim\left(E(4)+D(4)\right)^{\frac{1}{p+d}}. \tag{3.1}\]
_As a consequence, for \((x,y)\in\operatorname{Spt}\pi\) and \(t\in[0,1]\),_
\[x\in B_{3}\text{ or }y\in B_{3}\Rightarrow(1-t)x+ty\in B_{4}. \tag{3.2}\]
In the proof of Lemma 3.1 we require the following technical result, which we state independently as we will require it again in the future.
**Lemma 3.2**.: _Let \(1<p<\infty\) and \(0<\alpha<1\). For every \(R>0\), \(\xi\in C^{0,\alpha}(B_{R})\) and \(\mu\) supported in \(B_{R}\) with \(\mu(B_{R})\sim|B_{R}|\),_
\[\left|\int_{B_{R}}\xi(\,\mathrm{d}\mu-\kappa_{\mu,R}\,\mathrm{d}x)\right|\leq [\xi]_{C^{0,\alpha}(B_{R})}W_{c}(\mu,\kappa_{\mu,R}\,\mathrm{d}x \llcorner B_{R})^{\frac{\alpha}{p}}R^{\frac{2d(p-\alpha)}{p}}. \tag{3.3}\]
_In case \(\alpha=1\), (3.3) holds with \(C^{0,1}\) replaced by \(C^{1}\). Further, if in addition we have \(\xi\in C^{[p-1],p-[p-1]}(B_{R})\), there is \(C>0\) such that_
\[\left|\int_{B_{R}}\xi(\,\mathrm{d}\mu-\kappa_{\mu,R}\,\mathrm{d}x)\right|\leq C\sum_{i=1}^{[p-1]}\left(\kappa_{\mu,R}\int\lvert\mathrm{D}^{i}\xi\rvert^{\frac{p}{p-i}}\,\mathrm{d}x\right)^{\frac{p-i}{p}}W_{c}(\mu,\kappa_{\mu,R}\,\mathrm{d}x\llcorner B_{R})^{\frac{i}{p}}\] \[+[\xi]_{C^{[p-1],p-[p-1]}}W_{c}(\mu,\kappa_{\mu,R}\,\mathrm{d}x\llcorner B_{R}).\]
**Proof.** Integrate the estimate
\[|\xi(x)-\xi(y)|\leq[\xi]_{C^{0,\alpha}}|x-y|^{\alpha}\]
against an optimal transport plan \(\pi\) between \(\mu\) and \(\kappa_{\mu,R}\,\mathrm{d}x\llcorner B_{R}\) to find,
\[\left|\int_{B_{R}}\xi(\,\mathrm{d}\mu-\kappa_{\mu}\,\mathrm{d}x)\right|\leq [\xi]_{C^{0,\alpha}(B_{R})}\int_{B_{R}}|x-y|^{\alpha}\,\mathrm{d}\pi.\]
Applying Hölder's inequality and using (1.7), the result follows.
To obtain the second estimate, we proceed similarly, but start with the estimate
\[\left|\xi(x)-\xi(y)-\sum_{|\alpha|=1}^{[p-1]}\mathrm{D}^{\alpha}\xi(y)\frac{(x-y)^{\alpha}}{|\alpha|!}\right|\leq[\xi]_{C^{[p-1],p-[p-1]}(B_{R})}|x-y|^{p}.\]
The result follows using (1.7) and using Hölder to estimate
\[\int\lvert\mathrm{D}^{\alpha}\xi(y)\rvert|x-y|^{|\alpha|}\,\mathrm{d}\pi\leq\left(\kappa_{\mu,R}\int\lvert\mathrm{D}^{|\alpha|}\xi\rvert^{\frac{p}{p-|\alpha|}}\,\mathrm{d}x\right)^{\frac{p-|\alpha|}{p}}W_{c}(\mu,\kappa_{\mu,R}\,\mathrm{d}x\llcorner B_{R})^{\frac{|\alpha|}{p}}.\]
We proceed to prove Lemma 3.1.
**Proof of Lemma 3.1.** Fix \((x,y)\in\operatorname{Spt}\pi\cap\#_{3}\). Without loss of generality we may assume that \((x,y)\in B_{3}\times\mathbb{R}^{d}\).
**Step 1. Barrier points exist in all directions:** In this step we show that in all directions we may find points \((x^{\prime},y^{\prime})\in\operatorname{Spt}\pi\) with \(x^{\prime}\approx y^{\prime}\). To be precise, consider an arbitrary
unit vector \(\boldsymbol{n}\in\mathbb{R}^{d}\) and let \(r>0\). We show that for any \(\boldsymbol{n}\), and all \(r\ll 1\), there is \(M=M(p,d,\Lambda)>0\) and \((x^{\prime},y^{\prime})\in\operatorname{Spt}\pi\cap(B_{r}(x+2r\boldsymbol{n}) \times\mathbb{R}^{d})\) such that
\[c(x^{\prime}-y^{\prime})\leq\frac{ME(4)}{r^{d}}.\]
Assume, for contradiction, that for any \(M>0\), there is \(\boldsymbol{n}\in\mathbb{R}^{d}\) and \(r>0\) such that for all \((x^{\prime},y^{\prime})\in\operatorname{Spt}\pi\cap(B_{r}(x+2r\boldsymbol{n})\times\mathbb{R}^{d})\), \(c(x^{\prime}-y^{\prime})\geq\frac{ME(4)}{r^{d}}\). Let \(\eta\) be a non-negative, smooth cut-off supported in \(B_{r}(x+2r\boldsymbol{n})\) satisfying
\[\sum_{i=1}^{[p-1]}r^{i}\sup|\mathrm{D}^{i}\eta|+r^{p}[\eta]_{C^{[p-1],p-[p-1]}}\lesssim 1.\]
Then
\[E(4)\gtrsim\int\int\eta(x)c(x-y)\,\mathrm{d}\pi(x,y)\geq\int\int\frac{ME(4)}{r ^{d}}\eta(x)\,\mathrm{d}\pi(x,y)=\frac{ME(4)}{r^{d}}\int\eta(x)\,\mathrm{d} \mu(x).\]
However, due to Lemma 3.2 and noting \(\kappa_{\mu,4}\sim 1\),
\[\left|\int\eta\,\mathrm{d}\mu(x)-\kappa_{\mu,4}\int\eta\,\mathrm{d}x\right|\lesssim\sum_{i=1}^{[p-1]}r^{\frac{d(p-i)}{p}-i}D(4)^{\frac{i}{p}}+r^{-p}D(4).\]
Normalising \(\eta\) such that \(\int_{B_{r}(x+2r\boldsymbol{n})}\eta\,\mathrm{d}x\sim r^{d}\), we can guarantee \(\kappa_{\mu,4}\int\eta\,\mathrm{d}x\sim\kappa_{\mu,4}r^{d}\sim r^{d}\). Ensuring \(D(4)\ll r^{p+d}\), so that \(\sum_{i=1}^{[p-1]}r^{\frac{d(p-i)}{p}-i}D(4)^{\frac{i}{p}}+r^{-p}D(4)\ll r^{d}\), we may thus conclude
\[E(4)\gtrsim\frac{ME(4)}{r^{d}}r^{d}=ME(4).\]
As \(M\) was arbitrary, this is a contradiction.
**Step 2. Building barriers:** In this step, we show that if we are given points
\[(x^{\prime},y^{\prime})\in\operatorname{Spt}\pi\cap(B_{r}(x+2r \boldsymbol{n})\times\mathbb{R}^{d})\]
such that \(|x^{\prime}-y^{\prime}|\leq\frac{ME(4)}{r^{d}}\) for some \(M=M(p,d,\Lambda)>0\), then there is a cone \(C_{x,x^{\prime}}\) with vertex \(x^{\prime}+r\rho(x^{\prime}-x)\) for some \(\rho=\rho(p,d,\Lambda)>0\), aperture \(\alpha=\alpha(p,d,\Lambda)\) and axis \(x^{\prime}-x\) such that \(y\not\in C_{x,x^{\prime}}\).
Without loss of generality, we may assume that \(x^{\prime}-x\) points in the \(e_{n}\) direction. Moreover, considering the cost \(c(\cdot)-c(x)\), we may assume that \(c(x)=0\). Suppose for a contradiction that
\[y\in C_{x,x^{\prime}}=x^{\prime}+\{a\in\mathbb{R}^{d-1}\times \mathbb{R}^{+}\colon d(a,\Gamma)\leq\alpha(|\overline{a}-x^{\prime}|-r\rho)\}\]
for some \(\alpha,\rho>0\) to be determined. Here \(\Gamma=\{t(x^{\prime}-x)\colon t\geq 0\}\) and \(\overline{a}\) denotes the orthogonal projection of a point \(a\in\mathbb{R}^{d-1}\times\mathbb{R}^{+}\) onto \(\Gamma\). We want to show that then
\[c(x-y)\geq c(x^{\prime}-y)+c(x-y^{\prime}). \tag{3.4}\]
(3.4) is a contradiction to the \(c\)-monotonicity of \(\pi\) and hence proves the stated claim.
We note that we may assume \(x=0\). Indeed, setting \(z=y-x\), \(z^{\prime}=y^{\prime}-x\) and \(\tilde{z}=x^{\prime}-x\), (3.4) becomes
\[c(-z)\geq c(\tilde{z}-z)+c(-z^{\prime})\]
with \(|\tilde{z}-z^{\prime}|\leq\frac{ME}{r^{d}}\) and \(z\in C_{0,\tilde{z}}\), which we recognise as precisely the situation we are in if \(x=0\).
Taking \(\rho\geq 4\), we then estimate using (1.8) and (1.6)
\[c(-y)\geq c(-\overline{y})+c(x^{\prime}-y^{\prime})-\Lambda U(-y,-\overline{ y})\] \[\geq \frac{|\overline{y}|}{|-\overline{y}+x^{\prime}|}c(-\overline{y} +x^{\prime})+\frac{\lambda|x^{\prime}|}{|\overline{y}|}V(-\overline{y},0)- \Lambda U(-y,-\overline{y})\] \[\geq c(-\overline{y}+x^{\prime})+c(x^{\prime})+\frac{\lambda|x^{ \prime}|}{|\overline{y}|}V(-\overline{y},0)+\frac{\lambda|\overline{y}-2x^{ \prime}|}{|\overline{y}-x^{\prime}|}V(-\overline{y}+x^{\prime},0)-\Lambda U(- y,-\overline{y})\] \[\geq c(-y+x^{\prime})+c(-x^{\prime})-\Lambda U(-y+x^{\prime},- \overline{y}+x^{\prime})+\frac{\lambda|x^{\prime}|}{|\overline{y}|}V(- \overline{y},0)\] \[\quad+\frac{\lambda|\overline{y}-2x^{\prime}|}{|\overline{y}-x^{ \prime}|}V(-\overline{y}+x^{\prime},0)-\Lambda U(-y,-\overline{y})\] \[\geq c(-y+x^{\prime})+c(-y^{\prime})-\Lambda U(-y+x^{\prime},- \overline{y}+x^{\prime})+\frac{\lambda|x^{\prime}|}{|\overline{y}|}V(- \overline{y},0)\] \[\quad+\frac{\lambda|\overline{y}-2x^{\prime}|}{|\overline{y}-x^{ \prime}|}V(-\overline{y}+x^{\prime},0)-\Lambda U(-y,-\overline{y})-\Lambda U( -x^{\prime},-y^{\prime})\]
In particular, it suffices to show
\[c(x^{\prime}-y^{\prime})+\frac{\lambda|x^{\prime}|}{|\overline{y }|}V(-\overline{y},0)+\frac{\lambda|\overline{y}-2x^{\prime}|}{|\overline{y}- x^{\prime}|}V(-\overline{y}+x^{\prime},0)\] \[\geq \Lambda U(-y,-\overline{y})+\Lambda U(-y+x^{\prime},-\overline{y} +x^{\prime}). \tag{3.5}\]
Figure 1: Geometric situation in Step 2.
We note that, if \(\rho\geq 8\), \(|\overline{y}-2x^{\prime}|\geq\frac{1}{2}|\overline{y}|\). Then we can estimate
\[\frac{\lambda|x^{\prime}|}{|\overline{y}|}V(-\overline{y},0)+\frac{ \lambda|\overline{y}-2x^{\prime}|}{|\overline{y}-x^{\prime}|}V(-\overline{y}+x ^{\prime},0)= \lambda(|x^{\prime}||\overline{y}|^{p-1}+|\overline{y}-2x^{\prime}| |\overline{y}-x^{\prime}|^{p-1})\] \[\gtrsim |\overline{y}|^{p}.\]
Further, if \(E(4)\leq\varepsilon r^{d+1}\),
\[\Lambda U(-y,-\overline{y})+\Lambda U(-y+x^{\prime},-\overline{y} +x^{\prime})+\Lambda U(-x^{\prime},-y^{\prime})\] \[\leq 2\Lambda(2|y|)^{p-1}|y-\overline{y}|+\Lambda\left(|x^{\prime}|+ |y^{\prime}|\right)^{p-1}|x^{\prime}-y^{\prime}|\] \[\lesssim |y|^{p-1}\alpha|\overline{y}-x^{\prime}|+\frac{ME}{r^{d}}(2r)^{p-1}\] \[\lesssim \alpha|\overline{y}|^{p}+\varepsilon|\overline{y}|^{p}.\]
Thus, choosing \(\alpha\), \(\varepsilon>0\) sufficiently small, we find that (3.5) holds, proving our claim.
**Step 3. Proving the \(L^{\infty}\)-bounds:** Choose \(r=c(E(4)+D(4))^{\frac{1}{p+d}}\). For sufficiently large choice of \(c>0\) and selecting \(c(d)\) directions \(n_{i}\), by Step 1 and Step 2 we obtain points
\[(x^{\prime}_{i},y^{\prime}_{i})\in\operatorname{Spt}\pi\cap(B_{r}(x+2rn_{i})\times\mathbb{R}^{d})\]
and cones \((C_{x,x^{\prime}_{i}})_{i\leq c(d)}\) with vertices \(x^{\prime}_{i}+r\rho(x^{\prime}_{i}-x)\), aperture \(\alpha\) and axis \(x^{\prime}_{i}-x\) such that for some \(c(\alpha)>0\),
\[y\not\in\cup_{i}C_{x,x^{\prime}_{i}}\text{ and }\mathbb{R}^{d}\setminus B_{(\rho+c(\alpha))r}(x)\subset\cup_{i}C_{x,x^{\prime}_{i}}.\]
In particular, we have
\[|y-x|\leq(\rho+c(\alpha))r\lesssim(E+D)^{\frac{1}{p+d}},\]
that is (3.1).
(3.2) is a direct consequence of (3.1), concluding the proof.
We record two consequences of Lemma 3.1 we will use later.
**Corollary 3.3**.: _Under the assumptions of Lemma 3.1, it holds that_
\[\int_{2}^{3}\int_{\Omega\cap\{\exists t\in[0,1]\colon X(t)\in\partial B_{R}\}}c(x-y)\,\mathrm{d}\pi\,\mathrm{d}R\lesssim(E(4)+D(4))^{1+\frac{1}{p+d}},\] \[\int_{2}^{3}\int_{\Omega}I(\{\exists t\in[0,1]\colon X(t)\in\partial B_{R}\})\,\mathrm{d}\pi\,\mathrm{d}R\lesssim(E(4)+D(4))^{\frac{1}{p+d}}.\]
**Proof.** We use Lemma 3.1 to deduce there is \(C>0\) such that
\[\int_{2}^{3}\int_{\Omega\cap\{\exists t\in[0,1]\colon X(t)\in\partial B_{R}\}}c(x-y)\,\mathrm{d}\pi\,\mathrm{d}R\] \[\leq \int_{2}^{3}\int_{(B_{7/2}\setminus B_{3/2})\times(B_{7/2}\setminus B_{3/2})}I(\{||x|-R|\leq C(E(4)+D(4))^{\frac{1}{p+d}}\})c(x-y)\,\mathrm{d}\pi\,\mathrm{d}R\] \[\lesssim (E(4)+D(4))^{\frac{1}{p+d}}\int_{\#_{4}}c(x-y)\,\mathrm{d}\pi\]
\[\int_{\Omega\cap\{X(\sigma)\in\partial B_{R}\}}\zeta(X(\tau),y)\pi(\, \mathrm{d}x\,\mathrm{d}y)=\int_{\partial B_{R}}\int\zeta(x,y)\lambda_{z}(\, \mathrm{d}x)\overline{\pi}(\,\mathrm{d}z\,\mathrm{d}y)\]
\[+\int_{B_{R}\times\partial B_{R}}c(x-y)\mu_{w}(\,\mathrm{d}y)\overline{ \pi}(\,\mathrm{d}x\,\mathrm{d}w)+\int_{\partial B_{R}\times\partial B_{R}}\int \int\zeta(x,y)\mu_{w}(\,\mathrm{d}y)\lambda_{z}(\,\mathrm{d}x)\overline{\pi}( \,\mathrm{d}z\,\mathrm{d}w)\] \[= I+II+III+IV+V. \tag{4.1}\]
In order to see that \(\tilde{\pi}\in\Pi(\lambda,\mu)\), by symmetry it suffices to check that the first marginal is \(\lambda\). Hence test (4.1) against \(\zeta(x)\). We begin by noting that due to the definition of \(\mu_{w}\) and using that \(\overline{\pi}\) is supported in \(\overline{B}_{R}\),
\[II+IV=\int_{B_{R}\times\mathbb{R}^{d}}\zeta(x)\overline{\pi}(\,\mathrm{d}x\,\mathrm{d}y)=\int_{B_{R}}\zeta(x)\lambda(\,\mathrm{d}x)=\int_{\Omega\cap\{X(\sigma)\in B_{R}\}}\zeta(x)\pi(\,\mathrm{d}x\,\mathrm{d}y).\]
Similarly, using also the definition of \(f_{R}\),
\[III+V= \int_{\partial B_{R}\times\mathbb{R}^{d}}\int\zeta(x)\lambda_{z}(\,\mathrm{d}x)\overline{\pi}(\,\mathrm{d}z\,\mathrm{d}y)=\int_{\partial B_{R}}\int\zeta(x)\lambda_{z}(\,\mathrm{d}x)f_{R}(\,\mathrm{d}z)\] \[= \int_{\Omega\cap\{X(\sigma)\in\partial B_{R}\}}\zeta(x)\pi(\,\mathrm{d}x\,\mathrm{d}y).\]
In particular, we have shown
\[\int\zeta(x)\tilde{\pi}(\,\mathrm{d}x\,\mathrm{d}y)=\int\zeta(x)\pi(\, \mathrm{d}x\,\mathrm{d}y)=\int\zeta(x)\lambda(\,\mathrm{d}x)\]
as desired.
Using optimality of \(\pi\) in the form
\[\int_{\Omega}c(x-y)\,\mathrm{d}\pi+\int_{\Omega^{c}}c(x-y)\,\mathrm{d}\pi\leq \int c(x-y)\,\mathrm{d}\tilde{\pi}\]
and testing (4.1) against \(\zeta(x,y)=c(x-y)\), we learn
\[\int_{\Omega}c(x-y)\,\mathrm{d}\pi\] \[\leq \int_{B_{R}\times B_{R}}c(x-y)\overline{\pi}(\,\mathrm{d}x\, \mathrm{d}y)+\int_{\partial B_{R}\times B_{R}}\int c(x-y)\lambda_{z}(\, \mathrm{d}x)\overline{\pi}(\,\mathrm{d}z\,\mathrm{d}y)\] \[\quad+\int_{B_{R}\times\partial B_{R}}c(x-y)\mu_{w}(\,\mathrm{d}y )\overline{\pi}(\,\mathrm{d}x\,\mathrm{d}w)\] \[\quad\quad+\int_{\partial B_{R}\times\partial B_{R}}\int\int c(x -y)\mu_{w}(\,\mathrm{d}y)\lambda_{z}(\,\mathrm{d}x)\overline{\pi}(\,\mathrm{d }z\,\mathrm{d}w)\] \[= \int_{B_{R}\times B_{R}}f_{1}\overline{\pi}(\,\mathrm{d}x\, \mathrm{d}y)+\int_{\partial B_{R}\times B_{R}}f_{2}\overline{\pi}(\,\mathrm{d }z\,\mathrm{d}y)+\int_{B_{R}\times\partial B_{R}}f_{3}\overline{\pi}(\, \mathrm{d}x\,\mathrm{d}w)\] \[\quad+\int_{\partial B_{R}\times\partial B_{R}}f_{4}\overline{ \pi}(\,\mathrm{d}z\,\mathrm{d}w)\]
As in the proof of Lemma 2.1, for any \(\delta>0\), there is \(C_{\delta}>0\) such that for any \(x,y,z\),
\[c(x-z)\leq(1+\delta)c(x-y)+C_{\delta}c(y-z).\]
Using this in combination with the fact that \(\lambda_{z}\), \(\mu_{w}\) are probability measures we deduce
\[f_{2}\leq (1+\delta)c(z-y)+C(\delta)\check{f}_{2},\quad f_{3}\leq(1+\delta)c(x-w)+C(\delta)\check{f}_{3}\]
\[f_{4}\leq (1+\delta)c(z-w)+C(\delta)\check{f}_{4},\]
where
\[\check{f}_{2}(z,y)=\int c(x-z)\lambda_{z}(\,\mathrm{d}x),\quad\check{f}_{3}(x,w )=\int c(w-y)\mu_{w}(\,\mathrm{d}y),\]
\[\check{f}_{4}(z,w)=\check{f}_{2}(z,y)+\check{f}_{3}(x,w).\]
In particular, we deduce
\[\int_{\Omega}c(x-y)\,\mathrm{d}\pi\] \[\leq (1+\delta)\int_{\overline{B}_{R}\times\overline{B}_{R}}c(x-y)\overline{\pi}(\,\mathrm{d}x\,\mathrm{d}y)+C(\delta)\int_{\partial B_{R}\times B_{R}}\check{f}_{2}\overline{\pi}(\,\mathrm{d}z\,\mathrm{d}y)\] \[\quad+C(\delta)\int_{B_{R}\times\partial B_{R}}\check{f}_{3}\overline{\pi}(\,\mathrm{d}x\,\mathrm{d}w)+C(\delta)\int_{\partial B_{R}\times\partial B_{R}}\check{f}_{4}\overline{\pi}(\,\mathrm{d}z\,\mathrm{d}w)\] \[= (1+\delta)\int_{\overline{B}_{R}\times\overline{B}_{R}}c(x-y)\overline{\pi}(\,\mathrm{d}x\,\mathrm{d}y)+2C(\delta)\int_{\partial B_{R}\times\mathbb{R}^{d}}\int c(x-z)\lambda_{z}(\,\mathrm{d}x)\overline{\pi}(\,\mathrm{d}z\,\mathrm{d}y)\] \[\quad+2C(\delta)\int_{\mathbb{R}^{d}\times\partial B_{R}}\int c(w-y)\mu_{w}(\,\mathrm{d}y)\overline{\pi}(\,\mathrm{d}x\,\mathrm{d}w)\] \[= (1+\delta)\int_{\overline{B}_{R}\times\overline{B}_{R}}c(x-y)\overline{\pi}(\,\mathrm{d}x\,\mathrm{d}y)+2C(\delta)\int_{\partial B_{R}}\int c(x-z)\lambda_{z}(\,\mathrm{d}x)f_{R}(\,\mathrm{d}z)\] \[\quad+2C(\delta)\int_{\partial B_{R}}\int c(w-y)\mu_{w}(\,\mathrm{d}y)g_{R}(\,\mathrm{d}w)\] \[= (1+\delta)\int_{\overline{B}_{R}\times\overline{B}_{R}}c(x-y)\overline{\pi}(\,\mathrm{d}x\,\mathrm{d}y)+2C(\delta)\int_{\Omega\cap\{X(\sigma)\in\partial B_{R}\}}c(x-X(\sigma))\,\mathrm{d}\pi\] \[\quad+2C(\delta)\int_{\Omega\cap\{X(\tau)\in\partial B_{R}\}}c(X(\tau)-y)\,\mathrm{d}\pi\]
In order to obtain the second to last line we used the admissibility of \(\overline{\pi}\). Now note on the one hand, that due to optimality of \(\overline{\pi}\),
\[\int_{\overline{B}_{R}\times\overline{B}_{R}}c(x-y)\,\mathrm{d}\overline{\pi}=W_{c}(\lambda\llcorner B_{R}+f_{R},\mu\llcorner B_{R}+g_{R}).\]
On the other hand, on \(\Omega\cap\{X(\sigma)\in\partial B_{R}\}\cap\{X(\tau)\in\partial B_{R}\}\), for some \(\rho_{1},\rho_{2}\geq 0\) with \(\rho_{1}+\rho_{2}\leq 1\), due to convexity of \(c\) and \(c(0)=0\),
\[c(x-X(\sigma))+c(X(\tau)-y)=c(\rho_{1}(x-y))+c(\rho_{2}(x-y))\leq c(x-y).\]
Thus, we have shown
\[\int_{\Omega}c(x-y)\,\mathrm{d}\pi\] \[\leq (1+\delta)W_{c}(\lambda\llcorner B_{R}+f_{R},\mu\llcorner B_{R}+g_{R})+C(\delta)\int_{\Omega\cap(\{X(\sigma)\in\partial B_{R}\}\cup\{X(\tau)\in\partial B_{R}\})}c(x-y)\,\mathrm{d}\pi\]
\[\int\zeta(x,y,z)\tilde{\pi}(\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z)=\int\int \zeta(x,y,z)\overline{\pi}(\,\mathrm{d}z|y)\pi(\,\mathrm{d}x\,\mathrm{d}y)\]
valid for any test function \(\zeta\). Note that with respect to the \((x,y)\) variables \(\tilde{\pi}\) has marginal \(\pi\), while with respect to the \((y,z)\) variables \(\tilde{\pi}\) has marginal \(\overline{\pi}\). Extend a trajectory \(X\) in \(\Omega\) in a piecewise affine fashion by setting for \(t\in[1,2]\),
\[X(t)=(t-1)z+(2-t)y.\]
Note that the distribution \(g^{\prime}\) of the endpoint of those trajectories that exit \(\overline{B}_{R}\) during the time interval \([0,1]\) is given by
\[\int\zeta\,\mathrm{d}g^{\prime}=\int_{\Omega\cap\{X(\tau)\in\partial B_{R}\}} \zeta(z)\tilde{\pi}(\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z). \tag{5.1}\]
Note that due to Lemma 3.1, \(y=X(1)\in B_{4}\) for any trajectory \(X\) that contributes to (5.1). Since \(\overline{\pi}(B_{4},B_{4}^{c})=0\), we deduce that also \(z=X(2)\in B_{4}\) and hence that \(g^{\prime}\) is
supported in \(B_{4}\). In particular, we may estimate for any \(\zeta\geq 0\), using the admissibility of \(\overline{\pi}\) for \(W_{c}(\mu,\kappa_{\mu,4}\,\mathrm{d}z\llcorner B_{4}+\mu\llcorner B_{4}^{c})\),
\[\int\zeta\,\mathrm{d}g^{\prime}\leq\int_{\{z\in B_{4}\}}\zeta(z)\overline{\pi}( \,\mathrm{d}y\,\mathrm{d}z)=\kappa_{\mu,4}\int_{B_{4}}\zeta.\]
This shows that \(g^{\prime}\) has a density, still denoted \(g^{\prime}\), satisfying \(g^{\prime}\leq\kappa_{\mu,4}\) and allows us to conclude the construction of \(\overline{g}_{R}\) by defining
\[\int\zeta\,\mathrm{d}\overline{g}_{R}=\int\zeta\left(R\frac{z}{|z|}\right)g^{ \prime}(\,\mathrm{d}z).\]
We now turn to establishing the claimed estimates for \(\overline{g}_{R}\). Note that, directly from the definitions of \(\tilde{\pi}\), \(g^{\prime}\) and \(\overline{g}_{R}\), an admissible plan for \(W_{c}(g_{R},\overline{g}_{R})\) is
\[\int_{\Omega\cap\{X(\tau)\in\partial B_{R}\}}\zeta\left(X(\tau),R\frac{z}{|z| }\right)\,\mathrm{d}\tilde{\pi}.\]
In particular, using (1.7),
\[W_{c}(g_{R},\overline{g}_{R})\lesssim\int_{\Omega\cap\{X(\tau)\in\partial B_{ R}\}}c\left(X(\tau)-R\frac{z}{|z|}\right)\,\mathrm{d}\tilde{\pi}\lesssim\int_{ \Omega\cap\{X(\tau)\in\partial B_{R}\}}\left|X(\tau)-R\frac{z}{|z|}\right|^{p} \,\mathrm{d}\tilde{\pi}.\]
Noting that \(|X(\tau)-R\frac{z}{|z|}|\leq 2|X(\tau)-z|\), we deduce
\[\left|X(\tau)-R\frac{z}{|z|}\right|^{p}\lesssim|x-y|^{p}+|y-z|^{p}.\]
Thus, we deduce using (1.7) and Corollary 3.3,
\[W_{c}(g_{R},\overline{g}_{R})\lesssim\int_{\Omega\cap\{\exists t\in[0,1]:\;X( t)\in\partial B_{R}\}}c(x-y)\,\mathrm{d}\pi+D(4)\lesssim E(4)^{1+\frac{1}{p+d}}+D( 4).\]
Choosing \(\varepsilon\) sufficiently small, the first estimate holds.
Noting \(\sup g^{\prime}\leq\kappa_{\mu,4}\lesssim 1\), in order to prove the second inequality, it suffices to prove
\[\int_{2}^{3}\int\lvert R-|x|\rvert^{p-1}\,\mathrm{d}g^{\prime}\,\mathrm{d}R\lesssim E(4)+D(4)\]
and to apply Lemma 2.3. The condition on the support of \(g\) in Lemma 2.3 applies due to Lemma 3.1. Note that by definition of \(g^{\prime}\),
\[\int\lvert R-|x|\rvert^{p-1}\,\mathrm{d}g^{\prime}= \int_{\Omega\cap\{X(\tau)\in\partial B_{R}\}}\lvert\lvert z\rvert-R\rvert^{p-1}\tilde{\pi}(\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z)\] \[\lesssim \int_{\{y\in B_{4}\}\cap\{\min_{[0,1]}\lvert X\rvert\leq R\leq\max_{[0,1]}\lvert X\rvert\}}\lvert x-y\rvert^{p-1}+|y-z|^{p-1}\,\tilde{\pi}(\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z).\]
In order to obtain the second line, we observed that since \(|X(\tau)|=R\), it holds that \(\lvert\lvert z\rvert-R\rvert\leq\lvert x-y\rvert+\lvert y-z\rvert\). In addition we noted that, since \(X(\tau)\in\partial B_{R}\), we have
\(\min_{[0,1]}X\leq R\leq\max_{[0,1]}X\) and \(X(1)\in B_{4}\) due to Lemma 3.1. Integrating over \(R\), this gives
\[\int_{2}^{3}\int\!||z|-R|^{p-1}\,\mathrm{d}g^{\prime}\,\mathrm{d}R\] \[\lesssim \int_{\{y\in B_{4}\}}(\max_{[0,1]}|X|-\min_{[0,1]}|X|)\left(|x-y|^{p-1}+|y-z|^{p-1}\right)\tilde{\pi}(\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z)\] \[\leq \int_{\{y\in B_{4}\}}|x-y|\left(|x-y|^{p-1}+|y-z|^{p-1}\right)\tilde{\pi}(\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z)\] \[\lesssim \int_{\{y\in B_{4}\}}|x-y|^{p}\pi(\,\mathrm{d}x\,\mathrm{d}y)+\int\!|y-z|^{p}\overline{\pi}(\,\mathrm{d}y\,\mathrm{d}z)\] \[\leq E(4)+D(4).\]
The second-to-last line was obtained by applying Young's inequality. This concludes the proof.
## 6 Restricting the data
We also need to control \(D(R)\), while at the moment we only control \(D(4)\). Unfortunately, this does not follow immediately from the definition but requires a technical proof utilising ideas of the previous sections. The outcome of these considerations is the following lemma:
**Lemma 6.1**.: _For any non-negative measure \(\mu\) there is \(\varepsilon>0\) such that if \(D(4)\leq\varepsilon\), then_
\[\int_{2}^{3}\left(W_{c}(\mu_{\llcorner}B_{R},\kappa_{\mu,R}\,\mathrm{d}x_{ \llcorner}B_{R})+\frac{1}{\kappa_{\mu,R}}(\kappa_{\mu,R}-1)^{p}\right)\, \mathrm{d}R\lesssim D(4).\]
**Proof.** In this lemma \(\pi\) will denote the optimal transference plan for the problem \(W_{c}(\mu\llcorner B_{4},\kappa_{\mu,4}\,\mathrm{d}x\llcorner B_{4})\). Define the measures \(0\leq f^{\prime}\leq\kappa_{\mu,4}\) on \(\overline{B}_{R}\) and \(0\leq g^{\prime}\leq\kappa_{\mu,4}\) on \(\overline{B}_{4}\setminus B_{R}\), which record where entering and exiting trajectories end up:
\[\int\zeta\,\mathrm{d}f^{\prime}\coloneqq\int_{\Omega\cap\{X(0) \not\in B_{R}\}\cap\{X(1)\in B_{R}\}}\zeta(X(1))\,\mathrm{d}\pi\] \[\int\zeta\,\mathrm{d}g^{\prime}\coloneqq\int_{\Omega\cap\{X(0) \in B_{R}\}\cap\{X(1)\not\in B_{R}\}}\zeta(X(1))\,\mathrm{d}\pi.\]
Introduce the mass densities
\[\kappa_{f}=\frac{f^{\prime}(\mathbb{R}^{d})}{|B_{R}|}\leq\kappa_{\mu,R}\qquad\kappa_{g}=\frac{g^{\prime}(\mathbb{R}^{d})}{|B_{R}|}.\]
We use Lemma 2.1 to deduce
\[W_{c}(\mu_{\llcorner}B_{R},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner }B_{R})\] \[\lesssim W_{c}(\mu_{\llcorner}B_{R},\kappa_{\mu,4}\,\mathrm{d}x_{\llcorner }B_{R}-f^{\prime}+g^{\prime})+W_{c}(\kappa_{\mu,4}\,\mathrm{d}x_{\llcorner}B_{R }-f^{\prime}+g^{\prime},(\kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x_{\llcorner}B_ {R}+g^{\prime})\] \[\quad+W_{c}((\kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x_{\llcorner}B _{R}+g^{\prime},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R})\]
\[= I+II+III.\]
Restricting \(\pi\) to trajectories that start in \(B_{R}\) gives an admissible plan for \(I\), that is,
\[I=W_{c}(\mu_{\llcorner}B_{R},\kappa_{\mu,4}\,\mathrm{d}x_{\llcorner}B_{R}-f^{\prime}+g^{\prime})\leq W_{c}(\mu_{\llcorner}B_{4},\kappa_{\mu,4}\,\mathrm{d}x_{\llcorner}B_{4})\leq D(4).\]
Regarding \(II\), we begin by noting that using the sub-additivity of \(W_{c}\), we have
\[W_{c}(\kappa_{\mu,4}\,\mathrm{d}x_{\llcorner}B_{R}-f^{\prime}+g^{\prime},(\kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x_{\llcorner}B_{R}+g^{\prime})\] \[\lesssim W_{c}(\kappa_{\mu,4}\,\mathrm{d}x_{\llcorner}B_{R}-f^{\prime},(\kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x_{\llcorner}B_{R})+W_{c}(g^{\prime},g^{\prime})\] \[= W_{c}(\kappa_{\mu,4}\,\mathrm{d}x_{\llcorner}B_{R}-f^{\prime},(\kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x_{\llcorner}B_{R}).\]
Since this term will be estimated in the same way as \(III\) but is slightly more tricky, we first turn to \(III\).
In order to estimate \(III\), introduce the projection \(\overline{g}\) of \(g^{\prime}\) onto \(\partial B_{R}\), that is
\[\int\zeta\,\mathrm{d}\overline{g}=\int\zeta\left(\frac{Rx}{|x|} \right)g^{\prime}(\,\mathrm{d}x).\]
Using Lemma 2.1, we deduce
\[III\lesssim W_{c}((\kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x_{\llcorner}B_{R}+\overline{g},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R})+W_{c}(\overline{g},g^{\prime}).\]
We claim that
\[W_{c}((\kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x_{\llcorner}B_{R}+\overline{g},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R})\lesssim\int_{\partial B_{R}}\overline{g}^{p}.\]
Indeed, an admissible density-flux pair \((\rho,j)\) for the Benamou-Brenier formulation is given by
\[\begin{cases}\rho_{t}=(\kappa_{\mu,4}-\kappa_{f}+t\kappa_{g})\,\mathrm{d}x_{\llcorner}B_{R}+(1-t)\overline{g}\\ j_{t}=\nabla c^{*}(\mathrm{D}\phi)\,\mathrm{d}x_{\llcorner}B_{R},\end{cases}\]
where \(\phi\) solves
\[\begin{cases}-\mathrm{div}\,\nabla c^{*}(\mathrm{D}\phi)=\kappa_{g}&\text{ in }B_{R}\\ \nu\cdot\nabla c^{*}(\mathrm{D}\phi)=\overline{g}&\text{ on }\partial B_{R}.\end{cases}\]
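Note that this Neumann problem is compatible, and hence solvable: by the definitions of \(\kappa_{g}\) and \(\overline{g}\),
\[\int_{B_{R}}\kappa_{g}\,\mathrm{d}x=\kappa_{g}|B_{R}|=g^{\prime}(\mathbb{R}^{d})=\overline{g}(\partial B_{R}).\]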
We find, writing \(s=\kappa_{\mu,4}-\kappa_{f}+t\kappa_{g}\), for any \(\zeta\) supported in \(B_{R}\),
\[\int\zeta\,\mathrm{d}j_{t}-\int c^{*}(\zeta)\,\mathrm{d}\rho_{t}= \int_{B_{R}}\zeta\cdot\nabla c^{*}(\mathrm{D}\phi)-c^{*}(\zeta)s \,\mathrm{d}x\] \[\leq \int_{B_{R}}\int_{0}^{1}sc\left(\frac{1}{s}\nabla c^{*}(\mathrm{ D}\phi)\right)\,\mathrm{d}t\,\mathrm{d}x\]
To obtain the second line, we used the Fenchel-Young inequality. Assuming \(|s-1|\ll 1\) for now, using (1.7) and (1.8) it is straightforward to see that
\[\int_{B_{R}}\int_{0}^{1}sc\left(\frac{1}{s}\nabla c^{*}(\mathrm{D}\phi)\right) \,\mathrm{d}t\,\mathrm{d}x\lesssim\int_{B_{R}}c\left(\nabla c^{*}(\mathrm{D} \phi)\right)\,\mathrm{d}x\lesssim\int_{\partial B_{R}}\overline{g}^{p}.\]
To obtain the last inequality, we used (1.21). Taking into account Lemma 3.1 and Lemma 2.3, we find
\[\int_{\partial B_{R}}\overline{g}^{p}\lesssim\int\lvert R-\lvert x\rvert\rvert^{p-1}\,\mathrm{d}g^{\prime}.\]
Noting \(\left\lvert R\frac{x}{\lvert x\rvert}-x\right\rvert=\lvert\lvert x\rvert-R\rvert\) and that \(g^{\prime}\) is supported in \(\overline{B}_{4}\setminus B_{R}\), we obtain using (1.7)
\[W_{c}(\overline{g},g^{\prime})\lesssim\int\lvert\lvert x\rvert-R \rvert^{p-1}\,\mathrm{d}g^{\prime}.\]
Proceeding exactly as in the proof of Lemma 5.1, we obtain
\[\int\lvert R-\lvert x\rvert\rvert^{p-1}\,\mathrm{d}g^{\prime} \lesssim D(4).\]
Since \(\kappa_{\mu,R}=\kappa_{\mu,4}-\kappa_{f}+\kappa_{g}\), to deduce that \(\lvert s-1\rvert\ll 1\) and to conclude the estimate of \(III\), it suffices to show \(\kappa_{f}^{p}+\kappa_{g}^{p}\lesssim D(4)\). By symmetry, it suffices to consider \(\kappa_{g}\). By the definition of \(\overline{g}\) and Hölder's inequality, we find
\[\kappa_{g}^{p}=\frac{\overline{g}(\mathbb{R}^{d})^{p}}{\lvert B_{R}\rvert^{p}}\leq\frac{\lvert\partial B_{R}\rvert^{p-1}}{\lvert B_{R}\rvert^{p}}\int_{\partial B_{R}}\overline{g}^{p}.\]
This concludes the estimate for \(III\). It remains to finish the estimate of \(II\) by estimating
\[W_{c}(\kappa_{\mu,4}\,\mathrm{d}x\llcorner B_{R}-f^{\prime},( \kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x\llcorner B_{R}).\]
We want to proceed exactly as we did in the estimate for \(III\), the only delicate issue being that we do not have \(\kappa_{\mu,4}\,\mathrm{d}x\llcorner B_{R}-f^{\prime}\geq c>0\). However, this can be remedied by using Corollary 2.2 to deduce
\[W_{c}(\kappa_{\mu,4}\,\mathrm{d}x\llcorner B_{R}-f^{\prime},( \kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x\llcorner B_{R})\] \[\lesssim W_{c}((2\kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x\llcorner B_{R}- f^{\prime},2(\kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x\llcorner B_{R}).\]
We can now proceed using the same argument as for \(III\) to conclude
\[W_{c}(\kappa_{\mu,4}\,\mathrm{d}x\llcorner B_{R}-f^{\prime},( \kappa_{\mu,4}-\kappa_{f})\,\mathrm{d}x\llcorner B_{R})\lesssim D(4).\]
This completes the proof.
## 7 The \(c^{*}\)-harmonic approximation result
The goal of this section is to prove the \(c^{*}\)-harmonic approximation result. In order to define the approximation, note that in light of Lemma 5.1, given \(\tau>0\), we may fix \(R\in(2,3)\) for which there exist non-negative \(\overline{f}_{R}\), \(\overline{g}_{R}\) such that
\[W_{c}(f_{R},\overline{f}_{R})+W_{c}(g_{R},\overline{g}_{R}) \lesssim\tau E(4)+D(4)\] \[\int_{\partial B_{R}}\overline{f}_{R}^{p}+\overline{g}_{R}^{p} \lesssim E(4)+D(4).\]
Then let \(\phi\) be a solution with \(\int_{B_{R}}\phi\,\mathrm{d}x=0\) of
\[\begin{cases}-\mathrm{div}\,\nabla c^{*}(\mathrm{D}\phi)=c_{R}&\text{ in }B_{R}\\ \nabla c^{*}(\mathrm{D}\phi)\cdot\nu=\overline{g}_{R}-\overline{f}_{R}&\text{ on }\partial B_{R}.\end{cases} \tag{7.1}\]
where \(c_{R}=|B_{R}|^{-1}\left(\int_{\partial B_{R}}\overline{g}_{R}-\overline{f}_{R}\right)\) is the constant so that (7.1) is well-posed. We emphasize that while we do not make the dependence explicit in our notation, \(\phi\) depends on the choice of radius \(R\).
Finally, due to Lemma 6.1, we will further assume throughout this section that \(D(R)\lesssim D(4)\).
With this notation in place, we state a precise version of our main result.
**Theorem 7.1**.: _For every \(0<\tau\), there exist positive constants \(\varepsilon(\tau),C(\tau)>0\) such that if \(E(4)+D(4)\leq\varepsilon(\tau)\), then there exists \(R\in(2,3)\) and \(\phi\) solving (7.1) such that_
\[\int_{\#_{1}}c(x-y-\nabla c^{*}(\mathrm{D}\phi))\,\mathrm{d}\pi\leq\tau E(4)+C(\tau)D(4).\]
The proof will be a direct consequence of the lemmata we prove in the following subsections.
### Quasi-orthogonality
The key observation in order to prove Theorem 7.1 is contained in the following elementary lemma.
**Lemma 7.2**.: _For any \(\pi\in\Pi(\lambda,\mu)\) and \(\phi\) continuously differentiable in \(\overline{B}_{R}\), there is \(c(p,\Lambda)\) such that we have_
\[\begin{split}& c(p,\Lambda)\int_{\Omega}\int_{\sigma}^{\tau}V\left(\dot{X}(t)-\nabla c^{*}(\mathrm{D}\phi(X(t)))\right)\,\mathrm{d}t\,\mathrm{d}\pi\\ \leq&\int_{\Omega}c(x-y)\,\mathrm{d}\pi-\int_{B_{R}}c(\nabla c^{*}(\mathrm{D}\phi))\,\mathrm{d}x-\int_{\Omega}\int_{\sigma}^{\tau}\langle\dot{X}(t)-\nabla c^{*}(\mathrm{D}\phi(X(t))),\mathrm{D}\phi(X(t))\rangle\,\mathrm{d}t\,\mathrm{d}\pi\\ &\quad+\int_{B_{R}}c(\nabla c^{*}(\mathrm{D}\phi))\,\mathrm{d}x-\int_{\Omega}\int_{\sigma}^{\tau}c(\nabla c^{*}(\mathrm{D}\phi(X(t))))\,\mathrm{d}t\,\mathrm{d}\pi.\end{split}\]
Proof.: We apply (1.10) with \(x=\dot{X}\) and \(y=\nabla c^{*}(\mathrm{D}\phi(X(t)))\). Noting that \(\nabla c(\nabla c^{*}(\mathrm{D}\phi))=\mathrm{D}\phi\) and \(\int_{\Omega}\int_{\sigma}^{\tau}c(\dot{X}(t))\,\mathrm{d}t\,\mathrm{d}\pi\leq\int_{\Omega}c(x-y)\,\mathrm{d}\pi\), this gives the desired result.
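Spelled out, and assuming (1.10) is the uniform convexity estimate \(c(p,\Lambda)V(x-y)\leq c(x)-c(y)-\langle\nabla c(y),x-y\rangle\) (we infer its form from its use here), the pointwise inequality is
\[c(p,\Lambda)V\big(\dot{X}(t)-\nabla c^{*}(\mathrm{D}\phi(X(t)))\big)\leq c(\dot{X}(t))-c\big(\nabla c^{*}(\mathrm{D}\phi(X(t)))\big)-\big\langle\mathrm{D}\phi(X(t)),\dot{X}(t)-\nabla c^{*}(\mathrm{D}\phi(X(t)))\big\rangle.\]
Integrating in \(t\) and \(\pi\), and adding and subtracting \(\int_{B_{R}}c(\nabla c^{*}(\mathrm{D}\phi))\,\mathrm{d}x\), gives the asserted estimate.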
### Error estimates
We would like to apply Lemma 7.2 with \(\phi\) solving (7.1). However, note that \(\overline{g}_{R}\) and \(\overline{f}_{R}\) will in general not be sufficiently smooth to ensure that \(\phi\) is \(C^{1}(\overline{B}_{R})\). Thus, we approximate them by mollification. To be precise, let \(0<r\ll 1\) and denote by \(\overline{f}_{R}^{r}\) and \(\overline{g}_{R}^{r}\), respectively, the convolutions of \(\overline{f}_{R}\) and \(\overline{g}_{R}\) with a smooth convolution kernel (on \(\partial B_{R}\)) at scale \(r\). Set \(\phi^{r}\) to be the solution with \(\int_{B_{R}}\phi^{r}\,\mathrm{d}x=0\) of
\[\begin{cases}-\mathrm{div}\,\nabla c^{*}(\mathrm{D}\phi^{r})=c^{r}&\text{ in }B_{R}\\ \nabla c^{*}(\mathrm{D}\phi^{r})\cdot\nu=\overline{g}_{R}^{r}-\overline{f}_{R }^{r}&\text{ on }\partial B_{R}.\end{cases} \tag{7.2}\]
Here \(c^{r}=|B_{R}|^{-1}\left(\int_{\partial B_{R}}\overline{g}_{R}^{r}-\overline{f}_{R} ^{r}\right)\) is the constant such that (7.2) is well-posed.
We begin by showing that the left-hand side of the estimate in Lemma 7.2 changes little when \(\phi\) is replaced by \(\phi^{r}\).
**Lemma 7.3**.: _For every \(0<\tau\), there exist \(\varepsilon(\tau),C(\tau),r_{0}(\tau)>0\) such that if \(E(4)+D(4)\leq\varepsilon(\tau)\) and \(0<r\leq r_{0}\), then there exists \(R\in[2,3]\) such that if \(\phi\) solves (7.1) and \(\phi^{r}\) solves (7.2), then_
\[\int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}V\left(\dot{X}(t)- \nabla c^{*}(\mathrm{D}\phi(X(t))\right)\,\mathrm{d}t\,\mathrm{d}\pi-\int_{ \Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}V\left(\dot{X}(t)-\nabla c^{*}(\mathrm{ D}\phi^{r}(X(t))\right)\,\mathrm{d}t\,\mathrm{d}\pi\] \[\lesssim \tau(E(4)+D(4)).\]
**Proof.** Write \(\xi(x)=\nabla c^{*}(\mathrm{D}\phi(x))\), \(\xi^{r}(x)=\nabla c^{*}(\mathrm{D}\phi^{r}(x))\). We focus on the case \(p\leq 2\); the case \(p>2\) follows by similar but easier arguments. In light of (1.9), (1.13) and using Hölder's inequality, we find
\[\left|\int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}V\left(\dot{X}(t)-\xi(X(t))\right)\,\mathrm{d}t\,\mathrm{d}\pi-\int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}V\left(\dot{X}(t)-\xi^{r}(X(t))\right)\,\mathrm{d}t\,\mathrm{d}\pi\right|\] \[\lesssim \int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}|\xi(X(t))-\xi^{r}(X(t))|\left(|\dot{X}(t)|+|\xi(X(t))|+|\xi^{r}(X(t))|\right)^{p-1}\,\mathrm{d}t\,\mathrm{d}\pi\] \[\lesssim \int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}|\mathrm{D}\phi(X(t))-\mathrm{D}\phi^{r}(X(t))|\left(|\mathrm{D}\phi(X(t))|+|\mathrm{D}\phi^{r}(X(t))|\right)^{p^{\prime}-1}\,\mathrm{d}t\,\mathrm{d}\pi\] \[\leq \left(\int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}|\mathrm{D}\phi(X(t))-\mathrm{D}\phi^{r}(X(t))|^{p^{\prime}}\,\mathrm{d}t\,\mathrm{d}\pi\right)^{\frac{1}{p^{\prime}}}\] \[\quad\times\left(\int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}|\mathrm{D}\phi(X(t))|^{p^{\prime}}+|\mathrm{D}\phi^{r}(X(t))|^{p^{\prime}}\,\mathrm{d}t\,\mathrm{d}\pi\right)^{\frac{1}{p}}.\]
Note that due to Lemma 3.1, if \(X\in\Omega_{1}\), then \(X(0)\in B_{3/2}\). Thus using (1.22) and (1.24),
\[\left|\int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}|\mathrm{D}\phi(X(t))-\mathrm{D}\phi^{r}(X(t))|^{p^{\prime}}\,\mathrm{d}t\,\mathrm{d}\pi-\int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}|\mathrm{D}\phi(X(0))-\mathrm{D}\phi^{r}(X(0))|^{p^{\prime}}\,\mathrm{d}t\,\mathrm{d}\pi\right|\] \[\lesssim \left([\mathrm{D}\phi]_{C^{0,\beta}(B_{3/2})}+[\mathrm{D}\phi^{r}]_{C^{0,\beta}(B_{3/2})}\right)\left(\sup_{B_{3/2}}|\mathrm{D}\phi|+|\mathrm{D}\phi^{r}|\right)^{p^{\prime}-1}\int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}|X(t)-X(0)|^{\beta}\,\mathrm{d}t\,\mathrm{d}\pi\] \[\lesssim c(r)(E(4)+D(4))\,\pi(\Omega_{1})\,(E(4)+D(4))^{\frac{\beta}{p+d}}\lesssim c(r)(E(4)+D(4))^{1+\frac{\beta}{p+d}}.\]
In order to obtain the last line, we used Lemma 3.1. Moreover, using Lemma 3.2 and (1.23),
\[\int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}|\mathrm{D}\phi(X(0))-\mathrm{D}\phi^{r}(X(0))|^{p^{\prime}}\,\mathrm{d}t\,\mathrm{d}\pi\leq\int_{B_{3/2}}|\mathrm{D}\phi(x)-\mathrm{D}\phi^{r}(x)|^{p^{\prime}}\,\mathrm{d}\mu\] \[\leq \left|\int_{B_{3/2}}|\mathrm{D}\phi(x)-\mathrm{D}\phi^{r}(x)|^{p^{\prime}}(\,\mathrm{d}\mu-\kappa_{\mu,R}\,\mathrm{d}x)\right|+\kappa_{\mu,R}\int_{B_{R}}|\mathrm{D}\phi(x)-\mathrm{D}\phi^{r}(x)|^{p^{\prime}}\,\mathrm{d}x\] \[\lesssim \left(\sup_{B_{3/2}}|\mathrm{D}\phi|^{p^{\prime}-1}+|\mathrm{D}\phi^{r}|^{p^{\prime}-1}\right)\left([\mathrm{D}\phi]_{C^{0,\beta}(B_{(1+R)/2})}+[\mathrm{D}\phi^{r}]_{C^{0,\beta}(B_{(1+R)/2})}\right)\]
\[\times W_{c}(\mu_{\llcorner}B_{R},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R})^{\frac{\beta}{p}}+r(E(4)+D(4))\] \[\lesssim c(r)(E(4)+D(4))^{1+\frac{\beta}{p}}+r(E(4)+D(4)).\]
Arguing similarly, that is first replacing \(X(t)\) with \(X(0)\), at the cost of making an error of size \(c(r)\left(E(4)+D(4)\right)^{1+\frac{\beta}{p+d}}\), and then replacing \(\,\mathrm{d}\mu\) with \(\,\mathrm{d}x\), making an error \(c(r)(E(4)+D(4))^{1+\frac{\beta}{p}}+r(E(4)+D(4))\), we find
\[\int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}\lvert\mathrm{D}\phi(X(t))\rvert^{p^{\prime}}+\lvert\mathrm{D}\phi^{r}(X(t))\rvert^{p^{\prime}}\,\mathrm{d}t\,\mathrm{d}\pi\] \[\lesssim r(E(4)+D(4))+c(r)(E(4)+D(4))^{1+\frac{\beta}{p+d}}.\]
Collecting estimates and choosing \(r\) as well as \(\varepsilon(\tau)\) sufficiently small, the desired result follows.
We now estimate each of the three terms on the right-hand side of the estimate in Lemma 7.2 in turn. We will see that the second and third terms are errors that arise from the approximation of the boundary data and from passing to the perspective of trajectories, respectively. Accordingly, estimating them will be essentially routine. In contrast, estimating the first term requires us to construct an appropriate competitor to \(\pi\).
**Lemma 7.4**.: _For every \(0<\tau\) there exist \(\varepsilon(\tau),C(\tau),r_{0}(\tau)>0\) such that if \(E(4)+D(4)\leq\varepsilon(\tau)\) and \(0<r\leq r_{0}\), then there exists \(R\in[2,3]\) such that if \(\phi^{r}\) solves (7.2), then_
\[\int_{\Omega}c(x-y)\,\mathrm{d}\pi-\int_{B_{R}}c(\nabla c^{*}( \mathrm{D}\phi^{r}))\,\mathrm{d}x\lesssim\tau E(4)+D(4).\]
Proof.: We note, in the case \(p\leq 2\), using (1.8), (1.13) and Hölder's inequality,
\[\int_{B_{R}}c(\nabla c^{*}(\mathrm{D}\phi^{r}))-c(\nabla c^{*}(\mathrm{D}\phi))\,\mathrm{d}x\] \[\lesssim \int_{B_{R}}\lvert\nabla c^{*}(\mathrm{D}\phi^{r})-\nabla c^{*}(\mathrm{D}\phi)\rvert(\lvert\nabla c^{*}(\mathrm{D}\phi^{r})\rvert+\lvert\nabla c^{*}(\mathrm{D}\phi)\rvert)^{p-1}\,\mathrm{d}x\] \[\lesssim \int_{B_{R}}\lvert\mathrm{D}\phi-\mathrm{D}\phi^{r}\rvert\left(\lvert\mathrm{D}\phi\rvert+\lvert\mathrm{D}\phi^{r}\rvert\right)^{p^{\prime}-1}\,\mathrm{d}x\] \[\lesssim \lVert\mathrm{D}\phi-\mathrm{D}\phi^{r}\rVert_{L^{p^{\prime}}(B_{R})}\left(\lVert\mathrm{D}\phi\rVert_{L^{p^{\prime}}(B_{R})}+\lVert\mathrm{D}\phi^{r}\rVert_{L^{p^{\prime}}(B_{R})}\right)^{p^{\prime}-1}\] \[\lesssim r^{\frac{1}{p^{\prime}}}(E(4)+D(4)).\]
To obtain the last line, we used (1.20) and (1.23). In case \(p\geq 2\) a similar estimate holds by the same argument. Due to Lemma 4.1 and Corollary 3.3,
\[\int_{\Omega}c(x-y)\,\mathrm{d}\pi\leq W_{c}(\lambda_{\llcorner}B_{R}+f_{R},\mu_{\llcorner}B_{R}+g_{R})+2\int_{\Omega\cap\{\exists t\in[0,1]\colon X(t)\in\partial B_{R}\}}c(x-y)\,\mathrm{d}\pi\\ \leq W_{c}(\lambda_{\llcorner}B_{R}+f_{R},\mu_{\llcorner}B_{R}+g_{R})+c\left(\tau E(4)+D(4)\right).\]
In particular, combining the previous two estimates and choosing \(r\) sufficiently small, it suffices to prove
\[W_{c}(\lambda_{\llcorner}B_{R}+f_{R},\mu_{\llcorner}B_{R}+g_{R})-\int_{B_{R}}c(\nabla c^{*}(\mathrm{D}\phi))\,\mathrm{d}x\lesssim\tau E(4)+D(4).\]
Using Lemma 2.1, we obtain for \(\delta\in(0,1)\) to be fixed
\[W_{c}(\lambda_{\llcorner}B_{R}+f_{R},\mu_{\llcorner}B_{R}+g_{R})\] \[\leq (1+\delta)W_{c}(\kappa_{\lambda,R}\,\mathrm{d}x_{\llcorner}B_{R}+\overline{f}_{R},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R}+\overline{g}_{R})\] \[\quad+c(\delta)\left(W_{c}(\lambda_{\llcorner}B_{R},\kappa_{\lambda,R}\,\mathrm{d}x_{\llcorner}B_{R})+W_{c}(\mu_{\llcorner}B_{R},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R})\right)\] \[\qquad+c(\delta)\left(W_{c}(f_{R},\overline{f}_{R})+W_{c}(g_{R},\overline{g}_{R})\right).\]
Noting that due to the definition of \(D\) and our choice of \(R\),
\[W_{c}(\lambda_{\llcorner}B_{R},\kappa_{\lambda,R}\,\mathrm{d}x_{\llcorner}B_{R})+W_{c}(\mu_{\llcorner}B_{R},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R})+W_{c}(f_{R},\overline{f}_{R})+W_{c}(g_{R},\overline{g}_{R})\] \[\lesssim \tau E(4)+D(4),\]
we claim that for some \(C>0\),
\[W_{c}(\kappa_{\lambda,R}\,\mathrm{d}x_{\llcorner}B_{R}+\overline{f}_{R},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R}+\overline{g}_{R})\leq(1+CD(4)^{\frac{1}{p}})\int_{B_{R}}c(\nabla c^{*}(\mathrm{D}\phi))\,\mathrm{d}x. \tag{7.3}\]
Once (7.3) is established, collecting estimates and choosing first \(\delta\) and \(r\) small, then \(\varepsilon\) small, completes the proof.
Establishing (7.3) is straightforward using the Benamou-Brenier formulation. For \(t\in[0,1]\), introduce the non-negative measure
\[\rho_{t}=t(\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R}+\overline{g}_{R})+(1-t)(\kappa_{\lambda,R}\,\mathrm{d}x_{\llcorner}B_{R}+\overline{f}_{R}),\]
and the vector-valued measure
\[j_{t}=\nabla c^{*}(\mathrm{D}\phi)\,\mathrm{d}x_{\llcorner}B_{R}.\]
Note that (7.1) can be rewritten as
\[\frac{\,\mathrm{d}}{\,\mathrm{d}t}\int\zeta\,\mathrm{d}\rho_{t}=\int\nabla \zeta\cdot\,\mathrm{d}j_{t}\]
for all test functions \(\zeta\). Set
\[\int c\left(\frac{\,\mathrm{d}j_{t}}{\,\mathrm{d}\rho_{t}}\right)\,\mathrm{d}\rho_{t}=\sup_{\zeta\in C^{0}_{c}(\mathbb{R}^{d})}\left\{\int\zeta\,\mathrm{d}j_{t}-\int c^{*}(\zeta)\,\mathrm{d}\rho_{t}\right\}.\]
The Benamou-Brenier formula gives
\[W_{c}(\kappa_{\lambda,R}\,\mathrm{d}x_{\llcorner}B_{R}+\overline{f}_{R},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R}+\overline{g}_{R})\leq\int_{0}^{1}\int c\left(\frac{\,\mathrm{d}j_{t}}{\,\mathrm{d}\rho_{t}}\right)\,\mathrm{d}\rho_{t}\,\mathrm{d}t.\]
Since \(j_{t}\) is supported in \(B_{R}\) it suffices to consider \(\zeta\) supported in \(B_{R}\). Then by definition and the Fenchel-Young inequality for any \(s>0\),
\[\int\zeta\,\mathrm{d}j_{t}-\int c^{*}(\zeta)\,\mathrm{d}\rho_{t}= \int_{B_{R}}\zeta\cdot\nabla c^{*}(\mathrm{D}\phi)-c^{*}(\zeta)(t \kappa_{\mu,R}+(1-t)\kappa_{\lambda,R})\,\mathrm{d}x\] \[\leq \int_{B_{R}}sc^{*}(\zeta)+sc\left(\frac{1}{s}\nabla c^{*}( \mathrm{D}\phi)\right)-c^{*}(\zeta)(t\kappa_{\mu,R}+(1-t)\kappa_{\lambda,R})\, \mathrm{d}x.\]
Choosing \(s=t\kappa_{\mu,R}+(1-t)\kappa_{\lambda,R}\) and integrating in \(t\), we deduce
\[W_{c}(\kappa_{\lambda,R}\,\mathrm{d}x_{\llcorner}B_{R}+\overline{f}_{R},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R}+\overline{g}_{R})\leq\int_{B_{R}}\int_{0}^{1}sc\left(\frac{\nabla c^{*}(\mathrm{D}\phi)}{s}\right)\,\mathrm{d}t\,\mathrm{d}x.\]
Now
\[\int_{B_{R}}\int_{0}^{1}sc\left(\frac{\nabla c^{*}(\mathrm{D}\phi)}{s}\right)\,\mathrm{d}t\,\mathrm{d}x\leq \int_{B_{R}}c(\nabla c^{*}(\mathrm{D}\phi))\,\mathrm{d}x+\int_{B_{R}}\int_{0}^{1}(s-1)c(\nabla c^{*}(\mathrm{D}\phi))\,\mathrm{d}t\,\mathrm{d}x\] \[\quad+\int_{B_{R}}\int_{0}^{1}s\left(c\left(\frac{\nabla c^{*}(\mathrm{D}\phi)}{s}\right)-c(\nabla c^{*}(\mathrm{D}\phi))\right)\,\mathrm{d}t\,\mathrm{d}x.\]
Note that
\[|s-1|\lesssim D(4)^{\frac{1}{p}}.\]
Further using (1.8) and (1.7),
\[\int_{B_{R}}\int_{0}^{1}s\left(c\left(\frac{\nabla c^{*}(\mathrm{ D}\phi)}{s}\right)-c(\nabla c^{*}(\mathrm{D}\phi))\right)\,\mathrm{d}t\, \mathrm{d}x\] \[\lesssim \int_{B_{R}}\int_{0}^{1}\left|1-\frac{1}{s}\right|\left(1+\frac{ 1}{s}\right)^{p-1}|\nabla c^{*}(\mathrm{D}\phi)|^{p}\,\mathrm{d}t\,\mathrm{d}x \lesssim D(4)^{\frac{1}{p}}\int_{B_{R}}c(\nabla c^{*}(\mathrm{D}\phi))\, \mathrm{d}x.\]
Thus the proof of (7.3) is complete.
We turn to the second term on the right-hand side of Lemma 7.2. This term will be small due to the definition of \(\phi^{r}\).
**Lemma 7.5**.: _For every \(0<\tau\) there exists \(\varepsilon(\tau),C(\tau),r_{0}(\tau)>0\) such that if it holds that \(E(4)+D(4)\leq\varepsilon(\tau)\) and \(0<r\leq r_{0}\), then there exists \(R\in[2,3]\) such that if \(\phi^{r}\) solves (7.2), then_
\[\int_{\Omega}\int_{\sigma}^{\tau}\langle\dot{X}(t)-\nabla c^{*}(\mathrm{D}\phi^{r}(X(t))),\mathrm{D}\phi^{r}(X(t))\rangle\,\mathrm{d}t\,\mathrm{d}\pi\lesssim\tau E(4)+D(4).\]
**Proof.** Note that \(\frac{\mathrm{d}}{\mathrm{d}t}\phi^{r}(X(t))=\langle\dot{X}(t),\mathrm{D}\phi^{r}(X(t))\rangle\). Thus, since \(\pi\in\Pi(\lambda,\mu)\), and using the definition of \(f_{R}\) and \(g_{R}\),
\[\int_{\Omega}\int_{\sigma}^{\tau}\langle\dot{X}(t),\mathrm{D}\phi^{r}(X(t))\rangle\,\mathrm{d}t\,\mathrm{d}\pi= \int_{\Omega}\phi^{r}(X(\tau))-\phi^{r}(X(\sigma))\,\mathrm{d}\pi\]
\[= \int_{\Omega\cap\{X(\tau)\in\partial B_{R}\}}\phi^{r}(X(\tau))\,{\rm d}\pi+\int_{\{x\in B_{R}\}}\phi^{r}(x)\,{\rm d}\pi\] \[\quad-\int_{\Omega\cap\{X(\sigma)\in\partial B_{R}\}}\phi^{r}(X(\sigma))\,{\rm d}\pi-\int_{\{y\in B_{R}\}}\phi^{r}(y)\,{\rm d}\pi\] \[= \int_{B_{R}}\phi^{r}\,{\rm d}(\mu-\lambda)+\int_{\partial B_{R}}\phi^{r}\,{\rm d}(g_{R}-f_{R}).\]
On the other hand, as in Lemma 7.3, at the cost of an error \(c(r)(E(4)+D(4))^{1+\frac{\beta}{p+d}}\), we may replace \({\rm D}\phi^{r}(X(t))\) with \({\rm D}\phi^{r}(x)\) in the expression
\[\int_{\Omega}\int_{\sigma}^{\tau}\langle\nabla c^{*}({\rm D}\phi^{r}(X(t))),{\rm D}\phi^{r}(X(t))\rangle\,{\rm d}t\,{\rm d}\pi\]
and \(\int_{\Omega}\int_{\sigma}^{\tau}\,{\rm d}t\,{\rm d}\pi\) with \(\int_{B_{R}}\,{\rm d}x\) at the cost of a further error \(c(r)(E(4)+D(4))^{1+\frac{\beta}{p}}\). Thus it suffices to consider
\[\int_{B_{R}}\langle\nabla c^{*}({\rm D}\phi^{r}),{\rm D}\phi^{r}\rangle\,{\rm d}x=c^{r}\int_{B_{R}}\phi^{r}\,{\rm d}x+\int_{\partial B_{R}}\phi^{r}\,{\rm d}\left(\overline{g}_{R}^{r}-\overline{f}_{R}^{r}\right).\]
Collecting estimates, we have shown
\[\int_{\Omega}\int_{\sigma}^{\tau}\langle\dot{X}(t)-\nabla c^{*}({\rm D}\phi^{r}(X(t))),{\rm D}\phi^{r}(X(t))\rangle\,{\rm d}t\,{\rm d}\pi\] \[\lesssim \int_{B_{R}}\phi^{r}\,({\rm d}\mu-{\rm d}\lambda-c^{r}\,{\rm d}x)+\int_{\partial B_{R}}\phi^{r}\,{\rm d}(g_{R}-\overline{g}_{R}^{r}+f_{R}-\overline{f}_{R}^{r})+c(r)\left(E(4)+D(4)\right)^{1+\frac{\beta}{p+d}}\] \[= I+II+III.\]
We find using Lemma 3.2, Young's inequality, (1.24) and (1.20),
\[I\leq \left|\int_{B_{R}}\phi^{r}\,({\rm d}\mu-\kappa_{\mu,R}\,{\rm d}x-{\rm d}\lambda+\kappa_{\lambda,R}\,{\rm d}x)\right|+\left|\int_{B_{R}}\phi^{r}\,(\kappa_{\mu,R}-\kappa_{\lambda,R}-c^{r})\,{\rm d}x\right|\] \[\lesssim \sup_{B_{R}}|{\rm D}\phi^{r}|\left(W_{c}(\lambda_{\llcorner}B_{R},\kappa_{\lambda,R}\,{\rm d}x_{\llcorner}B_{R})^{\frac{1}{p}}+W_{c}(\mu_{\llcorner}B_{R},\kappa_{\mu,R}\,{\rm d}x_{\llcorner}B_{R})^{\frac{1}{p}}\right)\lesssim\tau E(4)+D(4).\]
Here the second term vanishes, since \(\kappa_{\mu,R}-\kappa_{\lambda,R}-c^{r}\) is constant and \(\int_{B_{R}}\phi^{r}\,{\rm d}x=0\); for the first term we used \(D(R)\lesssim D(4)\) together with (1.24), (1.20) and Young's inequality.
Now note that a standard mollification argument shows
\[\int_{\partial B_{R}}\phi^{r}\,\mathrm{d}(g_{R}-\overline{g}_{R}^{r})\lesssim r^{\frac{1}{p}}\|\mathrm{D}\phi^{r}\|_{\mathrm{L}^{p^{\prime}}(B_{R})}\|\overline{g}_{R}\|_{\mathrm{L}^{p}(\partial B_{R})}\lesssim r^{\frac{1}{p}}(E(4)+D(4)),\]
and that moreover using Lemma 3.2 and Young's inequality,
\[\int_{\partial B_{R}}\phi^{r}\,\mathrm{d}(\overline{g}_{R}-g_{R})\lesssim[\mathrm{D}_{\tan}\phi^{r}]_{C^{0,\beta}(\partial B_{R})}W_{c}(\overline{g}_{R},g_{R})^{\frac{1}{p}}\lesssim c(r)(\tau E(4)+D(4)).\]
Thus, collecting estimates and first choosing \(r_{0}\) sufficiently small, then \(\varepsilon\) small, we conclude the proof.
We next estimate the third term on the right-hand side of the estimate in Lemma 7.2.
**Lemma 7.6**.: _For every \(0<\tau\) there exists \(\varepsilon(\tau),C(\tau),r_{0}(\tau)>0\) such that if it holds that \(E(4)+D(4)\leq\varepsilon(\tau)\) and \(0<r\leq r_{0}\), then there exists \(R\in[2,3]\) such that if \(\phi^{r}\) solves (7.2), then_
\[\int_{B_{R}}c(\nabla c^{*}(\mathrm{D}\phi^{r}))\,\mathrm{d}x-\int_{\Omega}\int_{\sigma}^{\tau}c(\nabla c^{*}(\mathrm{D}\phi^{r}(X(t))))\,\mathrm{d}t\,\mathrm{d}\pi\lesssim\tau E(4)+D(4).\]
**Proof.** Set \(\xi=c(\nabla c^{*}(\mathrm{D}\phi^{r}))\). Then
\[\int_{B_{R}}c(\nabla c^{*}(\mathrm{D}\phi^{r}))\,\mathrm{d}x-\int_{\Omega}\int_{\sigma}^{\tau}c(\nabla c^{*}(\mathrm{D}\phi^{r}(X(t))))\,\mathrm{d}t\,\mathrm{d}\pi\] \[= (1-\kappa_{\mu,R})\int_{B_{R}}\xi\,\mathrm{d}x+\left(\kappa_{\mu,R}\int_{B_{R}}\xi\,\mathrm{d}x-\int_{B_{R}\times\mathbb{R}^{d}}\xi\,\mathrm{d}\pi\right)\] \[\quad+\int_{B_{R}\times\mathbb{R}^{d}}\xi\,\mathrm{d}\pi-\int_{\Omega}\int_{\sigma}^{\tau}\xi(X(t))\,\mathrm{d}t\,\mathrm{d}\pi\] \[= I+II+III.\]
Using (1.21) and Young's inequality, we find
\[I\leq D(4)^{\frac{1}{p}}(E(4)+D(4))\lesssim\tau E(4)+D(4).\]
Employing Lemma 3.2 we deduce
\[II\lesssim\|\xi\|_{C^{0,\beta}(\overline{B}_{R})}W_{c}(\mu_{\llcorner}B_{R},\kappa_{\mu,R}\,\mathrm{d}x_{\llcorner}B_{R})^{\frac{\beta}{p}}.\]
It is straightforward to check that \(\|\xi\|_{C^{0,\beta}(\overline{B}_{R})}\lesssim\|\mathrm{D}\phi^{r}\|_{C^{0,\beta}(\overline{B}_{R})}\left(\sup_{\overline{B}_{R}}|\mathrm{D}\phi^{r}|\right)^{p^{\prime}-1}\), so that using (1.24) and Young's inequality,
\[II\lesssim c(r)(E(4)+D(4))D(4)^{\frac{\beta}{p}}\lesssim\tau E(4)+D(4).\]
In order to estimate \(III\), we first find
\[I(X(0)\in B_{R})\xi(X(0))-I(X(t)\in B_{R})\xi(X(t))\]
\[\leq I(\exists s\in[0,1]\colon X(s)\in\partial B_{R},X(0)\in\overline{B}_{R})\xi(X(0))\] \[\quad+I(\forall s\in[0,1]\;X(s)\in B_{R})\left(\xi(X(0))-\xi(X(t))\right).\]
Thus, using also (1.24) and Jensen's inequality,
\[III\leq \int_{0}^{1}\int I(X(0)\in B_{R})\xi(X(0))-I(X(t)\in B_{R})\xi(X(t))\,\mathrm{d}\pi\,\mathrm{d}t\] \[\leq \sup_{\overline{B}_{R}}\lvert\xi\rvert\int_{0}^{1}\int I(\exists s\in[0,1]\colon X(s)\in\partial B_{R},X(0)\in\overline{B}_{R})\,\mathrm{d}\pi\,\mathrm{d}t\] \[\quad+\|\xi\|_{C^{0,\beta}(\overline{B}_{R})}\int_{0}^{1}\int I(\forall s\in[0,1]\;X(s)\in B_{R})\lvert X(t)-X(0)\rvert^{\beta}\,\mathrm{d}t\,\mathrm{d}\pi\] \[\lesssim c(r)(E(4)+D(4))\,\pi(\exists t\in[0,1]\colon X(t)\in\partial B_{R})+c(r)(E(4)+D(4))\int_{\Omega}\lvert x-y\rvert^{\beta}\,\mathrm{d}\pi\] \[\lesssim c(r)(E(4)+D(4))^{1+\frac{1}{p+d}}+c(r)(E(4)+D(4))^{1+\frac{\beta}{p}}.\]
In order to obtain the last line we used Corollary 3.3. Collecting estimates and choosing first \(r_{0}\) small, then \(\varepsilon\) small, we conclude the proof.
### Proof of Theorem 7.1
We are now ready to prove Theorem 7.1.
**Proof of Theorem 7.1.** Applying Lemma 7.2 to \(\phi^{r}\) and collecting the output of Lemma 7.3, Lemma 7.4, Lemma 7.5 and Lemma 7.6, we have shown that for any \(0<\tau\), there is \(\varepsilon,C>0\) such that if \(E(4)+D(4)\leq\varepsilon\), then
\[\int_{\Omega_{1}}\int_{\sigma_{1}}^{\tau_{1}}V\left(\dot{X}(t)-\nabla c^{*}(\mathrm{D}\phi(X(t)))\right)\,\mathrm{d}t\,\mathrm{d}\pi\leq\tau E(4)+CD(4).\]
Arguing as in Lemma 7.3, we may replace \(\mathrm{D}\phi(X(t))\) by \(\mathrm{D}\phi(X(0))\) at the cost of an error of size \(\left(E(4)+D(4)\right)^{1+\frac{\beta}{p+d}}\). Noting that \(V(z-\nabla c^{*}(\mathrm{D}\phi(x)))\) is a convex function of \(z\), we employ Jensen's inequality to deduce
\[\int_{\#_{1}}V\left(x-y-\nabla c^{*}(\mathrm{D}\phi(x))\right)\,\mathrm{d}\pi\leq\int_{\Omega_{1}}\int_{0}^{1}V\left(\dot{X}(t)-\nabla c^{*}(\mathrm{D}\phi(X(0)))\right)\,\mathrm{d}t\,\mathrm{d}\pi.\]
Now using (1.14) and (1.22), as well as Corollary 3.3,
\[\int_{\Omega_{1}\cap\left(\{\sigma_{1}>0\}\cup\{\tau_{1}<1\}\right)}\int_{\sigma_{1}}^{\tau_{1}}V\left(\dot{X}(t)-\nabla c^{*}(\mathrm{D}\phi(X(0)))\right)\,\mathrm{d}t\,\mathrm{d}\pi\] \[\lesssim \int_{\Omega_{1}\cap\left(\{\sigma_{1}>0\}\cup\{\tau_{1}<1\}\right)}\int_{\sigma_{1}}^{\tau_{1}}\lvert\dot{X}(t)\rvert^{p}+\lvert\mathrm{D}\phi(X(0))\rvert^{p^{\prime}}\,\mathrm{d}t\,\mathrm{d}\pi\] \[\lesssim (E(4)+D(4))\,\pi(\Omega_{1}\cap\left(\{\sigma_{1}>0\}\cup\{\tau_{1}<1\}\right))\] \[\lesssim (E(4)+D(4))^{1+\frac{1}{p+d}}.\]
Thus, we conclude
\[\int_{\#_{1}}V(x-y-\nabla c^{*}(\mathrm{D}\phi(x)))\,\mathrm{d}\pi\lesssim\tau E (4)+D(4).\]
This concludes the proof in the case \(p\geq 2\). In the case \(p\leq 2\), an application of Hölder's inequality combined with (1.7) finishes the argument. |
2310.08017 | Harnessing Large Language Models' Empathetic Response Generation
Capabilities for Online Mental Health Counselling Support | Large Language Models (LLMs) have demonstrated remarkable performance across
various information-seeking and reasoning tasks. These computational systems
drive state-of-the-art dialogue systems, such as ChatGPT and Bard. They also
carry substantial promise in meeting the growing demands of mental health care,
albeit relatively unexplored. As such, this study sought to examine LLMs'
capability to generate empathetic responses in conversations that emulate those
in a mental health counselling setting. We selected five LLMs: version 3.5 and
version 4 of the Generative Pre-training (GPT), Vicuna FastChat-T5, Pathways
Language Model (PaLM) version 2, and Falcon-7B-Instruct. Based on a simple
instructional prompt, these models responded to utterances derived from the
EmpatheticDialogues (ED) dataset. Using three empathy-related metrics, we
compared their responses to those from traditional response generation dialogue
systems, which were fine-tuned on the ED dataset, along with human-generated
responses. Notably, we discovered that responses from the LLMs were remarkably
more empathetic in most scenarios. We position our findings in light of
catapulting advancements in creating empathetic conversational systems. | Siyuan Brandon Loh, Aravind Sesagiri Raamkumar | 2023-10-12T03:33:06Z | http://arxiv.org/abs/2310.08017v1 | Harnessing Large Language Models' Empathetic Response Generation Capabilities for Online Mental Health Counselling Support
###### Abstract
Large Language Models (LLMs) have demonstrated remarkable performance across various information-seeking and reasoning tasks. These computational systems drive state-of-the-art dialogue systems, such as ChatGPT and Bard. They also carry substantial promise in meeting the growing demands of mental health care, albeit relatively unexplored. As such, this study sought to examine LLMs' capability to generate empathetic responses in conversations that emulate those in a mental health counselling setting. We selected five LLMs: version 3.5 and version 4 of the Generative Pre-training (GPT), Vicuna FastChat-T5, Pathways Language Model (PaLM) version 2, and Falcon-7B-Instruct. Based on a simple instructional prompt, these models responded to utterances derived from the EmpatheticDialogues (ED) dataset. Using three empathy-related metrics, we compared their responses to those from traditional response generation dialogue systems, which were fine-tuned on the ED dataset, along with human-generated responses. Notably, we discovered that responses from the LLMs were remarkably more empathetic in most scenarios. We position our findings in light of catapulting advancements in creating empathetic conversational systems.
empathetic conversational systems, empathetic chatbots, empathetic dialogue systems, empathy, empathetic artificial intelligence, online mental health, affective computing
## I Introduction
Humanity faces an unprecedented need for mental health services. Global crises, such as the recent COVID-19 pandemic, have greatly burdened people's mental health, with the World Health Organization (WHO) reporting a 25% increase in depression and anxiety cases during the first year of the pandemic. The accessibility of mental health services is far from ideal, with those at greatest risk of mental distress being the least likely to receive help [1]. This escalating demand for mental health services and workers highlights the urgent need for accessible, scalable, and transformative approaches to address the mental health crisis [2]. This demand is backed by the finding that mental health workers are more empathetic towards victims than general physicians and non-medical workers [3]. Empathy is vital in these settings, as it leads to higher satisfaction and improved patient outcomes [4].
Digital technologies such as dialogue/conversational systems (i.e., chatbots) present viable solutions for providing remote psychological care and emotional support [5]. Preliminary reports suggest positive outcomes for individuals who engage with such tools [6]. These automated solutions are also positively received by general users and mental health professionals alike [7][8]. A recent study comparing physician and chatbot (ChatGPT) responses to patient questions in social media forums found that the chatbot responses had better quality and empathy [9]. Apart from fully automated solutions, conversational AI systems have been found to be helpful in assisting novice counsellors in online peer support systems [10]. Given their acceptance and the positive results derived from digital platforms, it seems worthwhile to employ the latest advancements in artificial intelligence (AI) to enhance these initiatives further.
### _Empathetic Conversational Systems_
Advancements in AI have paved the way for the development of dialogue systems imbued with the capacity to discern and appropriately respond to the emotional content of a user's messages. Termed Empathetic Conversational Systems (ECS)[11], these systems often represent a sophisticated modification of pre-trained encoder-decoder Transformer-based neural architectures [12]. Certain models include a dedicated function to encode the emotional content of a user's message [13], while others utilize external knowledge structures such as knowledge graphs to derive meaningful insights from a user's message that go beyond its immediate interpretation [14]. The emphasis of these systems on modelling empathetic responses, a crucial element in fostering therapeutic results in psychotherapy [15], positions them as promising tools for technologically-mediated mental healthcare.
Despite their potential, the development of ECS is significantly constrained by the lack of high-quality training data. As pointed out by Raamkumar and Yang [11], the primary resource for developing ECS is the EmpatheticDialogues (ED) dataset [16]. This publicly available seminal dataset was designed to enable the development of dialogue agents capable of discerning feelings during a conversation and responding empathetically. However, the ED dataset presents several challenges.
The data in the ED dataset consists of conversations between randomly selected Mechanical Turk (mTurk) workers, without any criteria requiring participation from trained mental health professionals. This introduces the potential for significant variance in the types of responses in the dataset, increasing the risk of including malicious, judgemental, or unempathetic responses. Montiel and colleagues' findings support this
concern [17]: volunteers who scored high on an emotional quotient test rated the empathy level in a representative subset of responses in the ED dataset as significantly lower than the levels initially assigned in the dataset. Furthermore, the structure of the conversations within the ED dataset poses additional limitations. Most conversations in the dataset are brief, typically only encompassing one exchange, or 'turn'. This brevity leaves little room for extended dialogue, which is a crucial component for modeling the different stages of dialogue typically encountered in counselling or mental health settings. This could hinder the system's capability to fully engage with users and navigate the various stages of a therapeutic conversation. Taken together, the variance in responses and the structure of the dataset underscore the shortcomings of ED. These limitations could result in ECS models that fall short of providing the needed empathetic responses, potentially harming user engagement and trust in such systems.
### _Large Language Models (LLMs)_
LLMs such as Generative Pre-Training models (GPT) [18] have shown impressive capabilities across multiple tasks, including logical reasoning, text summarisation, machine translation, and language understanding [19][20]. GPT is the backbone of ChatGPT, the well-acclaimed general-purpose chatbot. Crucially, humans preferred responses from language models trained with minimal fine-tuning over those that were fine-tuned with human feedback [20]. Overall, these studies showed that an LLM's performance depends heavily on the unsupervised, task-agnostic pre-training phase, in which the model encodes a general-purpose representation of a large quantity of text, rather than on the fine-tuning phase. This discovery, along with many others, suggests the potential for LLMs to serve as a practical alternative to ECSs in a mental health context, especially considering the data constraints discussed earlier.
Given the paucity of research in this domain, it remains to be seen whether LLMs are capable of generating responses in a manner appropriate for a mental healthcare setting. Thus, the current study attempts to answer this central research question through a comparative evaluation of responses from ECS models and LLMs to queries in the ED dataset. The comparison is conducted at both the individual model level and the aggregated group level. Each model's response was evaluated using a preexisting computational framework for detecting the presence of empathetic features in textual data [21]. This framework, which models empathy in text as a three-dimensional construct, is used as a basis to answer our main research question (see Methods for details).
## II Methods
### _Dataset_
We comparatively evaluated the empathetic response generation abilities of different language models through a series of experiments on the EmpatheticDialogues (ED) dataset [16]. ED comprises a series of conversations between two participants. The first participant (P1) was randomly assigned one of 32 emotion words (the "prompt") and was asked to recount a personal experience related to that emotion (the "situation"). The participant then entered a chatroom, where he/she discussed the "situation" with another participant (P2), who was tasked to listen and respond with empathy. Altogether, 810 individuals participated in the dataset creation exercise, amounting to 24,850 conversations. The dataset is split approximately into 80% train, 10% validation, and 10% test partitions. We used the dialogues from the test partition for our experiments.
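For concreteness, a minimal sketch of assembling the evaluation set is given below. The column names and the `_comma_` escaping follow the public ED CSV release; treat them as assumptions if your copy of the dataset differs.

```python
import pandas as pd

def first_utterances(test_csv_path: str) -> pd.DataFrame:
    """Extract the opening utterance of every conversation in the ED test split."""
    df = pd.read_csv(test_csv_path, on_bad_lines="skip")
    # Keep only the first utterance of each conversation.
    firsts = df[df["utterance_idx"] == 1].copy()
    # The ED CSVs encode commas inside utterances as "_comma_"; restore them.
    firsts["utterance"] = firsts["utterance"].str.replace("_comma_", ",")
    return firsts[["conv_id", "prompt", "utterance"]]

eval_set = first_utterances("empatheticdialogues/test.csv")
print(len(eval_set))  # ~2,545 conversations in the test partition
```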
### _Models_
#### II-B1 Large Language Models (LLMs)

* **Generative Pre-trained Transformer 3.5-Turbo (GPT-3.5)**: GPT-3.5 is an LLM from OpenAI, descended from the 175-billion-parameter GPT-3 and trained on a large corpus of text from the internet [18].
* **Generative Pre-trained Transformer 4 (GPT-4)**: GPT-4 is the latest iteration of the GPT series from OpenAI. Its intended improvements, scale, and exact capabilities have not yet been fully disclosed.
* **VicunaT5**: Vicuna FastChat-T5 is a chatbot fine-tuned on 70,000 user-shared conversations [22].
* **PaLM2**: Pathways Language Model (PaLM) 2 is a recent LLM developed by Google. We use the chat-bison-001 variant of the PaLM model, since it has been optimized for conversations [23].
* **Falcon7I**: Falcon-7B-Instruct is a model based on the Falcon-7B LLM, fine-tuned on a mixture of chat and instruction datasets [24].
We prompted each LLM to elicit empathetic responses using the text prompt "This experiment requires you to continue the conversation with a user. The user is confiding in you on a personal matter. Listen with empathy. Avoid coming off as judgemental or apathetic".
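As an illustration, the sketch below shows how such a query could be issued for the GPT models through the pre-1.0 `openai` Python client. The remaining models were accessed through their own APIs or checkpoints, and the decoding temperature here is an assumption, as the paper does not report decoding settings.

```python
import openai  # pre-1.0 client; openai.api_key must be set beforehand

SYSTEM_PROMPT = (
    "This experiment requires you to continue the conversation with a user. "
    "The user is confiding in you on a personal matter. Listen with empathy. "
    "Avoid coming off as judgemental or apathetic"
)

def respond(utterance: str, model: str = "gpt-3.5-turbo") -> str:
    """Return the model's empathetic continuation for a single ED utterance."""
    completion = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": utterance},
        ],
        temperature=0.7,  # assumed; not reported in the paper
    )
    return completion.choices[0].message["content"]
```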
#### II-B2 Empathetic Conversational Systems (ECS)
* **Knowledge Bridging for Empathetic Dialogue Generation (KEMP)**: KEMP is an external knowledge-driven empathetic dialogue system that uses information from knowledge graphs and emotion lexicons to encode the dialogue history. An attention-based decoder then generates the response, conditioned on the encoded content [14].
* **Focused Empathy (FE)**: Inspired by the Rational Speech Acts framework [25], FE is an empathetic dialogue system that reasons about the emotional state of its user before generating a response conditioned on both the perceived emotional state and the user's beliefs and perceptions of the response [13].
* **Cognitive Affective Behaviour (CAB)**: CAB is a variant of a deep probabilistic generative model. It comprises multiple modules, each designed to infer cognitive, behavioural, or affective information from a given piece of text [26].
We fine-tuned each ECS model on the ED training dataset using the code provided by the respective authors.
#### II-B3 Baselines
* **Human** : Original human responses from the ED dataset [16]. Even though these are actual human responses, we will refer to this baseline as a 'human' model for the sake of reference.
* **ED-Retrieval**: The baseline model published in the ED dataset paper [16]. In this model, transformer-based networks encode the dialogue history and a set of candidate responses. The candidate whose encoded state has the greatest dot product with the dialogue history is subsequently chosen as the model's response. Similar to ECSs, ED-Retrieval was fine-tuned on the ED training set using the code provided.
### _Experimental Setup_
Each model responded to the first utterance of each conversation in ED's test dataset (_n_ = 2,545). A sample scenario from the ED dataset with model responses is provided in Table I. Responses were subsequently evaluated using three metrics that were designed to measure the empathetic ability of support providers in online forums [21]. The first metric codes for the presence of linguistic markers indicative of a support provider's attempt to address the emotional concerns of the person in distress (**'Emotional Reactions'**). The second metric codes for linguistic markers suggestive of a support provider's attempt to restate the presenting problems of the person in distress (**'Interpretations'**). The final metric codes for linguistic markers that highlight the support provider's attempt to dive deeper into the topics that the person in distress presents (**'Exploration'**). These metrics take on three discrete labels that denote the strength of the respective signal in a given piece of text (none, weak, strong). Three GPT-3 models were fine-tuned on the dataset provided by the original authors to classify text with respect to each metric [21] (see Table II for positive/negative responses from the original dataset).
Since our primary interest lay not in discerning the degrees of 'weak' and 'strong', but rather in determining the presence or absence of the outcome, we consolidated the 'weak' and 'strong' groups into a single unified category. This effectively transformed our dependent variable into a binary logistic format, allowing us to focus on the critical distinction: the absence ('none') vs. the presence ('weak' or 'strong') of a particular empathy metric across different groups.
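A minimal sketch of this binarization step, assuming the classifier labels are stored as strings in a pandas DataFrame with one (illustratively named) column per metric:

```python
import pandas as pd

# Toy labels standing in for the fine-tuned GPT-3 classifier outputs.
scores = pd.DataFrame({
    "emotional_reaction": ["none", "weak", "strong", "none"],
    "interpretation":     ["weak", "none", "none", "strong"],
    "exploration":        ["none", "none", "weak", "weak"],
})

# 'weak' and 'strong' both count as presence of the empathy signal.
binary = (scores != "none").astype(int)
print(binary.mean())  # proportion of responses exhibiting each feature
```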
### _Statistical Analysis_
We examined group-level differences across conversational contexts by grouping each conversation based on the "prompt" that was assigned to participant P1. Notably, we categorized each "prompt" as conveying either positive, negative, or ambiguous sentiment (Table II). This new sentiment variable enables us to observe how responses from each model type differ across the sentiment undertones of the conversation. Separate logistic mixed models were fitted to each empathy metric, using model type, sentiment, and their interaction as predictors. We also included a random intercept for each model to account for its idiosyncratic effects on the scores for each empathy dimension.
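The paper does not name its statistical software; as one concrete (assumed) realization, statsmodels' variational-Bayes binomial mixed GLM can fit a logistic model with the fixed effects, their interaction, and a random intercept per model:

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Illustrative columns: a binary metric score, the model's group
# (baseline/ECS/LLM), the prompt sentiment, and the model's name.
df = pd.read_csv("responses_scored.csv")

model = BinomialBayesMixedGLM.from_formula(
    "emotional_reaction ~ group * sentiment",  # fixed effects and interaction
    {"model": "0 + C(model)"},                 # random intercept per model
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```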
## III Results
### _Descriptive Statistics_
The following section characterizes the nature of responses by each model according to each empathy metric
(Table III). For the Emotional Reaction metric, VicunaT5 from the LLM group has the greatest proportion of responses perceived to contain features aimed at catering to the emotions of the other interlocutor (0.682), followed by PaLM2 (0.585) and Falcon7I (0.548).
The scores for the other LLMs were not far behind. Most notably, all LLMs outperformed the models in the other two groups. On the other hand, the 'Human' model from the baseline group had the lowest score of 0.140, signifying less effective emotional resonance.
For the Interpretation metric, GPT-4 (0.822) performed the best, while PaLM2 (0.217) performed the worst amongst all models from the three groups. The second- and third-best performances came from GPT-3.5 (0.623) and VicunaT5 (0.609), while PaLM2 and Falcon7I (0.268) had the lowest scores. These findings suggest substantial within-group variation in the LLM group in the tendency for inquiry and reinterpretation of the other interlocutor's concerns. In the baseline group, the human baseline (0.596) was better than all the models in the ECS group.
Models from the LLM group, namely PaLM2 (0.532) and GPT-3.5 (0.205), were respectively the highest- and lowest-scoring models on the Exploration metric. The second-best LLM was Falcon7I (0.417). Unlike for the Emotional Reaction metric, there is visible variance in the scores of the different LLMs; in fact, the two GPT models have the lowest scores across all the models in the three groups. Apart from these two LLMs, the other LLMs have higher scores than the models in the other two groups.
### _Do models from different groups differ in their empathetic response capabilities?_
We report the results of our statistical model of each empathy metric in the following sections. Tables IV, V, and VI display the odds ratio for each predictor of the mixed effects models. In the context of the current study, the odds ratio compares the odds that a response from a model in a particular group obtains a positive score on any given metric to the odds for the baseline models. An odds ratio higher than 1 denotes that the model type of interest is more likely than the baseline to obtain a positive score on the metric, while an odds ratio lower than 1 implies otherwise.
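As a reminder of where these numbers come from: a logistic model's coefficients live on the log-odds scale, so exponentiating a coefficient (and its interval bounds) yields the odds ratio. The values below are illustrative, chosen to reproduce the LLM-vs-baseline entry for Emotional Reaction.

```python
import numpy as np

beta, lo, hi = 1.085, 0.315, 1.856   # log-odds estimate and 95% CI bounds
print(np.exp([beta, lo, hi]))        # -> approximately [2.96, 1.37, 6.40]
```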
Overall, the marginal \(R^{2}\) values for the linear mixed effect models are .206, .054, and .178 for the Emotional Reaction, Interpretation, and Exploration metrics, respectively. This indicates that, at best, 20.6% of the variance in the score on each metric can be explained by sentiment and model type, along with their interaction.
Intraclass correlations, which measure the ratio of between-group variance to the total variance, are .06, .04, and .16 for the Emotional Reaction, Interpretation, and Exploration metrics, respectively. This indicates a substantial amount of within-group variance in the models' performance on each metric.
#### III-B1 Emotional Reaction (Table IV)
Results from the linear mixed effect model indicated that LLMs generated significantly more responses that catered to the user's emotions than the baseline models (odds ratio: 2.96 [1.37 - 6.4], p < .005). There was no statistically significant difference between the responses from ECSs and the baseline models (odds ratio: 1.68 [.73 - 3.9], p = .225).
The coefficients for the interaction between model type and sentiment revealed interesting trends. Although most factors were statistically insignificant, we observe that the interaction between LLMs and prompts in the negative sentiment condition has a significant effect on the Emotional Reaction metric (odds ratio: 3.01 [2.32 - 3.90], p < .001). This finding suggests that LLMs' responses were more likely to be rated as containing Emotional Reactions when they replied to negative sentiment messages than the baseline models' responses.
#### III-B2 Interpretation (Table V)
Results from the linear mixed effect model indicated that model type did not significantly influence scores on the Interpretation metric. The odds ratios for both the ECS (odds ratio: .73 [.19 - 2.86], p = .65) and LLM (odds ratio: 1.01 [.29 - 3.56], p = .987) model types did not statistically differ from 1.
The coefficients for the interaction between model type and sentiment revealed that the interaction between LLMs and prompts in both the negative (odds ratio: .79 [.67 - .94], p < .05) and positive (odds ratio: .82 [.69 - .98], p < .05) sentiment conditions has a significant effect on the Interpretation metric. This finding suggests that LLMs were significantly less likely than the baseline models to interpret the meaning behind a message when it conveyed a negative sentiment, with the opposite being true for messages that conveyed a positive sentiment.
#### III-B3 Exploration (Table VI)
Results from the linear mixed effect model indicated that model type did not significantly influence scores on the Exploration metric. The odds ratios for both the ECS (odds ratio: 1.13 [0.56 - 2.28], p = 0.725) and LLM (odds ratio: 1.01 [.29 - 3.56], p = .987) model types did not statistically differ from 1.
The coefficients for the interaction between model type and sentiment were non-significant, with the exception of the interaction between LLMs and prompts in the negative sentiment condition (odds ratio: 1.32 [1.06 - 1.66], p < 0.05). This finding suggests that LLMs were significantly more likely than the baseline models to explore topics beyond the content of the immediate post when it conveyed a negative sentiment.
## IV Discussion
This study conducted a comprehensive evaluation of several automated language generation models, including LLMs and traditional response generation conversational models, focusing on their ability to produce empathetic responses. Overall, we found partial albeit promising support for our hypothesis: LLMs were significantly better at producing responses that signaled an attempt at catering to the feelings expressed by the user in their prompts than ECS models or our human-level baselines.
On the Interpretation metric, LLMs produced better responses for positive emotion classes than for negative ones. This result is worth highlighting, given the prominence of negative emotions in mental health scenarios. Surprisingly, this is the only metric where the performance of the baseline group, which comprised human responses, was comparable to those from the models in the LLM and ECS groups.
The human baseline, which comprised original responses from ED, demonstrated the worst performance for Emotional Reaction and Exploration metrics. This reflects our initial position concerning dataset quality and the downstream consequence it has on developing empathetic AI agents. Nonetheless, our findings offer evidence for the viability of a less "data-hungry" approach in light of the current dataset limitations [20]. Here, the key takeaway is that pre-trained LLMs already possess a nuanced text representation that can be easily adapted to most downstream tasks.
From these results, LLMs, as a result of their exposure to wide-ranging and complex training data, might be better poised for application in mental health care settings, where adaptability, nuanced understanding, and empathetic response generation are paramount. Conversely, ECS models, while displaying balanced performance, do not outperform LLMs, possibly due to limitations or specificities in the scope of their training data. We believe that ECS models will be replaced by LLMs in the near future, since LLMs are able to produce decent results with simple prompts, and techniques such as prompt engineering and fine-tuning can further improve their performance.
Our study's LLM results differ from existing studies that demonstrated the superiority of GPT-4 and GPT-3.5 over other commercial and open-source LLMs across a wide range of tasks [22]. We opine that the GPT LLMs could potentially produce better results with differently framed prompts and different sets of evaluation metrics. Nonetheless, these findings imply potential variance in the capabilities of LLMs, despite them
Fig. 1: Average proportion of responses in each model type with empathetic features. Scores are grouped by sentiment. Top panel: Emotional Reaction. Middle panel: Interpretation. Bottom panel: Exploration.
being trained with similar methods and expansive data sets. LLMs' performance could be further improved with more detailed prompts and fine-tuning on relevant datasets.
Although the current analysis favors LLMs for potential application in mental health settings, it is imperative to acknowledge that the real-world implementation might carve out a different trajectory dictated by actual patient interactions and personalized responses. Secondly, the evaluation metrics used in this study are limited by the training dataset used for the three corresponding classifier models. A user-based evaluation could bring forth vastly different results. Nevertheless, clinicians and mental health workers will be able to embed personalized data and influence the responses generated by the LLMs to a great extent by adopting tools and systems which are based on retrieval-augmented generation (RAG) [27], a method for using LLMs on top of local data.
## V Conclusion
Our analysis provides a preliminary basis for understanding the performance of LLMs against traditional response generation models and human baselines within empathy-driven contexts. These insights underscore the importance of dataset diversity and interpretative sensitivity for AI models to function optimally within mental health care settings, thus providing an avenue for targeting future improvements in AI conversational models. In our future studies, we intend to pursue two directions. First, we will evaluate the performance of LLMs as assistive agents that help counsellors who moderate online mental health help forums. Second, we will include more open-source LLMs in the analysis and plan experiments leveraging different types of prompts and fine-tuning approaches in order to attain further improvements in the LLMs' performance.
|
2302.07135 | Fast-MC-PET: A Novel Deep Learning-aided Motion Correction and
Reconstruction Framework for Accelerated PET | Patient motion during PET is inevitable. Its long acquisition time not only
increases the motion and the associated artifacts but also the patient's
discomfort, thus PET acceleration is desirable. However, accelerating PET
acquisition will result in reconstructed images with low SNR, and the image
quality will still be degraded by motion-induced artifacts. Most of the
previous PET motion correction methods are motion type specific that require
motion modeling, thus may fail when multiple types of motion present together.
Also, those methods are customized for standard long acquisition and could not
be directly applied to accelerated PET. To this end, modeling-free universal
motion correction reconstruction for accelerated PET is still highly
under-explored. In this work, we propose a novel deep learning-aided motion
correction and reconstruction framework for accelerated PET, called
Fast-MC-PET. Our framework consists of a universal motion correction (UMC) and
a short-to-long acquisition reconstruction (SL-Reon) module. The UMC enables
modeling-free motion correction by estimating quasi-continuous motion from
ultra-short frame reconstructions and using this information for
motion-compensated reconstruction. Then, the SL-Recon converts the accelerated
UMC image with low counts to a high-quality image with high counts for our
final reconstruction output. Our experimental results on human studies show
that our Fast-MC-PET can enable 7-fold acceleration and use only 2 minutes
acquisition to generate high-quality reconstruction images that
outperform/match previous motion correction reconstruction methods using
standard 15 minutes long acquisition data. | Bo Zhou, Yu-Jung Tsai, Jiazhen Zhang, Xueqi Guo, Huidong Xie, Xiongchao Chen, Tianshun Miao, Yihuan Lu, James S. Duncan, Chi Liu | 2023-02-14T16:58:47Z | http://arxiv.org/abs/2302.07135v1 | Fast-MC-PET: A Novel Deep Learning-aided Motion Correction and Reconstruction Framework for Accelerated PET
###### Abstract
Patient motion during PET is inevitable. Its long acquisition time not only increases the motion and the associated artifacts but also the patient's discomfort, thus PET acceleration is desirable. However, accelerating PET acquisition will result in reconstructed images with low SNR, and the image quality will still be degraded by motion-induced artifacts. Most of the previous PET motion correction methods are motion-type-specific and require motion modeling, and thus may fail when multiple types of motion are present together. Also, those methods are customized for the standard long acquisition and cannot be directly applied to accelerated PET. To this end, modeling-free universal motion correction reconstruction for accelerated PET is still highly under-explored. In this work, we propose a novel deep learning-aided motion correction and reconstruction framework for accelerated PET, called Fast-MC-PET. Our framework consists of a universal motion correction (UMC) module and a short-to-long acquisition reconstruction (SL-Recon) module. The UMC enables modeling-free motion correction by estimating quasi-continuous motion from ultra-short frame reconstructions and using this information for motion-compensated reconstruction. Then, the SL-Recon converts the accelerated UMC image with low counts to a high-quality image with high counts for our final reconstruction output. Our experimental results on human studies show that our Fast-MC-PET can enable 7-fold acceleration and use only 2 minutes of acquisition to generate high-quality reconstruction images that outperform/match previous motion correction reconstruction methods using standard 15-minute long acquisition data.
Keywords: Accelerated PET, Universal Motion Correction, Deep Reconstruction.
## 1 Introduction
Positron Emission Tomography (PET) is a commonly used functional imaging modality with wide applications in oncology, cardiology, neurology, and biomedical research. However, patient motion during the PET scan, including both involuntary motions (i.e. respiratory, cardiac, and bowel motions) and voluntary motions (i.e. body and head motions), can lead to significant motion artifacts, degrading the downstream clinical tasks. Moreover, the long acquisition time, which easily exceeds 15 minutes, leads to increased patient motion, patient discomfort, and low patient throughput.
In previous works on PET motion correction (MC), a variety of external device-aided and data-driven MC methods have been developed for correcting specific motion types. For example, in respiratory MC, Chan _et al._[4] developed a non-rigid event-by-event continuous MC list-mode reconstruction method. Lu _et al._[12] further improved their method by generating matched attenuation-corrected gated PET for respiratory motion estimation. In body MC, Andersson _et al._[1] proposed to divide the PET list-mode data into predefined temporal frames for reconstruction, where the reconstructions of each frame are registered to a reference frame for body MC. Later, Lu _et al._[13] further developed a reconstruction-free center-of-distribution-based body motion detection and correction method. In cardiac MC, cardiac cycle tracking/gating using electrocardiogram (ECG) is still the gold standard [14]. While providing efficient MC solutions to reduce motion artifacts for different motion types, these methods usually require prior knowledge of the motion type and need motion-type-specific modeling. Thus, these previous MC methods may lead to sub-optimal image quality or fail when multiple motion types are present simultaneously. There have also been recent attempts to use ultra-fast list-mode reconstruction of short PET frames to estimate motion during the PET scan [20, 23]. However, these methods may not adapt well to motion types with non-rigid motion [20], and extending them to non-rigid motion is computationally infeasible, i.e. it requires non-rigid registration of thousands of frames for a single scan using traditional registration algorithms [23]. In addition, they still require the standard long acquisition to collect sufficient events to achieve a reasonable signal-to-noise ratio (SNR) in the final reconstruction. On the other hand, previous works have also investigated the feasibility of reducing the PET acquisition time. Lindemann _et al._[11] and Lasnon _et al._[10] found that one can reasonably maintain the PET image quality and lesion detectability with a two-fold acquisition time reduction using traditional reconstructions. Weyts _et al._[21] showed that a deep learning-based denoising model can enable a two-fold PET acquisition time reduction and provide image quality that matches the full acquisition. However, these works only showed the feasibility of a 2-fold time reduction and did not consider the residual motion during the accelerated acquisition.
In this work, we aim to address these challenges by developing a PET reconstruction framework that can 1) reduce the acquisition time, i.e. 7-fold acceleration, and 2) correct the residual motion, regardless of the motion type, in the accelerated acquisition. Specifically, we propose a novel deep learning-aided data-driven motion reduction and accelerated PET reconstruction framework, called Fast-MC-PET. In the Fast-MC-PET, we first design a universal motion correction method aided by deep learning to reconstruct a motion-reduced image from the short acquisition. While our motion correction reduces the motion artifacts in the accelerated acquisition, the reconstructed image still suffers from high noise levels due to low event counts. Thus, in the second step of Fast-MC-PET, we also deploy a deep generative network to convert the low-count images to high-count images. Our experimental results on real human data demonstrate that the Fast-MC-PET can generate high-quality images with reduced motion-induced errors while enabling 7-fold accelerated PET acquisition.
## 2 Methods
Our Fast-MC-PET consists of two key components: a universal motion correction (UMC) module and a short-to-long acquisition reconstruction (SL-Recon) module. In UMC, we first partition the list-mode data into ultra-short list-mode segments, i.e. every 500 ms, and estimate a quasi-continuous motion over the short acquisition. Given the motion and the original list-mode data, a motion-corrected short-acquisition image is then reconstructed by a motion-compensated OSEM list-mode reconstruction. Finally, a deep generative model is devised to transform the motion-corrected short-acquisition image into a high-count long-acquisition image, thus providing a motion-corrected high-count image using only the accelerated short acquisition. In the following sections, we describe these steps in detail.
### Universal Motion Correction
With the short acquisition data, the UMC aims to generate a motion-reduced low-count reconstruction. The UMC consists of three steps: point cloud image (PCI) & paired gated image generation, quasi-continuous motion estimation, and motion-compensated OSEM list-mode reconstruction.
**Point Cloud & Paired Gated Image Generation.** To estimate a continuous motion, the list-mode data is first partitioned into a series of ultra-short segments, i.e. every 500 ms. For each 500 ms segment of list-mode data, we back-project the Line-of-Response (LOR) of each event within its time-of-flight (TOF) bin, and all the back-projected LORs form a PCI for this short time frame. The PCI reconstruction can be formulated as
\[P_{j,t}=\sum_{i}\frac{c_{i,j,t}L_{i,t}}{Q_{j}}, \tag{1}\]
where \(c_{i,j,t}\) is the system matrix element that represents the contribution of an annihilation originating from pixel \(j\) being detected on LOR \(i\) at time \(t\), accounting for geometry, resolution, and solid angle effects. \(L_{i,t}\) is the decay correction factor, \(Q_{j}\) is the sensitivity of voxel \(j\), pre-computed via \(Q_{j}=\sum_{i}c_{i,j}\), and \(P_{j,t}\) is the back-projected value of voxel \(j\) at time \(t\) with sensitivity correction.
Due to the ultra-low-counts level, the signal-to-noise ratio (SNR) of PCI is low and unsuitable for motion estimation tasks, as demonstrated in the first row of Figure 2. Thus, we deploy a deep learning-based denoising network, i.e. a UNet [18], that aims to convert PCIs to gated OSEM images with high SNR. To train the denoising network, we first reconstruct the amplitude-based respiratory-gated OSEM images [17] using the body-motion-free list-mode data, extracted by the Centroid-Of-Distribution (COD)-based body motion detection method [13]. Then, within each gate, we randomly extract 10% of the PCIs to construct training pairs of PCI and the corresponding gated image. The \(\mathcal{L}_{2}\) loss is used for the network training, and can be formulated as
\[\mathcal{L}_{dn}=||\gamma_{g}-f_{dn}(P_{g})||_{2}^{2} \tag{2}\]
where \(\gamma_{g}\) is the gated OSEM image and \(P_{g}\) is a randomly extracted PCI that lies in the same gate. With a trained denoising model \(f_{dn}(\cdot)\), the series of PCIs can then be converted to a series of high-quality denoised PCIs (dPCIs) via:
\[\gamma_{t}=f_{dn}(P_{t}) \tag{3}\]
where \(\gamma_{t}\) are the denoised images with \(t=(0\sim\Delta t,\Delta t\sim 2\Delta t,...,T-\Delta t\sim T)\). Here, we set \(\Delta t=0.5\,s\) and \(T=120\,s\), thus generating 240 3D images. Examples of dPCIs are illustrated in the second row of Figure 2.
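As a concrete sketch, this denoising step reduces to standard supervised image-to-image regression. In the snippet below, `f_dn` stands for any 3D UNet and `loader` for any iterator over paired (PCI, gated OSEM) volumes; neither name comes from the paper, and the loop is an illustration of Eqs. (2)-(3), not the authors' code.

```python
import torch
import torch.nn as nn

# Minimal sketch of the PCI denoising training of Eq. (2); a 3D UNet `f_dn`
# maps a low-SNR PCI P_g to the gated OSEM image gamma_g of the same gate.
def train_denoiser(f_dn: nn.Module, loader, epochs: int = 200, lr: float = 1e-4):
    opt = torch.optim.Adam(f_dn.parameters(), lr=lr)
    mse = nn.MSELoss()  # the L2 loss of Eq. (2)
    for _ in range(epochs):
        for pci, gated in loader:  # tensors of shape [batch, 1, D, H, W]
            opt.zero_grad()
            loss = mse(f_dn(pci), gated)
            loss.backward()
            opt.step()
    return f_dn

# Inference, Eq. (3): denoise every 500 ms PCI frame (240 frames for T = 120 s)
# dpcis = [f_dn(p) for p in pcis]
```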
Figure 1: The overall pipeline of Fast-MC-PET. The Universal Motion Correction (UMC) module (grey box) reconstructs a motion-reduced image from the short acquisition data. The Short-to-Long Acquisition Reconstruction (SL-Recon) module (pink box) converts the UMC image from short acquisition to long acquisition.

**Quasi-continuous Motion Estimation.** A quasi-continuous motion can be estimated using the series of dPCIs from the previous step. Within the first 5 seconds, the dPCI in the expiration phase, i.e. the one with the highest COD coordinate in the z-direction, is chosen as the reference frame \(\gamma_{ref}\) for all the other frames \(\gamma_{t}\), resulting in 239 dPCI pairs requiring registration. Conventional registration methods [22, 16] are time-consuming, and it is prohibitively long to register hundreds of 3D pairs here. Thus, we propose to use a deep learning-based registration method for fast motion estimation [3] in our framework. Given the reference dPCI image \(\gamma_{ref}\) and the source dPCI image \(\gamma_{t}\), we use a motion estimation network, i.e. a UNet [18], to predict the motion deformation \(M_{t}=f_{m}(\gamma_{ref},\gamma_{t})\). The network is trained by optimizing the following loss function:
\[\mathcal{L}_{m}=||\gamma_{ref}-M_{t}\circ\gamma_{t}||_{2}^{2}+\beta||\nabla M_ {t}||_{2}^{2} \tag{4}\]
where the first term measures the image similarity after applying the motion prediction \(M_{t}\), and the second term is a deformation regularization that adopts an L2-norm of the gradient of the deformation. The regularization weight is set to \(\beta=0.001\). During training, \(\gamma_{ref}\) and \(\gamma_{t}\) are randomly selected from the gated images. With a trained motion estimation network \(f_{m}(\cdot)\), we can then estimate the quasi-continuous motion using \(M_{t}=f_{m}(\gamma_{ref},\gamma_{t})\) with \(t=(0\sim\Delta t,\Delta t\sim 2\Delta t,...,T-\Delta t\sim T)\).
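A minimal sketch of this registration loss is given below, assuming the predicted deformation \(M_{t}\) is a dense voxel-displacement field applied by trilinear resampling (a common choice in deep registration; the paper does not specify the warping details).

```python
import torch
import torch.nn.functional as F

def warp3d(vol, disp):
    """Apply a displacement field `disp` (B,3,D,H,W, in voxels, channels x,y,z)
    to `vol` (B,1,D,H,W); one way to realize M_t applied to gamma_t."""
    B, _, D, H, W = vol.shape
    zs, ys, xs = (torch.linspace(-1, 1, n, device=vol.device) for n in (D, H, W))
    zz, yy, xx = torch.meshgrid(zs, ys, xs, indexing="ij")
    base = torch.stack((xx, yy, zz), dim=-1)  # (D,H,W,3), grid_sample order
    scale = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1), 2.0 / (D - 1)],
                         device=vol.device)
    grid = base.unsqueeze(0) + disp.permute(0, 2, 3, 4, 1) * scale
    return F.grid_sample(vol, grid, align_corners=True)

def motion_loss(gamma_ref, gamma_t, disp, beta=1e-3):
    """Eq. (4): image similarity plus L2 smoothness of the deformation."""
    sim = ((gamma_ref - warp3d(gamma_t, disp)) ** 2).mean()
    dz = disp[:, :, 1:] - disp[:, :, :-1]
    dy = disp[:, :, :, 1:] - disp[:, :, :, :-1]
    dx = disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]
    smooth = (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()
    return sim + beta * smooth
```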
**Motion-compensated OSEM List-mode Reconstruction.** To reconstruct a single image \(\lambda\) at the reference location \(\gamma_{ref}\) using all the coincidence events, we can deform the system matrix at each time \(t\) to the reference location, generating new deformed system matrices \(c_{i,j}^{t\to ref}\) using \(M_{t}\) from the previous step. Deforming the system matrix can be seen as "bending" the LORs into curves of response (CORs), where both forward and back-projections are traced along the CORs. In list-mode notation, for event \(k\) occurring on LOR \(i(k)\) at time \(t(k)\), we replace the index \(i\) by \(k\), and substitute \(c_{k,j}\) in the previous TOF-MOLAR [8] by \(c_{k,j,\tau_{k}}^{t\to ref}\). The OSEM updating equation can thus be formulated as:
\[\lambda_{j}^{n+1}=\frac{\lambda_{j}^{n}}{Q_{j}}\sum_{k=1}^{K}\frac{c_{k,j, \tau_{k}}^{t\to ref}L_{k}A_{k}N_{k}}{T(\sum_{j^{\prime}}c_{k,j^{\prime},\tau_ {k}}^{t\to ref}L_{k}A_{k}N_{k}\lambda_{j^{\prime}}^{n}+R_{k,\tau_{k}}+S_{k, \tau_{k}})} \tag{5}\]
\[Q_{j}=\frac{1}{n_{T}}\sum_{t^{\prime}=1}^{n_{T}}\sum_{i=1}^{I}\sum_{\tau=1}^{ n_{\tau}}c_{i,j,t^{\prime},\tau}^{t\to ref}L_{i,t^{\prime}}A_{i,t^{\prime}}N_{i} \tag{6}\]
where \(n\) is the iteration number, \(k\) is the index of each detected event, and \(c_{k,j,\tau_{k}}^{t\to ref}\) is the deformed system matrix element with \(\tau_{k}\) denoting the TOF bin for event \(k\). \(L_{k}\) is the decay factor and \(A_{k}\) is the attenuation factor derived from CT. \(N_{k}\) is the sensitivity term, \(R_{k,\tau_{k}}\) is the randoms rate estimate, and \(S_{k,\tau_{k}}\) is the scatter rate estimate in counts per second in TOF bin \(\tau_{k}\). The random events are estimated from the product of the singles rates of the two detectors for each LOR, and then uniformly distributed across all TOF bins. Here, \(Q\) is the sensitivity image that is pre-computed by back-projecting randomly sampled events along the CORs to account for the effect of motion on voxel sensitivity. When calculating \(Q\), each time frame of duration \(T\) is divided into \(n_{T}\) short time bins, i.e. \(t^{\prime}\). Moreover, \(n_{\tau}\) denotes the total number of TOF bins (\(n_{\tau}=13\) for the Siemens mCT PET scanner used in this study). Here, we set the number of iterations to 2 and the number of subsets to 21 for our UMC reconstructions.
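The full list-mode TOF implementation is beyond a short snippet, but the multiplicative structure of the update in Eq. (5) can be illustrated with a toy dense-matrix MLEM, shown below. Here `C` plays the role of the (already motion-deformed) system matrix and `b` the additive randoms-plus-scatter term; OSEM would apply the same update to subsets of the rows of `C`. This is an illustration under those simplifying assumptions, not the authors' reconstruction code.

```python
import numpy as np

def mlem(C, y, b, n_iter=10):
    """Toy MLEM illustrating Eq. (5): C is a (n_lors, n_voxels) system matrix
    (deformed along the CORs in the paper), y the measured counts per LOR,
    b the randoms + scatter estimate per LOR."""
    lam = np.ones(C.shape[1])           # initial image lambda^0
    Q = C.sum(axis=0)                   # sensitivity image, Q_j = sum_i c_ij
    for _ in range(n_iter):
        proj = C @ lam + b              # forward projection plus background
        lam *= (C.T @ (y / proj)) / Q   # back-projected ratio, normalized
    return lam
```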
### Short-to-Long Acquisition Reconstruction
Even though the UMC reduces the motion effects in the reconstruction, the UMC image still suffers from low SNR due to the limited counts from the short acquisition, as compared to the long acquisition. Thus, we propose to use a short-to-long acquisition reconstruction (SL-Recon) to convert the UMC image from a short-acquisition to a long-acquisition one. Here, we use a conditional generative adversarial network for this reconstruction. Given a UMC image \(\lambda_{s}\) from the short acquisition, we use a generative network, i.e. a UNet [18], that directly predicts from it the UMC image \(\lambda_{l}\) of a long acquisition. The SL-Recon network is trained using both a pixel-wise L2 loss and an adversarial loss, defined as:
\[\mathcal{L}_{2}=||G(\lambda_{s})-\lambda_{l}||_{2}^{2} \tag{7}\]
\[\mathcal{L}_{adv}=-log(D_{gan}(\lambda_{l}|\lambda_{s}))-log(1-D_{gan}(G( \lambda_{s})|\lambda_{s})) \tag{8}\]
where \(G\) is the SL-Recon generative network and \(D_{gan}\) is the discriminator network. Here, we simply use OSEM reconstructions from long acquisitions (15 minutes), paired with OSEM reconstructions from short acquisitions (2 minutes in the center period), for the network's training.
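A hypothetical sketch of how Eqs. (7)-(8) can be assembled in PyTorch is shown below. The conditioning of the discriminator on \(\lambda_{s}\) is realized here by channel concatenation, and the relative weight of the adversarial term (`adv_weight`) is an assumption, since the paper does not report it.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()
mse = nn.MSELoss()

def sl_recon_losses(G, D_gan, lam_s, lam_l, adv_weight=0.01):
    """Sketch of Eqs. (7)-(8); G is the generator (UNet), D_gan a conditional
    discriminator with sigmoid output. `adv_weight` is an assumed balance
    factor, not taken from the paper."""
    fake = G(lam_s)
    # Discriminator loss, Eq. (8): real pair vs. generated pair
    d_real = D_gan(torch.cat([lam_l, lam_s], dim=1))
    d_fake = D_gan(torch.cat([fake.detach(), lam_s], dim=1))
    loss_D = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    # Generator loss: pixel-wise L2 of Eq. (7) plus the adversarial term
    g_fake = D_gan(torch.cat([fake, lam_s], dim=1))
    loss_G = mse(fake, lam_l) + adv_weight * bce(g_fake, torch.ones_like(g_fake))
    return loss_G, loss_D
```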
### Evaluation on Human Data
We included 26 pancreatic \({}^{18}\)F-FPDTBZ [15] PET/CT patient studies. All PET data were obtained in list mode using the 4-ring Siemens Biograph mCT scanners equipped with the AZ-733V respiratory gating system (Anzai Medical, Tokyo, Japan). The Anzai respiratory trace was recorded at 40 Hz for all subjects. The average dose administered to the patients was 9.13\(\pm\)1.37 mCi. 15 minutes of the list-mode acquisition were used for each patient study. We used 23 patients to generate the training data for the PCI denoising model, the motion estimation model, and the SL-Recon model. Extensive evaluations were performed on the remaining 3 patients with different motion types. For training the PCI denoising model and the motion estimation model, we generated 5 gated images for each patient using OSEM (21 subsets and 2 iterations). For training the SL-Recon model, the training pairs of long/short acquisition images were reconstructed using the same OSEM protocol without gating. All the images were reconstructed into \(200\times 200\times 109\) 3D volumes with a voxel size of \(2.032\times 2.032\times 2.027\ mm^{3}\).

Figure 2: Examples of the Point Cloud Images (PCIs), the denoised PCIs (dPCIs), and the deformed dPCIs using the estimated motion fields.
### Implementation Details
We implemented our deep learning modules using PyTorch. We used the ADAM optimizer [9] with a learning rate of \(10^{-4}\) for training the PCI denoising network, the motion estimation network, and the SL-Recon network. We set the batch size to 3 for the training of all networks. All of our models were trained on an NVIDIA Quadro RTX 8000 GPU. The PCI denoising network was trained for 200 epochs, and then fine-tuned for 10 epochs on the patient-specific gated images of the test patient during test time. The motion estimation network was trained for 250 epochs, and the SL-Recon network was trained for 200 epochs. To prevent overfitting, we also implemented 'on-the-fly' data augmentation for the PCI denoising and SL-Recon networks. During training, we performed \(64\times 64\times 64\) random cropping, and then randomly flipped the cropped volumes along the x, y, and z axes.
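The 'on-the-fly' augmentation described above is straightforward; a minimal sketch (assuming volumes stored as `(D, H, W)` tensors) could look as follows.

```python
import torch

def augment(vol_in, vol_target, crop=64):
    """Random 64^3 crop plus random flips along x, y, z, applied identically
    to the input and target volumes; a sketch, not the authors' code."""
    D, H, W = vol_in.shape
    z = torch.randint(0, D - crop + 1, (1,)).item()
    y = torch.randint(0, H - crop + 1, (1,)).item()
    x = torch.randint(0, W - crop + 1, (1,)).item()
    a = vol_in[z:z + crop, y:y + crop, x:x + crop]
    b = vol_target[z:z + crop, y:y + crop, x:x + crop]
    for axis in (0, 1, 2):
        if torch.rand(1).item() < 0.5:
            a = torch.flip(a, dims=(axis,))
            b = torch.flip(b, dims=(axis,))
    return a, b
```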
## 3 Results
The qualitative comparison of Fast-MC-PET reconstructions is shown in Figure 3. As we can observe, the 2-minute reconstruction with no motion correction (NMC) suffers from both motion blurring and high noise levels due to low counts. The first patient has both body/torso motion and respiratory motion during the 2-minute PET scan, thus introducing heavy blurring of major organ boundaries, i.e. liver and kidneys. The 2-minute UMC image recovers the sharp organ boundaries by correcting those motions during the short acquisition. Based on the UMC image from the 2-minute acquisition, the final Fast-MC-PET image further reduces the noise, thus providing a nearly motion-free and high-count image, matching the 15-minute UMC image quality. The second patient, with respiratory and bowel motion, shows significant image blurring of the pancreas (view 1) and intestines (view 2). The 2-minute UMC image can recover the diminished details inside these organ regions. The final Fast-MC-PET image further reduces the noise, thus generating a high-quality image with motion correction and high counts. On the other hand, by reducing the acquisition time from 15 minutes to 2 minutes, we can see that the diminished organ structures, especially the intestine structure (view 2) in the 15-minute NMC image, can be partially restored in the 2-minute NMC image. Complex motion, e.g. bowel motion, in a 15-minute long acquisition is extremely challenging to correct even with UMC. Thus, based on the 2-minute acquisition, the Fast-MC-PET here shows better reconstruction quality with better structural recovery. Similar observations can be made for the third patient with respiratory and bowel motion, where the 2-minute-based Fast-MC-PET provides reconstruction quality matching the 15-minute UMC reconstruction.
We compared our 2-minute-based Fast-MC-PET reconstructions to previous correction methods that are based on long acquisitions, i.e. 15 minutes. The visual comparison is shown in Figure 4. First, we compared with the classic respiratory motion correction method [2] that reduces motion and noise by averaging the aligned amplitude-gated images, where non-rigid registration [16] is used for alignment. Then, we compared our method with NR-INTEX [4], which compensates for respiratory motion by estimating the continuous deformation field using internal-external motion correlation, and which is considered the current state-of-the-art method. Both previous methods require motion-type-specific modeling, and thus fail when additional motion types are present, e.g. body motion (Patient 1) and bowel motion (Patient 3). The UMC module in the Fast-MC-PET is not specific to any motion type and thus can correct different types of motion together. Therefore, our Fast-MC-PET can provide consistently better results when multiple types of motion co-exist (Patients 1 and 3), and generate comparable reconstruction quality when respiratory motion is dominant (Patient 2).

Figure 3: Visualization of Fast-MC-PET reconstructions. The 2-minute UMC images (2nd column) contain less motion blurring, as compared to the no motion correction (NMC) images (1st column). The virtual 15-minute UMC images (3rd column) predicted from the 2-minute UMC images (2nd column) provide image quality that matches the true 15-minute images (last column).
Figure 4: Comparisons to previous motion correction methods. Our Fast-MC-PET with 2-minute acquisition shows improved recovery of structural details (orange arrows), as compared to previous methods with 15-minute acquisition.

Figure 5: Comparison of the gradient of the reconstructions. Left: quantitative evaluation using the mean gradient value. Right: visual comparison of the reconstruction and the gradient.

For quantitative evaluation, we computed the mean normalized gradient of the reconstructions, where a better reconstruction with sharper structure will have higher gradient values. The results are summarized in Figure 5. The normalized gradient values of Fast-MC-PET are 0.159, 0.154, and 0.132 for Patients 1, 2, and 3, respectively, which are consistently higher than those of all previous methods. A comparison example from Patient 2 is shown on the right. The gradient image of the Fast-MC-PET using only a 2-minute acquisition shows higher gradient values and more continuous structure patterns when compared to previous methods based on 15-minute acquisitions.
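As a sketch, one plausible implementation of this sharpness metric is the mean magnitude of the spatial gradient of the intensity-normalized volume; the exact normalization used in the paper is not specified, so the min-max version below is an assumption.

```python
import numpy as np

def mean_normalized_gradient(vol):
    """Mean gradient magnitude of a 3D volume after min-max normalization;
    sharper reconstructions yield higher values."""
    v = (vol - vol.min()) / (vol.max() - vol.min() + 1e-12)
    gz, gy, gx = np.gradient(v)
    return np.sqrt(gx**2 + gy**2 + gz**2).mean()
```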
An ablative evaluation of the motion correction is shown in Figure 6. The difference of COD between the reference frame and the current frame (\(\Delta\)COD) over the 2-minute acquisition is visualized. For Patient 1, with body motion and an irregular breathing pattern, the \(\Delta\)COD curve before correction contains irregular steep changes, leading to a mean \(\Delta\)COD of \(0.141\pm 0.086\). With the UMC in our Fast-MC-PET, the curve after correction is much more stable, with a significantly reduced mean \(\Delta\)COD of \(0.031\pm 0.041\) (\(p<0.001\)). For Patients 2 and 3, with more stable and regular motion patterns, the UMC can also reduce the mean \(\Delta\)COD from \(0.135\pm 0.132\) to \(0.048\pm 0.059\) and from \(0.065\pm 0.048\) to \(0.028\pm 0.030\), respectively, both with significance (\(p<0.001\)). A patient example of PCIs over the 2-minute acquisition before and after applying the UMC correction is shown in Figure 2.
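For completeness, a sketch of the \(\Delta\)COD trace is given below: the COD is the intensity-weighted centroid of each frame, and \(\Delta\)COD its distance to the reference-frame COD. The units and the exact component tracked in the paper are details we do not reproduce here.

```python
import numpy as np

def cod(vol):
    """Center of distribution: intensity-weighted centroid of a 3D volume."""
    grids = np.indices(vol.shape)              # voxel coordinate grids
    w = vol.sum()
    return np.array([(g * vol).sum() / w for g in grids])

def delta_cod(frames, ref):
    """Distance between each frame's COD and the reference COD, per frame."""
    c_ref = cod(ref)
    return np.array([np.linalg.norm(cod(f) - c_ref) for f in frames])
```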
## 4 Discussion
In this work, we propose a novel deep learning-aided data-driven motion correction and reconstruction framework for accelerated PET (Fast-MC-PET). The proposed method can accelerate the PET acquisition by nearly 7-fold, using only a 2-minute acquisition while providing high-quality reconstruction with motion correction. In this framework, we first devise a UMC module that estimates continuous motion based on PCIs and uses this information to reconstruct motion-compensated images. Instead of using a 15-minute long acquisition that 1) inherits more motion due to the long scanning time and 2) requires registrations of 1800 PCI pairs in UMC, we use a 2-minute accelerated acquisition with less motion that only requires registrations of 240 PCI pairs. The average registration inference time for one pair is 0.41s, thus needing about 98.5s for all registrations in UMC, which is more manageable. The UMC reconstruction from the accelerated acquisition can then be input into the SL-Recon module to directly generate the 15-minute long-acquisition motion-corrected reconstruction. With this simple yet efficient pipeline, we can generate high-quality motion-corrected accelerated PET reconstructions that potentially outperform previous methods with the standard long acquisition.

Figure 6: The difference of the COD trace between the reference frame and the current frame (\(\Delta\)COD) over the 2-minute acquisition. The \(\Delta\)COD before (red) and after (blue) UMC correction are plotted for all three patients. The mean \(\Delta\)CODs are reported in the plots.
There are a few limitations and opportunities that are the subject of our ongoing work. First, our pilot study only tested on \({}^{18}\)F-FPDTBZ patients who were all scanned using the Siemens mCT. The trained model may not directly generalize well to a different PET tracer/scanner. However, if training data for different tracers/scanners are available, the Fast-MC-PET can be fine-tuned and potentially adapted to these distributions. Multi-institutional federated learning [24] may also be used to improve the adaptation. In the future, we will further evaluate the performance using patients scanned with different PET tracers/scanners. Second, we used a temporal resolution of 500 ms for the PCIs in UMC, with a focus on abdominal motion correction in this work. A higher temporal resolution, e.g. 100 ms, may be needed for cardiac motion correction in the chest region, which is an important direction of our future investigation. Third, the UMC correction performance is still not perfect, as shown by the blue curves in Figure 6, where the \(\Delta\)COD values are non-zero. The current implementation uses a simple 3-level UNet for motion prediction. Deploying a more advanced registration network, e.g. a transformer-based network [5] or temporal registration networks [6, 7, 25], may further reduce the registration error and improve the final reconstruction quality. Lastly, the PCI denoising step requires supervised training from paired gated images, which are time-consuming to prepare. In the future, we will also investigate self-supervised denoising methods, e.g. Noise2Void [19], for PCI denoising in our Fast-MC-PET.
## 5 Conclusion
This paper presents a deep learning-aided motion correction and reconstruction framework for accelerated PET, called Fast-MC-PET. The Fast-MC-PET, consisting of UMC and SL-Recon, uses only 2 minutes of accelerated PET acquisition data for high-quality reconstruction. The UMC reconstructs a motion-corrected short-acquisition image, regardless of the motion type in the abdominal region. The SL-Recon then converts the 2-minute UMC image into a virtual 15-minute UMC image. The experimental results demonstrate that our proposed method can accelerate acquisition by nearly 7-fold and generate high-quality motion-corrected reconstructions for patients with different motions.
|
2308.12205 | Quantum engines with interacting Bose-Einstein condensates | We consider a quantum Otto cycle with an interacting Bose-Einstein condensate
at finite temperature. We present a procedure to evolve this system in time in
three spatial dimensions, in which closed (adiabatic) strokes are described by
the Gross-Pitaevskii equation, and open (isochoric) strokes are modeled using a
stochastic Ginzburg-Landau equation. We analyze the effect on the thermodynamic
efficiency of the strength of interactions, the frequency of the harmonic trap,
and the temperatures of the reservoirs. The efficiency has little sensitivity
to changes in the temperatures, but decreases as interactions increase.
However, stronger interactions allow for faster cycles and for substantial
increases in power. | Julian Amette Estrada, Franco Mayo, Augusto J. Roncaglia, Pablo D. Mininni | 2023-08-23T15:43:14Z | http://arxiv.org/abs/2308.12205v1 | # Quantum engines with interacting Bose-Einstein condensates
###### Abstract
We consider a quantum Otto cycle with an interacting Bose-Einstein condensate at finite temperature. We present a procedure to evolve this system in time in three spatial dimensions, in which closed (adiabatic) strokes are described by the Gross-Pitaevskii equation, and open (isochoric) strokes are modeled using a stochastic Ginzburg-Landau equation. We analyze the effect on the thermodynamic efficiency of the strength of interactions, the frequency of the harmonic trap, and the temperatures of the reservoirs. The efficiency has little sensitivity to changes in the temperatures, but decreases as interactions increase. However, stronger interactions allow for faster cycles and for substantial increases in power.
## I Introduction
Quantum thermodynamics [1; 2] has emerged as a captivating field of research that bridges the fundamental principles of quantum mechanics and the laws of thermodynamics. In recent years, there has been a growing interest in the study of quantum thermal machines [3; 4; 5; 6], which are devices that utilize quantum systems to convert heat into work and vice versa. In this context, how genuine quantum effects, such as quantum coherence [7; 8], correlations [9], and measurements [10; 11; 12], can be exploited to improve the performance of these machines has been the subject of intense study. In addition to these theoretical studies, several quantum thermodynamic cycles have also been implemented experimentally using single quantum systems, such as trapped ions and atoms [13; 14; 15].
Quantum many-body systems have also been proposed as the working medium for engines and refrigerators. In this context, Bose-Einstein condensates (BECs) have emerged as a prominent candidate due to their remarkable macroscopically observable quantum properties and controllability. BECs are formed by cooling a gas of bosonic particles to extremely low temperatures and are characterized by a high degree of coherence, where a significant fraction of the particles occupy the same quantum state. The precise control achieved over BECs through techniques such as laser cooling and magnetic trapping allows for the manipulation of their properties and opens up exciting possibilities for exploring quantum thermodynamics. Recent works have designed various engines that utilize BECs to extract work. For instance, in [16] an endoreversible Otto cycle with a non-interacting Bose gas was considered, showing that the power output can be enhanced in a regime where the working medium is in the BEC phase. In [17], an interacting BEC engine was explored, and its performance was addressed through the experimental determination of the equation of state. Interacting BECs were also considered in [18; 19], for a cycle working with a particle reservoir at zero temperature, where the interaction strength between atoms is controlled by Feshbach resonances. In [20], a strategy for using a mixture of two atomic gases as a quantum refrigerator is outlined. Additionally, [21] proposes building quantum engines using one-dimensional ultracold gases and illustrates their use in cooling processes.
In this paper, we study a quantum Otto cycle using a three-dimensional interacting Bose-Einstein condensate as the working medium. To do so, we perform direct numerical simulations in which the closed (adiabatic) strokes are described by the Gross-Pitaevskii equation (GPE) and the open (isochoric) strokes, which occur at finite temperature, are modeled using a stochastic Ginzburg-Landau equation. This approach allows us to obtain the complete dynamics of the BEC during the whole cycle, and provides a highly detailed description of the quantum many-body engine. Therefore, we not only obtain the whole thermodynamic description of the system, but are also able to track the evolution of the different contributions to the energy, as well as the state of the BEC, consistently. We aim to uncover the underlying principles governing the efficiency, power output, and other relevant features of these machines. By employing advanced theoretical models and numerical simulations, we systematically analyze how various parameters, such as the interaction strength, the frequency of the harmonic trap, and the reservoir temperatures, affect the performance of a quantum Otto cycle.
## II Methods
### Thermodynamic cycle
We will consider a finite-time Otto cycle using an interacting Bose-Einstein condensate as the working medium. The thermodynamic cycle starts with the gas at temperature \(T_{h}\), in a spherical trap with frequency \(\omega_{h}\). The first stroke is an adiabatic expansion that turns \(\omega_{h}\) into \(\omega_{c}\), with \(\omega_{h}>\omega_{c}\), thus expanding the condensate. During the second stroke the system is put in contact with an external cold source, and the gas cools down to reach a thermalized state at temperature \(T_{c}<T_{h}\) in an isochoric process. The third stroke is an adiabatic compression, changing the trap potential from \(\omega_{c}\) to \(\omega_{h}\). Finally, in the last stroke the system undergoes an isochoric process in contact with a hot source at temperature \(T_{h}\). Thus, the cycle is completely described by prescribing the time taken during each of the strokes, respectively \(\tau_{e}\) (for the expansion), \(\tau_{\rm cold}\), \(\tau_{c}\) (for the compression), and \(\tau_{\rm hot}\), together with the time dependence of the trap potential during the adiabatic strokes. In the following, we will use the same expansion and contraction times, so that \(\tau_{e,c}=\tau_{e}=\tau_{c}\).
We can define \(W_{c}\) and \(W_{e}\) respectively as the works extracted in the compression and the expansion, and \(Q_{h}\) as the heat absorbed in the isochoric hot process,
\[W_{c,e}=E_{c,e}^{(i)}-E_{c,e}^{(f)}, \tag{1}\] \[Q_{h}=E_{e}^{(i)}-E_{c}^{(f)}. \tag{2}\]
Here \(E\) is the system total energy, the subindices \(e\) and \(c\) denote respectively the expansion and contraction strokes, and the superindices \(i\) and \(f\) denote respectively the initial and final states of these strokes. The efficiency of a heat engine is defined as the net yielded work (\(W=W_{c}+W_{e}\)) divided by the absorbed heat. In practice, extended systems display heat fluctuations, and variations in the work as the cycle is repeated. The efficiency of the cycle is defined as
\[\eta=\frac{W}{Q_{h}}. \tag{3}\]
This efficiency will fluctuate in different realizations of the cycle, so one usually is concerned with the mean efficiency. For non-interacting condensates in the adiabatic regime, the efficiency reduces to the Otto efficiency [16]
\[\eta_{O}=1-\frac{\omega_{c}}{\omega_{h}}. \tag{4}\]
### Adiabatic evolution
We will describe the state of the Bose-Einstein condensate in terms of a single wave function \(\psi({\bf r},t)\). For the expansion and the contraction strokes we solve numerically the Gross-Pitaevskii equation (GPE) with a time-dependent harmonic trapping potential \(V({\bf r},t)\),
\[i\hbar\frac{\partial\psi({\bf r},t)}{\partial t}=\left[-\frac{\hbar^{2}{\bf V }^{2}}{2m}+g|\psi({\bf r},t)|^{2}+V({\bf r},t)\right]\psi({\bf r},t). \tag{5}\]
Here \(m\) is the atomic mass, the interaction is controlled by \(g=4\pi a\hbar^{2}/m\), and \(a\) is the s-wave scattering length. The spherical potential is given by \(V({\bf r},t)=m\omega^{2}(t)(x^{2}+y^{2}+z^{2})/2\), and the frequency \(\omega(t)\) during the adiabatic strokes changes linearly in time from the initial to the final value. In order to evaluate the total energy of the condensate we will consider the Hamiltonian associated to Eq. (5):
\[\mathcal{H}[\psi,\psi^{*}]=\int d^{3}r^{\prime}\left[\frac{\hbar^{2}}{2m}|{ \bf\nabla}\psi|^{2}+\frac{g}{2}|\psi|^{4}+V({\bf r},t)|\psi|^{2}\right], \tag{6}\]
where the star denotes the complex conjugate.
### Thermal baths
During the isochoric strokes the system is coupled to a thermal bath, and in principle it can exchange both particles and energy. Under these conditions the possible equilibria will be characterized by a volume \(\mathcal{V}\), a chemical potential \(\mu\), and a temperature \(T\). The probability of these equilibrium states is then given by the Grand canonical ensemble,
\[\mathbb{P}=\frac{e^{-\beta(\mathcal{H}-\mu\mathcal{N})}}{\mathcal{Z}}, \tag{7}\]
where \(\beta=1/(k_{B}T)\), \(k_{B}\) is the Boltzmann constant, \(\mathcal{Z}\) is the Grand canonical partition function, and \(\mathcal{N}\) is the number of particles in the system.
The evolution of the system towards these equilibria, while in contact with a thermal bath at temperature \(T\), can be done in terms of the approach described in [22; 23]. Thus, by adding white-noise to Eq. (5) we solve the following stochastic Ginzburg-Landau equation:
\[\frac{\partial\psi}{\partial t} = \left[\frac{\hbar}{2m}{\bf\nabla}^{2}-\frac{g}{\hbar}|\psi|^{2}- V({\bf r})+\frac{\mu}{\hbar}\right]\psi+ \tag{8}\] \[\sqrt{\frac{2}{\mathcal{V}\hbar\beta}}\zeta({\bf r},t).\]
Where \(\zeta({\bf r},t)\) is a delta correlated random process such that \(\langle\zeta({\bf r},t)\zeta^{*}({\bf r}^{\prime},t^{\prime})\rangle=\delta({ \bf r}-{\bf r}^{\prime})\delta(t-t^{\prime})\), and the factor \(\sqrt{2/(\mathcal{V}\hbar\beta)}\) controls the amplitude of the fluctuations through the temperature \(T\). This equation can be obtained by performing a Wick rotation \(t\to it\) to Eq. (5), and by adding both the chemical potential and the delta correlated random forcing term. In the absence of forcing this equation evolves into solutions that are stationary solutions of GPE [24]. Note that the Ginzburg-Landau equation is also used to study non-isolated dissipative dynamics, e.g., in superconductivity [25].
We can explicitly verify that the solutions of Eq. (8) result in equilibria compatible with Eq. (7). Defining the free energy \(F=\mathcal{H}-\mu\mathcal{N}\), Eq. (8) can be written as a Langevin equation for the evolution of each Fourier mode of \(\psi\)[22],
\[\frac{\partial\hat{\psi}({\bf k},t)}{\partial t}=-\frac{1}{\mathcal{V}\hbar} \frac{\partial F}{\partial\hat{\psi}^{*}({\bf k},t)}+\sqrt{\frac{2}{\mathcal{V} \hbar\beta}}\hat{\zeta}({\bf k},t), \tag{9}\]
where \(F=F[\{\hat{\psi}(\mathbf{k},t),\hat{\psi}^{*}(\mathbf{k},t)\}]\) (i.e., it is a functional of the set of Fourier amplitudes of \(\psi\), where a Galerkin truncation up to a maximum wave-number is applied to the set of Fourier modes such that \(|\mathbf{k}|<k_{\rm max}\)). The resulting stochastic process has a total state probability \(\mathbb{P}[\{\hat{\psi}(\mathbf{k},t),\hat{\psi}^{*}(\mathbf{k},t)\}]\) whose evolution is described by a corresponding multivariate Fokker-Planck equation [26]
\[\frac{\partial\mathbb{P}}{\partial t}=\sum_{|\mathbf{k}|<\mathbf{k}_{\rm max}}\frac{ \partial}{\partial\hat{\psi}_{\mathbf{k}}}\left[\frac{1}{\mathcal{V}\hbar}\frac{ \partial F}{\partial\hat{\psi}_{\mathbf{k}}^{*}}\mathbb{P}+\frac{1}{\mathcal{V} \hbar\beta}\frac{\partial\mathbb{P}}{\partial\hat{\psi}_{\mathbf{k}}^{*}}\right]+ \text{c.c.} \tag{10}\]
where \(\hat{\psi}_{\mathbf{k}}\) is shorthand for \(\hat{\psi}(\mathbf{k},t)\), and c.c. denotes the complex conjugate. This equation evolves into the Grand canonical distribution in Eq. (7) provided that \(\beta F\) is positive defined. Thus, by integrating numerically Eq. (8) we can evolve the system towards states with different temperatures \(T\) under the Grand canonical constraints.
For the isochic strokes, the system evolves at constant volume \(\mathcal{V}\) and fixed number of particles \(\mathcal{N}\) (or equivalently, at fixed mean density \(\bar{\rho}\) in the total volume that contains the gas). This corresponds to working on the Canonical ensemble, and can be achieved by solving Eq. (8) coupled with [22; 23]
\[\frac{\partial\mu}{\partial t}=-\gamma(\bar{\rho}-\rho_{m}). \tag{11}\]
This equation adjusts the chemical potential such that the mean density \(\bar{\rho}\) remains close to the target mean density in the trap, \(\rho_{m}\); \(\gamma\) is a parameter that controls how fast the convergence to the desired mean density takes place.
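A minimal Euler-Maruyama sketch of Eqs. (8) and (11) (again with \(\hbar=m=1\), no dealiasing) is given below; the discrete noise amplitude includes the grid-cell volume `dV` so that it approximates the delta-correlated forcing in the continuum limit, a standard though not unique discretization choice.

```python
import numpy as np

def sgl_step(psi, mu, V, g, ksq, dt, beta, vol, dV, rho_m, gamma):
    """One Euler-Maruyama step of the stochastic Ginzburg-Landau Eq. (8),
    with the chemical-potential feedback of Eq. (11) (hbar = m = 1)."""
    lap = np.fft.ifftn(-ksq * np.fft.fftn(psi))
    drift = 0.5 * lap + (mu - g * np.abs(psi) ** 2 - V) * psi
    noise = (np.random.randn(*psi.shape)
             + 1j * np.random.randn(*psi.shape)) / np.sqrt(2.0)
    psi = psi + dt * drift + np.sqrt(2.0 * dt / (vol * beta * dV)) * noise
    rho_bar = (np.abs(psi) ** 2).sum() * dV / vol  # mean mass density (m = 1)
    mu = mu - dt * gamma * (rho_bar - rho_m)       # Eq. (11)
    return psi, mu
```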
It is worth noting that there exist other formulations to describe condensates at finite temperature, such as the stochastic Gross-Pitaevskii equation [27; 28], or coupled kinetic equations [29]. While the method used here generates the correct thermal states and has long been used to study the dynamics of dissipative systems at finite temperature [30], a stochastic Gross-Pitaevskii or kinetic formulation could better describe nonequilibrium dynamics although at a larger computational cost. For a comparison between these methods see [31].
### Energetics
The total energy of the system can be decomposed into several components that provide information on excited ordered and disordered modes in the gas, such as potential and internal energies, or a compressible kinetic energy that can be associated to sound waves and phonons. To this end we use the Madelung transformation,
\[\psi(\mathbf{r},t)=\sqrt{\rho(\mathbf{r},t)/m}\,e^{iS(\mathbf{r},t)}, \tag{12}\]
which maps the GPE to the Euler equation for an isentropic, compressible and irrotational gas with an extra term that accounts for the quantum pressure [24]. This allows for a continuum medium description of the system. In Eq. (12), \(\rho(\mathbf{r},t)\) is the fluid mass density, and \(S(\mathbf{r},t)\) is the phase of the order parameter. Using the momentum density
\[\mathbf{j}(\mathbf{r},t)=-\frac{i\hbar}{2}\left(\psi^{*}\mathbf{\nabla}\psi-\psi\mathbf{ \nabla}\psi^{*}\right), \tag{13}\]
the gas velocity can then be defined as \(\mathbf{v}(\mathbf{r},t)=\mathbf{j}(\mathbf{r},t)/\rho(\mathbf{r},t)=(\hbar/m)\bm {\nabla}S(\mathbf{r},t)\).
Thus, in terms of the fluid mass density, the total energy of the system per unit volume (see Eq. (6)) can be decomposed as
\[E=E_{\rm k}+E_{\rm q}+E_{\rm int}+E_{\rm V}, \tag{14}\]
where the kinetic energy is \(E_{\rm k}=\langle\rho v^{2}\rangle/2\), the quantum energy is \(E_{\rm q}=\hbar^{2}/(2m^{2})\langle(\nabla\sqrt{\rho})^{2}\rangle\), the gas internal (or interaction) energy is \(E_{\rm int}=g/(2m^{2})\langle\rho^{2}\rangle\), and the trap potential energy is \(E_{\rm V}=\langle\rho V\rangle\). In all cases the angle brackets denote a volume average. Using the Helmholtz decomposition \((\sqrt{\rho}\,\mathbf{v})=(\sqrt{\rho}\,\mathbf{v})^{(\rm c)}+(\sqrt{\rho}\,\mathbf{v})^{(\rm i)}\)[24], where the superindices \(\rm c\) and \(\rm i\) denote respectively the compressible and incompressible components (i.e., such that \(\nabla\cdot(\sqrt{\rho}\,\mathbf{v})^{(\rm i)}=0\)), the kinetic energy can be further decomposed into the compressible \(E_{\rm k}^{(\rm c)}\) and incompressible \(E_{\rm k}^{(\rm i)}\) kinetic energy components. This decomposition is used to study classical compressible gases [32], as well as quantum fluids [33; 23; 34], and thus provides information that can be compared with the classical picture of thermal engines.
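A sketch of this energy budget on the periodic grid (with \(\hbar=m=1\), and reusing the spectral grids `KX, KY, KZ, ksq` defined in the GPE sketch above) is the following; the \(k=0\) mode is regularized in the longitudinal projection, and the small constant added to \(\rho\) avoids division by zero outside the condensate.

```python
import numpy as np

def energy_budget(psi, V, g, KX, KY, KZ, ksq):
    """Compressible/incompressible kinetic, quantum, interaction and trap
    energies of Eq. (14), with spectral derivatives (hbar = m = 1)."""
    rho = np.abs(psi) ** 2
    grads = [np.fft.ifftn(1j * K * np.fft.fftn(psi)) for K in (KX, KY, KZ)]
    # momentum density j = -(i/2)(psi* grad psi - c.c.), Eq. (13)
    j = [(-0.5j * (np.conj(psi) * gp - psi * np.conj(gp))).real for gp in grads]
    w = [ji / np.sqrt(rho + 1e-12) for ji in j]        # sqrt(rho) v
    # longitudinal (compressible) projection in Fourier space
    wh = [np.fft.fftn(wi) for wi in w]
    div = KX * wh[0] + KY * wh[1] + KZ * wh[2]
    ksq_safe = np.where(ksq == 0.0, 1.0, ksq)
    wc = [np.fft.ifftn(K * div / ksq_safe).real for K in (KX, KY, KZ)]
    Ek_c = 0.5 * np.mean(sum(wi ** 2 for wi in wc))    # compressible kinetic
    Ek_i = 0.5 * np.mean(sum(wi ** 2 for wi in w)) - Ek_c
    gr = [np.fft.ifftn(1j * K * np.fft.fftn(np.sqrt(rho))).real
          for K in (KX, KY, KZ)]
    E_q = 0.5 * np.mean(sum(gi ** 2 for gi in gr))     # quantum energy
    E_int = 0.5 * g * np.mean(rho ** 2)                # interaction energy
    E_V = np.mean(rho * V)                             # trap potential energy
    return Ek_c, Ek_i, E_q, E_int, E_V
```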
### Numerical methods
We solve numerically Eqs. (5) and (8) to simulate, respectively, the adiabatic and isochoric strokes of the cycle. In order to do so we use a pseudospectral Fourier-based method on a spatial grid of \(N^{3}=64^{3}\) points, with the \(2/3\) rule for dealiasing, a fourth-order Runge-Kutta method for the time evolution of the GPE, and an Euler time-stepping method for the stochastic Ginzburg-Landau equation. In all cases we use the parallel code GHOST, which is publicly available [35], in a cubic domain of dimensions \([-\pi,\pi]L\times[-\pi,\pi]L\times[-\pi,\pi]L\), so that the domain has length \(2\pi L\) (with \(L\) a unit length). To deal with the non-periodic trapping potential in the Fourier representation, while avoiding Gibbs phenomenon, we use a continuation method as described in [36; 34].
Results are shown in units of a characteristic speed \(U\) (the speed of sound), the unit length \(L\) (proportional to the condensate mean radius), and a unit mean density \(\rho_{0}\). Temperatures are written in units of \(T_{\lambda}\), the condensate critical temperature (see the Appendix for its estimation, and for the range of temperatures considered in this study). Except when explicitly stated (e.g., when we study the effect of varying \(T_{h}\)), we consider \(T_{h}\approx 0.012T_{\lambda}\) and \(T_{c}\approx 0.003T_{\lambda}\). Thus, the simulations have \(T\ll T_{\lambda}\). The speed of sound is \(c=(g\rho_{0}/m)^{1/2}=1U\) and the condensate healing length is \(\xi=\hbar/(2m\rho_{0}g)^{1/2}=0.0707L\), except in simulations in which we artificially decrease the interaction strength. In most simulations we use trapping frequencies \(\omega_{c}\approx 0.334638\,U/L\) and \(\omega_{h}=0.337613\,U/L\). These frequencies are chosen close enough to reduce the computational cost of performing the slow expansions and contractions, and we indicate explicitly when other values of \(\omega_{c}\) and \(\omega_{h}\) are used. Quantities can be scaled using dimensional values for \(U\), \(L\), and \(M\). In experiments, typical dimensional values are \(L\approx 10^{-4}\) m and \(c=U\approx 2\times 10^{-3}\) m/s [37]. This results in \(\xi\approx 1.12\times 10^{-6}\) m and a mean trap frequency \(\omega\approx 4.7\) Hz. For the typical mass of a gas of \({}^{87}\)Rb atoms in a BEC, peak densities of \(\approx 10^{13}\) cm\({}^{-3}\) atoms are also compatible with our simulations and with experiments [38].
For a given set of parameters, each cycle is repeated 4 times. This results in several values for the energies \(E_{i}^{(j)}\) (with \(i=c\) or \(e\), and \(j=i\) or \(f\)), and thus for \(W\) and \(Q_{h}\) at the end of each cycle. To compute efficiencies we assume these quantities have a Gaussian distribution, and use a Monte Carlo method to generate a random set of \(W\) and \(Q_{h}\) values compatible with the fluctuations observed in the 4 explicitly integrated cycles. From these values, the distribution of the efficiency \(\eta\) and its mean value are finally obtained.
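A sketch of this Monte Carlo procedure is given below; the Gaussian parameters come from the four measured cycles, and \(\eta=W/Q_{h}\) is sampled from them.

```python
import numpy as np

def efficiency_distribution(W_samples, Qh_samples, n_draws=100_000, seed=0):
    """Monte Carlo estimate of the efficiency PDF from a few measured cycles:
    W and Q_h are modeled as Gaussians fitted to the measured values."""
    rng = np.random.default_rng(seed)
    W = rng.normal(np.mean(W_samples), np.std(W_samples), n_draws)
    Qh = rng.normal(np.mean(Qh_samples), np.std(Qh_samples), n_draws)
    eta = W / Qh
    return eta.mean(), eta.std(), eta
```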
Each realization of the cycle is performed with the following protocol: Given a state at temperature \(T_{h}\) (which can be generated for the first cycle by integrating the stochastic Ginzburg-Landau equation, or can be the result of the final state of a previous cycle), we integrate the expansion stroke using GPE. The frequency of the trap is decreased linearly in time from \(\omega_{h}\) to \(\omega_{c}\) with a time step \(dt=2.5\times 10^{-3}L/U\); the length of this simulation depends on the speed of the expansion. When the expansion finishes, the system is evolved towards the lower temperature \(T_{c}\) using the stochastic Ginzburg-Landau equation. Time integration is performed until the system reaches a stationary regime. Then, the contraction is integrated using GPE with a linear ramp in the trap frequency from \(\omega_{c}\) to \(\omega_{h}\), with the same \(dt\) and total time as in the expansion. Finally, the system is again coupled to the hot source at temperature \(T_{h}\), and integrated using the stochastic Ginzburg-Landau equation until a new stationary state is reached.
## III Results
### Analysis of the system evolution on a cycle
We first analyze the evolution of the system in each stroke, and the evolution of the different energy components, for the set of parameters introduced in Sec. II.5. Then, we evaluate the efficiency of the cycle in terms of the variation of these parameters.
A diagram of several consecutive cycles in the energy-frequency plane, and their time evolution (with the time \(t=0\) set arbitrarily at the beginning of each cycle expansion, and with the time given in units of the inverse of the frequency difference \(\Delta\omega=\omega_{h}-\omega_{c}\)), are shown in Fig. 1. The numerical simulations agree with the usual picture of an Otto cycle. An abrupt change in energy can be seen as soon as the condensate gets in contact with the thermal sources. The full dynamics allows us to see fluctuations, both in the energy after the adiabatic phases and in the thermalization process. These fluctuations also result in slightly different values of the energy along each of the expansions and contractions in the different cycles.
During the isochoric strokes a long integration time is necessary for the system to thermalize at the new temperature. The inset in Fig. 1 shows the tails of the probability density functions (PDFs) of the mass density in the trap at different times after the system is coupled to the cold source. At early times, as the condensate is still hot, the PDF displays strong tails, associated with strong fluctuations in the mass density. Shortly after, these regions with strong fluctuations disappear and, as the condensate cools down, the PDFs converge to new stationary solutions with weaker tails. Similar results are obtained for the evolution with the hot source. In the next subsection we vary the strength of the interaction in the BEC and verify that even for weak interactions the system thermalizes. We also ensured that the isochoric branches were integrated long enough to achieve stationary and accurate convergence of the PDFs.
Figure 1: Top panel: Energy as a function of the trap frequency, \(\omega\), for several consecutive cycles. Expansions and contractions are plotted in gray; cooling and heating strokes are in cyan and orange, respectively. Bottom panel: Time evolution (with time in units of the inverse of \(\Delta\omega=\omega_{h}-\omega_{c}\)) of the total energy in the same cycles, setting time \(t=0\) for all at the beginning of the expansion. The inset shows the probability density function (PDF) of the condensate mass density in the trap at different times; the colors of the lines match the times of the diamond markers in the main figure.

The final states of the isochoric strokes are shown in Fig. 2. The top panels show the mass density in a two-dimensional slice in the \(xy\) plane, at the end of the hot and cold branches solved with the stochastic Ginzburg-Landau equation. At higher temperature the gas displays stronger fluctuations in the mass density at the center of the trap, as well as at the borders of the condensate, where irregularities can be seen. Density fluctuations are associated with more energy in compressible modes (sound waves or phonons), and with an increase in the quantum energy (caused by gradients in the mass density). The bottom panel shows the compressible kinetic energy spectrum in both cases. Note that this spectrum measures the energy in sound waves. Two interesting features are worth mentioning. First, the spectra are proportional to a \(k^{2}\) power law, which corresponds to the equipartition of energy in 3D modes (i.e., thermalisation). Second, the amplitude of sound waves increases with temperature.
Now, we will analyse the behaviour of the energy components during the adiabatic strokes. Naturally, this depends on whether we consider a compression or an expansion, as well as on the speed of the stroke. For the sake of clarity we now consider shorter strokes than in Fig. 1 (i.e., faster compressions and expansions), as they result in more evident effects. Figure 3 shows the time evolution of the different energy components, averaged over four cycles. The energy variations are normalized by the absolute value of the total energy difference during the stroke. During the expansion, the interaction and trap potential energies decay rapidly as the condensate expands. Both quantities also oscillate with the frequency of the breathing mode of the condensate in the trap and have, due to the nature of each energy, almost opposite phases. Meanwhile, the compressible kinetic energy grows as sound waves are excited during the expansion. The incompressible and quantum energies remain almost constant. During the compression, the interaction and trap potential energies grow as the condensate contracts. In this case the compressible kinetic energy remains almost constant, with a small increase of the quantum energy as density gradients grow due to the contraction.
### Efficiency
Now we will analyze the efficiency of the cycle. In particular, we will focus on its behaviour in terms of the speed of the expansion and compression (i.e. \(\tau_{e,c}\)), the temperature, and the interaction strength.
Figure 2: Top panel: Two-dimensional slices of the mass density in the \(xy\) plane, \(\rho(x,y,z=0)\), for temperatures \(T_{h}/T_{\lambda}\approx 0.012\) and \(T_{c}/T_{\lambda}\approx 0.003\). Bottom panel: Spectrum of the compressible kinetic energy in the hot and cold cases. A \(\sim k^{2}\) power law corresponding to equipartition of compressible three-dimensional modes is shown as a reference.

Figure 3: Time evolution of the different energy components, averaged over an ensemble of four cycles, during the adiabatic expansion (left) and contraction (right). The value of each energy component at the beginning of the strokes (here arbitrarily labeled as \(t=0\)) is subtracted from the energies, and the energy variations are then normalized by the absolute value of the total energy difference during the entire stroke.

Let us first consider the impact of the speed of the adiabatic strokes. To this end, for the set of parameters introduced in Sec. II.5, we performed several cycles with adiabatic strokes of different lengths \(\tau_{e,c}\) (longer \(\tau_{e,c}\) corresponds to slower expansions and contractions). In this case, we expect to attain the utmost efficiency for greater values of \(\tau_{e,c}\), since the dynamics gets closer to the adiabatic limit. Figure 4 shows the efficiency distribution of the cycles for different values of \(\tau_{e,c}\) (in units of \(\Delta\omega^{-1}\)). As expected, we find that the mean efficiency grows with this time and, for times \(\tau_{e,c}\) much longer than the characteristic time associated to the adiabatic limit for non-interacting gases (\(\sim\omega_{h}^{-1}\)), the efficiency reaches a value that is independent of \(\tau_{e,c}\). In this regime, the efficiency is roughly half that of the ideal Otto efficiency for a non-interacting gas.
On the other hand, when only the temperature of the hot reservoir, \(T_{h}\), is varied, we find that the efficiency remains approximately constant (i.e., within error bars; see Fig. 5). However, increasing \(T_{h}\) (and therefore the difference between \(T_{h}\) and \(T_{c}\)) results in a reduction of fluctuations. Thus, larger temperature differences lead to a better determination of the averaged efficiency. Note that this is also what happens with the ideal Otto cycle, where the efficiency is independent of the temperatures.
We will now analyse the efficiency in terms of the interaction strength. In this case, we expect that in the limit of a non-interacting gas the efficiency should approach \(\eta_{O}\) as defined in Eq. (4) [16]. However, our method can only attain this limit asymptotically. This is due to the fact that when the interaction is removed, \(g=0\), the thermalization time extends to infinity (as illustrated in Fig. 1). Therefore, we will evaluate the efficiency as the interaction strength is reduced. Reducing the interaction strength leads to a decrease of the speed of sound, and an increase in the healing length (i.e., a more dilute gas). In the following, we express the results with respect to a coefficient \(\alpha\) defined as
\[g=\alpha g_{0}, \tag{15}\]
where \(0<\alpha\leq 1\), and \(g_{0}\) corresponds to setting the speed of sound \(c=(g_{0}\rho_{0}/m)^{1/2}=1U\) and \(\xi=\hbar/(2m\rho_{0}g_{0})^{1/2}=0.0707L\) (i.e., the value used so far in this work).
First, we performed several cycles with different values of \(\omega_{c}/\omega_{h}\), varying \(\omega_{c}\) for two interaction strengths \(\alpha=1\) and \(\alpha=0.064\) (the other parameters are the same as in Sec. II.5).
Fig. 6 shows the efficiency as a function of the ideal Otto efficiency \(\eta_{O}=1-\omega_{c}/\omega_{h}\). We can see that it remains smaller than the Otto efficiency (which is indicated as a reference by a black dashed line). However, it still scales linearly with \(1-\omega_{c}/\omega_{h}\), and it gets closer to \(\eta_{O}\) as \(\alpha\) decreases. Interestingly, the behavior in Figs. 5 and 6 indicates that the efficiency is independent of the temperature and that the dependence on \(\alpha\) can be factorized. This, at least in the regime of parameters that we are exploring, suggests that the efficiency of the interacting gas is proportional to the ideal non-interacting Otto efficiency, with a proportionality factor that decreases with the interaction strength.
Figure 4: Efficiency of the cycles in units of the ideal Otto efficiency \(\eta_{O}\), as a function of \(\tau_{e,c}\), for the parameters listed in Sec. II.5. The shaded areas indicate the PDFs of the efficiencies, and the error bars indicate the minimum and maximum efficiencies obtained. The vertical dashed line indicates the time \(\omega_{c}^{-1}\). In a non-interacting condensate, the expansion and contraction times must be much larger than this value to achieve adiabaticity.

Figure 5: Efficiency (in units of the ideal Otto efficiency) as a function of the hot temperature \(T_{h}\), using the same parameters as in Sec. II.5. Labels for the markers are the same as in Fig. 4.

Figure 6: Efficiency of the cycle \(\eta\) as a function of \(\eta_{O}=1-\omega_{c}/\omega_{h}\), as we vary \(\omega_{c}/\omega_{h}\) (symbols with the same color). For an ideal Otto cycle we expect \(\eta=\eta_{O}\). Different colors of the symbols correspond to different interaction strengths: gray for \(\alpha=1\) (\(g=g_{0}\)) and purple for \(\alpha=0.064\) (\(g=0.064g_{0}\)). Two slopes are indicated as references.

Then, we fix the value of \(\omega_{c}/\omega_{h}\) and vary the interaction strength. Note that the volume of the condensate depends on \(g\). In this case, as we consider repulsive interactions, the volume decreases with \(g\) at a fixed potential. We considered two different situations: one in which the total mass of the condensate is kept constant as the interaction strength is changed, and another in which the density in the center of the trap is kept constant. We can appreciate from the top panel of Fig. 7 that both cases display similar efficiencies. However, when the mass is constant the fluctuations are larger than when the density is kept constant. This stems from the fact that as the interaction strength is reduced, the concentration of particles at the center of the trap decreases substantially, thus increasing the amount of fluctuations. In general, we observe that the efficiency grows slowly for small \(\alpha\). In both cases, as \(\alpha\) decreases the efficiency increases, attaining a mean value of \(\approx 65\%\) of \(\eta_{O}\) for the minimum \(\alpha\) considered in the simulations. As previously stated, we are unable to reduce the value of \(\alpha\) any further, as calculating the isochoric strokes becomes excessively expensive due to the progressively increasing thermalisation time.
Finally, we analyze the power of this engine. Let us first look at the bottom panel of Fig. 7, where the mean extracted work as a function of the interaction strength \(\alpha\) is shown. In this case, we compare the amount of work extracted for a given value of \(\alpha\) with the work \(W^{*}\) extracted in the fully interacting case with \(\alpha=1\). Note that \(W/W^{*}\) increases by \(\approx 50\%\) for decreasing \(\alpha\), and becomes approximately constant for \(\alpha<10^{-1}\). The actual power of the cycle is determined by the ratio of the work to the time required to complete the cycle. As occurs both in the numerical simulations and in a real gas, we consider that the thermalization times in the isochoric strokes are longer than the times required for the expansion and compression. Thus, we can approximate the length of the cycle as twice the length of the thermalization process. Note that in our simulations we use the stochastic Ginzburg-Landau equation as a multivariate Fokker-Planck equation to obtain the new equilibria (at a given temperature) of the Grand canonical ensemble, and therefore the time in the simulation should not be directly associated with an actual thermalization time. However, in the non-interacting limit the thermalization time effectively goes to infinity, as the time between collisions diverges.
We can still estimate the order of the thermalization time for the interacting case from kinetic theory. Note that \(g\sim a\), i.e., it is linearly proportional to the scattering length, and thus \(g\sim a\sim\sqrt{\sigma}\) where \(\sigma\) is the collision cross section. As \(\sigma\sim 1/\tau\) where \(\tau\) is the time between collisions, for a fixed number of particles the time it takes for the system to thermalize with \(\alpha=1\) compared with the time when \(\alpha<1\) is proportional to the ratio of the times between collisions,
\[\frac{\tau_{0}}{\tau}\sim\left(\frac{g}{g_{0}}\right)^{2}=\alpha^{2}, \tag{16}\]
where \(\tau_{0}\) is the value of \(\tau\) when \(\alpha=1\). This indicates (in qualitative agreement with the results from the numerical simulations) that interactions allow for much faster cycles, and the extraction of significantly more power (e.g., from the cycles in the plateau of \(W\) for \(\alpha<10^{-1}\) in Fig. 7, even with the reduction of the extracted work of \(\approx 50\%\) with respect to \(\alpha=1\)). In other words, interacting gases allow one to obtain higher power in a finite-time cycle. Moreover, in principle, by adjusting the interaction strength of the condensate, a power enhancement at nearly constant efficiency can be achieved (see Figs. 6 and 7).
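To make the power estimate concrete, the following minimal sketch combines the measured trend of \(W(\alpha)\) with the cycle-time scaling of Eq. (16). The work ratios are assumed values read qualitatively from the trends of Fig. 7, not simulation data.

```python
import numpy as np

# Assumed work ratios W(alpha)/W* read qualitatively from Fig. 7:
# W grows by ~50% as alpha decreases and plateaus for alpha < 1e-1.
alpha = np.array([1.0, 0.5, 0.25, 0.1, 0.064])
work_ratio = np.array([1.0, 1.2, 1.35, 1.5, 1.5])

# Cycle time ~ 2x thermalization time, with tau ~ alpha^-2 from Eq. (16),
# so the power relative to the fully interacting case is W-ratio * alpha^2.
power_ratio = work_ratio * alpha**2

for a, p in zip(alpha, power_ratio):
    print(f"alpha = {a:5.3f}: P(alpha)/P(alpha=1) ~ {p:.3f}")
# Despite extracting ~50% more work, the weakly interacting gas
# (alpha ~ 0.1) delivers only a few percent of the fully interacting power.
```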
## IV Conclusions
In this work, we performed numerical simulations of quantum Otto engines that have an interacting BEC as their working medium. We were able to recover not only the thermodynamics of the system, but also its complete dynamics, which in turn enabled a detailed analysis of the engine, for instance of the different contributions to the energy along the adiabatic strokes.
We characterized the efficiency of the engine by performing several simulations in which we independently changed the temperatures, the trap frequencies, and the interaction strength of the gas. We found that the efficiency is independent of the temperature. However, fluctuations in the efficiency and in other observables are reduced as the difference between the temperatures of the reservoirs increases. Also, their dependence on the trap frequencies turns out to be similar to the non-interacting case, but with a proportionality factor that depends on the interaction strength.
We also show that the efficiency and work output of the engine decrease as the interaction strength of the BEC becomes larger. However, the timescale it takes
Figure 7: Top: Efficiency in units of \(\eta_{O}\) for different interaction strengths \(\alpha=g/g_{0}\). We compare situations in which the total mass in the condensate is constant (labeled as \(M\)), and in which the density in the center of the trap is constant (labeled as \(\rho_{0}\)). Labels for the markers are the same as in Fig. 4. Bottom: Mean work extracted by the engine as a function of \(\alpha\), in units of the work \(W^{*}\) for the fully interacting case with \(\alpha=1\).
for the system to thermalize is inversely proportional to the square of the interaction strength. Thus, for small interactions, we find a regime in which increasing the interaction of the BEC allows for a considerable increase in power, while the efficiency is only slightly reduced. Since the interaction strength of the BEC can be experimentally tuned, our results provide a possible way to improve the power of a quantum engine at a small cost in efficiency.
###### Acknowledgements.
J.A.E. and F.M. thank Muriel Bonetto and Facundo Pugliese for useful discussions and suggestions. J.A.E. and P.D.M. acknowledge financial support from UBACyT Grant No. 20020170100508BA and PICT Grant No. 2018-4298. F.M. and A.J.R acknowledge financial support from PICT Grant No. 2019-4349 and No. 2021-01288.
|
2301.05272 | Inaccessible Neural Language Models Could Reinvigorate Linguistic
Nativism | Large Language Models (LLMs) have been making big waves in the machine
learning community within the past few years. The impressive scalability of
LLMs due to the advent of deep learning can be seen as a continuation of
empiricist linguistic methods, as opposed to rule-based linguistic methods that
are grounded in a nativist perspective. Current LLMs are generally inaccessible
to resource-constrained researchers, due to a variety of factors including
closed source code. This work argues that this lack of accessibility could
instill a nativist bias in researchers new to computational linguistics, given
that new researchers may only have rule-based, nativist approaches to study to
produce new work. Also, given that there are numerous critics of deep learning
claiming that LLMs and related methods may soon lose their relevancy, we
speculate that such an event could trigger a new wave of nativism in the
language processing community. To prevent such a dramatic shift and to place
favor in hybrid methods of rules and deep learning, we call upon researchers to
open source their LLM code wherever possible to allow both empiricist and hybrid
approaches to remain accessible. | Patrick Perrine | 2023-01-12T19:41:47Z | http://arxiv.org/abs/2301.05272v1 | # Inaccessible Neural Language Models Could Reinvigorate Linguistic Nativism
###### Abstract
Large Language Models (LLMs) have been making big waves in the machine learning community within the past few years. The impressive scalability of LLMs due to the advent of deep learning can be seen as a continuation of empiricist linguistic methods, as opposed to rule-based linguistic methods that are grounded in a nativist perspective. Current LLMs are generally inaccessible to resource-constrained researchers, due to a variety of factors including closed source code. This work argues that this lack of accessibility could instill a nativist bias in researchers new to computational linguistics, given that new researchers may only have rule-based, nativist approaches to study to produce new work. Also, given that there are numerous critics of deep learning claiming that LLMs and related methods may soon lose their relevancy, we speculate that such an event could trigger a new wave of nativism in the language processing community. To prevent such a dramatic shift and to place favor in hybrid methods of rules and deep learning, we call upon researchers to open source their LLM code wherever possible to allow both empiricist and hybrid approaches to remain accessible.
## 1 Introduction
Large Language Models (LLMs) have been a popular topic of research among the academic community [14]. The promise of a near-general purpose neural model for a variety of language processing tasks is indeed an attractive one [13]. Deep learning has made significant developments in language tasks such as conversational language understanding [15], spoken/text-based dialog systems [1], and natural language generation from images [1]. Large language models can be viewed as the natural progression away from the rigid rule-based systems that we've had since the 1950's [12], continuing the empiricist mentality of statistical natural language processing without the potentially costly and context-specific activity of feature engineering [10]. However, with large corporations touting their ever-growing, state-of-the-art models under closed-source code and paywalls, it could be seen that these large language models are becoming _less accessible_. Some organizations have acknowledged the potential harms that deep learning models could cause by establishing ethical frameworks [1][2], but there are still growing concerns regarding accessibility and the result of false/irreproducible science [1].
This criticism of empiricist methods is not new in linguistics-based science, in that Chomsky's Poverty of the Stimulus Argument [16] has a rich history of discussion and debate amongst linguists, scientists, and philosophers [1]. In this work, we will briefly introduce this debate over language learning between nativists and empiricists, relate these topics to research in natural language processing, and discuss how the current state of this research is reinforcing an imbalance between the two perspectives. We intend to deliver a neutral ground of analysis, as we agree that a hybrid approach to NLP research can lead to strong results. The current bias towards the highly popular but inaccessible empiricist methods utilizing LLMs could lead to a new wave of nativism in natural language processing work, following a large backlash against such empirical methods.
## 2 Background
We now provide a holistic background on the linguistic and scientific developments that encompass this issue.
### The Three Waves of Modern NLP
We will give a brief background on the three main waves of modern natural language processing research: the rule-based theories popularized by Noam Chomsky (Chomsky, 1965), the statistics-based empiricist experiments (Jelinek, 1976), and today's popular methodology of deep learning for natural language processing (Collobert et al., 2011). The first wave is considered to be under a nativist perspective (Laurence and Margolis, 2001), whereas the latter waves are in support of an empiricist lens (Frank et al., 2019).
#### 2.1.1 Rule-based NLP
The concept of viewing language as a static system of rules to determine interpretation has been present as early as the 1830's (Humboldt, 1836). Noam Chomsky popularized this perspective in the domain of linguistics as a challenge to an existing overbearance of empiricist methods (Chomsky, 1956; Laurence and Margolis, 2001).
This rule-based approach to linguistics dominated the field for decades, following Chomsky's multiple works emphasizing and reinforcing this doctrine (Chomsky, 1956, 1957, 1963, 1965; Chomsky and Halle, 1968). Being based in propositional logic and a fixed content, rule-based methods are arguably rather accessible to researchers with limited resources. These methods continued to be prevalent in the field until the 1970's, when statistical methods were proven to be very useful.
#### 2.1.2 Statistical NLP
The roots of statistical language processing stem from Andrey Markov's efforts in computing bigram and trigram probabilities (Jurafsky and Martin, 2022) of vowel/consonant predictions using a novel as a corpus in 1913 (Markov, 2006). This \(n\)-gram approach was later applied to predicting sequences of English words (Shannon, 1948). This popularized the notion of using Markov chains for use in a variety of applications within and outside of linguistics.
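As a minimal illustration of the \(n\)-gram idea (a toy sketch, not Markov's original procedure), bigram probabilities can be estimated directly from corpus counts:

```python
from collections import Counter

# Toy corpus; Markov originally counted vowel/consonant transitions in a novel.
corpus = "the cat sat on the mat the cat ran".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(w1, w2):
    # Maximum-likelihood estimate P(w2 | w1) = count(w1, w2) / count(w1)
    return bigrams[(w1, w2)] / unigrams[w1]

print(bigram_prob("the", "cat"))  # 2/3: "the" is followed by "cat" twice
```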
Chomsky specifically challenged this use of finite-state Markov processes, the processes that formed \(n\)-gram based approaches, to be useless in serving as a comprehensive cognitive model of grammatical knowledge in humans (Chomsky, 1956, 1957; Miller and Chomsky, 1963). This hindered the progress of probabilistic approaches in linguistics.
Over a decade later, statistical language processing was revitalized due in part to a series of successful experiments using \(n\)-gram models for speech recognition (Baker, 1975a,b; Jelinek, 1976; Bahl et al., 1983; Jelinek et al., 1990). These empiricist-based experiments showed that Chomsky's nativist theories do not extend to recognizing speech in real time as previously proposed (Chomsky and Halle, 1968).
This marked a shift towards looking at language processing through an empirical lens, where a hypothesis test primarily guides the experimentation process, rather than theoretical insights (Manning and Schutze, 1999). After the successful statistical speech recognition experiments of the mid 1970's, statistical NLP reigned as the dominant approach for decades.
#### 2.1.3 ML-based NLP
Researchers soon began to use shallow neural networks to reinforce statistical methodologies in NLP. In the late 2000's, the advent of deeper neural networks for NLP began to stir when scalable, hierarchical language models (Morin and Bengio, 2005; Mnih and Hinton, 2008) and increased computing power became available for use by researchers.
Alongside these developments, researchers grew tired of having to hand-engineer features for neural networks to learn from, as this can be a costly and rather context-specific task (Collobert et al., 2011). It was in the 2010's that deep learning became known more globally (LeCun et al., 2015), with NLP being a highly prominent application for deep neural networks. This sparked the current practice of training large language models in an effort to create a general model for many language tasks (Srivastava et al., 2022). In essence, the empiricist era of NLP has persisted to today through the evolution of deep learning practices. Some applications of deep learning outside of language have even used empiricist terms such as _tabula rasa_ very openly (Silver et al., 2017). The use of deep neural networks for language tasks has been confirmed to reinforce empiricist ideology (Frank et al., 2019).
## 3 Deep Learning Can Be Inaccessible
Deep learning as a science has been under fire for a number of reasons. While there have been encouraging results across many application domains of deep learning and positive insights about their role in advancing empiricism (Buckner, 2018), deep learning has garnered skepticism from both in and
outside of its community Marcus (2018); Buckner (2019).
These criticisms of deep NLP can stem from a _lack of open sourcing of model code_ and also data Klein et al. (2017); Fadel et al. (2019); Chen et al. (2021); Guo et al. (2022); Xu et al. (2022). These issues are not exclusive to language processing, as other domains have reasons to leave aspects of their experimentation private or inaccessible when publishing Siegle et al. (2015); Suresha et al. (2018); Farooq and Hafeez (2020); Zuin et al. (2020); Guo et al. (2022).
We now focus on issues with closed-source large language models due to their popularity and the recent claims of greater intelligence (even sentience), as opposed to other models y Arcas (2022).
## 4 Potential Harms
### Potential Harms of Open-Sourcing LLMs
To offer a well-rounded argument in favor of open-sourcing LLMs, we will briefly cover some intuitions behind close-sourcing them in terms of potential harms.
LLMs could be repurposed for malicious purposes, particularly in generative tasks. LLMs have been seen to learn negative human biases/patterns in speech such as hate speech, discrimination, and the promotion of misinformation Schramowski et al. (2022). If a powerful, pre-trained LLM is made open source, then it could be repurposed as an engine to cause harm across the internet at great scale Weidinger et al. (2022). It could also be argued that open sourcing LLM code that has been deployed to end-users could pose security risks Chang et al. (2020).
We counter the argument of potential LLM misuse by malicious parties by arguing that such models, or derivatives thereof, should not be published in any form, open or closed source. We argue that LLM experimental papers that indicate such potential to cause harm at scale should be filtered out at the publication review stage, something that has been discussed in the deep learning community as of late Ashurst et al. (2022). We also counter the security concern argument by noting that this could hold true for all open source software that is deployable, not just LLMs.
### Potential Harms from Continued Close-Sourcing of LLMs
We argue that there are more potential harms in the continued prevalence of close sourced LLM code than the potential harms of open sourcing them.
#### 4.2.1 Nativist Biases
Given that LLM experiments are becoming so large, costly, and complex, it is difficult to argue that an independent researcher can stake a claim in this sub-field. With top publication venues focusing heavily on empiricist experimentation Russell and Norvig (2021), researchers outside the typical corporate scope of research could be incentivized to explore nativist, rule-based approaches to solve problems in the NLP domain. If it is in the empiricist group's best interest to foster growth in its methodologies rather than opposing methods, steps should be taken to make its approaches accessible. Also, for hybrid methods to function, an ML-based solution should be made accessible to combine with the ruleset from the nativist side. This trend could be fostering a new generation of Chomsky-following nativist NLP researchers, which would not bode well for empiricists if the public begins to lose interest in deep learning methods for NLP.
#### 4.2.2 Lack of Reproducibility
We mention reproducibility and will further clarify its meaning due to an also recent, yet broader, problem in deep learning research: the reproducibility crisis Kapoor and Narayanan (2022). Not only are large language models becoming difficult to reproduce; results from other areas of ML are becoming difficult to reproduce as well de Freitas Pereira et al. (2022); Towers et al. (2022). Initiatives to measure reproducibility across publication venues have been created, such as the ML Reproducibility Challenge. LLM experiments have been specifically reviewed to have a questionable amount of reproducibility Crane (2018); Wieling et al. (2018); Cahyawijaya et al. (2022); Silva et al. (2022). A significant amount of computational irreproducibility of LLM experimentation is also implied, given model complexity and data; however, we leave this exploration for future work.
There is some hope in the form of positive reproducibility reports in deep learning Gibson and Cano (2022). However, this growing amount of "bad press" for deep learning, specifically LLMs, could cause the public to begin distrusting LLM
research. This, again, could trigger a revisiting of Chomsky's rule-based theories of language.
#### 4.2.3 Issues in NLP Education
Given the previously mentioned issues, this lack of accessibility could affect the education of NLP methods. If students do not have access to the code of LLMs, it could be difficult for them to learn to implement complex language model code of their own and to keep up with the state of the art. A lack of reproducibility could also be disenfranchising to a young, empiricist NLP researcher, leading them to pursue nativist approaches. These issues could reinforce the use of statistical, pre-deep learning techniques in the classroom, but it is difficult to argue that publication venues are interested in shallow neural network experimentation at this time.
These issues combine to form an uneven playing field for students to study NLP in empiricist and hybrid forms. After studying NLP formally, they may be inclined to commit to nativist methods or even reinforce their popularity at scale.
## 5 Potential Solution
We ask that publication venues weight open source LLM experiments significantly higher than they do currently. We believe that this would mitigate the issues discussed previously in this work. There seem to be developments occurring now in the deep learning publication space to help implement this in a proper form of governance (Ashurst et al., 2022).
## 6 Conclusion
In this work, we provided a comprehensive history of natural language processing methodologies over roughly the past century. We then used this narrative to lead into today's deep learning practices used in language processing, and current issues with the excessive closed sourcing of code for LLMs. It is our hope that this work inspires researchers and reviewers to champion open source language model code in order to pave the way for a more balanced research space.
|
2310.13351 | Elasto-plastic residual stress analysis of selective laser sintered
porous materials based on 3D-multilayer thermo-structural phase-field
simulations | Residual stress and plastic strain in additive manufactured materials can
exhibit significant microscopic variation at the powder scale, profoundly
influencing the overall properties of printed components. This variation
depends on processing parameters and stems from multiple factors, including
differences in powder bed morphology, non-uniform thermo-structural profiles,
and inter-layer fusion. In this research, we propose a powder-resolved
multilayer multiphysics simulation scheme tailored for porous materials through
the process of selective laser sintering. This approach seamlessly integrates
finite element method (FEM)-based non-isothermal phase-field simulation with
thermo-elasto-plastic simulation, incorporating temperature- and
phase-dependent material properties. The outcome of this investigation includes
a detailed depiction of the mesoscopic evolution of stress and plastic strain
within a transient thermo-microstructure, evaluated across a spectrum of beam
power and scan speed parameters. Simulation results further reveal the
underlying mechanisms. For instance, stress concentration primarily occurs at
the necking region of partially melted particles and the junctions between
different layers, resulting in the accumulation of plastic strain and residual
stress, ultimately leading to structural distortion in the materials. Based on
the simulation data, phenomenological relation regarding porosity/densification
control by the beam energy input was examined along with the comparison to
experimental results. Regression models were also proposed to describe the
dependency of the residual stress and the plastic strain on the beam energy
input. | Yangyiwei Yang, Somnath Bharech, Nick Finger, Xiandong Zhou, Joerg Schroeder, Bai-Xiang Xu | 2023-10-20T08:34:56Z | http://arxiv.org/abs/2310.13351v2 | Elasto-plastic residual stress analysis of selective sintered porous materials based on 3D-multilayer thermo-structural phase-field simulations
###### Abstract
Residual stress and plastic strain in additively manufactured materials can exhibit significant microscopic variation at the powder scale, profoundly influencing the overall properties of printed components. This variation depends on processing parameters and stems from multiple factors, including differences in powder bed morphology, non-uniform thermo-structural profiles, and inter-layer fusion. In this research, we propose a powder-resolved multilayer multiphysics simulation scheme tailored for porous materials produced through the process of selective sintering. This approach seamlessly integrates finite element method (FEM)-based non-isothermal phase-field simulation with thermo-elasto-plastic simulation, incorporating temperature- and phase-dependent material properties. The outcome of this investigation includes a detailed depiction of the mesoscopic evolution of stress and plastic strain within a transient thermo-microstructure, evaluated across a spectrum of beam power and scan speed parameters. Simulation results further reveal the underlying mechanisms. For instance, stress concentration primarily occurs at the necking regions of partially melted particles and the junctions between different layers, resulting in the accumulation of plastic strain and residual stress, ultimately leading to structural distortion in the materials. Based on the simulation data, regression models were proposed to describe the dependency of the residual stress and the plastic strain on densification or beam energy input, along with a comparison to experimental results.
keywords: additive manufacturing, powder bed fusion, heat transfer, residual stress, microstructure
## 1 Introduction
In the past decade, additive manufacturing (AM) has emerged from a niche technology into a widely applicable means of production. Among a variety of AM techniques, powder bed fusion (PBF) stands out as a strong technique for producing intricate geometries while maintaining good structural integrity and superior material properties [1, 2, 3, 4, 5]. In PBF, layers of powder get fused by a beam one after another to build up three-dimensional (3D) geometries. Compared to conventional ones, this manufacturing process offers several advantages, such as great design flexibility, reduction in waste, enabling rapid tooling, and decreasing production cycles.
Selective sintering (SS) is one prominent example of PBF, which utilizes a tuned incident beam (mostly a continuous laser scan or laser pulses) to bind the free powders layer by layer. Due to its well-controlled powder bed temperature compared to other PBF techniques like selective melting, where a significant melting phenomenon occurs, SS enables only the partial melting of particles and produces samples with relatively high porosity [6, 7, 8, 9, 10]. In this regard, SS has great potential in applications that require complex geometries with high porosity. For instance, SS is applicable in manufacturing porous components for medical applications, especially medical scaffolds and artificial bones [11, 12, 13]. It is feasible for manufacturing functional materials with a low processing temperature, such as ferromagnetic materials [14, 15, 16, 17]. By precisely controlling the geometry and structure of the
printed material, it may also be possible to create the desired strain gradients and electric polarization necessary to generate the flexoelectric effect [18, 19, 20].
Residual stress has been a critical issue since it can affect the mechanical properties, dimensional accuracy, corrosion resistance, crack growth resistance, and performance of AM parts [21]. Several studies have already investigated the residual stress distribution across parts developed using AM techniques, using experimental measurements and theoretical estimations [22, 23, 24, 25]. Mercelis et al. explained the origins of residual stress in parts produced by AM. They experimentally measured the residual stress distribution in a selective laser sintering produced part and compared it with the analytical and numerical solutions [22]. Pant et al. employed a layer-by-layer finite element approach to predict the residual stress and validated the model using measurements from neutron diffraction [24]. Ibraheem et al. predicted the thermal profile and residual stress of SLS-processed H13 tool steel using a finite element model. High porosity was observed in the range of 25-40%, yet a homogenized powder bed was utilized to simulate the thermal evolution and subsequently the residual stress [26].
The calculation of residual stress in the literature is often performed in two sequential steps: first, the transient temperature is simulated in the entire domain, and then the thermal history is used to calculate the stress and strain evolution in thermomechanical simulations. The accuracy of these calculations critically relies on the quality of the thermal history. The temperature gradient mechanism (TGM) model and the cool-down stage (CDS) model are two commonly used models to explain the development of plastic strain and the residual stress formation mechanisms [22, 27, 28]. In the TGM model, the deformation of the material in the molten pool/fusion zone is restrained by the surrounding material during both the heating and cooling stages due to a large temperature gradient inside and outside the overheated region (where the on-site temperature is beyond the melting point). In the heating stage, plastic strain develops in the surrounding material due to the expansion of the heated material. As the heated material cools down, it starts to shrink but is again contracted by the plastically deformed surrounding material. Thus, a tensile residual stress is formed. The CDS model, on the other hand, was proposed to elucidate the residual stress due to the layerwise features of the AM process and of AM-manufactured parts. This occurs because, in both the heating and cooling stages, the deformation of the upper-layers is restrained by the fused lower-layers and/or substrate. Meanwhile, melted material in the lower-layers will undergo a remelting and re-solidification cycle. Both factors result in tensile residual stress in the top layer due to the shrinkage of the material during the cooling stage. These two models provide a phenomenological picture, employing idealized homogeneous layers, for understanding the formation mechanism of residual stresses. They have been widely adopted in numerical analyses of residual stress in single-layer-scale models [29, 30, 31, 32] and part-scale multilayer models [33, 34, 35].
In the aforementioned works, homogeneous layers are used for analyses of the thermal history and the mechanical response in an AM-manufactured part. However, due to the complex morphology of the powder bed on the powder scale and the resultant inhomogeneity of the local thermal properties, the thermal profile is subject to a high level of non-uniformity as well. This implies heterogeneity of the thermal stress, the plastic deformation, and the residual stress on the powder scale. In fact, mechanical properties of AM-built parts, such as fracture strength and hardening behavior, are significantly influenced by the local defects and local stress. The diversity in the powder bed structure and the packing density introduces thermal heterogeneity in the form of a mesoscopic high-gradient temperature profile and an asynchronous on-site thermal history. Such variability leads to varying degrees of thermal expansion and, therefore, thermal stresses. Additionally, the presence of stochastic inter-particle voids and lack-of-fusion pores contributes to evolving disparities in thermo-mechanical properties [36, 37, 7]. The thermal gradient across the powder particles is very sharp according to the modeling results of Panwisawas et al. [38]; this has been experimentally observed in [39] and also recaptured in our former numerical works [36, 7, 37]. Furthermore, the layerwise build-up process also results in local interaction between the newly deposited and previously deposited layers with high surface roughness. Thus, the residual stress formation on the powder scale should be revisited for the SS process.
For this purpose, the powder bed morphology, the heat transfer and the pore structure evolution during the SS process should be primarily considered. One promising approach is the phase-field simulation. For instance, phase-field simulations of the AM process can reveal the in-process microstructure evolution, providing insights into key features such as porosity, surface morphology, temperature profile, and geometry evolution [40, 41]. Based
on non-isothermal phase-field multilayer simulation results of selectively sintered porous structures, thermo-elastic simulations and homogenization of the elastic properties have been investigated in our previous work [36]. In combination with single-layer phase-field thermo-microstructural simulations, the first powder-resolved elasto-plastic simulations of the residual stress in an SS-processed magnetic alloy have also been performed [37]. Results show that during the cooling stage, the partially melted and inter-connected particles result in plastic deformation due to the shrinkage of the fusion zone, but mostly at the necking region because of stress concentration in porous microstructures. By combining the thermo-structural phase-field methods and the thermomechanical calculations, this work demonstrates the promising capability of the multi-physics simulation scheme for the local residual stress analysis of AM-processed materials on the powder scale. In the current article, we extend this multi-physics simulation scheme to the elasto-plastic multilayer SS process, providing systematic simulations of the local plastic deformation and the residual stress. These allow us to reveal the influential aspects related to the printing process and the powder bed.
## 2 Theoretical framework
The simulation workflow is arranged in three stages, as shown in Fig. 1a. The powders were first deposited into a prime simulation domain under a given gravitational force based on the discrete element method (DEM); then non-isothermal phase-field simulations were performed for the coupled thermo-structural evolution during the SS process. Upon completion of a layer, the resulting microstructure is voxelized and re-imported into the DEM program for the deposition of the next powder layer until reaching the desired layer number or height. Meanwhile, the subsequent thermo-elasto-plastic calculations were performed to investigate the development of plastic strain and stress under the quasi-static thermo-structures by mapping the nodal values of the order parameters (OPs, indicating the chronological-spatial distribution of the phases) and temperature to a subdomain. It is worth noting that the thermo-elasto-plastic calculation of the next layer was restarted from the former one with a continuous history of the residual stress and accumulated plastic strain, though the OPs and temperature were reiterated in the prime domain. As is usually done in the literature, we assume thereby that heat transfer is only strongly coupled with microstructure evolution (driven by diffusion and underlying grain growth) but is not impacted by mechanics.
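A minimal sketch of this three-stage loop is given below. The functions are trivial stand-ins, not the actual DEM or NIsoS/MOOSE interfaces; they only mirror the data flow described above (deposition, scan/cooling, and a mechanics step restarted with a continuous history).

```python
# Minimal runnable sketch of the three-stage multilayer workflow (Fig. 1a).
# All functions are placeholders, NOT the DEM or NIsoS/MOOSE API.

def dem_deposit(structure):               # stage 1: powder deposition
    return structure + ["powder layer"]

def phase_field_scan(powder_bed, P, v):   # stage 2: thermo-structural evolution
    return {"T": 680.0, "rho": 1.0, "P": P, "v": v}

def elasto_plastic_step(state, fields):   # stage 3: restarted mechanics
    state["layers"] += 1                  # stress/plastic-strain history kept
    return state

structure, mech_state = ["substrate"], {"layers": 0}
for layer in range(4):                    # four layers, as in Sec. 3.1
    powder_bed = dem_deposit(structure)
    fields = phase_field_scan(powder_bed, P=20.0, v=100.0)
    mech_state = elasto_plastic_step(mech_state, fields)
    structure = powder_bed                # voxelized result feeds the next layer
print(mech_state)
```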
### Non-isothermal phase-field model for multilayer SS processes
The non-isothermal phase-field model employed in this work to simulate the multilayer SS process is based on our former works [7, 36, 37, 42]. For the sake of completeness, in this section we summarize the essentials of the employed model.
The model adopts both conserved and non-conserved OPs for representing the microstructure evolution of a polycrystalline material. The conserved OP \(\rho\) indicates the substance, including the unfused and partially melted regions, while the non-conserved OPs \(\{\eta_{i}\},\ i=1,2,\dots\) distinguish particles with different crystallographic orientations. To guarantee that the orientation fields (\(\{\eta_{i}\}\)) take values only in the substance (\(\rho=1\)), a numerical constraint \(\sum_{i}\eta_{i}+(1-\rho)=1\) is imposed on the simulation domain. This constraint also applies to the substrate, where \(\rho=1\) always holds.
The thermo-structural evolution is governed by
\[\frac{\partial\rho}{\partial t}=\nabla\cdot\mathbf{M}\cdot\nabla \left[\frac{\partial f}{\partial\rho}-\underline{\kappa}_{\rho}(T)\nabla^{2} \rho\right], \tag{1}\] \[\frac{\partial\eta_{i}}{\partial t}=-L\left[\frac{\partial f}{ \partial\eta_{i}}-\underline{\kappa}_{\eta}(T)\nabla^{2}\eta_{i}\right],\] (2) \[c_{\mathrm{r}}\left[\frac{\partial T}{\partial t}-\mathbf{v} \cdot\nabla T\right]=\nabla\cdot\mathbf{K}\cdot\nabla T+q_{\mathbf{v}}, \tag{3}\]
where the local free energy density is formulated as
\[\begin{split} f(T,\rho,\{\eta_{i}\})&=h_{\text{ss}}( \rho)f_{\text{ht}}(T)+\underline{W}_{\text{sf}}(T)\left[\rho^{2}(1-\rho)^{2} \right]+\\ &\underline{W}_{\text{gb}}(T)\left[\rho^{2}+6(1-\rho)\sum_{i} \eta_{i}^{2}-4(2-\rho)\sum_{i}\eta_{i}^{3}+3\left(\sum_{i}\eta_{i}^{2}\right)^ {2}\right],\\ f_{\text{ht}}(T)&=c_{\text{r}}\left[(T-T_{\text{M} })-T\ln\frac{T}{T_{\text{M}}}\right]+f_{\text{ref}}^{T_{\text{M}}}-\frac{T-T_{ \text{M}}}{T_{\text{M}}}\mathcal{L}_{\text{M}}h_{\text{M}}(T).\end{split} \tag{4}\]
Here \(f(T,\rho,\{\eta_{i}\})\) contains multiple local minima, representing the thermodynamic stability of the states, i.e., substance, atmosphere/pore and grains with distinct orientations, with the coefficients \(\underline{W}_{\text{sf}}(T)\) and \(\underline{W}_{\text{gb}}(T)\) related to the barrier heights among the minima, varying with temperature [7, 37]. The heat contribution \(f_{\text{ht}}\) thereby manifests the stability of states by shifting the minima according to the local temperature field. When the material reaches the melting point \(T_{\text{M}}\), the extra contribution due to the latent heat \(\mathcal{L}_{\text{M}}\) is mapped by the interpolation function \(h_{\text{M}}\), which approaches unity when \(T\to T_{\text{M}}\) [7, 37]. It is worth noting that \(\underline{W}_{\text{sf}}(T)\), \(\underline{\kappa}_{\rho}(T)\), \(\underline{W}_{\text{gb}}(T)\), and \(\underline{\kappa}_{\eta}(T)\) inherit their temperature dependencies from the surface energy \(\gamma_{\text{sf}}(T)\) and grain boundary energy \(\gamma_{\text{gb}}(T)\), respectively (see Sec. 3.1). At equilibrium, they can be related to \(\gamma_{\text{sf}}(T)\), \(\gamma_{\text{gb}}(T)\), and a diffuse-interface width of the grain boundary \(l_{\text{gb}}\), as explained in our former works [7, 37].
The Cahn-Hilliard mobility employed in Eq. (1) adopts the anisotropic form. Contributions considered in this work contain not only the mass transfer through the substance, atmosphere, surface and grain boundary, but also the diffusion enhancement due to possible partial melting [37, 42, 43], i.e.
\[\mathbf{M}=\frac{1}{2(\underline{W}_{\text{sf}}+\underline{W}_{\text{gb}})} \left[h_{\text{ss}}D_{\text{ss}}\mathbf{I}+h_{\text{at}}D_{\text{at}} \mathbf{I}+h_{\text{sf}}D_{\text{sf}}\mathbf{T}_{\text{sf}}+h_{\text{gb}}D_{ \text{gb}}\mathbf{T}_{\text{gb}}\right]+h_{\text{M}}M_{\text{M}}\mathbf{T}_{ \text{sf}}, \tag{5}\]
where \(D_{\text{path}}\) is the effective diffusivity of the mass transport via path \(=\text{ss}\), at, sf, gb, and \(h_{\text{ss}}\), \(h_{\text{at}}\), \(h_{\text{sf}}\) and \(h_{\text{gb}}\) are also interpolation functions indicating the substance (including solid and liquid), atmosphere/pore, surface and grain boundary, respectively, which reach unity only in the corresponding region [37, 42, 43]. Mass transport along the surface and grain boundary is also regulated by the corresponding projection tensors \(\mathbf{T}_{\text{gb}}\) and \(\mathbf{T}_{\text{sf}}\). It is worth noting that the partial melting contribution \(M_{\text{M}}\) is treated as enhanced surface diffusion when \(T\to T_{\text{M}}\) due to the assumption of a limited melting phenomenon around the surface of the particles. In this sense, the formulations in Eqs. (1) and (5) disregard the contributions from melt flow dynamics as well as from inter-coupling effects between mass and heat transfer, i.e., the Soret and Dufour effects [42].
\[L=\frac{G_{\text{gb}}\Gamma_{\text{gb}}}{\underline{\kappa}_{\eta}}. \tag{6}\]
The phase-dependent thermal conductivity tensor takes the continuity of the thermal flux along both the normal and tangential directions of the surface into account, and is formulated as [46, 47]
\[\mathbf{K}=\left[h_{\text{ss}}K_{\text{ss}}+h_{\text{at}}K_{\text{at}}\right] \mathbf{N}_{\text{sf}}+\left[\frac{K_{\text{ss}}K_{\text{at}}}{h_{\text{ss}}K_ {\text{at}}+h_{\text{at}}K_{\text{ss}}}\right]\mathbf{T}_{\text{sf}}, \tag{7}\]
where \(K_{\text{ss}}\) and \(K_{\text{at}}\) are the thermal conductivities of the substance and the pore/atmosphere, respectively. \(\mathbf{N}_{\text{sf}}\) is the normal tensor of the surface [37, 43, 46]. It is worth noting that the radiation contribution via the pore/atmosphere is considered in \(K_{\text{at}}\) as \(K_{\text{at}}=K_{0}+4T^{3}\sigma_{\text{B}}\ell_{\text{rad}}/3\), with \(K_{0}\) the thermal conductivity of the argon gas and \(\sigma_{\text{B}}\) the Stefan-Boltzmann constant. \(\ell_{\text{rad}}\) is the effective radiation path between particles, which usually takes the average diameter of the powders [48].
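A pointwise evaluation of Eq. (7) might look as follows. This is a sketch only: it assumes \(\mathbf{N}_{\text{sf}}=\mathbf{\hat{n}}\otimes\mathbf{\hat{n}}\) and \(\mathbf{T}_{\text{sf}}=\mathbf{I}-\mathbf{\hat{n}}\otimes\mathbf{\hat{n}}\) with \(\mathbf{\hat{n}}\) the unit normal obtained from \(\nabla\rho\), and simply takes \(h_{\text{ss}}=\rho\), \(h_{\text{at}}=1-\rho\) for illustration.

```python
import numpy as np

def conductivity_tensor(grad_rho, rho, K_ss, K_at):
    """Phase-dependent conductivity, Eq. (7): one mixture rule along the
    normal projection and another along the tangential projection."""
    n = grad_rho / (np.linalg.norm(grad_rho) + 1e-12)  # unit surface normal
    N_sf = np.outer(n, n)                 # normal projection tensor
    T_sf = np.eye(3) - N_sf               # tangential projection tensor
    h_ss, h_at = rho, 1.0 - rho           # illustrative interpolation choice
    K_normal = h_ss * K_ss + h_at * K_at
    K_tangent = K_ss * K_at / (h_ss * K_at + h_at * K_ss)
    return K_normal * N_sf + K_tangent * T_sf

K = conductivity_tensor(np.array([0.0, 0.0, 1.0]), rho=0.5,
                        K_ss=30.0, K_at=0.06)   # illustrative values, W/(m K)
print(np.round(K, 3))
```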
The beam-incited thermal effect is equivalently treated as a volumetric heat source with its distribution along the depth direction formulated in a radiation penetration fashion, as the powder bed is regarded as an effective homogenized optical medium, i.e.
\[q(\mathbf{r},t)=Pp_{xy}[\mathbf{r}_{\text{O}}(\mathbf{v},t)]\frac{\text{d}a}{ \text{d}z},\]
in which \(p_{xy}\) is the in-plane Gaussian distribution with a moving center \(\mathbf{r}_{\text{O}}(\mathbf{v},t)\). \(P\) is the beam power and \(\mathbf{v}\) is the scan velocity, whose magnitude \(v=|\mathbf{v}|\) is the scan speed; these are the two major processing parameters in this work. The absorptivity profile function along the depth, \(\mathrm{d}a/\mathrm{d}z\), is calculated based on Refs. [7, 49]. It is obvious that when one ignores the effects of the latent heat induced by microstructure evolution (i.e., the evolution of the pore/substance as well as of unique grains), Eq. (3) degenerates to the conventional Fourier equation for heat conduction with an internal heat source.
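A minimal sketch of such a volumetric source is shown below, assuming a normalized in-plane Gaussian for \(p_{xy}\) and an exponential decay as a stand-in for the depth profile \(\mathrm{d}a/\mathrm{d}z\); the penetration depth `delta` is an assumption, since the actual absorptivity profile is computed from the optical model of Refs. [7, 49].

```python
import numpy as np

def heat_source(x, y, z, t, P=20.0, v=100e-3, D_L=200e-6, delta=50e-6):
    """Volumetric beam source q(r, t) = P * p_xy * da/dz (SI units).
    p_xy: normalized 2D Gaussian; D_L is the full width at 1/e^2.
    da/dz: exponential stand-in with an assumed penetration depth delta."""
    w = D_L / 2.0                         # 1/e^2 radius of the beam spot
    x0 = v * t                            # moving spot center along SD
    r2 = (x - x0) ** 2 + y ** 2
    p_xy = 2.0 / (np.pi * w ** 2) * np.exp(-2.0 * r2 / w ** 2)
    da_dz = np.exp(-z / delta) / delta    # normalized depth profile
    return P * p_xy * da_dz               # W / m^3

print(heat_source(0.0, 0.0, 0.0, t=0.0))  # peak value at the spot center
```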
### Elasto-plastic model for thermo-mechanical analysis
The transient temperature field and substance field \(\rho\), resulting from the non-isothermal phase-field simulations, are then imported into the quasi-static elasto-plastic model as the thermal load as well as the phase indicator for the interpolation of mechanical properties. The linear momentum equation reads
\[\nabla\cdot\mathbf{\sigma}=\mathbf{0}, \tag{8}\]
where \(\mathbf{\sigma}\) is the \(2^{\mathrm{nd}}\)-order stress tensor. We take the Voigt-Taylor interpolation scheme (VTS), where the total strain is assumed to be identical in the pores/atmosphere and the substance, i.e., \(\varepsilon=\varepsilon_{\mathrm{ss}}=\varepsilon_{\mathrm{at}}\) [50, 51, 52]. In this regard, the stress can eventually be formulated by the linear constitutive equation
\[\mathbf{\sigma}=\mathbf{C}(\rho,T):\left(\varepsilon-\varepsilon^{\mathrm{th}}- \varepsilon^{\mathrm{pl}}\right) \tag{9}\]
where the \(4^{\mathrm{th}}\)-order elastic tensor is interpolated from the substance one \(\mathbf{C}_{\mathrm{ss}}\) and the pores/atmosphere one \(\mathbf{C}_{\mathrm{at}}\), i.e.,
\[\mathbf{C}(\rho,T)=h_{\mathrm{ss}}(\rho)\mathbf{C}_{\mathrm{ss}}(T)+h_{\mathrm{at}}( \rho)\mathbf{C}_{\mathrm{at}}. \tag{10}\]
In this work, \(\mathbf{C}_{\mathrm{ss}}\) is calculated from the temperature-dependent Young's modulus \(E(T)\) and Poisson ratio \(\nu(T)\), while \(\mathbf{C}_{\mathrm{at}}\) is assigned a sufficiently small value.
The thermal eigenstrain \(\varepsilon^{\mathrm{th}}\) is calculated using the interpolated coefficient of thermal expansion, i.e.,
\[\varepsilon^{\mathrm{th}} =\alpha(\rho,T)[T-T_{0}]\mathbf{I}, \tag{11}\] \[\alpha(\rho,T) =h_{\mathrm{ss}}(\rho)\alpha_{\mathrm{ss}}(T)+h_{\mathrm{at}}( \rho)\alpha_{\mathrm{at}}.\]
Here \(\mathbf{I}\) is the \(2^{\mathrm{nd}}\)-order identity tensor. Meanwhile, for the plastic strain \(\varepsilon^{\mathrm{pl}}\), an isotropic hardening model with the von Mises yield criterion is employed. The yield condition is determined as
\[f(\mathbf{\sigma},p_{\mathrm{e}})=\sigma_{\mathrm{e}}-[\sigma_{\mathrm{y}}(T)+ \mathsf{H}(T)p_{\mathrm{e}}]\leq 0 \tag{12}\]
with
\[\sigma_{\mathrm{e}}=\sqrt{\frac{3}{2}\mathbf{s}:\mathbf{s}},\quad p_{\mathrm{e}}=\int\sqrt{\frac{2}{3}\;\mathrm{d}\varepsilon^{\mathrm{pl}}:\mathrm{d}\varepsilon^{\mathrm{pl}}},\]
where \(\sigma_{\mathrm{e}}\) is the von Mises stress, \(\mathbf{s}\) is the deviatoric stress, and \(\sigma_{\mathrm{y}}\) is the isotropic yield stress when no plastic strain is present. The isotropic plastic modulus \(\mathsf{H}\) can be calculated from the isotropic tangent modulus \(E_{\mathrm{t}}\) and the Young's modulus \(E\) as \(\mathsf{H}=E_{\mathrm{t}}/(E-E_{\mathrm{t}})\). \(p_{\mathrm{e}}\) is the effective (accumulated) plastic strain, which is integrated implicitly from the plastic strain increment \(\Delta\varepsilon^{\mathrm{pl}}\) employing the radial return method, as will be elaborated in Sec. 3.2.
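The constitutive pieces of Eqs. (9)-(12) can be evaluated pointwise as in the sketch below, assuming isotropic elasticity and, for illustration only, \(h_{\text{ss}}=\rho\), \(h_{\text{at}}=1-\rho\); the material numbers are placeholders, not the Table 2 data.

```python
import numpy as np

def stress_and_yield(eps, eps_pl, rho, T, E, nu, alpha_ss, sigma_y, H, p_e,
                     T0=680.0, E_at=1e-6):
    """sigma = C(rho,T):(eps - eps_th - eps_pl) and the yield function f,
    Eqs. (9)-(12), with isotropic elasticity and a simple VTS interpolation."""
    E_eff = rho * E + (1.0 - rho) * E_at             # Eq. (10)
    lam = E_eff * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E_eff / (2 * (1 + nu))
    eps_th = rho * alpha_ss * (T - T0) * np.eye(3)   # Eq. (11)
    eps_el = eps - eps_th - eps_pl
    sigma = lam * np.trace(eps_el) * np.eye(3) + 2 * mu * eps_el  # Eq. (9)
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)    # deviatoric stress
    sigma_e = np.sqrt(1.5 * np.tensordot(s, s))      # von Mises stress
    f = sigma_e - (sigma_y + H * p_e)                # yield function, Eq. (12)
    return sigma, f

eps = np.array([[0.002, 0.002, 0.0], [0.002, 0.002, 0.0], [0.0, 0.0, 0.002]])
sigma, f = stress_and_yield(eps, np.zeros((3, 3)), rho=1.0, T=900.0, E=150e9,
                            nu=0.3, alpha_ss=1.8e-5, sigma_y=200e6, H=2e9, p_e=0.0)
print(f > 0)   # True -> plastic flow, handled by the radial return of Sec. 3.2
```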
## 3 Simulation setup
### Simulation domain and model parameters
In this work, SS processes with a constant beam spot diameter \(D_{\mathrm{L}}\), a beam power \(P\) and a scan speed \(v\) were simulated. The processing window \(P\in[15,\ 30]\) W and \(v\in[75,\ 150]\) mm s\({}^{-1}\) was set in order to compare with previous studies [7, 36]. \(D_{\mathrm{L}}=D_{\mathrm{FWE2}}=200\) \(\upmu\)m (i.e., the full width at \(1/e^{2}\)) was adopted as the nominal diameter of
the beam spot, within which around \(86.5\%\) of the power is concentrated. The full width at half maximum intensity (FWHM) then equals \(0.588D_{\text{FWE2}}=117.6\) \(\upmu\)m, characterizing \(50\%\) power concentration within the spot. The powder bed is preheated with a preheating temperature \(T_{0}=0.4T_{\text{M}}=680\) K, which was embodied by the temperature initial condition (IC) and boundary condition (BC). Both the powder and the substrate are SS316L. The powder size follows a Gaussian distribution with a mean diameter of \(35\) \(\upmu\)m, a standard deviation of \(10\) \(\upmu\)m, and a cut-off bandwidth of \(15\) \(\upmu\)m, which are consistent with our former works [7, 36]. In total, four layers of SS processes were performed for each pair of processing parameters (i.e., \(P\) and \(v\), hereinafter the \(P\)-\(v\) pair). After each layer's beam scan (i.e., once the beam leaves the prime domain), the simulation continues, and the domain undergoes a natural cooling stage lasting twice as long as the scanning period.
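The stated powder size distribution can be reproduced, e.g., by rejection sampling (a sketch; the actual beds were generated with a DEM code, and the cut-off bandwidth is interpreted here as \(\pm 15\) \(\upmu\)m about the mean, which is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_powder_diameters(n, mean=35.0, std=10.0, cutoff=15.0):
    """Gaussian powder diameters (um): mean 35, std 10, accepted only
    within [mean - cutoff, mean + cutoff] (assumed reading of the cut-off)."""
    d = np.empty(0)
    while d.size < n:
        cand = rng.normal(mean, std, size=2 * n)
        cand = cand[np.abs(cand - mean) <= cutoff]
        d = np.concatenate([d, cand])
    return d[:n]

d = sample_powder_diameters(10_000)
print(d.mean(), d.min(), d.max())   # mean ~35 um, support within [20, 50] um
```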
For the non-isothermal phase-field simulations, the powders are firstly deposited into a \(250\times 500\times 400\)\(\upmu\)m\({}^{3}\) prime simulation domain under a given gravitational force based on the discrete element method (DEM). The prime domain contains the free space of \(150\)\(\upmu\)m height for positioning powders and a substrate of \(250\)\(\upmu\)m thickness. Together with the governing equations shown in Eqs. (1)-(3), the following BCs are also employed for the process simulations
\[\nabla\rho|_{\Gamma}\cdot\mathbf{\hat{n}}=0, \tag{13}\] \[\mathbf{K}\cdot\nabla T|_{\Gamma_{\text{S}}\cup\Gamma_{\text{T}} }\cdot\mathbf{\hat{n}}=h_{\text{at}}\left[\underline{h}\left(T|_{\Gamma_{ \text{S}}\cup\Gamma_{\text{T}}}-T_{0}\right)+\varepsilon\sigma_{\text{B}} \left(T|_{\Gamma_{\text{S}}\cup\Gamma_{\text{T}}}^{4}-T_{0}^{4}\right)\right],\] (14) \[T|_{\Gamma_{\text{B}}}=T_{0} \tag{15}\]
with the convective heat transfer coefficient \(\underline{h}\), the Stefan-Boltzmann constant \(\sigma_{\text{B}}\), the hemispherical emissivity \(\varepsilon\), and the pre-heating (environmental) temperature \(T_{0}\). \(\Gamma_{\text{T}}\) and \(\Gamma_{\text{B}}\) are correspondingly the top and bottom boundaries of the simulation domain, and \(\Gamma_{\text{S}}\) is the set of all surrounding boundaries. \(\Gamma_{\text{B}}\cup\Gamma_{\text{S}}\cup\Gamma_{\text{T}}=\Gamma\), as shown schematically in the inset of Fig. 1b. \(\mathbf{\hat{n}}\) is the normal vector of the boundary. Eq. (13) physically represents the closed condition for the mass transfer, permitting no net mass exchange with the environment. Eq. (14) describes the heat convection and radiation, which are only allowed via the pore/atmosphere at the boundary (masked by \(h_{\text{at}}\)). Eq. (15) emulates a semi-infinite heat reservoir under the bottom of the substrate with constant temperature \(T_{0}\), consistently draining heat from the simulation domain and reducing its temperature back to \(T_{0}\).
As summarized in Sec. 2.1, this simulation requires the following parameters/properties: the thermodynamic parameters \(\underline{W}_{\text{sf}}\), \(\underline{W}_{\text{gb}}\), \(\underline{\kappa}_{\rho}\), and \(\underline{\kappa}_{\eta}\); the kinetic properties (diffusivities and GB mobility) \(D_{\text{path}}\) (path = ss, at, sf, gb) and \(G_{\text{gb}}\); and the thermal properties \(K_{\text{ss}}\) and \(K_{\text{at}}\). Among them, \(\underline{W}_{\text{sf}}\), \(\underline{W}_{\text{gb}}\), \(\underline{\kappa}_{\rho}\), and \(\underline{\kappa}_{\eta}\) are parameterized by the temperature-dependent surface and GB energies, and a nominal diffuse-interface width \(l_{\text{gb}}\), as
\[\gamma_{\text{sf}}(T)=\frac{\sqrt{2}}{6}\tau_{\text{sf}}(T)\sqrt {(W_{\text{sf}}+7W_{\text{gb}})(\kappa_{\rho}+\kappa_{\eta})},\] \[\gamma_{\text{gb}}(T)=\frac{2\sqrt{3}}{3}\tau_{\text{gb}}(T)\sqrt {W_{\text{gb}}\kappa_{\eta}}, \tag{16}\] \[l_{\text{gb}}\approx\frac{2\sqrt{3}}{3}\sqrt{\frac{\kappa_{\eta} }{W_{\text{gb}}}}\]
with normalized tendencies \(\tau_{\text{sf}}(T)\) and \(\tau_{\text{gb}}(T)\) that reach unity at \(T_{\text{M}}\). \(\underline{W}_{\text{sf}}=W_{\text{sf}}\tau_{\text{sf}}(T)\), \(\underline{W}_{\text{gb}}=W_{\text{gb}}\tau_{\text{gb}}(T)\), \(\underline{\kappa}_{\rho}=\kappa_{\rho}\tau_{\text{sf}}(T)\), and \(\underline{\kappa}_{\eta}=\kappa_{\eta}\tau_{\text{gb}}(T)\). Constants \(W_{\text{sf}}\), \(W_{\text{gb}}\), \(\kappa_{\rho}\), \(\kappa_{\eta}\) also satisfy a constraint \((W_{\text{sf}}+W_{\text{gb}})/\kappa_{\rho}=6W_{\text{gb}}/\kappa_{\eta}\) derived from the coherent diffuse-interface profile at equilibrium (Supplementary Note 1 of Ref. [7]). These parameter/properties are collectively shown in Table 1.
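For the GB-related pair, the relations in Eq. (16) can be inverted in closed form, and the remaining constants follow from the constraint and the \(\gamma_{\text{sf}}\) relation via a scalar root find. The sketch below uses illustrative inputs, not the Table 1 data.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative inputs (NOT Table 1): energies in J/m^2, width in m.
gamma_gb, gamma_sf, l_gb = 0.8, 1.6, 2e-6

# Closed-form inversion of the gamma_gb and l_gb relations in Eq. (16):
W_gb = gamma_gb / l_gb
kappa_eta = 0.75 * gamma_gb * l_gb

def residual(W_sf):
    # Constraint (W_sf + W_gb)/kappa_rho = 6 W_gb / kappa_eta fixes kappa_rho.
    kappa_rho = (W_sf + W_gb) * kappa_eta / (6.0 * W_gb)
    lhs = np.sqrt(2.0) / 6.0 * np.sqrt((W_sf + 7.0 * W_gb) * (kappa_rho + kappa_eta))
    return lhs - gamma_sf

W_sf = brentq(residual, 1e-3 * W_gb, 1e3 * W_gb)   # solve the gamma_sf relation
kappa_rho = (W_sf + W_gb) * kappa_eta / (6.0 * W_gb)
print(W_sf, W_gb, kappa_rho, kappa_eta)
```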
For the thermo-elasto-plastic calculations, a \(250\times 250\times 250\)\(\upmu\)m\({}^{3}\) subdomain was selected from the center of the prime domain to eliminate the boundary effects (Fig. 1a). The above momentum balance is subjected to the following rigid support BC as
\[\mathbf{u}|_{\Gamma_{\text{S}}\cup\Gamma_{\text{B}}}\cdot\mathbf{\hat{n}}=0, \tag{17}\]
restricting the displacement \(\mathbf{u}\) along the normal direction of all boundaries except the top (\(\Gamma_{\text{T}}\)), which is traction-free. As summarized in Sec. 2.2, the temperature-dependent \(E\), \(\nu\), \(\alpha\), \(E_{\text{t}}\), and \(\sigma_{\text{y}}\) are listed in Table 2, where piecewise linear interpolation was employed to implement their temperature dependence.
### Numerical implementation and parallel computing
The theoretical models are numerically implemented via the finite element method within the program NIsoS, developed by the authors based on the MOOSE framework (Idaho National Laboratory, ID, USA) [53]. 8-node hexahedral Lagrangian elements are chosen to mesh the geometry. A transient solver with the preconditioned Jacobian-free Newton-Krylov (PJFNK) method and the backward Euler algorithm was employed for both problems.
For the non-isothermal phase-field simulations, the Cahn-Hilliard equation in Eq. (1) was solved in a split way. The constraint on the order parameters was fulfilled using the penalty method. To reduce computation costs, h-adaptive meshing and time-stepping schemes are used. The additive Schwarz method (ASM) preconditioner with an incomplete LU-decomposition sub-preconditioner was also employed for parallel computation of the vast linear system, seeking a balance between memory consumption per core and computation speed [54]. Due to the usage of adaptive meshes, the computational costs vary from case to case. The peak DOF number is on the order of 10,000,000 for both the nonlinear system and the auxiliary system. The peak computational consumption is on the order of 10,000 core-hours. More details about the FEM implementation are given in Supplementary Note 4 of Ref. [7].
For the thermo-elasto-plastic simulations, a static mesh was utilized to avoid the hanging nodes generated by the h-adaptive meshing scheme. In that sense, the transient fields \(T\) and \(\rho\) of each calculation step were uni-directionally mapped from the non-isothermal phase-field results (with h-adaptive meshes) onto the static meshes, assuming a weak coupling between the thermo-structural and mechanical problems in this work. This is achieved by the MOOSE-embedded SolutionUserObject class and associated functions. The parallel algebraic multigrid preconditioner BoomerAMG was utilized, where the Eisenstat-Walker (EW) method was employed for determining linear system convergence. It is worth noting that, without the EW method, the residual of the non-linear iterations would oscillate for this work. The DOF number of each simulation is on the order of 1,000,000 for the nonlinear system and 10,000,000 for the auxiliary system. The computational consumption is on the order of 1,000 CPU core-hours.
A modified radial return method [55, 56] was employed to integrate the plasticity as well as to determine the yield condition during the process with the temperature-dependent elasto-plastic properties at any time \(t\) with the time increment \(\Delta t\). This method employs the trial stress \(\mathbf{\sigma}^{\star}\), calculated by assuming a purely elastic new strain increment \(\Delta\mathbf{\varepsilon}^{\star}\):
\[\mathbf{\sigma}^{\star}=\mathbf{C}(T_{t}):\left[\mathbf{\varepsilon}_{t}^{\text{el}}+ \Delta\mathbf{\varepsilon}^{\star}\right], \tag{18}\]
where the elasticity tensor is obtained under the current temperature field \(T_{t}\). Once the trial stress state is outside of the yield condition (Eq. (12)), i.e., the plastic flow exists, the stress is then projected onto the closet point of the expanded yield surface with the normal direction determined as \(\mathbf{\hat{n}}_{\text{y}}=3\mathbf{s}^{\star}/2\sigma_{\text{e}}^{\star}\) with the von Mises \(\sigma_{\text{e}}^{\star}\) and deviatoric trial stress \(\mathbf{s}^{\star}\). Meanwhile, assuming isotropic linear hardening under \(T_{t}\) of every timestep, the amount of the effective plastic strain increment \(\Delta p\) for returning the stress state back to the yield surface is calculated in an iterative fashion adopting Newton's method
\[\text{d}\Delta p=\frac{\sigma_{\text{e}}^{\star}-3G(T_{t})\Delta p_{t}-\left[\mathsf{H}(T_{t})p_{t}+\sigma_{\text{y}}(T_{t})\right]}{3G(T_{t})+\mathsf{H}(T_{t})}, \tag{19}\]
\[\Delta p_{(t+\Delta t)}=\Delta p_{t}+\text{d}\Delta p, \tag{20}\]
\[p_{(t+\Delta t)}=p_{t}+\Delta p_{(t+\Delta t)} \tag{21}\]
where \(p_{t}\) and \(\Delta p_{t}\) are updated as \(p_{(t+\Delta t)}\) and \(\Delta p_{(t+\Delta t)}\) at the end of the timestep (\(t+\Delta t\)). \(G(T_{t})\) and \(\mathsf{H}(T_{t})\) are correspondingly the shear modulus and the isotropic plastic modulus calculated under \(T_{t}\). Here \(G(T_{t})=E(T_{t})/[2+2\nu(T_{t})]\), with \(E(T_{t})\) and \(\nu(T_{t})\) correspondingly the temperature-dependent Young's modulus and Poisson ratio. Knowing that the plastic strain increment \(\Delta\varepsilon^{\text{pl}}=\Delta p\,\mathbf{\hat{n}}_{\text{y}}\) follows the normality hypothesis of plasticity [55], the updated stress and plastic strain
at the end of the timestep are thereby obtained by
\[\begin{split}\sigma_{(t+\Delta t)}&=\begin{cases} \sigma^{\star}-2G(T_{t})\Delta p_{(t+\Delta t)}\mathbf{\hat{n}}_{y},&T<T_{ \text{A}},\\ \mathbf{0},&T\geq T_{\text{A}},\end{cases}\\ \epsilon^{\text{pl}}_{(t+\Delta t)}&=\begin{cases} \epsilon^{\text{pl}}_{t}+\Delta p_{(t+\Delta t)}\mathbf{\hat{n}}_{y},&T<T_{ \text{A}},\\ \mathbf{0},&T\geq T_{\text{A}},\end{cases}\end{split} \tag{22}\]
in which the vanishing of the stress as well as the plastic strain beyond a stress-free temperature \(T_{\text{A}}\) (in this work \(T_{\text{A}}=T_{\text{M}}\)) is also considered.
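A compact version of this return mapping is sketched below. It assumes linear isotropic hardening with a constant \(\mathsf{H}\) at the current temperature, for which the Newton iteration of Eqs. (19)-(20) converges in a single step; it is an illustration, not the NIsoS implementation.

```python
import numpy as np

def radial_return(eps_el_trial, p, E, nu, sigma_y, H, T, T_A=1700.0):
    """One timestep of the radial return, Eqs. (18)-(22)."""
    if T >= T_A:                          # stress-free above T_A, Eq. (22)
        return np.zeros((3, 3)), 0.0, p
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    G = E / (2 * (1 + nu))
    sig_tr = lam * np.trace(eps_el_trial) * np.eye(3) + 2 * G * eps_el_trial
    s_tr = sig_tr - np.trace(sig_tr) / 3.0 * np.eye(3)
    sig_e = np.sqrt(1.5 * np.tensordot(s_tr, s_tr))
    f = sig_e - (sigma_y + H * p)         # trial yield function, Eq. (12)
    if f <= 0.0:                          # elastic step, no plastic flow
        return sig_tr, 0.0, p
    dp = f / (3.0 * G + H)                # closed form for linear hardening
    n_y = 1.5 * s_tr / sig_e              # flow direction
    sigma = sig_tr - 2.0 * G * dp * n_y   # projected stress, Eq. (22)
    return sigma, dp, p + dp

sig, dp, p = radial_return(np.diag([0.004, -0.002, -0.002]), p=0.0, E=150e9,
                           nu=0.3, sigma_y=200e6, H=2e9, T=900.0)
print(dp, p)
```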
Eqs. (18)-(22) are sequentially executed at every timestep and update the quantities under \(T_{t}\). It is worth noting that the linearly interpolated \(\mathsf{H}\) is implemented for both the \(p\)- and \(T\)-dependence
\[\mathsf{H}(p_{t},T_{t})=(1-\tau_{t})f(\hat{T}_{i},p_{t})+\tau_{t}f(\hat{T}_{i +1},p_{t}) \tag{23}\]
with
\[\tau_{t}=\frac{T_{t}-\hat{T}_{i}}{\hat{T}_{i+1}-\hat{T}_{i}},\]
where \(f(\hat{T}_{i},p_{t})\) is a piecewise function with the grid points \(\hat{T}_{i}\) (\(i=1,2,...\)), and \(T_{t}\) lies inside the interval bounded by \(\hat{T}_{i}\) and \(\hat{T}_{i+1}\), i.e., \(T_{t}\in(\hat{T}_{i},\hat{T}_{i+1}]\). To reduce the non-linearity, the \(p\)-independent \(\mathsf{H}(T)\) was employed in practice in the simulations, as formulated in Eq. (19).
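In practice, dropping the \(p\)-dependence reduces Eq. (23) to a one-dimensional piecewise-linear lookup, e.g. (grid values are illustrative placeholders, not the Table 2 data):

```python
import numpy as np

# Temperature grid T_i (K) and tabulated plastic moduli H(T_i) (Pa);
# the values below are illustrative placeholders only.
T_grid = np.array([300.0, 700.0, 1100.0, 1500.0, 1700.0])
H_grid = np.array([2.5e9, 2.0e9, 1.2e9, 0.4e9, 0.05e9])

def H_of_T(T):
    """Piecewise-linear H(T), i.e. Eq. (23) with tau_t the local fraction."""
    return np.interp(T, T_grid, H_grid)

print(H_of_T(900.0))   # halfway between 700 K and 1100 K -> 1.6e9 Pa
```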
## 4 Results
### Thermo-structural evolution during multilayer SS
Fig. 2a\({}_{1}\)-a\({}_{4}\) present the transient thermo-structural profiles for a single scan of \(P=20\) W and \(v=100\) mm s\({}^{-1}\), where the beam spot is consistently positioned along the scan direction (SD) across various building layers. In the overheated region (\(T\geq T_{\text{M}}\)), particles can undergo complete or partial melting. This prompts molten material to flow from convex to concave points, making the fusion of particles possible and, therefore, forging the fusion zone. In regions where the temperature remains below the melting point (\(T<T_{\text{M}}\)), however, the temperature is still high enough to cause diffusion, as the diffusivity grows exponentially with temperature. This can be evidenced by the necking phenomenon among neighboring particles. Temperature profiles demonstrate a strong dependence on the local morphology. Notably, concentrated temperature isolines can be observed around the neck regions among particles, indicating an increased temperature gradient (up to around 50-100 K \(\upmu\)m\({}^{-1}\), compared to 1 K \(\upmu\)m\({}^{-1}\) in the densified region). It is worth noting that such thermal inhomogeneity induced by the stochastic transient morphology can hardly be resolved by simulation works employing a homogenized powder bed [25, 28, 57].
The heat dissipation in the powder bed also changes considerably as the upper-layers are continuously built and processed. This can be noticed by the varying shape and significance of the overheated region across layers. As a comparison, the overheated region at the 4th layer (Fig. 2a\({}_{1}\)) is greater than at the initial layer (Fig. 2a\({}_{4}\)). This can simply be explained by the increasing porosity in the fused lower-layers, which blocks the heat dissipation. At the initial layer, this dissipation is affected the least, as there is no lower-layer but only the substrate. We further probed the temperature history at surface points located at the center of the scan path, as shown in Fig. 2c\({}_{1}\), where the recurring peaks in every single-layer SS stage (shaded section) represent the thermal cycle. The first peaks of distinctive cycles (probed at different surface points) nearly match. In an identical cycle (e.g., C1), the peaks gradually decrease as the SS process advances. Once the spot leaves the probing point, the temperature drop is rapid at first (mostly due to the heat conduction driven by the high-gradient temperature profile) and then becomes moderate as the cooling stage begins, when heat convection and radiation become effective. This steep temperature drop gradually disappears at points in the fused lower-layers as the upper-layers are continuously
built, comparing the thermal cycles at point C1. Meanwhile, the temperature drop at distinct points gradually coincides as the cooling stage proceeds, comparing the thermal cycles at points C1-C4.
The multilayer thermo-structural evolution is also significantly affected by the processing parameters, as shown by the profiles for varying beam powers and scan speeds in Fig. 2b\({}_{1}\)-b\({}_{4}\). Increasing the beam power and/or decreasing the scan speed is observed to intensify heat accumulation at the beam spot, resulting in an enlargement of the overheated region. When \(v\) is held constant at 100 mm s\({}^{-1}\), increasing \(P\) from 15 W (Fig. 2b\({}_{2}\)) to 30 W (Fig. 2b\({}_{1}\)) results in a more pronounced overheated region. On the other hand, maintaining a constant \(P=20\) W and increasing \(v\) from 75 mm s\({}^{-1}\) (Fig. 2b\({}_{3}\)) to 150 mm s\({}^{-1}\) (Fig. 2b\({}_{4}\)) leads to a reduction of the overheated region. It is also evident that increasing \(P\) and/or decreasing \(v\) produces less porosity in the fused lower layers, which may further change the thermal conditions and improve the heat dissipation. Fig. 2c\({}_{2}\) presents the probed thermal cycle of point C1 under various \(P\) with \(v=100\) mm s\({}^{-1}\) maintained. One can tell that the temperature in the case \(P=30\) W quickly drops from a higher peak to a value coinciding with that of the case \(P=20\) W by the end of the scan duration, implying improved heat dissipation. It should also be noted that surface point C1, located at the initial layer, experiences three temperature peaks beyond \(T_{\text{M}}\) due to the extended depth of the fusion zone at higher beam power. In Sec. 5.1 we will continue the discussion of the relation between porosity and processing parameters.
### Stress and plastic strain evolution during multilayer SS process
To analyze the overall stress and plastic strain evolution during SLS, the average quantities (incl. von Mises stress \(\sigma_{\text{e}}\), effective plastic strain \(p_{\text{e}}\), and temperature \(T\)) in the powder bed are defined as
\[\bar{\sigma}_{\text{e}}^{\text{P}}=\frac{\int_{\Omega^{\prime}}\rho\sigma_{\text{e}}\,\text{d}\Omega}{\int_{\Omega^{\prime}}\rho\,\text{d}\Omega},\quad\bar{p}_{\text{e}}^{\text{P}}=\frac{\int_{\Omega^{\prime}}\rho p_{\text{e}}\,\text{d}\Omega}{\int_{\Omega^{\prime}}\rho\,\text{d}\Omega},\quad\overline{T}^{\text{P}}=\frac{\int_{\Omega^{\prime}}\rho T\,\text{d}\Omega}{\int_{\Omega^{\prime}}\rho\,\text{d}\Omega}, \tag{24}\]
where \(\Omega^{\prime}\) is the volume of the simulation domain without the substrate meshes, and \(\rho\) is the OP indicating the substance with \(\rho=1\). These are hereinafter termed PB-averaged quantities. Fig. 3a presents how \(\bar{\sigma}_{\text{e}}^{\text{P}}\) develops as the SS process advances over the distinct layers. When the beam spot enters the domain, accompanied by the appearance of the overheated region, \(\bar{\sigma}_{\text{e}}^{\text{P}}\) begins to decrease while \(\overline{T}^{\text{P}}\) continues to rise to its peak. This is attributed to the loss of structural integrity caused by full/partial melting within the overheated region, resulting in zero stress, as shown in Fig. 3b\({}_{1}\). The areas surrounding the overheated region also exhibit relatively low stress due to a significant reduction in stiffness at elevated temperatures. Stress accumulates faster around concave features, such as surface depressions and particle sintering necks, than around convex features and unfused powders away from the fusion zone, owing to the localized temperature gradients explained in Sec. 4.1. As cooling progresses, the stress decreases in convex features and unfused powders, yet remains concentrated around concave features (see Figs. 3b\({}_{2}\)-3b\({}_{3}\)). The stress at the end of each cooling stage (Figs. 3b\({}_{2}\)-3b\({}_{6}\)) serves as the residual stress of the corresponding processed layer. After the deposition of new layers, \(\bar{\sigma}_{\text{e}}^{\text{P}}\) drops instantly due to the addition of stress-free substance (powders), and then a new cycle begins. It is worth noting that the peak values of the stress cycles (which are also the residual stresses of each processed layer) are almost identical, implying that each single-layer SS generates nearly the same amount of residual stress, on average, in the powder bed. For upper layers, \(\bar{\sigma}_{\text{e}}^{\text{P}}\) experiences an extra reduction in the cooling stage, which may be attributed to more delicate variations in the on-site stress components and will be discussed with Fig. 4.
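Numerically, the averages of Eq. (24) reduce to substance-weighted sums over the cells of \(\Omega^{\prime}\). A minimal sketch, assuming flattened arrays of cell values and quadrature volumes exported from the FEM solution (names are illustrative, not solver API):

```python
import numpy as np

def pb_average(field, rho, dV):
    """Powder-bed average per Eq. (24): rho is the substance order parameter
    (1 in substance, 0 in pores/atmosphere); dV is the cell volume within
    Omega' (substrate cells excluded)."""
    return np.sum(rho * field * dV) / np.sum(rho * dV)

# e.g., sigma_bar = pb_average(sigma_e, rho, dV)
#       p_bar     = pb_average(p_e, rho, dV)
#       T_bar     = pb_average(T, rho, dV)
```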
The accumulated plastic strain \(p_{\text{e}}\), on the other hand, exhibits an overall increasing tendency over time during the single-layer SS and cooling stages, as shown in Fig. 3c. This distinguishes the plastic-strain history from the stress cycle, which suffers a reduction during SS due to the loss of structural integrity. The continuous accumulation of \(p_{\text{e}}\) leads to a locally concentrated profile in the vicinity of the melt zone (see Figs. 3d\({}_{2}\)-3d\({}_{3}\)). The high temperature gradient at the front and bottom of the overheated region during single-layer SS also results in localized \(p_{\text{e}}\) surrounding the fusion zone (see Figs. 3d\({}_{2}\)-3d\({}_{6}\)), where the existing high thermal stress locally activates plastic deformation. A similar explanation applies to the concentrated \(p_{\text{e}}\) around pores and concave features, such as sintering necks, near the fusion zone. In contrast, unfused powders and the substrate away from the melt zone show minimal \(p_{\rm e}\), since the thermal stress at these locations is insufficient to induce material plastification owing to the lower local temperatures.
We probed the history of the stress components at six points, where L\({}_{1}\)0-L\({}_{4}\)0 are located at the centers of the 1\({}^{\rm st}\)-4\({}^{\rm th}\) layers, L\({}_{1}\)1 at the boundary of the fusion zone, and L\({}_{1}\)2 outside the fusion zone in the 1\({}^{\rm st}\) layer, as shown in the inset of Fig. 4. The results mainly reveal the difference between the stress fields inside and outside the fusion zone. As the temperature cools down from above the melting temperature to the pre-heating temperature, the significant increase of the elasticity leads to a significant increase of \(\sigma_{\rm e}\) for temperatures above \(0.65T_{M}\). In comparison, the increase of \(\sigma_{\rm e}\) is slower for the points L\({}_{1}\)1 and L\({}_{1}\)2. It can also be seen that the \(\sigma_{\rm e}\) histories of the mentioned three points during SS of the 2\({}^{\rm nd}\) and 3\({}^{\rm rd}\) layers are very similar, as they are then all outside the fusion zone. Similar results can be observed for the development of the normal stress components \(\sigma_{ii}\) (\(i=x,~{}y,~{}z\)). The main differences in normal stress among the three points occur mostly during the SS process of the 1\({}^{\rm st}\) layer: \(\sigma_{ii}\) are tensile in the fusion zone while compressive outside it, because the thermal stress in the fusion zone is tensile while it is compressive outside. The curves for the 2\({}^{\rm nd}\) and 3\({}^{\rm rd}\) layers are similar because the three points are all outside the fusion zone. The shear stress components \(\sigma_{ij}\) (\(i,j=x,~{}y,~{}z\); \(i\neq j\)) remain small both inside and outside the fusion zone. The variation of the shear stress is more visible at the boundary of the fusion zone, as shown in Fig. 4a\({}_{2}\), due to the large gradient of the thermal strain and of the material mechanical properties across the fusion zone boundary. With a larger variation of the shear stress components, the variation of \(\sigma_{\rm e}\) at the highest temperature during the SS process of each layer is also more visible. A saturation of the stress field can be observed after the 2\({}^{\rm nd}\) layer, meaning that the upper layer has limited influence on the plastic deformation of previous layers, which experience lower temperatures and lower temperature gradients outside the fusion zone.
For points in the layer-wise direction (L\({}_{1}\)0 to L\({}_{4}\)0), the results in Fig. 4b provide a direct comparison of the development of the stress field at a point in the microstructure during the SS process of different layers. It is worth noting that the development of \(\sigma_{\rm e}\) at each point is very similar at the beginning. A saturation of \(\sigma_{\rm e}\) for each point can be observed in the layer-wise direction once the points are outside the fusion zone. However, the variation of \(\sigma_{\rm e}\) is much more visible when the fusion zone moves close to the point. This indicates that the point is located near the boundary of the fusion zone, where it encounters a large variation of the material mechanical properties with the high-gradient temperature field, and thus undergoes a large variation of the shear stress.
### Powder-resolved mesoscale residual stress formation mechanism
For SS-processed porous microstructures, the residual stress is generally related to stress concentration under thermal loading in the microstructure. In Figs. 5a\({}_{3}\) and 5a\({}_{5}\), the particles undergo a low degree of fusion (owing to a low level of overheating) at relatively low beam power or high scan speed. The stress concentration mostly occurs at the necking regions, leading to accumulated plastic strain and residual stress. In Figs. 5a\({}_{1}\) and 5a\({}_{4}\), with a higher degree of fusion due to relatively high beam power or low scan speed, accumulated plastic strain and residual stress tend to concentrate at both the necking regions and the inter-layer regions (as indicated by the black lines in Figs. 5c and 5d). Note that the inter-layer boundary has a high surface roughness, which also causes stress concentration. In Fig. 5a\({}_{2}\), with the highest beam power within the processing window, residual stress can be observed throughout the whole fusion zone, which makes it difficult to identify distinct stress concentrations. Besides, accumulated plastic strain and residual stress can be observed at the boundary between the substrate and the 1\({}^{\rm st}\) layer, especially at high beam power. This is because a high beam power generates a much larger fusion zone, which even penetrates into the substrate. When the substrate, a continuum material, is melted, it develops a large plastic deformation and thus residual stress, which can be significantly larger than those in the porous microstructure, as shown in Fig. 5a\({}_{2}\). These results show that the residual stress is directly related to the degree of fusion and thus to the SS processing parameters.
To gain a better understanding of the distribution of residual stress, we analyzed the stress state at a few representative positions in the microstructure, as shown in Figs. 5c and 5d. The former surfaces of the fused layers are marked by black lines, and the bottom of the fused strut, i.e., the fusion zone boundary (FZB) of the 1\({}^{\rm st}\) layer, is indicated by white lines. Figs. 5c\({}_{1}\) and 5d\({}_{1}\) show that the residual stress and the accumulated plastic strain are solely concentrated in the necking regions and inter-layer regions for the case with a low degree of fusion. In Fig. 5c\({}_{2}\), with a high degree of fusion (the smallest porosity achieved in the selected processing window), the residual stress and the accumulated plastic strain still tend to concentrate in the inter-layer region, yet are less distinguishable than in the former case. Comparing Lamé’s stress ellipsoids at six points on the boundaries of different layers, it is worth noting that the stress states at the boundary between two layers are very similar, e.g., the stress states of P2 and P3 resemble those of P5 and P6, no matter how high the degree of fusion is. This means that the residual stress still forms due to stress concentration at the inter-layer region. On the other hand, the stress states of P1 and P4 at the top layer are quite different. This is because the residual stress of the top layer is directly determined by the thermal loading, which depends on the surface morphology and the morphology-induced thermal inhomogeneity.
Based on the above observations, we propose a powder-resolved mesoscale residual stress formation mechanism, summarized in the schematic illustrated in Fig. 6. The residual stress formed in porous microstructures manufactured by a multilayer SS process contains two primary contributions:
1. Residual stress directly caused by stress concentration at the necking regions of partially melted particles under thermal loading. The partially melted particles are inter-connected via necking regions, which are the weak links of the microstructure. Therefore, both the thermal expansion in the overheated region and the thermal contraction during the cooling stage cause severe plastic deformation at the necking regions, which is one primary source of the accumulated plastic strain and residual stress.
2. Residual stress due to the interaction between the upper and the lower/fused layers in the layer-wise build-up process. In the cooling stage, the shrinkage of the upper layer after overheating results in tensile stress on itself and compressive stress on the lower/fused layers, which causes plastic deformation of the porous structure, especially at the inter-layer region, where stress concentrates due to the high surface roughness; this is the other primary source of the accumulated plastic strain and residual stress, as indicated by the white lines in Fig. 6a.
The proposed mechanism is based on the detailed simulation results of the powder-resolved thermo-mechanical model, which evidently demonstrate the formation and distribution of residual stress in the porous structure. The accumulated plastic strain results in structural distortion of the fused strut, as schematically illustrated in Fig. 6a and demonstrated by the contour plot of the deformation component \(u_{z}\) in Fig. 6b. Compared with the aforementioned TGM and CDS models proposed by Mercelis and Kruth [22], which ignore the difference between parts manufactured by SM and by SS, partial melting as the main fusion mechanism plays an important role in the residual stress in the microstructure. There are some similarities: the concept of the TGM model also applies to the proposed mechanism, because the temperature gradient indeed directly leads to residual stress. However, in the SS process, since the powder bed is fully relaxed before fusion, the particles provide only a very weak constraint on the fusion zone, and thus limited plastic deformation develops in the particles; the plastic deformation is mostly accumulated during the cooling stage due to thermal contraction and stress concentration at the necking regions. As for the CDS model, the proposed mechanism likewise considers the shrinkage of the top layer, which results in tensile residual stress in the top layer and compressive residual stress in the previous layers. However, the top layer in the simulated SS process has very complex boundaries with the previous layers, and these boundaries are the source of the stress concentration that leads to the dominant residual stress at the inter-layer regions.
Based on the TGM model and the CDS model, Mercelis and Kruth [22] also introduced a theoretical model to predict the relationship between residual stress and part height. Similarly, we can use the proposed powder-resolved model and the mesoscale residual stress formation mechanism to predict the dependence of the residual stress on the microstructure porosity, which is also a characteristic geometric parameter of a porous microstructure. Since the porous structure induces complex inhomogeneity of both the material properties and the thermal and mechanical loading, and since the porosity is directly related to the SS process parameters, it is more straightforward to propose phenomenological models based on our simulation results to predict the relationship between the residual stress and the SS process parameters, as will be discussed in the next section.
## 5 Discussion
### Phenomenological relation for porosity control
Controlling the porosity of the processed sample through tuning of the processing parameters plays a central role in tailoring the end-use performance of porous materials in AM, as many properties are determined by (or related to) porosity, including but not limited to mechanical strength, permeability, acoustic/optical absorptivity, and various effective conductivities. We start the discussion by phenomenologically relating the porosity to the processing parameters (in this work, \(P\) and \(v\)). A nominal porosity is defined as
\[\varphi=1-\frac{\int_{\Omega^{\prime\prime}}\rho\,\text{d}\Omega}{\int_{\Omega^{\prime\prime}}\text{d}\Omega} \tag{25}\]
with the substance OP \(\rho\). \(\Omega^{\prime\prime}\) represents the volume of a post-processed simulation domain with surface and unfused powders sufficiently removed (termed "virtual polishing"). Fig. 7a presents the local porosity evaluated segment-wise along the \(z\)- (BD) and \(y\)-directions (perpendicular to SD) to help identify the representative domain, with width \(W_{\text{R}}\) and height \(H_{\text{R}}\), for the porosity calculation, while the complete length (250 \(\upmu\)m) along the \(x\)-direction (SD) is included owing to the relatively minor variations in local porosity [36]. We then conduct virtual polishing on the simulated multilayer SS-processed microstructures and proceed to calculate their porosity using Eq. (25). The resulting \(P\)-\(v\) map of porosity is shown in Fig. 7c. Selected microstructures are illustrated in Figs. 7b\({}_{2}\)-7b\({}_{4}\) for the cases varying \(P\) while maintaining \(v\), and in Figs. 7b\({}_{5}\)-7b\({}_{7}\) for the cases varying \(v\) while maintaining \(P\). Fig. 7b\({}_{1}\) is the microstructure under the reference processing parameters (\(P=20\) W, \(v=100\) mm s\({}^{-1}\)), while Figs. 7b\({}_{8}\) and 7b\({}_{9}\) are, respectively, the ones with maximum and minimum porosity. The map demonstrates that improved densification can be achieved by either increasing \(P\) or decreasing \(v\), where the smoothed surface morphology implies enhanced partial melting. Porosity drops from 31% to 18% as \(P\) increases from 15 to 30 W (\(v=100\) mm s\({}^{-1}\)), and from 29% to 21% as \(v\) decreases from 150 to 75 mm s\({}^{-1}\) (\(P=20\) W). These tendencies imply a possible allometric relation \(\varphi\propto P^{-m}v^{n}\) with positive indices \(m\) and \(n\). On the other hand, combining beam power and scan speed into one characteristic quantity, \(P/v\) is widely adopted to define a specific energy density, notably the volumetric energy input
\[U=\frac{P}{HWv}, \tag{26}\]
where \(H\) is the powder layer thickness (here \(H=60\) \(\upmu\)m) and \(W\) is the scan track width (here \(W\) takes \(D_{\text{FWHM}}\)). Evidently, \(\varphi\) exhibits an overall decreasing tendency with rising \(U\), meaning that lower porosity can be achieved at higher specific energy input.
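As a quick numerical check of Eq. (26): the track width assumed below is a placeholder (the actual \(W=D_{\text{FWHM}}\) is set by the beam profile), so the printed value only illustrates the order of magnitude.

```python
def volumetric_energy_input(P, v, H=60e-6, W=100e-6):
    """U = P / (H W v), Eq. (26). H = 60 um is the layer thickness used in
    this work; W = 100 um is only an assumed stand-in for D_FWHM."""
    return P / (H * W * v)  # W / (m^2 * m s^-1) = J/m^3

U = volumetric_energy_input(P=20.0, v=100e-3)  # reference parameters
print(U / 1e9)  # in J/mm^3; ~33.3 J/mm^3 under the assumed W
```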
We then examined the simulation results with a phenomenological relation proposed following Refs. [7, 58], which reads
\[\ln[1-\varrho(\varphi)]=-K_{\varrho}U \tag{27}\]
with a densification factor defined as
\[\varrho(\varphi)\equiv\frac{\varphi_{0}-\varphi}{\varphi_{0}-\varphi_{\text{ min}}}.\]
\(\varrho\) indicates the ratio of the achieved porosity reduction to the maximum porosity reduction in the chosen processing window, where \(\varphi_{0}\) and \(\varphi_{\text{min}}\) represent the initial and the minimum achieved porosity, respectively. \(\varphi_{\text{min}}\) usually varies between 3% and 30% for metallic materials [58]. In pursuit of uniformity, we have chosen \(\varphi_{\text{min}}=3\%\) in this study, aligning with our previous single-layer SS simulation [7]. \(\varphi_{0}\) is, however, difficult to determine for the multilayer SS simulation, since every new layer is only one or two particles in thickness while the older layers have already been fused. In this sense, we evaluate \(\varphi_{0}\) by three routes:
1. Assuming that the porosity of the powder-bed region away from the beam spot (i.e., without significant densification) equals the initial porosity of the powder bed, \(\varphi_{0}\) can be read as the converged value of the segment-wise porosity evaluation along the \(y\)-direction (Fig. 7a\({}_{2}\)), giving \(\varphi_{0}=27.8\%\sim 33.6\%\).
2. As the particle size distribution in this work is assumed to be Gaussian, \(\varphi_{0}\) can be estimated by the statistical random-close-packing model for spherical particles as \(\varphi_{0}=0.366-0.0257\,(\varsigma_{d}/\mu_{d})\), with \(\mu_{d}\) and \(\varsigma_{d}\) the mean and standard deviation of the particle size [57, 59]. Taking \(\mu_{d}=20\)\(\upmu\)m and \(\varsigma_{d}=5\)\(\upmu\)m gives \(\varphi_{0}=36.0\%\).
3. We also piled multiple densification-free powder stacks using the DEM method with the same particle domain volume fraction (48.0%) as the overall one of the multilayer SS simulation, in which each layer deposits powders of 12% domain volume fraction. The measured nominal porosity is thereby regarded as the initial one, giving \(\varphi_{0}=40.0\%\sim 40.5\%\).
\(\varphi_{0}\) from route (i) directly reflects the porosity of the on-site unfused powders, but the influence of particles potentially necked by thermal processes cannot be fully eliminated. \(\varphi_{0}\) from route (ii) is based on the statistics of randomly close-packed particles, which is rather idealized compared with a practical powder bed created by powder spreading. Since route (iii) creates the particle stack by simulating the deposition during powder spreading, the calculated \(\varphi_{0}\) may still be inflated, as the morphological variability of the deposition surface is missing. Considering all of these factors, we selected \(\varphi_{0}=36.0\%\), which stands at the midpoint among the assessed values.
In Fig. 7d, the linear regression between \(-\ln(1-\varrho)\), calculated from the simulated \(\varphi\), and \(U\) is presented. For comparison, regressions on data from the single-layer SS simulation and from experiments are also illustrated [7, 58]. The regression line, together with its 95% confidence interval (CI\({}_{95\%}\)), for the multilayer SS simulations lies right between those of the single-layer SS simulation and the experiments. As the coefficient \(K_{\varrho}\) is related to the material and to the size distribution of the powders, the multilayer result \(K_{\varrho}=0.016\pm 0.001\) mm\({}^{3}\) J\({}^{-1}\) demonstrates improved coherence with the experimental value \(K_{\varrho}=0.013\) mm\({}^{3}\) J\({}^{-1}\) compared with the single-layer value \(K_{\varrho}=0.019\) mm\({}^{3}\) J\({}^{-1}\), which can suffer from inflated porosity, mostly due to an insufficient microstructure volume for the porosity calculation.
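The regression itself is a one-parameter, zero-intercept fit. The sketch below uses placeholder \((U,\varphi)\) pairs rather than the simulated data; note also that the route-(ii) estimate checks out as \(0.366-0.0257\times(5/20)\approx 36.0\%\).

```python
import numpy as np

phi0, phi_min = 0.36, 0.03  # selected initial and minimum porosity

def densification(phi):
    """Densification factor rho(phi) entering Eq. (27)."""
    return (phi0 - phi) / (phi0 - phi_min)

# placeholder (U [J/mm^3], porosity) pairs, not the simulated P-v cases
U   = np.array([20.0, 30.0, 40.0, 55.0, 70.0])
phi = np.array([0.31, 0.29, 0.26, 0.23, 0.18])

y = -np.log(1.0 - densification(phi))
K_rho = np.linalg.lstsq(U[:, None], y, rcond=None)[0][0]  # zero-intercept slope
print(K_rho)  # the multilayer regression in Fig. 7d gives ~0.016 mm^3/J
```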
### Phenomenological relations for residual stress and plastic strain control
Taking the PB-averaged residual von Mises stress \(\bar{\sigma}_{\rm e}^{\rm P}\) and effective plastic strain \(\bar{p}_{\rm e}^{\rm P}\) (defined in Eq. (24)) at the end of the simulations, Fig. 8 presents the distributions of \(\bar{\sigma}_{\rm e}^{\rm P}\) and \(\bar{p}_{\rm e}^{\rm P}\) w.r.t. \(P\) and \(v\) in the chosen processing window. Generally, the rise in \(\bar{\sigma}_{\rm e}^{\rm P}\) corresponds to an increase in the specific energy input \(U\) (i.e., increasing \(P\) at constant \(v\) or decreasing \(v\) at constant \(P\)), resulting in further concentrated residual stress around concave features (surface depressions and particle sintering necks) and across layers, as already presented in Figs. 5a and 5b. To understand the relationship between \(\bar{\sigma}_{\rm e}^{\rm P}\) and \(U\), we interpret \(\bar{\sigma}_{\rm e}^{\rm P}\) as the stored mechanical energy density after the multilayer SS processes, which can be regarded as the residue of the energy density imported via the beam-induced thermal effect into the powder bed after all types of in-process dissipation. In this regard, a nonlinear regression analysis of an energy conversion law \(\bar{\sigma}_{\rm e}^{\rm P}=\sigma_{\infty}^{\rm P}\{1-\exp[-(U-U_{\rm th}^{\rm P})/U_{\sigma}^{\rm P}]\}\) was conducted on the simulation data. The parameters \(U_{\rm th}^{\rm P}\) and \(\sigma_{\infty}^{\rm P}\) carry the physical meanings of the volumetric energy input at the stress-free state and the saturated stress at infinite energy input, respectively, and \(\sigma_{\infty}^{\rm P}/U_{\sigma}^{\rm P}\) is the increasing rate (slope) of \(\bar{\sigma}_{\rm e}^{\rm P}\) vs. \(U\) at the stress-free state, as shown in the inset of Fig. 8c. The result in Fig. 8c presents a high correlation (\(R^{2}=98.41\%\)) between the simulated \(\bar{\sigma}_{\rm e}^{\rm P}\) and \(U\) with a narrow confidence interval, demonstrating the applicability of the proposed energy conversion law in predicting the residual stress in the powder bed.
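Such a fit can be reproduced with any standard nonlinear least-squares routine; below is a sketch with placeholder data (not the simulated values).

```python
import numpy as np
from scipy.optimize import curve_fit

def energy_conversion(U, sigma_inf, U_sigma, U_th):
    """Proposed law: sigma = sigma_inf * (1 - exp(-(U - U_th) / U_sigma))."""
    return sigma_inf * (1.0 - np.exp(-(U - U_th) / U_sigma))

# placeholder (U [J/mm^3], PB-averaged residual stress [MPa]) samples
U_data = np.array([20.0, 30.0, 40.0, 55.0, 70.0])
s_data = np.array([90.0, 130.0, 165.0, 200.0, 225.0])

(sigma_inf, U_sigma, U_th), _ = curve_fit(
    energy_conversion, U_data, s_data, p0=(300.0, 40.0, 5.0))
print(sigma_inf, U_sigma, U_th)
```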
Unlike the residual stress, the effective plastic strain \(p_{\rm e}\) is a measure of the cumulative plastic deformation at any given moment during the process and lacks a clear physical picture of its relationship with the volumetric energy input \(U\). Therefore, a nonlinear regression analysis of an allometric scaling law \(\bar{p}_{\rm e}^{\rm P}=C_{p}^{\rm P}U^{I_{p}^{\rm P}}\) with parameters \(C_{p}^{\rm P}\) and \(I_{p}^{\rm P}\) was conducted to phenomenologically relate \(\bar{p}_{\rm e}^{\rm P}\) to \(U\), as shown in Fig. 8d. The analysis gives a relatively low correlation coefficient, \(R^{2}=87.4\%\), for the \(\bar{p}_{\rm e}^{\rm P}(U)\) relation compared with that of \(\bar{\sigma}_{\rm e}^{\rm P}(U)\), with an expanded confidence interval in the high-\(U\) range. The regressed \(I_{p}^{\rm P}=1.17\pm 0.13\) indicates an almost linear scaling of the accumulated plastic strain in the processed powder bed with the energy input via the beam scan. It is worth noting that the proposed scaling law can be challenged, as \(U\) appears unable to uniquely identify \(\bar{p}_{\rm e}^{\rm P}\); in other words, a given value of \(U\) can correspond to multiple \(p_{\rm e}\) values, as also depicted in Fig. 8d. Nonetheless, this scaling law remains feasible for estimating the strength of in-process plastification of the microstructure at a given specific energy input.
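Because the allometric law is linear in log-log space, the index can equivalently be obtained by ordinary least squares on the logarithms; again the data below are placeholders.

```python
import numpy as np

U_data = np.array([20.0, 30.0, 40.0, 55.0, 70.0])            # J/mm^3
p_data = np.array([0.8e-3, 1.3e-3, 1.8e-3, 2.6e-3, 3.4e-3])  # placeholder p_e

# p_e = C * U**I  <=>  ln p_e = ln C + I * ln U
I_p, lnC = np.polyfit(np.log(U_data), np.log(p_data), 1)
C_p = np.exp(lnC)
print(I_p)  # Fig. 8d reports I_p = 1.17 +/- 0.13 for the PB-averaged data
```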
Since both \(\bar{p}_{\rm e}^{\rm P}\) and \(\bar{\sigma}_{\rm e}^{\rm P}\) consider the complete powder bed including unfused particles, we further define two average quantities that take only the residual stress and effective plastic strain inside the fused strut into account, denoted \(\bar{\sigma}_{\rm e}^{\rm S}\) and \(\bar{p}_{\rm e}^{\rm S}\), respectively. A history indicator \(\xi\) was added to the system to emulate the phenomenological fusion of the strut during the multilayer SS process, following our former work [37, 57]; it is initialized as zero and irreversibly transitions to one once \(T\geq T_{\rm M}\). \(\bar{\sigma}_{\rm e}^{\rm S}\) and \(\bar{p}_{\rm e}^{\rm S}\) are then calculated as
\[\bar{\sigma}_{\rm e}^{\rm S}=\frac{\int_{\Omega}\xi\sigma_{\rm e}\,\mathrm{d}\Omega}{\int_{\Omega}\xi\,\mathrm{d}\Omega},\quad\bar{p}_{\rm e}^{\rm S}=\frac{\int_{\Omega}\xi p_{\rm e}\,\mathrm{d}\Omega}{\int_{\Omega}\xi\,\mathrm{d}\Omega}. \tag{28}\]
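Operationally, \(\xi\) acts as an irreversible mask that is updated at every timestep and then weights the averages of Eq. (28); a minimal sketch with illustrative array names follows.

```python
import numpy as np

def update_indicator(xi, T, T_M=1700.0):
    """Irreversible fusion indicator: flips to 1 once T >= T_M and stays."""
    return np.maximum(xi, (T >= T_M).astype(float))

def strut_average(field, xi, dV):
    """Strut average per Eq. (28), weighting by the fusion history xi."""
    return np.sum(xi * field * dV) / np.sum(xi * dV)

# per timestep:    xi = update_indicator(xi, T_now)
# end of process:  sigma_S = strut_average(sigma_e, xi, dV)
#                  p_S     = strut_average(p_e, xi, dV)
```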
Fig. 9 presents the distributions of \(\bar{\sigma}_{\rm e}^{\rm S}\) and \(\bar{p}_{\rm e}^{\rm S}\) w.r.t. \(P\) and \(v\) in the chosen processing window. Contrasting with those shown in Fig. 8, a notable distinction of the maps presented in Figs. 9c and 9d is the emergence of regions where the selected \(P\) and \(v\) fail to form a continuous fused strut. Meanwhile, the profiles of \(\bar{\sigma}_{\rm e}^{\rm S}\) and \(\bar{p}_{\rm e}^{\rm S}\) are significantly influenced by the size of the strut. Comparing Fig. 9a\({}_{1}\) with 9a\({}_{2}\)-9a\({}_{3}\) and 9a\({}_{4}\)-9a\({}_{5}\), the increase of \(\bar{\sigma}_{\rm e}^{\rm S}\) likewise follows the direction of increasing \(U\), which simultaneously enlarges the fused strut. As depicted in Figs. 5c and 9a, concentrated residual stress is primarily found within the upper layers of the strut. When the strut size is enlarged, it encompasses a larger volume with an elevated maximum \(\sigma_{\rm e}\) and a larger high-\(\sigma_{\rm e}\) region, resulting in an increase of \(\bar{\sigma}_{\rm e}^{\rm S}\) with rising \(U\). \(\bar{p}_{\rm e}^{\rm S}\) exhibits a similar tendency w.r.t. \(P\) and \(v\) as \(\bar{\sigma}_{\rm e}^{\rm S}\), with an increase in \(U\) resulting in an increase in \(\bar{p}_{\rm e}^{\rm S}\). Nonetheless, the regions with highly accumulated \(p_{\rm e}\) are located at the bottom of each layer's fusion zone, as illustrated in Figs. 5d and 9b. At high \(U\), this accumulation intensifies, especially at the bottom of the strut (which is also the bottom of the initial layer's fusion zone). Simultaneously, the enlarged depth of the fusion zone indicates extended remelting of the fused lower layers, which removes some of the \(p_{\rm e}\) accumulated by the processing of former layers within the strut, as shown in Fig. 6a, and concentrates the high-\(p_{\rm e}\) region further toward the bottom of the strut, notably in the profile in Fig. 9b\({}_{4}\). Eventually, the rise of \(\bar{p}_{\rm e}^{\rm S}\) w.r.t. \(U\) is relatively less rapid than that of \(\bar{\sigma}_{\rm e}^{\rm S}\), evident in the slightly sparser contours in Fig. 9d compared to 9c.
Nonlinear regression analyses were also conducted on \(\bar{\sigma}_{\rm e}^{\rm S}\) and \(\bar{p}_{\rm e}^{\rm S}\), employing the proposed energy conversion law for the residual stress and the scaling law for the effective plastic strain. The results are presented in Figs. 9e and 9f, respectively. Notably, compared with those of \(\bar{\sigma}_{\rm e}^{\rm P}(U)\) and \(\bar{p}_{\rm e}^{\rm P}(U)\), the correlations of \(\bar{\sigma}_{\rm e}^{\rm S}(U)\) and \(\bar{p}_{\rm e}^{\rm S}(U)\) decline, with enlarged confidence intervals at both the low- and high-\(U\) ranges. This is attributed to the removal of the influence of the unfused particles. Moreover, for \(\bar{\sigma}_{\rm e}^{\rm S}(U)\), the regressed threshold energy input takes a negative value, \(U_{\rm th}^{\rm S}=-11.11\pm 17.43\) J mm\({}^{-3}\), indicating that an energy output would be required to achieve a stress-free state. This is explainable, as \(\sigma_{\rm e}\) already exists in a just-formed strut; in other words, the primary SS process (i.e., SS without a subsequent thermal post-process to release residual stress) cannot produce samples with a stress-free strut. The regressed saturated stress \(\sigma_{\infty}^{\rm S}=260.05\pm 24.5\) MPa also declines compared with the value regressed on the PB-averaged data. For \(\bar{p}_{\rm e}^{\rm S}(U)\), the reduced regressed index of the scaling law, \(I_{p}^{\rm S}=0.30\pm 0.06\) in Fig. 9f, demonstrates a sublinear scaling of \(\bar{p}_{\rm e}^{\rm S}(U)\) compared with the almost linear one of \(\bar{p}_{\rm e}^{\rm P}(U)\). It also reflects a reduced growth rate of \(\bar{p}_{\rm e}^{\rm S}\) at high \(U\); the underlying intensified remelting, which removes some accumulated \(p_{\rm e}\) within the strut while further concentrating \(p_{\rm e}\) around the strut bottom, together with the comparably faster increase in strut size, should be the reason. Nonetheless, the information conveyed by \(\bar{\sigma}_{\rm e}^{\rm S}(U)\) and \(\bar{p}_{\rm e}^{\rm S}(U)\) is more relevant for practical applications, as unfused particles are removed during the post-processing of a practical SS part. Further examination and validation of the proposed laws for residual stress and plastic strain w.r.t. the specific energy input are expected in future numerical and experimental studies.
## 6 Conclusion
In this work, we proposed a powder-resolved multilayer simulation scheme for producing porous materials using selective sintering, combining FEM-based non-isothermal phase-field simulation and thermo-elasto-plastic simulation with temperature-dependent material properties. This work has presented the mesoscopic evolution of stress and plastic strain on a transient thermal-microstructure under various beam powers (\(P\)) and scan speeds (\(v\)). Process-property relationships between the porosity, residual stress, and effective plastic strain and the volumetric energy input
(\(U\propto P/v\)) have also been demonstrated and discussed. The following conclusions can be drawn from the present work:
1. We proposed a new powder-resolved mesoscopic residual stress formation mechanism for porous materials manufactured by the SS process, whose contributions collectively lead to the structural distortion that appears in the fused strut. The simulation results demonstrate that the stress concentrations at the necking regions of the partially melted particles and at the inter-layer regions between different layers provide the dominant accumulated plastic strain and residual stress in the porous material.
2. Based on the proposed residual stress formation mechanism, we examined the proposed phenomenological relation between the porosity (densification) and the volumetric energy input \(U\). Regression analysis of the resulting porosity from the multilayer SS simulations suggests improved coherence with the experimental data, as the regressed densification coefficient \(K_{\varrho}=0.016\pm 0.001\) mm\({}^{3}\) J\({}^{-1}\) in this work compares to the experimental \(K_{\varrho}=0.013\) mm\({}^{3}\) J\({}^{-1}\) and to \(K_{\varrho}=0.019\) mm\({}^{3}\) J\({}^{-1}\) from our former single-layer simulation.
3. Two types of average quantities, namely the PB-averaged (\(\bar{\sigma}_{\rm e}^{\rm P}\) and \(\bar{p}_{\rm e}^{\rm P}\)) and strut-averaged ones (\(\bar{\sigma}_{\rm e}^{\rm S}\) and \(\bar{p}_{\rm e}^{\rm S}\)), were defined to characterize the residual stress and plastic strain within the powder bed and the fused strut, respectively. The relationships between these quantities and the volumetric energy input (\(U\)) were unveiled by nonlinear regression analyses: the average residual stress (\(\bar{\sigma}_{\rm e}^{\rm P}\) and \(\bar{\sigma}_{\rm e}^{\rm S}\)) relates to \(U\) by the energy conversion law, while the average effective plastic strain (\(\bar{p}_{\rm e}^{\rm P}\) and \(\bar{p}_{\rm e}^{\rm S}\)) follows the allometric scaling law.
4. Owing to the removal of the influence of unfused particles, the correlations of the relations \(\bar{\sigma}_{\rm e}^{\rm S}(U)\) and \(\bar{p}_{\rm e}^{\rm S}(U)\) drop compared with those of \(\bar{\sigma}_{\rm e}^{\rm P}(U)\) and \(\bar{p}_{\rm e}^{\rm P}(U)\), respectively. Saturation behavior is observed in both \(\bar{\sigma}_{\rm e}^{\rm P}(U)\) and \(\bar{\sigma}_{\rm e}^{\rm S}(U)\), while the linear scaling of \(\bar{p}_{\rm e}^{\rm P}(U)\) degenerates into a sublinear one in \(\bar{p}_{\rm e}^{\rm S}(U)\), demonstrating a reduced growth rate of \(\bar{p}_{\rm e}^{\rm S}\) at high \(U\).
Despite the feasibility of the multilayer simulation scheme in recapitulating the mesoscopic formation of porosity, residual stress, and plastic strain under given processing parameters, and of the proposed mechanism in explaining the structural distortion of SS-produced samples, several points should be further examined and discussed in future works:
1. The present work omits the chronological-spatial distribution of the thermo-elasto-plastic properties among polycrystals, as properties such as elasticity and crystal plasticity vary from grain to grain with distinct orientations.
2. The present findings are examined at relatively low specific energy input \(U\) and, correspondingly, low generated residual stress and accumulated plastic strain. Pore formation is also limited to the lack-of-fusion mechanism. Further simulations at relatively high energy input, where mechanisms like keyholing coexist with high residual stress and plastic strain, are anticipated. Extension of the proposed mechanism into the high-\(U\) range, together with subsequent examination and validation, is also expected.
## Acknowledgements
The authors acknowledge the financial support of the German Science Foundation (DFG) in the framework of the Collaborative Research Centre Transregio 270 (CRC-TRR 270, project number 405553726, sub-projects A06 and A07), the Research Training Group 2561 (GRK 2561, project number 413956820, sub-project A4), and the Priority Program 2122 (SPP 2122, project number 493889809). X. Zhou acknowledges the support from the National Natural Science Foundation of China (project number 12302231), the Sichuan Science and Technology Program (project number 2023NSFSC0910), and the China Postdoctoral Science Foundation (project number 2023M732433). The authors also greatly appreciate access to the Lichtenberg II High-Performance Computer (HPC) and the technical support from the HHLR, Technische Universität Darmstadt. The computing time on the HPC was granted by the NHR4CES Resource Allocation Board under the project "special00007". Y. Yang also greatly thanks Dr. Binbin Lin for helping with the setup of the layerwise powder deposition.
## Data Availability
The authors declare that the data supporting the findings of this study are available within the paper. The source code of the MOOSE-based application NISoS and related utilities is curated in the online repository bitbucket.org/mfm_tuda/nisos.git. The simulation results, statistics, and metadata are curated in the online dataset (DOI: xxxx/zenodo.xxxxxx).
\begin{table}
\begin{tabular}{c c c c} \hline Properties & Expressions (\(T\) in K) & Units & References \\ \hline \(T_{\rm M}\) & \(\sim 1700\) & K & \\ \(\gamma_{\rm sf}\) & \(10.315-5.00\times 10^{-3}T\) & J/m\({}^{2}\) & [60] \\ \(\gamma_{\rm gb}\) & \(13.018-7.50\times 10^{-3}T\) & J/m\({}^{2}\) & [60] \\ \(l_{\rm gb}\) & \(2\times 10^{-6}\) & m & \\ \(D_{\rm sf}\) & \(0.40{\rm exp}\left(-2.200\times 10^{5}/\mathfrak{R}T\right)\) & m\({}^{2}\)/s & [61] \\ \(D_{\rm gb}\) & \(2.40\times 10^{-3}{\rm exp}\left(-1.770\times 10^{5}/\mathfrak{R}T\right)\) & m\({}^{2}\)/s & [61] \\ \(D_{\rm ss}\) & \(2.17\times 10^{-5}{\rm exp}\left(-2.717\times 10^{5}/\mathfrak{R}T\right)\) & m\({}^{2}\)/s & [62] \\ \(G_{\rm gb}\) & \(3.26\times 10^{-3}{\rm exp}\left(-1.690\times 10^{5}/\mathfrak{R}T\right)^{*}\) & m\({}^{4}\)/(J s) & \\ \(M_{\rm M}\) & \(\sim 3.45\times 10^{-13}\)\({}^{\dagger}\) & m\({}^{5}\)/(J s) & \\ \(K_{\rm ss}\) & \(10.292+0.014T\) & J/(s m K) & [63] \\ \(K_{\rm at}\) & \(\sim 0.06\) & J/(s m K) & [64] \\ \(c_{\rm ss}\) & \(3.61\times 10^{6}+1272T\) & J/(m\({}^{3}\) K) & [63] \\ \(c_{\rm at}\) & \(717.6\) & J/(m\({}^{3}\) K) & [65] \\ \({\cal L}_{\rm M}\) & \(2.4\times 10^{9}\) & J/m\({}^{3}\) & [63] \\ \hline \end{tabular}
* Activation energy is obtained from [66], while the prefactor is estimated as unity at \(T_{\rm M}\) after normalization.
\({}^{\dagger}\) Estimated as \(100D_{\rm sf}/2(W_{\rm sf}+W_{\rm gb})\).
\end{table}
Table 1: Material properties of the bulk SS316L used in the non-isothermal phase-field simulations. Here \(\mathfrak{R}\) represents the ideal gas constant.
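The thermally activated entries in Table 1 all share the Arrhenius form \(D=D_{0}\exp(-Q/\mathfrak{R}T)\); evaluating them, e.g., near \(T_{\rm M}\) is straightforward (a sketch, not solver code):

```python
import numpy as np

R = 8.314  # ideal gas constant, J/(mol K)

def arrhenius(D0, Q, T):
    """Thermally activated coefficient D = D0 * exp(-Q / (R T))."""
    return D0 * np.exp(-Q / (R * T))

T = 1700.0  # K, ~T_M of SS316L
D_sf = arrhenius(0.40, 2.200e5, T)     # surface diffusivity, m^2/s
D_gb = arrhenius(2.40e-3, 1.770e5, T)  # grain-boundary diffusivity, m^2/s
print(D_sf, D_gb)
```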
\begin{table}
\begin{tabular}{c c c c c c c} \hline \(T\) (K) & 298 & 873 & 1073 & 1473 & 1623 & \(\geq 1773\) \\ \hline \(\nu\) & 0.33 & 0.35 & 0.36 & 0.38 & 0.39 & 0.40 \\ \(E\) (MPa) & \(2.00\times 10^{5}\) & \(1.35\times 10^{5}\) & \(7.75\times 10^{4}\) & \(1.21\times 10^{4}\) & \(6.14\times 10^{3}\) & 200 \\ \(\sigma_{\rm y}\) (MPa) & \(3.45\times 10^{2}\) & \(2.12\times 10^{2}\) & \(1.99\times 10^{2}\) & 100 & 50 & 5 \\ \(E_{\rm t}\) (MPa) & \(5.89\times 10^{3}\) & \(1.70\times 10^{3}\) & \(1.40\times 10^{3}\) & 100 & 10 & 1 \\ H (MPa) & \(6.07\times 10^{3}\) & \(1.72\times 10^{3}\) & \(1.43\times 10^{3}\) & 101 & 10 & 1 \\ \(\alpha\) (1/K) & \(1.20\times 10^{-5}\) & \(1.30\times 10^{-5}\) & \(1.32\times 10^{-5}\) & \(1.36\times 10^{-5}\) & \(1.38\times 10^{-5}\) & \(1.40\times 10^{-5}\) \\ \hline \end{tabular}
\end{table}
Table 2: Temperature-dependent mechanical properties of the bulk SS316L used in the thermo-elasto-plastic simulations [67, 68]
Figure 1: (a) The workflow of multilayer selective sintering (SS) simulation, incl. the layerwise powder deposition, the non-isothermal phase-field simulation and subsequent thermo-elasto-plastic calculation. (b) Prime simulation domain with four deposited-processed powder layers. Inset: domain boundaries and their denotations.
Figure 2: (a\({}_{1}\))-(a\({}_{4}\)) Transient thermo-structural profiles under a beam power of 20 W and a scan speed of 100 mm s\({}^{-1}\), with the beam spot consistently positioned across various layers. The beam spot size, characterized by \(D_{\rm L}\) and \(D_{\rm FWHM}\), is indicated. (b\({}_{1}\))-(b\({}_{4}\)) Transient thermo-structural profiles of the 4\({}^{\rm th}\) layer under various beam powers and scan speeds with the beam spot consistently positioned. The overheated regions (\(T\geq T_{\rm M}\)) are illustrated by a continuous color map. (c\({}_{1}\))-(c\({}_{2}\)) Temperature history of the selected surface points across layers at various beam powers with the scan speed maintained at 100 mm s\({}^{-1}\). The single-layer SS (shaded sections) and cooling stages are also denoted. Inset: the locations of the points.
Figure 3: (a) Calculated PB-averaged von Mises stress (\(\bar{\sigma}_{\rm e}^{\rm P}\)) and temperature (\(\overline{T}^{\rm P}\)) vs. time, with the profile of \(\sigma_{\rm e}\) at the denoted states shown in (b\({}_{1}\))–(b\({}_{6}\)). (c) Calculated PB-averaged effective plastic strain (\(\bar{p}_{\rm e}^{\rm P}\)) vs. time, with the profile of \(p_{\rm e}\) at the denoted states shown in (d\({}_{1}\))–(d\({}_{6}\)).
Figure 4: Probed history of (a\({}_{1}\))-(a\({}_{3}\)) stress components at surface points L\({}_{1}\)0, L\({}_{1}\)1 and L\({}_{1}\)2, and (b) von Mises stress \(\sigma_{\rm e}\) at surface points L\({}_{1}\)0, L\({}_{2}\)0, L\({}_{3}\)0 and L\({}_{4}\)0 across layers. Inset: Locations of the selected points. The fusion zone boundary (FZB) of the initial layer is also denoted.
Figure 5: Simulated profiles of (a\({}_{1}\))-(a\({}_{5}\)) von Mises stress \(\sigma_{\rm e}\) and (b\({}_{1}\))-(b\({}_{5}\)) effective plastic strain \(p_{\rm e}\) in the SS-processed four-layer powder bed with varying beam power \(P\) and scan speed \(v\). Sectional profiles of (c\({}_{1}\))-(c\({}_{2}\)) residual von Mises stress \(\sigma_{\rm e}\) and (d\({}_{1}\))-(d\({}_{2}\)) accumulated plastic strain \(p_{\rm e}\) in the SS-processed four-layer powder bed with (c\({}_{1}\))-(d\({}_{1}\)) \(P=20\) W and \(v=100\) mm s\({}^{-1}\) and (c\({}_{2}\))-(d\({}_{2}\)) \(P=30\) W and \(v=75\) mm s\({}^{-1}\), respectively. The former surfaces of each layer are denoted by discontinuous black lines, and the fused strut bottom is denoted by a solid white line. Lamé’s stress ellipsoids, representing the stress states at selected points P\({}_{1}\)-P\({}_{6}\), are also illustrated, with the principal stresses denoted and colored (marine: tension; red: compression).
## References
Figure 7: (a) Segment-wise porosity of the samples produced under various processing parameters along the (a\({}_{1}\)) building direction (BD) and (a\({}_{2}\)) scan direction (SD), where the range of the substrate as well as the beam spot size (\(D_{\text{FWHM}}\)) are denoted. The representative height (\(H_{\text{R}}\)) and width (\(W_{\text{R}}\)) for the porosity calculation are also selected. (b\({}_{1}\))-(b\({}_{9}\)) Microstructures of the SS-processed four-layer powder bed with varying beam power and scan speed, marked as points in the porosity _P-v_ map (c). The dotted lines represent the volumetric energy input \(U\) isolines and the dash-dotted line represents the median porosity isoline (24.1%). (d) Phenomenological relation between the densification factor \(\varrho\) (calculated from the porosity) and \(U\).
Figure 8: _P-v_ maps of (a) the PB-averaged residual stress \(\bar{\sigma}_{\rm e}^{\rm P}\) and (b) the PB-averaged plastic strain \(\bar{p}_{\rm e}^{\rm P}\). The dotted lines represent the volumetric energy input \(U\) isolines. The _P-v_ pairs for the simulated profiles in Figs. 5a and 5b are also denoted correspondingly. Nonlinear regressions of (c) \(\bar{\sigma}_{\rm e}^{\rm P}\) and (d) \(\bar{p}_{\rm e}^{\rm P}\) on \(U\), with the regression parameters indicated correspondingly.
Figure 9: Simulated profiles of (a\({}_{1}\))-(a\({}_{5}\)) residual von Mises stress \(\sigma_{\rm e}\) and (b\({}_{1}\))-(b\({}_{5}\)) effective plastic strain \(p_{\rm e}\) in the fused strut with varying beam power \(P\) and scan speed \(v\), marked as points in the \(P\)-\(v\) maps of (c) the strut-averaged residual stress \(\bar{\sigma}_{\rm e}^{\rm S}\) and (d) the strut-averaged plastic strain \(\bar{p}_{\rm e}^{\rm S}\), respectively. The dotted lines represent the volumetric energy input \(U\) isolines. The structural distortion of each fused strut is also illustrated. Nonlinear regressions of (e) \(\bar{\sigma}_{\rm e}^{\rm S}\) and (f) \(\bar{p}_{\rm e}^{\rm S}\) on \(U\), with the regression parameters indicated correspondingly. |
2308.06270 | Joy Learning: Smartphone Application For Children With Parkinson Disease | Parkinson's is a neurologic disorder that affects not only the human body but
also social and personal life. Children with Parkinson's disease in particular
face countless difficulties in different areas of life, mostly in social
interaction, communication, connectedness, and other skills such as thinking,
reasoning, learning, and remembering. This study offers a solution for learning
social skills through a smartphone application. Children with (juvenile)
Parkinson's disease can learn to solve social and everyday problems by
observing real-life situations that cannot be explained properly by
instructors. The results show that the application enhances their involvement
in learning and in solving complex problems. | Mujahid Rafiq, Ibrar Hussain, Muhammad Arif, Kinza Sardar, Ahsan Humayun | 2023-07-27T05:06:49Z | http://arxiv.org/abs/2308.06270v1 | # Joy Learning: Smartphone Application For Children With Parkinson Disease
###### Abstract
Parkinson's is a neurologic disorder that affects not only the human body but also social and personal life. Children with Parkinson's disease in particular face countless difficulties in different areas of life, mostly in social interaction, communication, connectedness, and other skills such as thinking, reasoning, learning, and remembering. This study offers a solution for learning social skills through a smartphone application. Children with (juvenile) Parkinson's disease can learn to solve social and everyday problems by observing real-life situations that cannot be explained properly by instructors. The results show that the application enhances their involvement in learning and in solving complex problems.
Keywords:Juvenile, Parkinson, Usefulness, Effective, Social Skills, children with Parkinson
## 1 Introduction
Parkinson's disease is a severe chronic disorder. After Alzheimer's, it is the second most frequent neurodegenerative disease [1]. According to research, an estimated 1.5 million Americans and more than 10 million people worldwide are living with Parkinson's disease [2]. Juvenile Parkinsonism is defined as the appearance of Parkinsonism symptoms before the age of 21 years [3]. The Parkinson's Foundation categorizes this disease in children as Young-Onset Parkinson's Disease (YOPD); according to the foundation, it occurs very rarely in children but is important to address and understand [4]. Children with Parkinson's have similar symptoms: rigidity, bradykinesia, and tremors in the hands, arms, legs, jaw, and face. They have lower cognition and mental ability compared to a typical person [3, 5]. Medical literature is available, but the tech community has largely ignored this population. Only a few applications are available for them, as mentioned in the related work section, and there is a huge gap that needs to be filled to make these children part of normal society.
Children with Parkinson's feel shy; they avoid socializing, feel disconnected, and moreover face difficulty in learning and dealing with daily-life situations such as asking for help, solving a problem, asking permission in class, and many other related issues. Social interaction is a vital problem for children with Parkinson's disease, mainly when they need to memorize or perform certain tasks [6]. The motive behind the current study is to help them handle social situations, improve their confidence, and enhance their knowledge.
In this paper, a smartphone application, "Joy Learning", is introduced for children with Parkinson's that will help them learn according to their specialized needs. Our focus is on children with Parkinson's who avoid going outside and exploring new things, and who are unable to handle altered situations. They are usually scared to communicate and interact. Through this application, these limitations can be overcome.
The contribution of this work includes providing an effective and useful learning platform for children with Parkinson's disease. Children can learn about different real-life objects and how to deal with real-life problems through accurate examples and situations, without extra effort. This work will also help instructors teach students in a new and easy way.
The poster is divided into five sections. Section I introduces the work, Section II covers related work, Section III elaborates the experiment design and methodology used in this study, Section IV presents the analysis and results, and finally Section V concludes the work and outlines its future directions.
## 2 Related Work
In recent years, there has been increasing use of computerized technology for remedial and educational purposes to serve people with Parkinson's disease. Due to their specialized requirements, generic learning applications are of limited use for such children. The most popular and useful applications built specifically for learning purposes are discussed in this section.
Kindeo [7] is a private space for sharing stories, memories, and family knowledge. This application gives future generations a strong understanding of the family knowledge saved there. Cove [8] captures the mood and expresses the way one feels: if users like specific music or songs, they can add a description or picture, or save them. BeatPanic [9] uses soothing colors and provides positive mantras that help one calm down and control panic attacks by regulating breathing and diverting focus away from the attack. Beats Medical [11] helps users improve their speech, motor skills, and mobility. Voice Analyst [12], a self-monitoring app, works as an instrument for measuring the pitch and volume of one's voice and provides remarkable information regarding speech quality. Parkinson's easy call [13] is useful for patients with dexterity problems, enabling calls with one touch on the screen: names and numbers added by the user are displayed as prominent round buttons, and a phone call is initiated by tapping a specific button.
The applications presented thus far provide evidence of a keen need for specialized applications for people with Parkinson's disease to make them part of society. Recently, a detailed review of applications for Parkinson's disease patients was conducted [14].
## 3 Experiment Design & Methodology
The procedures of this study were approved by the ethical body of our department; furthermore, formal consent was obtained from the doctors and teachers dealing with the students under research. The interface of the application was designed by carefully following the design guidelines provided in [15], and the final design was then reviewed by two Human-Computer Interaction (HCI) experts, one doctor, and one educator who deal specifically with children having Parkinson's.
The images used in the application were taken by ourselves to maintain the familiarity of the context of each situation and to avoid depicting irrelevant situations that do not actually occur. Pictures and videos were taken in natural situations occurring in schools and the surrounding environment.
We carefully conducted the experiments on three children (two male and one female) with Parkinson's disease and of equal intellectual level, as reported by their instructor. They all come from Faisalabad, Pakistan. Two students were aged below 10, and one student belonged to the 10-15 age group. We assigned the children tentative names, child A, child B, and child C, to maintain privacy. An overview of the methodology is shown in Figure 1.
The experiment was divided into two follow-up sessions: one session collected immediate feedback after the application was used for the first time, and the second session was held after two weeks of using the application with the help of the instructors dealing with these children. In the first session, we evaluated facial expressions, task performance with respect to time, and involvement with respect to interest. We assigned each of these parameters a score out of 100 for better analysis, understanding, and measurement. A score of 0-20% is considered worst, 21-40% bad, 41-60% average/not good, 61-80% good, and 81-100% best.
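The banding of scores into categories can be stated precisely as below; this snippet is only a restatement of the scheme above, with the band labels as used in this paper.

```python
def category(score):
    """Map a 0-100 score onto the five performance bands of Session 1."""
    bands = [(20, "worst"), (40, "bad"), (60, "average/not good"),
             (80, "good"), (100, "best")]
    for upper, label in bands:
        if score <= upper:
            return label
    raise ValueError("score must lie in [0, 100]")

print(category(76.6))  # 'good' -- e.g., child A's session-1 mean
```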
Both sessions were performed with the help of two instructors and two HCI experts. In the second session, improvement in social and common skills was observed. A similar methodology has been used in many previous HCI-related studies.
### Design of Application
In Figure 2, the core components of "Joy Learning" are shown. The application is divided into three parts/modules. We chose English as the language of the application's interface because the existing applications used by children and instructors are mostly available in English, which avoids extra learning effort in using the application. The first part, "supporting situations", is based on video modeling and contains multiple social real-life situations.
Figure 1: Overview of Research Methodology
Figure 2: Joy Learning core components and communication framework.
Each situation teaches the rules and then provides a similar situation with a different scenario in which to perform, so that performance can be compared. In this way, we improve social, learning, and thinking skills, such as how to deal with different real-life situations. The second part, "supporting knowledge", is based on a 360-degree view in which characters provide information about different people, objects, and events in an interactive way, for children who are unable to go out and explore objects and people because of bradykinesia and other related symptoms and disorders [3, 15]. The third and last component, "supporting solutions", checks the children's mental level by providing a picture of a situation based on a specific problem together with three options from which to pick the best possible solution; the selected solution must resolve the overall situation. This component also records the children's previous log data so that parents and teachers can check their improvement in learning. This study primarily evaluates social skills, mental ability, reasoning, and learning skills using video modeling, a 360-degree view, and specialized pictures.
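The per-child log kept by the "supporting solutions" module can be pictured with a simple data model; the classes and fields below are hypothetical, included only to make the logging idea concrete.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SolutionAttempt:
    """One 'supporting solutions' trial: a pictured problem, three options,
    and the option the child picked (all field names are hypothetical)."""
    situation_id: str
    options: List[str]
    chosen: int
    correct: int

@dataclass
class ChildLog:
    """Per-child history so parents and teachers can track improvement."""
    child_id: str
    attempts: List[SolutionAttempt] = field(default_factory=list)

    def accuracy(self):
        if not self.attempts:
            return 0.0
        return sum(a.chosen == a.correct for a in self.attempts) / len(self.attempts)
```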
## 4 Analysis & Results
According to the findings of the first session, child A performed well in comparison with the other two children throughout the experiment. Child A showed no hesitation or anger, with up to 75% positive expressions, which is considered good in our scheme; his task-related performance was up to 66%, which is also good; and he showed 89% involvement in performing tasks, which lies in the best category. The mean of all three factors for child A is m = 76.6, which means his performance lies in the good category.
The performance of child B was somewhat lower than child A's, as she was scared while performing the tasks; her task score was 36%, which falls in the worst category in our scheme. Frustration showed clearly on her face, so she scored 48% on expressions, which is considered bad, and 50% on involvement while performing tasks. The overall mean value for child B is m = 44.6, which lies in the average/not-good category. Evaluating child C was difficult, as he was a quiet child with few facial expressions (up to 34%), which made it hard to gain feedback, although he performed his tasks at up to 60% and showed 70% involvement. The overall mean for child C is m = 54.6, which falls under average performance.
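The reported means follow directly from the three session-1 scores per child (values restated from the text; the reported figures truncate the repeating decimal):

```python
# (expressions %, task performance %, involvement %) per child
scores = {"A": (75, 66, 89), "B": (48, 36, 50), "C": (34, 60, 70)}

for child, s in scores.items():
    print(child, round(sum(s) / len(s), 1))  # A: 76.7, B: 44.7, C: 54.7
```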
These three children used our application continuously for 2 weeks with the help of the instructors, as mentioned in Section III. In the second session, which measured improvements in common skills and effects on the social life of the same children, child A showed up to 83% improvement in learning common skills and up to 78% learnability of social skills, which lie in the best and good categories respectively, ultimately resulting in better interaction with society. Child B performed well in the second session, with up to 69% improvement in learning common skills and 60% improvement in the learnability of social-life skills. Child C also improved, with up to 61% in common skills and up to 54% in social skills (which is slightly average/not good).
The overall results indicate that continuous usage of the application is effective in improving the social as well as common skills for which the application was purposely built.
Disclaimer: The results are purely based on user observations. Although the data collection process was carefully refined and handled with the help of experts, there may be minor errors and ambiguities, which usually cannot be ruled out in human-based experiments.
## 5 Conclusion and Future work
Our proposed smartphone application "Joy Learning" is considered useful, as the results highlight: it is beneficial in improving social as well as common skills.
The results measure the expressions, performance, and involvement of children while using this application for the very first time. Its interface is designed to accommodate movement disorders in children, following properly suggested guidelines. To make this application more productive, our future aim is to enhance its features and implement it on a virtual reality platform to give a more realistic feel of situations and environments. Proper usability experiments will be conducted, and the work will be extended in a more authentic way. A more detailed and extended version of this work will be submitted to a journal.
## Acknowledgment
We would like to thank everyone involved in the survey process and other research-related tasks. Special thanks to Dr. Sonia from the Zunnurain Foundation, Faisalabad, Pakistan, for guidance and opinions related to handling children with Parkinson's disease and for helping us conduct the tests.
|
2305.11928 | Energy-frugal and Interpretable AI Hardware Design using Learning
Automata | Energy efficiency is a crucial requirement for enabling powerful artificial
intelligence applications at the microedge. Hardware acceleration with frugal
architectural allocation is an effective method for reducing energy. Many
emerging applications also require the systems design to incorporate
interpretable decision models to establish responsibility and transparency. The
design needs to provision for additional resources to provide reachable states
in real-world data scenarios. Defining conflicting design tradeoffs between
energy efficiency and interpretability is challenging.
Recently a new machine learning algorithm, called the Tsetlin machine, has
been proposed. The algorithm is fundamentally based on the principles of
finite-state automata and benefits from natural logic underpinning rather than
arithmetic. In this paper, we investigate methods of energy-frugal artificial
intelligence hardware design by suitably tuning the hyperparameters, while
maintaining high learning efficacy. To demonstrate interpretability, we use
reachability and game-theoretic analysis in two simulation environments: a
SystemC model to study the bounded state transitions in the presence of
hardware faults and Nash equilibrium between states to analyze the learning
convergence. Our analyses provide the first insights into conflicting design
tradeoffs involved in energy-efficient and interpretable decision models for
this new artificial intelligence hardware architecture. We show that frugal
resource allocation coupled with systematic prodigality between randomized
reinforcements can provide decisive energy reduction while also achieving
robust and interpretable learning. | Rishad Shafik, Tousif Rahman, Adrian Wheeldon, Ole-Christoffer Granmo, Alex Yakovlev | 2023-05-19T15:11:18Z | http://arxiv.org/abs/2305.11928v1 | # _Energy-frugal_ and _Interpretable_ AI Hardware Design using Learning Automata
###### Abstract
Energy efficiency is a crucial requirement for enabling powerful artificial intelligence applications at the microedge. Hardware acceleration with frugal architectural allocation is an effective method for reducing energy. Many emerging applications also require the systems design to incorporate interpretable decision models to establish responsibility and transparency. The design needs to provision for additional resources to provide reachable states in real-world data scenarios, defining conflicting design tradeoffs between energy efficiency. is challenging.
Recently, a new machine learning algorithm, called the Tsetlin Machine, has been proposed. The algorithm is fundamentally based on the principles of finite-state automata and benefits from natural logic underpinning rather than arithmetic. In this paper, we investigate methods of energy-frugal artificial intelligence hardware design by suitably tuning the hyperparameters, while maintaining high learning efficacy. To demonstrate interpretability, we use reachability and game-theoretic analysis in two simulation environments: a SystemC model to study the bounded state transitions in the presence of hardware faults and Nash equilibrium between states to analyze the learning convergence. Our analyses provide the first insights into conflicting design tradeoffs involved in energy-efficient and interpretable decision models for this new artificial intelligence hardware architecture. We show that frugal resource allocation coupled with
systematic prodigality between randomized reinforcements can provide decisive energy reduction while also achieving robust and interpretable learning.
## 1 Introduction
Minimizing energy consumption is a primary design objective in embedded artificial intelligence (AI) applications, such as image and voice recognition [24, 16, 2, 25]. In many applications, hardware acceleration is preferred over software implementation as the former is significantly more energy efficient [7]. To reduce energy in hardware, architectural resource pruning is an effective method which aims to cut down non-critical data computation and movement costs. Examples in traditional neural networks (NNs) include approximate arithmetic design [1, 21], network sparsification [14, 5], network compaction using hyperparameter search [3] and mixed-signal design [17, 23]. However, architectural changes such as these affect the learning accuracy and make the decision process sensitive to parametric and data variations [31, 34].
Interpretability is another property of AI design that allows for explaining the method of learning (i.e. training) and the decision models (i.e. classification) from data in a humanly intelligible way [4]. It is an important design objective for establishing responsibility in autonomous applications, particularly those that are safety-critical. Currently there is growing interest in interpretable AI systems design implemented using NNs. However, this has remained non-trivial due to their complex arithmetic underpinning, including variable gradient-descent based learning behavior during the training regime [12, 6, 33]. This is further exacerbated by design approximation methods for energy efficiency [31].
Learning automata (LAs) constitute a class of machine learning (ML) with unique discrete reinforcement characteristics that can address the above design objectives [11]. Originally proposed by Mikhail Tsetlin, an LA uses the finite-state automaton as the basic learning unit [8]. Each automaton reinforces the current action using its past history, following the trajectory of a probability distribution. This probability distribution is updated based on the environmental responses the automaton obtains by performing a particular action. However, as the number of actions and their probability distribution trajectories have a very large number of combinations, designing compact decision systems using LAs has been challenging [20, 18].
Recently, the Tsetlin machine (TM) has been proposed as a promising new ML algorithm [10] that simplifies traditional learning automata by combining state-bounded action updates with contemporary game theory [30]. Each automaton, defined as a finite automaton with linear tactics, or Tsetlin automaton (TA), can independently "play games", i.e. update its internal states and actions, using newly refined reinforcement mechanisms (see Table 1). These have enabled the formulation of a learning problem through hierarchical and powerful propositional logic expressions [9, 15, 13]. Exploiting these, the first-ever TM hardware architecture was proposed, which demonstrated significantly higher energy efficiency than state-of-the-art NNs [32]. A brief description of the Tsetlin machine is provided in Section 2.
The efficacy of hardware Tsetlin machines depends on a number of hyperparameters (see Table 2). These are often inter-dependent and intertwined due to the stochastic nature of the propositional-logic-based learning within the reinforcement components of the TM algorithm [9]. To understand the relationships between them and thereby achieve maximum efficacy, a systematic design space exploration is needed.
In a TM, when the energy efficiency objective is coupled with learning efficacy, this exploration becomes significantly more challenging. On the one hand, energy-frugality favors using the least amount of resources (e.g. the minimum number of clauses and reinforcement events per learning epoch). On the other hand, accuracy requires significantly higher stochastic diversity between the reinforcement components (using a higher number of clauses as well as concurrent learning events). To strike a balance between these conflicting requirements, designing a suitable degree of prodigality is essential. Prodigality allows the system to navigate through the maximum number of bounded state transitions so that it does not miss the best learning scenarios. One key mechanism for achieving prodigality in a TM is to enable the system to perturb its state transitions with the aim of performing the optimal number of reinforcement steps.
In this paper, we investigate methods of leveraging the LA hyperparameters to resolve this natural conflict in the best possible way. To that end, we systematically study reachability under controlled redundancy and randomization. Further, we analyze state reachability and convergence in a game-theoretic setting (with Nash equilibria). Our overall aim is to demonstrate methods for energy-frugal and explainable ML designs that are pivotal for the growth of this new AI algorithm.
The paper is organized as follows. Section 3 introduces the LA hyperparameters in the context of the hardware architecture. Section 4 studies their impact on the conflicting tradeoffs between energy, accuracy and performance. Sections 5 and 6 provide a state-transition-based reachability and convergence analysis with and without faults present. Finally, Section 7 summarizes the analysis and discusses future work.
## 2 The Tsetlin Machine
Figure 1 depicts a schematic diagram of the TM hardware architecture consisting of 3 structural components [10, 32]: data encoding, reinforcement and inference. These are briefly described below.
**Data encoding**: TMs encode the input data as a set of Boolean digits with equal significance, which we term _Booleanization_. This is different from binarization in NNs, where data are encoded in binary numbers with positional significance of digits. Details of the method of Booleanization are outside the scope of this paper; interested readers can refer to [32]. The encoded Boolean digits and their complements define a set of input literals for the TM. Each clause, representing a propositional logic unit, consists of all input literals and their corresponding TAs -- with a selected number of literals included in the conjunctive logic expression. The number of clauses in each class is an important design parameter that depends on the complexity of the ML problem and is determined at design time.
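As an illustration of this encoding step, the sketch below Booleanizes a raw feature by thresholding and forms the literal set. The thresholding scheme is an assumption made for demonstration purposes only; the exact method used in our flow is described in [32].

```python
def booleanize(value, thresholds):
    """Encode a raw feature as Boolean digits of equal significance,
    here by comparing it against a set of threshold levels (assumed scheme)."""
    return [int(value >= t) for t in thresholds]

# A hypothetical 2-feature datapoint encoded with 2 thresholds per feature:
digits = booleanize(5.1, [2.0, 4.0]) + booleanize(1.4, [2.0, 4.0])  # [1, 1, 0, 0]
literals = digits + [1 - d for d in digits]  # the digits and their complements
```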
**Reinforcement**: Input literals are then included in or excluded from the clause output definition depending on the internal states of the reinforcement components, the TAs. These are finite automata with linear tactics, implemented as finite state machines (FSMs) in the hardware. Each automaton
Figure 1: A schematic diagram of TM, showing 3 different structural components: reinforcement components primarily used in the learning/training process, inference components which are crucial for classification and data encoding that allows parallel inputs both during training and inference.
constitutes a set of states that define the discrete action space. During training, rewards are used to reinforce the states towards an action and penalties are used to transition the states to weaken the automaton's confidence in performing an action. The action updates take place in discrete space, rather than in gradient-descent steps, which is a major differentiator when compared to traditional neural networks. This feature can be exploited for discernible and explainable AI hardware design. Doing so requires understanding the reachability of TA states and clause outputs in relation to the Boolean literals during the training and inference exercises.
**Inference**: After training, an ensemble of TA states (i.e. their actions) defines the selection of the literals as well as the output of a clause. To implement a propositional structure, the clauses are divided into two groups: positive clauses and negative clauses. Using a majority voting mechanism, the group of clauses with the most logic 1 outputs infers the class definition.
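A behavioral sketch of this inference path is given below. The include masks stand in for trained TA actions; the empty-clause convention and the tie-breaking rule in the vote are simplifying assumptions of the sketch, not a description of the hardware.

```python
def clause_output(literals, include_mask):
    """A clause is the conjunction (AND) of its included literals.
    Note: a clause that includes no literal outputs 1 here (all([]) is True),
    which is a simplification of the real inference-time convention."""
    return int(all(lit for lit, inc in zip(literals, include_mask) if inc))

def classify(literals, positive_masks, negative_masks):
    """Majority voting: positive clauses vote for the class, negative against.
    Ties are resolved in favor of the class (an assumption of this sketch)."""
    votes = sum(clause_output(literals, m) for m in positive_masks) \
          - sum(clause_output(literals, m) for m in negative_masks)
    return int(votes >= 0)
```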
A detailed introduction to the TM hardware architecture and the original algorithm can be found in [10, 32].
## 3 Tsetlin Machine Hyperparameters
In the following we briefly introduce the architectural and learning hyperparameters that affect learning efficacy, energy and reachability.
### Architectural Hyperparameters
The TM is intrinsically hierarchical. Both reinforcement and inference components have the same number of stochastically reinforced clauses for each output class. The output of these clauses is defined by a team of TAs. The number of TAs is determined by the number of booleanized digits and their complements (which can be application specific [22]). Each automaton consists of \(2n\) states, with \(n\) being the decision boundary between the actions _include_ and _exclude_. The choice of \(2n\) influences the register sizing within each automaton FSM as well as the number of reachable state transitions for convergence (see Section 5).
Typically, for an application the number of booleanized digits (\(L\)) and the number of output classes (\(M\)) are pre-defined and fixed. The designer then suitably allocates \(U\) clauses per output class. Thus, a total of \(2L\times U\times M\) TAs are needed. Choosing the number of clauses is non-trivial, as there are conflicting design tradeoffs between learning accuracy and energy efficiency (see Section 4). A higher number of clauses offers more stochastic diversity in the propositional logic and as such favors better learning accuracy, while increasing the system energy consumption. Conversely, a lower number of clauses reduces the energy consumption at the cost of less stochastic diversity and thereby degraded learning efficacy (see Section 4).
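The architectural cost of a configuration follows directly from these hyperparameters; the sketch below, using Iris-like values from Table 4 as an example, is a back-of-the-envelope estimate rather than a synthesis result.

```python
import math

def num_tas(L, M, U):
    """Total Tsetlin automata: 2L literals per clause, U clauses per class,
    M classes, i.e. 2L x U x M (Section 3.1)."""
    return 2 * L * U * M

def state_bits(L, M, U, n):
    """Total state-register bits when each TA holds 2n states."""
    return num_tas(L, M, U) * math.ceil(math.log2(2 * n))

print(num_tas(16, 3, 90))        # 8640 automata for L=16, M=3, U=90
print(state_bits(16, 3, 90, 3))  # 25920 register bits with 6-state TAs
```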
### Learning Hyperparameters
With the given architectural parameters, the actual learning process in the TM involves game-theory-inspired randomized state transitions between the automata [19]. In each round of the game, a selection of these automata independently decide on their next state transitions and actions within their respective clauses. This selection is governed by the following feedback steps in the iterative learning process:
1. The current automata actions (_include_ or _exclude_) form a propositional logic expression over the literals, which defines the clause output for the training datapoint (a set of booleanized literals used in training or inference).
2. For a given datapoint, the TM architecture then generates two groups of clause outputs with an equal number of clauses: positive polarity clauses (\(C_{j}^{i+}\)) and negative polarity clauses (\(C_{j}^{i-}\)). By subtracting the sum of all \(C_{j}^{i-}\) from the sum of all \(C_{j}^{i+}\), the learning classification is produced [10].
3. If the output is a false negative or a true positive (i.e. the expected output is 1, regardless of the current clause output), then Type I feedback is required for the automaton
\begin{table}
\begin{tabular}{c|c|c c c|c c c}
**Feedback Type** & \multicolumn{4}{c}{**Type I**} & \multicolumn{4}{c}{**Type II**} \\ \hline \multicolumn{2}{c|}{_Truth Value of Clause \(C_{j}^{i+}\)_} & \multicolumn{2}{c|}{1} & \multicolumn{2}{c|}{0} & \multicolumn{2}{c}{1} & \multicolumn{2}{c}{0} \\ \multicolumn{2}{c|}{_Truth Value of Literal \(l_{k}\)_} & \multicolumn{2}{c}{1} & \multicolumn{2}{c|}{0} & \multicolumn{2}{c|}{1} & \multicolumn{2}{c}{0} & \multicolumn{2}{c}{1} & \multicolumn{2}{c}{0} \\ \hline \hline \multirow{2}{*}{**Include Literal (\(l_{k}\in L_{j}^{i+}\))**} & \(P(\text{Reward})\) & \(\frac{s-1}{s}\) & NA & 0 & 0 & NA & 0 & 0 \\ & \(P(\text{Inaction})\) & \(\frac{1}{s}\) & NA & \(\frac{s-1}{s}\) & \(\frac{s-1}{s}\) & 1.0 & NA & 1.0 & 1.0 \\ & \(P(\text{Penalty})\) & 0 & NA & \(\frac{1}{s}\) & \(\frac{1}{s}\) & 0 & NA & 0 & 0 \\ \hline \multirow{2}{*}{**Exclude Literal (\(l_{k}\notin L_{j}^{i+}\))**} & \(P(\text{Reward})\) & 0 & \(\frac{1}{s}\) & \(\frac{1}{s}\) & \(\frac{1}{s}\) & 0 & 0 & 0 & 0 \\ & \(P(\text{Inaction})\) & \(\frac{1}{s}\) & \(\frac{s-1}{s}\) & \(\frac{s-1}{s}\) & \(\frac{s-1}{s}\) & 1.0 & 0 & 1.0 & 1.0 \\ & \(P(\text{Penalty})\) & \(\frac{s-1}{s}\) & 0 & 0 & 0 & 0 & 1.0 & 0 & 0 \\ \hline \end{tabular}
\end{table}
Table 1: Reinforcement feedback in a ta deciding on _include_ or _exclude_ of a given literal \(l_{k}\) in the clause \(C_{j}^{i+}\). NA refers to _no action_.
corresponding to the literal \(l_{k}\) within a \(C_{j}^{i+}\) (Table 1).
4. If the output is a false positive (i.e. the expected output is 0 but the current output is 1), then Type II feedback is required for the automaton corresponding to the literal \(l_{k}\) within a \(C_{j}^{i+}\) (Table 1).
The selection of the clauses that are reinforced in steps 3 and 4 above depends on a feedback threshold (\(T\)). A higher \(T\) value forces a larger randomly selected team of clauses to participate in the reinforcement process, providing more stochastic diversity in the reinforcement steps, and vice versa. Each automaton within the selected clauses also updates its state transitions based on the stochastic variable called learning sensitivity (\(s\)), which controls the level of agility in issuing a reward or penalty to each TA. Random number generation is crucial for controlling these stochastic variables, \(s\) and \(T\). With suitably chosen hyperparameters, rewards and penalties allow for the state transitions, while inaction is crucial for learning stability.
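A minimal sketch of the per-TA feedback decision implied by Table 1 follows (the unreachable NA column, an included 0-literal inside a 1-valued clause, is omitted).

```python
import random

def type_i(include, literal, clause, s):
    """Type I feedback outcome for one TA, following the probabilities
    of Table 1."""
    r = random.random()
    if include:
        if clause == 1 and literal == 1:
            return "reward" if r < (s - 1) / s else "inaction"
        return "penalty" if r < 1 / s else "inaction"   # clause output is 0
    if clause == 1 and literal == 1:
        return "penalty" if r < (s - 1) / s else "inaction"
    return "reward" if r < 1 / s else "inaction"

def type_ii(include, literal, clause):
    """Type II feedback: penalize an excluded 0-literal of a 1-valued clause;
    all other cases are inaction (Table 1)."""
    if (not include) and clause == 1 and literal == 0:
        return "penalty"
    return "inaction"
```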
Table 2 summarizes the Tsetlin machine hyperparameters with their associated symbols and impacts. In the next section their impact on energy-frugality and prodigality will be investigated in detail.
## 4 Energy-Frugal Design
### Design Exploration Framework
Figure 2 shows the hardware/software co-design framework used in our design exploration exercises. In this framework, software (SW) models are used for validating training and inference accuracy, typically for the larger ML datasets.
\begin{table}
\begin{tabular}{l l l}
**Hyperparameter** & **Symbol** & **Impact** \\ \hline Number of booleanized inputs & \(L\) & Application-specific and fixed \\ Number of classes & \(M\) & Application-specific and fixed \\ Number of clauses per class & \(U\) & Influences accuracy/energy/prodigality \\ Number of automaton states & \(2n\) & Influences reachability and FSM architecture \\ Automaton decision boundary & \(n\) & Influences reachability and FSM architecture \\ Automaton initialization state & \(\phi_{\text{Init}}\) & Influences reachability (typically \(\phi_{\text{Init}}\)=\(S_{n}\) or \(S_{n+1}\)) \\ Feedback threshold & \(T\) & Influences concurrent learning events of clauses \\ Learning Sensitivity & \(s\) & Influences the penalty/reward probabilities in automaton \\ \hline \end{tabular}
\end{table}
Table 2: Tsetlin machine parameters, their symbols and system-wide impact.
However, they have limited low-level hardware validation capability. Field-programmable gate array (FPGA) prototype models are used to facilitate accelerated design explorations with refined hardware configurations. These are ideal for resource-frugality as well as hardware-level learning accuracy validations. However, FPGAs have limited flexibility and scalability with respect to ML datasets, and as such scaled-down datasets are used. By transferring the configurations onto register transfer level (RTL) models, low-level hardware prototypes are designed, which are validated for high-fidelity figures of energy-frugality, accuracy and performance. Since low-level hardware simulations are computationally expensive, FPGA prototype models are iterated with different parametric values for faster design exploration.
For a set of ML datasets, each hyperparameter is iterated over its allowable values on the framework. In each iteration a training experiment, followed by an inference experiment, is carried out to study the tradeoffs between accuracy, performance and energy. In our energy-frugality investigations, we use three characteristically different datasets, as follows (also see Figure 3). _Iris1_ is a small flower detection dataset with 16 booleanized digits; the dataset has 3 output classes which are evenly distributed over the 150 datapoints but with high correlations between two output classes (_versicolor_ and _virginica_). _Breast Cancer2_ is a diagnostic dataset with 300 booleanized digits; the dataset has 2 output classes (_malignant
Figure 2: A hardware/software co-design framework for TM design automation and exploration. The aim is to achieve energy-frugality, performance and accuracy in the hardware TM.
and _benign_) with a 9:7 bias towards _benign_ in the 569 datapoints with minor correlations between them. _MNIST_3 is a handwritten digit recognition dataset with 784 booleanized digits; the dataset has 10 output classes without any particular bias between them. However, it features high correlations between some digits, e.g. between 5 and 6, and between 1 and 7.
Footnote 3: MNIST: [https://tinyurl.com/cpbyrhs](https://tinyurl.com/cpbyrhs)
### Hyperparameter Search and Design Exploration
A number of hyperparameter search experiments were carried out using the framework (Figure 2). Figure 4 illustrates the impact of the hyperparameter \(s\) on the inference accuracy with varying numbers of clauses (\(U\)) and learning threshold (\(T\)) values for the 3 datasets. Each inference experiment was run with an 80%-20% split between training and inference for 100 training epochs, which is large enough for learning convergence. This was repeated 300 times to provide a stable mean accuracy.
Figure 4(a) and Figure 4(d) show the impact of lower and higher \(s\) values (\(s\)=1.2 and \(s\)=10) for the _Iris_ dataset. As can be seen, the smaller \(s\) value provides comparable accuracy with a lower number of clauses (e.g. 50 clauses provide the same accuracy for \(s\)=1.2 as opposed to 150 clauses when \(s\)=10). This is because a lower \(s\) value enables more penalty/reward events and fewer no-action events per automaton (see Table 1 and Figure 5). In the case of higher \(s\), the impact of different \(T\) values is remarkably more visible. This is an effect of lower \(T\) values enabling a smaller number of randomly selected concurrent TA updates in the same reinforcement cycle (Figure 5). For both experiments, when the number of
Figure 3: Principal component analyses of datasets showing class correlations between 2 components.
Figure 4: The impact of hyperparameters on the tm inference accuracy.
clauses is substantially large (e.g. \(>300\)), maximum accuracy can be achieved at higher \(T\) values. Due to sporadic TA reinforcements dispersed between the clauses, lower \(T\) values show over-fitting trends between the _versicolor_ and _virginica_ output classes, which have high correlations (Figure 3(a)).
The over-fitting problem is, however, less dominant in the _Breast Cancer_ dataset, as shown in Figure 4(b) and Figure 4(e), since there is less correlation between the output classes. As such, lower \(T\) and \(s\) values contribute to higher accuracy even at a lower number of clauses (Figure 3(b)). For example, despite having a larger number of datapoints, this dataset can achieve more than 96% accuracy with only 60 clauses.
The hyperparameter search experiments on _MNIST_ provide similar insights to those on _Iris_, as lower \(T\) values persistently suffer from over-fitting (Figure 4(c) and Figure 4(f)). With high correlations between classes in the dataset, more concurrency and stochasticity are essential, obtained through higher \(T\) values.
\begin{table}
\begin{tabular}{c||c|c|c|c|c|c}
**Setup** & **s** & **T** & **Epoch** & **Reward Type I** & **Penalty Type I** & **Penalty Type II** \\ \hline \hline
1 & 1.2 & 20 & 3 & 62,086 & 52,258 & 15,909 \\
2 & 18 & 4 & 100 & 102,510 & 137,491 & 36,679 \\ \hline \end{tabular}
\end{table}
Table 3: Careful prodigality offers faster learning, although with a larger number of reinforcement events per learning epoch. Note that the number of reinforcements through all feedback types is almost equally inflated when \(s\) is higher and \(T\) is lower, as more learning epochs are necessary for comparable accuracy.
Figure 5: Number of learning events generated by different \(T\) and \(s\) values on the Iris dataset.
Figure 5 shows the number of penalty/reward reinforcement events generated for the _Iris_ dataset for different (\(s\), \(T\)) pairs. The number of clauses is fixed at 90 and the inference experiments were executed over 30 epochs. As expected, low \(T\) produces significantly fewer events when compared with higher \(T\) for a given \(s\). A higher \(s\) value reduces the number of events considerably, but introduces further instability in the learning (Figure 4(d)). Although in terms of energy consumption this is more rewarding, it can provide inferior inference performance due to the over-fitting issues described earlier (see Figure 4(a)). Higher \(T\) with lower \(s\) values produces more events and provides better learning efficacy (see Figure 4(a) and Figure 4(d)) at the cost of higher energy consumption. These are examples of conflicting tradeoffs, where prodigality must be carefully designed to ensure a balance between energy and learning efficacy. Table 3 demonstrates that the prodigal allocation (lower \(s\) with higher \(T\)) better exploits the stochastic diversity between TAs and achieves \(>\)93% accuracy much faster (in 3 epochs) when compared with the learning-efficacy-unaware, energy-frugal solution (higher \(s\) with lower \(T\)).
Table 4 shows the optimized hyperparameters that offer the best learning efficacy and energy-frugality. To generate the energy figures, we used low-level hardware design experiments in Cadence Innovus using scaled-down datasets (e.g. _Iris_). A 65 nm low-power technology node based design was synthesized with the necessary peripherals and input/output (IO) to produce normalized energy in terms of the energy consumed per atomic data operation [32]. These figures were then scaled up for the optimized architectural allocations in each dataset to estimate the energy required per inference datapoint (i.e. the number of concurrent booleanized literals).
\begin{table}
\begin{tabular}{c||c|c|c|c|c}
**Dataset** & **Clauses** & \(T\) & \(s\) & **Testing** & **Energy/** \\ & & & & **Accu-** & **datapoint** \\ & & & & **racy** & \\ \hline \hline _Iris_ & 90 & 4 & 1.2 & 94.1\% & 68.8 _pJ_ \\ _Breast Cancer_ & 60 & 5 & 1.1 & 97.0\% & 574 _pJ_ \\ _MNIST_ & 100 & 8 & 1.9 & 95.1\% & 12.5 _nJ_ \\ \hline \end{tabular}
\end{table}
Table 4: Optimized hyperparameters, learning accuracy and energy consumption for datasets.
### Impact of Pseudorandom Number Generation on Accuracy
Pseudorandom number generation is an important means of ensuring stochastic diversity between clauses in a TM. Combined with the agility in issuing reinforcement actions (by means of the parameter \(s\)), this becomes an effective mechanism to control the prodigality and hence accuracy of a TM. In the hardware implementation of the TM, one pseudorandom number generator (PRNG) is instantiated for each TA to allow maximum concurrency in learning. In its current form, the software TM produces random numbers using a 64-bit permuted congruential generator (PCG). Whilst the PCG offers great statistical properties, it requires a \(64\times 64\)-bit multiplication, addition and shifts. These operations are very costly to implement in hardware in terms of area and power.
In the TM we are not concerned with the unpredictability properties of the PRNG, but only with the stochastic diversity between the clauses. So as an alternative to the PCG, we instead use linear feedback shift registers (LFSRs), which require only a shift register and XOR operations. The area and power of an LFSR scale with its bit width; therefore minimization of bit width is paramount for energy frugality.
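For concreteness, one step of an 8-bit LFSR can be written as below. The tap positions correspond to one maximal-length polynomial (\(x^{8}+x^{6}+x^{5}+x^{4}+1\)) chosen purely for illustration; the hardware does not fix a specific polynomial in this description.

```python
def lfsr8_step(state):
    """One step of an 8-bit Fibonacci LFSR with taps at bits 8, 6, 5 and 4
    (1-indexed), i.e. an assumed maximal-length polynomial."""
    bit = ((state >> 7) ^ (state >> 5) ^ (state >> 4) ^ (state >> 3)) & 1
    return ((state << 1) | bit) & 0xFF

state = 0x5A                 # any nonzero seed
stream = []
for _ in range(8):
    state = lfsr8_step(state)
    stream.append(state & 1)  # low bit used as the pseudorandom stream
```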
In order to assess the impact of stochasticity on the TM, we first find the \((U,T,s)\) parameter combination giving the highest accuracy on the Iris dataset, using the original PCG PRNG method, 50 epochs and 1000 ensembles: \((140,11,10)\). Keeping these parameters constant, in Figure 6(a) we investigate how differently sized LFSRs compare in accuracy with the PCG. The 8-bit LFSR maintains similar
Figure 6: The impact of LFSR allocations on learning efficacy.
accuracy to the PCG--in fact, any LFSR size from 8 to 32 bits exhibits accuracy within the margin of error. With a 7-bit LFSR there is a small but noticeable drop in accuracy. Below 7-bit width we see a huge and unacceptable loss in accuracy as the diversity in clause learning drops.
Figure 6(b) shows that, for this dataset and TM configuration, it is possible to regain some of the accuracy lost by the 7-bit LFSR by reducing the value of the \(s\) parameter, encouraging more learning steps to take place within the same number of epochs. We notice that this loss in accuracy cannot be reclaimed by either increasing the number of clauses or varying the \(T\) parameter. In the former case, clauses are simply duplicated and do not add any extra information to the learning process. This means that the optimal TM specification using the PCG is also the optimal specification for LFSRs of width 8 bits or greater with the Iris dataset. It is not clear how datasets with higher dimensionalities will be affected by the precision of the LFSR; we aim to study this in the future.
## 5 Explainability and Dependability using Reachability Analysis
Modeling learning capability is vital for understanding explainability [4]. A crucial component of this capability is reachability analysis. Reachability is traditionally defined as a process of exploring the set of states that a (usually discrete-event) system can visit while performing a set of permitted actions. Often this process aims to check for and prove certain properties of the system. In our research, we define reachability slightly more specifically, as the property of the system that allows it to navigate through the finite state-space produced by the composition of finite-state automata, namely TAs. This property is crucial for the hardware to generate the intended and bounded outputs by relating them to sequences of the input datapoints.
To investigate the reachability of the AI hardware using the principles of learning automata (Figure 1), a key hardware block is the team of TAs within the reinforcement component. As described earlier, the overall operational cycle in reinforcement involves the work of both sequential (TAs) and combinational parts (clauses, classifiers and feedback). As input data sequences are applied, the whole system evolves in the TA state-space and eventually reaches the subset of states (trained states) where the system can perform its most advantageous classification decisions. The latter property, convergence to the stable trained state, is crucial for the accuracy and efficiency of the TM in terms of performance and energy. Besides, reachability becomes a measure of explainability because the trajectories of states through which the system converges can be easily traced.
Figure 7 shows a high-level state transition diagram of each automaton with \(2N\) internal states in a 2-action environment. We denote the TA states as **S**=\(\{s_{1},s_{2},\ldots s_{n}\ldots s_{2N}\}\), where \(s_{n}\) is the \(n\)-th state. Each automaton initially starts from a random state near the action boundary, e.g. \(\phi_{init}\)=\(s_{N}\) or \(s_{N+1}\). This allows the TA to make fewer state transitions to reinforce an action. After each reinforcement step, a reward is used to strengthen an action or a penalty is used to weaken the automaton's confidence in performing the current action [10]. Since state transitions take place in discrete single steps, \(s_{n}\) is the TA state resulting from a transition from either \(s_{n-1}\) or \(s_{n+1}\). For a given state \(s_{n}\), the action performed by the automaton is given as:
\[G(s_{n})=\begin{cases}\alpha_{1};&\text{if }1\leq n\leq N\\ \alpha_{2};&\text{if }(N+1)\leq n\leq 2N\end{cases} \tag{1}\]
To demonstrate the number of reinforcement steps needed to fully converge to the final state as well as the corresponding action, we consider an automaton with \(2N=6\) internal states and 2 actions. The state transition equations of all automaton states are given below:
\[s_{1} =(s_{1}\text{ AND }R)+(s_{2}\text{ AND }R); \tag{2}\] \[s_{2} =(s_{1}\text{ AND }P)+(s_{3}\text{ AND }R);\] (3) \[s_{3} =(s_{2}\text{ AND }P)+(s_{4}\text{ AND }P);\] (4) \[s_{4} =(s_{3}\text{ AND }P)+(s_{5}\text{ AND }P);\] (5) \[s_{5} =(s_{4}\text{ AND }R)+(s_{6}\text{ AND }P); \tag{6}\]
Figure 7: A Tsetlin automaton for 2-action environment with \(2N\) states
Figure 8: An illustrative example of TA state changes in a 2-input binary XOR
\[s_{6}=(s_{5}\text{ AND }R)+(s_{6}\text{ AND }R), \tag{7}\]
where R and P are the reward and penalty signals generated by the state update circuit depending on the randomized reinforcement trajectory (Table 1). From Eqns. (2)-(7), given the random \(\phi_{init}\) of either \(s_{3}\) or \(s_{4}\), the automaton needs a minimum of 3 or 4 reinforcement steps. In Section 6, we provide a game-theoretic analysis of state convergence using Nash equilibria for a binary XOR example.
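These transition equations amount to a bounded random walk, as the sketch below makes explicit for the 6-state automaton. State indices follow Figure 7; the reward/penalty sequence here is only a stand-in for the feedback of Table 1.

```python
import random

def ta_step(state, feedback, n_states=6):
    """Single-step TA update realizing Eqns. (2)-(7): a reward deepens the
    current action (towards s1 or s6), a penalty moves the state towards
    the decision boundary and eventually across it."""
    mid = n_states // 2
    if feedback == "reward":
        return max(state - 1, 1) if state <= mid else min(state + 1, n_states)
    if feedback == "penalty":
        return state + 1 if state <= mid else state - 1
    return state  # inaction leaves the state unchanged

state = random.choice([3, 4])  # random initialization near the boundary
for _ in range(10):
    state = ta_step(state, random.choice(["reward", "penalty", "inaction"]))
action = "include" if state >= 4 else "exclude"  # Eq. (1) with N = 3
```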
The states of the whole TM are formed as Cartesian products of the states of individual TAs. To illustrate how bounded TA state transitions contribute to a reachable learning formulation in the TM algorithm, we simulate a 2-input XOR using a SystemC description. The inputs and their complements constitute 4 literals, and as such 4 TAs are used in each clause. Each automaton consists of 6 states as exemplified above. A total of 4 clauses are used in the inference circuit, of which 2 are positive clauses and 2 are negative clauses feeding into the majority voting (i.e. classification) circuit. Figure 8 shows the internal states of the 4 TAs defining one clause output only.
The state transitions in an epoch correspond to 4 datapoints (each a set of literals), but only 2 are shown. The TAs start from the same initial state of \(s_{3}\). After the first datapoint (\(\mathbf{X}\):\([X_{0},X_{1}]\)=\([0,0]\)) is reinforced into the literals \(l_{0}\), \(l_{0}^{\prime}\), \(l_{1}\) and \(l_{1}^{\prime}\) (the original Booleans and their complements), the clause sees an output of 1 as all TA states suggest no inclusion of 0-valued literals. Overall, this results in a false positive classification, and as such 2 penalties are issued to TA\({}_{0}\) and TA\({}_{2}\) through Type II feedback (Table 1), causing them to transition to \(s_{4}\) (Figure 8(a)). After the second datapoint (\(\mathbf{X}\):\([0,1]\)), the clause output is 0 as the TA\({}_{2}\) state favors the inclusion of a 0-valued literal (\(X_{0}^{\prime}\)). However, due to the false negative classification, TA\({}_{0}\) is penalized to \(s_{3}\) and TA\({}_{3}\) is rewarded to \(s_{2}\) through Type I feedback (Figure 8(b)). With more datapoints and their associated single-step reinforcements (Eqns. (2)-(7)), the TAs continue to settle into the states with higher reward probabilities, e.g. \(s_{1}\) and \(s_{6}\) (Figure 8(c)). This guarantees convergence during training (also see Section 6).
The above finite-state reachability analysis shows an important property of the TM, whereby the (integer) vector of TA states is effectively mapped (contracted) onto the (binary) vector of include/exclude actions. This mapping allows us to define a notion of equivalence between the states of TAs, and hence define the conditions for detecting convergence to the trained state as early as possible, thus improving the efficiency of the system and its performance.
We continue our reachability analysis in the presence of faults. For this, we introduce fault injection handles in the SystemC model using fault-enabled data types [26]. For demonstration purposes, our fault injection campaign uses a stuck-at-1 fault model, applied to the reinforcement part, i.e. the TAs.
We inject a stuck-at-1 fault in the least significant bit (i.e. bit position 0) of automaton 1 (i.e. TA\({}_{1}\)) within the first clause. This is done to observe how this fault changes the TA\({}_{1}\) state transitions (see Figure 9) when compared with those in Figure 8. As can be seen, the automaton assumes an initial state of \(s_{3}\) and does not change state after iteration step 1. This is equivalent to a no-action reinforcement over 4 datapoints. In iteration step 2, the automaton state is penalized towards \(s_{4}\) through an increment operation (i.e. from a register value of 011 to 100). However, due to the fault, the automaton transitions to \(s_{5}\) (i.e. a register value of 101). After iteration step 3, the automaton is rewarded towards \(s_{6}\). However, the faulty automaton state tries to transition to the unreachable state \(s_{7}\). As the state bounds are protected through a [_modulus 6 + 1_] operation internally, the automaton changes its state to \(s_{1}\). The automaton retains this state until convergence in all clauses (after 18 epochs). Note that, unlike the TA\({}_{1}\) state in the first clause of the fault-free TM (Figure 8), the faulty automaton excludes the associated Boolean literal, \(X_{0}^{\prime}\),
with limited state transitions without other means of fault mitigation.
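The fault model itself is simple to express. The following sketch reproduces the \(s_{4}\to s_{5}\) jump of Figure 9, assuming the state register directly encodes the state index.

```python
def stuck_at_1(register, bit):
    """A stuck-at-1 fault forces the chosen bit of the state register to 1."""
    return register | (1 << bit)

# Incrementing from s3 (0b011) should give s4 (0b100), but with bit 0 stuck
# at 1 the automaton lands in s5 (0b101) instead:
print(bin(stuck_at_1(0b011 + 1, 0)))  # -> 0b101
```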
Next, we explore the TM's fault masking capability under an increased number of clauses, from 4 to 12, each with 6 automaton states (Figure 1). Figure 10 presents the results in terms of the maximum training accuracy and the corresponding number of iteration steps to convergence. To observe the significance of fault positions, we injected stuck-at-1 faults at different positions of the TA\({}_{1}\) register: at bit positions 0, 1 and 2. As expected, with 8 clauses or more, the training accuracy increases to 100% (Figure 10(a)). Provisioning more clauses in the TM (i.e. prodigality) allows for increased stochastic variations to ascertain the overall reachability properties [16]. The TM features majority voting in the classification circuit, and as such mitigation of faults is achieved without any
Figure 10: The impact of stuck-at 1 faults in TA\({}_{1}\) at different bit positions in terms of accuracy and performance; number of clauses are varied to observe how clause redundancy naturally masks the faults.
further redundancy control.
The fault positions influence the training times (Figure 10(b)) as they can constrain the number of state transitions available to an automaton, often with an action bias. Thus, the number of reinforcement steps needed to increase the automaton's action confidence is affected. For example, a stuck-at fault in bit position 2 is more challenging to mask as it only allows the include-action states: \(s_{4}\) (100), \(s_{5}\) (101) and \(s_{6}\) (110). The other automata within the clause take more reinforcement steps to converge their states away from this bias. This also explains the longer convergence time with a lower number of clauses. However, as the number of clauses is increased, the training convergence times decrease due to more redundancy and diversity between clauses.
Finally, we use an alternative means of fault mitigation by provisioning more states (\(2n\)) per automaton and study its impact on the reachability. For this, we repeat the stuck-at-1 fault injection in the TA\({}_{1}\) register at bit position 0 for 4
Figure 11: Impact of stuck at faults in \(TA_{1}\) in terms of accuracy and performance with variable number of TA states.
different state sizes: from 6 to 12, each with a 4-clause configuration. Figure 11 shows the maximum training accuracy as well as the convergence times. As can be seen, the accuracy increases from 75% to 100% when the number of states is increased from 6 to 8, corresponding to a 1-bit increase in the automaton register size (Figure 11(a)). A high \(2n\) allows each automaton to explore a larger state-space. Note that, with one clause unable to provide correct outcomes, the 6-state automaton converges faster than the 8-state automaton. However, as more state values are allowed in the automaton, the learning converges faster to the maximum accuracy of 100% (Figure 11(b)).
\begin{table}
\begin{tabular}{c||c|l|l|l|l||c} \# & \(\mathbf{TA}_{1}\) – \(x_{1}\) & \(\mathbf{TA}_{2}\) – \(\neg x_{1}\) & \(\mathbf{TA}_{3}-x_{2}\) & \(\mathbf{TA}_{4}\) – \(\neg x_{2}\) & **Clause** \\ \hline \hline
1. & Exclude (\(-\)0.625) & Exclude (\(-\)0.625) & Exclude (\(-\)0.625) & Exclude (\(-\)0.625) & 1 \\
2. & Exclude (\(-\)0.375) & Exclude (0.125) & Exclude (\(-\)0.1875) & Include (0.125) & \(\neg x_{2}\) \\
3. & Exclude (0.0625) & Exclude (\(-\)0.375) & Include (0.125) & Exclude (\(-\)0.1875) & \(x_{2}\) \\
4. & Exclude (0.125) & Exclude (0.125) & Include (\(-\)0.125) & Include (\(-\)0.125) & \(x_{2}\wedge\neg x_{2}\) \\
5. & Exclude (\(-\)0.1875) & Include (0.125) & Exclude (\(-\)0.375) & Exclude (0.0625) & \(\neg x_{1}\) \\
6. & Exclude (\(-\)0.125) & Include (\(-\)0.125) & Exclude (\(-\)0.125) & Include (\(-\)0.125) & \(\neg x_{1}\wedge x_{2}\) \\ \hline
**7.** & Exclude (0.0625) & Include (0.125) & Include (0.125) & Exclude (0.0625) & \(\neg x_{1}\wedge x_{2}\) \\ \hline
8. & Exclude (0.125) & Include (\(-\)0.125) & Include (\(-\)0.125) & Include (\(-\)0.125) & \(\neg x_{1}\wedge x_{2}\wedge\neg x_{2}\) \\
9. & Include (0.125) & Exclude (\(-\)0.1875) & Exclude (0.125) & Exclude (\(-\)0.375) & \(x_{1}\) \\ \hline
**10.** & Include (0.125) & Exclude (0.0625) & Exclude (0.0625) & Include (0.125) & \(x_{1}\wedge\neg x_{2}\) \\ \hline
11. & Include (\(-\)0.125) & Exclude (\(-\)0.125) & Include (\(-\)0.125) & Exclude (\(-\)0.125) & \(x_{1}\wedge x_{2}\) \\
12. & Include (\(-\)0.125) & Exclude (0.125) & Include (\(-\)0.125) & Include (\(-\)0.125) & \(x_{1}\wedge x_{2}\wedge\neg x_{2}\) \\
13. & Include (\(-\)0.125) & Include (\(-\)0.125) & Exclude (0.125) & Exclude (0.125) & \(x_{1}\wedge\neg x_{1}\) \\
14. & Include (\(-\)0.125) & Include (\(-\)0.125) & Exclude (0.125) & Include (\(-\)0.125) & \(x_{1}\wedge\neg x_{1}\wedge\neg x_{2}\) \\
15. & Include (\(-\)0.125) & Include (\(-\)0.125) & Include (\(-\)0.125) & Exclude (0.125) & \(x_{1}\wedge\neg x_{1}\wedge x_{2}\) \\
16. & Include (\(-\)0.125) & Include (\(-\)0.125) & Include (\(-\)0.125) & Include (\(-\)0.125) & \(x_{1}\wedge\neg x_{1}\wedge x_{2}\) \\ \end{tabular}
\end{table}
Table 5: Analysis of Nash equilibria for 2-input XOR
## 6 Game Theoretic Convergence Analysis
The stochasticity in TM learning comes from (i) the random arrival of training samples, (ii) the random selection of clauses for updating, and (iii) the random generation of the rewards and penalties of Type I feedback. Here, we use a game-theoretic approach to analyze how Type I and Type II feedback guide the team of TAs associated with a clause towards the optimal action configuration [10]. For this analysis we use a 2-input XOR, assuming uniformly distributed inputs, \(P(x_{1}=1)=P(x_{2}=1)=0.5\).
A game of TAs involves multiple automata and is played over several rounds [19, 27]. A round starts with each TA deciding upon an action; taken together, these actions govern the rewarding of the individual TAs. Responding to the rewards, the TAs perform a random walk over the joint state space [29]. The interaction between the TAs can thus be accurately represented by the payoff matrix of the game [19, 27].
A general analysis of the payoff matrix of the TM can be found in [10], while Table 5 contains the payoff matrix for the 2-input XOR. The table shows how 4 TAs navigate the joint action space to produce a clause, through trajectories that always lead to the optimal action configuration. Each row of the table specifies an action configuration. By assigning rewards a value of \(1\) and penalties a value of \(-1\), we can calculate the expected payoff of each TA per action configuration. The calculation is based on the Type I and Type II feedback tables (Table 1), assuming uniformly distributed inputs and an \(s\)-value of \(4\). Along with each action, in parentheses, the expected payoff of the action is listed.
We have a Nash equilibrium if none of the TAs can do better by unilaterally switching action [19, 30]. For example, for the action configuration in row 1 of the table, TA\({}_{1}\) has selected _exclude_, which provides the expected payoff \(-0.625\). The TA would thus benefit by instead selecting _include_, jumping to the action configuration of row 9. This would give an expected payoff of \(0.125\) instead. Accordingly, the action configuration of row 1 is not a Nash equilibrium.
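This unilateral-deviation test is easy to mechanize; in the sketch below, `payoff(config, i)` is a placeholder for the expected payoffs tabulated in Table 5 rather than a provided implementation.

```python
def is_nash(config, payoff):
    """True if no TA gains by unilaterally flipping its include/exclude action.
    `config` is a tuple of booleans (True = include); `payoff(config, i)`
    returns TA i's expected payoff, i.e. a parenthesized value of Table 5."""
    for i in range(len(config)):
        flipped = list(config)
        flipped[i] = not flipped[i]
        if payoff(tuple(flipped), i) > payoff(tuple(config), i):
            return False
    return True

# Row 1 of Table 5 (all TAs excluding) fails this test: TA1 can jump to the
# row-9 configuration, improving its payoff from -0.625 to 0.125. Scanning
# all 16 configurations with such a payoff function singles out rows 7 and
# 10; these are also the only equilibria the TAs accept, since every action
# there additionally has a positive expected payoff.
```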
By nature, a TA pursues the action with the highest expected payoff (see Section 5), in this way seeking one of the Nash equilibria of the payoff matrix. However, if the probability of receiving a reward is smaller than that of receiving a penalty (i.e. the expected payoff is negative), the action is rejected [19, 28]. Thus, the TAs only accept a Nash equilibrium if all of the actions have positive expected payoff. From Table 5, this is the case only for the action configurations of rows 7 and 10. Both configurations produce a clause appropriate for the 2-input XOR problem. All of the other action configurations introduce prediction errors. However, these are transient configurations because they all contain actions with negative expected payoffs, thus repelling the TAs towards other configurations for convergence. Hence the power of the scheme!
## 7 Summary and Conclusions
We presented the first insights into energy-frugality and explainability of learning automata based AI hardware design using hyperparameter search and reachability analysis. Our key findings are as follows.
_Energy-Frugality_: For datasets with minor inter-class correlations, low learning sensitivity (\(s\)) and learning threshold (\(T\)) hyperparameter values provide robust learning in the TM with a smaller number of clauses, thus providing energy-frugality. However, when inter-class correlations persist, increasing \(T\) is essential for providing higher stochastic variation (i.e. prodigality) during learning to avoid over-fitting and learning instability. For ensuring high-fidelity stochastic variations at low cost, the precision of the low-level randomization circuits needs to be suitably optimized.
_Explainability and Dependability_: With a bounded state-space, the TM can start from random initial TA states and yet reach a learnt state with incremental, discrete-event reinforcements. The LA algorithm guarantees convergence towards this learnt state, as confirmed by our game-theoretic analysis. With suitably chosen redundant clauses or automaton state register bit-widths, and thereby more prodigality, faults can be fully masked and the reachability property retained without requiring any additional fault mitigation strategy. Compared with the clause redundancy approach, expanding the state register sizes provides more energy-frugality.
Our future work includes the development of a formal explainability analysis tool with comprehensive fault injection campaigns and energy optimization mechanisms.
## Acknowledgments
The authors would like to gratefully acknowledge the funding support from the UK Northern Accelerator (ref: NACCF 220), Lloyd's Register Foundation (ref: 5thICON-12) and the Norwegian Research Council (ref: AIEverywhere project). |
2303.04715 | Extending the Pre-Training of BLOOM for Improved Support of Traditional
Chinese: Models, Methods and Results | In this paper we present the multilingual language model BLOOM-zh that
features enhanced support for Traditional Chinese. BLOOM-zh has its origins in
the open-source BLOOM models presented by BigScience in 2022. Starting from
released models, we extended the pre-training of BLOOM by additional 7.4
billion tokens in Traditional Chinese and English covering a variety of domains
such as news articles, books, encyclopedias, educational materials as well as
spoken language. In order to show the properties of BLOOM-zh, both existing and
newly created benchmark scenarios are used for evaluating the performance.
BLOOM-zh outperforms its predecessor on most Traditional Chinese benchmarks
while maintaining its English capability. We release all our models to the
research community. | Philipp Ennen, Po-Chun Hsu, Chan-Jan Hsu, Chang-Le Liu, Yen-Chen Wu, Yin-Hsiang Liao, Chin-Tung Lin, Da-Shan Shiu, Wei-Yun Ma | 2023-03-08T16:53:19Z | http://arxiv.org/abs/2303.04715v2 | Extending the Pre-Training of BLOOM for Improved Support of Traditional Chinese: Models, Methods, and Results
###### Abstract
In this paper we present the multilingual language model BLOOM-zh that features enhanced support for Traditional Chinese. BLOOM-zh has its origins in the open-source BLOOM models presented by BigScience in 2022. Starting from released models, we extended the pre-training of BLOOM by additional 7.4 billion tokens in Traditional Chinese and English covering a variety of domains such as news articles, books, encyclopedias, educational materials as well as spoken language. In order to show the properties of BLOOM-zh, both existing and newly created benchmark scenarios are used for evaluating the performance. BLOOM-zh outperforms its predecessor on most Traditional Chinese benchmarks while maintaining its English capability. We release all our models to the research community.
## 1 Introduction
Autoregressive language models predict the future of a text sequence from its past. This simple yet powerful objective admits the formulation of numerous cognitive tasks while also turning everyday text into valid training data: news, internet articles, blog and community chats, books, and code. Unfortunately, large language models are often not released to the public. One exception is BLOOM (Le Scao et al., 2022). BLOOM models are available in various sizes, ranging from 350M to 176B parameters. BLOOM was pretrained on a corpus of 46 natural languages and 13 programming languages. This multilingual training corpus makes BLOOM very versatile, as the high-resource languages aid the performance of the low- and very-low-resource counterparts.
At the time of this writing, we are unaware of publicly available, open-sourced language models specifically targeting Traditional Chinese. We observe that the Traditional Chinese Natural Language Processing (NLP) community would benefit greatly from having such models. Although the BLOOM models performed admirably, because Traditional Chinese is under-represented in their training data, we felt that we could still meaningfully enhance the models by extending the pretraining over a dataset that is primarily Traditional Chinese. In this paper, we present a series of BLOOM-based language models with enhanced Traditional Chinese capabilities, which we intend to release publicly.
All original BLOOM models were pretrained with more than 300 billion tokens. Although one could argue that there should be near-endless language resources for Traditional Chinese to constitute a billion-token-scale dataset, the reality is that there is no high-quality dataset of this size that is freely available to the public. Compounding our challenges is the fact that there are scarcely any language model benchmarks for Traditional Chinese, much less any for _generative_ scenarios or for the evaluation of toxicity and bias.
To overcome these issues, we curated publicly available materials as our training data pool. To have a set of high-quality data at the core, we furthermore obtained billion-token-scale corpora from the National Academy for Educational Research1 and Academia Sinica2. Compared to data combed from the web, these datasets are considered to be of higher quality. For performance evaluation, we not only gathered the available language model benchmarks, but also created a number of tests ourselves, with the hope that the evaluation suite can be at a similar level to the one employed to evaluate another East Asian language model, HyperCLOVA (Kim et al., 2021). For toxicity and bias evaluation, we constructed tests in the manner of state-of-the-art tests for English (Gehman et al., 2020).
Footnote 1: [https://www.naer.edu.tw/eng/PageFront](https://www.naer.edu.tw/eng/PageFront)
Footnote 2: [https://www.sinica.edu.tw/en](https://www.sinica.edu.tw/en)
Starting from the BLOOM checkpoints, we extended the pretraining over the aforementioned dataset. Our series of extended models is named BLOOM-zh. We evaluated BLOOM-zh on the performance benchmarks, showing a marked increase in Traditional Chinese capability over the original BLOOM models while maintaining their English capability. Aside from functional performance, we evaluated BLOOM-zh on the toxicity and bias benchmarks and disclosed the results. The results indicate that the model inherits the toxicity and bias level of the BLOOM models. Our models and benchmarks are released to the public in an open-source manner.
## 2 Background
### Large Language Models and BLOOM
Large language models (LLMs) have received a lot of attention in the last few years. Recent LLMs usually adopt a transformer-based (Vaswani et al., 2017) architecture that encodes and/or decodes text sequences (Roberts et al., 2019; Brown et al., 2020; Rae et al., 2021; Smith et al., 2022; Thoppilan et al., 2022; Zeng et al., 2021; Scao et al., 2022; Zeng et al., 2022). These LLMs have achieved better and better performance with more and more parameters, not only in language modeling tasks (Merity, 2016; Paperno et al., 2016; Rae et al., 2019) but also in many other NLP benchmarks (Lai et al., 2017; Wang et al., 2018; Zellers et al., 2019). Furthermore, unforeseen capabilities can emerge by simply raising the model scale alone (Brown et al., 2020). LLMs are so versatile and so critical for state-of-the-art results that they are sometimes referred to as _foundation models_ (Bommasani et al., 2021).
Due to the enormous data and facility prerequisites and costs, hundred-billion-parameter and larger LLMs are mostly kept proprietary. A notable exception is BLOOM (Scao et al., 2022). It is the first multilingual LLM trained in complete transparency. In its largest configuration, BLOOM has 176 billion parameters. There are also smaller configurations, such as 1B1 and 3B, available in case one prefers the trade-off for computational convenience. BLOOM model weights were trained and released by BigScience without charge to the public in 2022. Besides the original BLOOM series, BLOOMZ is a notable variant built by finetuning BLOOM on a collection of tasks in the same set of languages seen during pretraining (Muennighoff et al., 2022). BLOOMZ, successively open-sourced to the public in late 2022, has been observed to have the zero-shot capability to follow task instructions.
### Training data requirements for large language models
Training large language models requires an enormous amount of data. A well-known work on the scaling law of language models, Kaplan et al. (2020), concluded that the dataset size should scale with the model size according to a power law \(D\propto N^{0.74}\), where \(D\) is the number of data tokens and \(N\) is the number of parameters in a model. Following the recommendation of this work, many subsequent large language models were trained using approximately 300 billion tokens, irrespective of the model size. The BLOOM models were also trained following this convention: all models were trained with 341 billion tokens of data irrespective of model size (Scao et al., 2022). However, Hoffmann et al. (2022) found that the compute-optimal scaling law should be one in which the model size and dataset size scale at the same rate: beyond a model size of one billion parameters, roughly 20 additional tokens should be added to the training data for each additional parameter.
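To make the compute-optimal rule concrete, the following minimal sketch (our own illustration, not code from either cited work) computes the token budgets it prescribes for the three BLOOM configurations:

```python
# Token budget under the compute-optimal rule of Hoffmann et al. (2022):
# roughly 20 training tokens per model parameter.
def compute_optimal_tokens(n_params: float) -> float:
    return 20.0 * n_params

for n in (1.1e9, 3.0e9, 176e9):  # the three BLOOM configurations
    print(f"{n / 1e9:6.1f}B params -> {compute_optimal_tokens(n) / 1e9:6.0f}B tokens")
# Output: 1.1B -> 22B, 3.0B -> 60B, 176B -> 3520B tokens. Compare with the
# fixed 341B tokens actually used to train all BLOOM sizes.
```

By the same 20-tokens-per-parameter rule, the roughly 150 million Traditional Chinese tokens seen by BLOOM would support only a model of about 7.5 million parameters, in line with the estimate given in the next subsection.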
The training dataset for BLOOM comprises 46 natural languages and 13 programming languages. From our examination, we can identify about 150 million tokens in the training corpus to be in Traditional Chinese. Furthermore, nearly 99% of these Traditional Chinese data are identified as taken from Wikipedia3. Going by the aforementioned compute-optimal scaling law, these data are sufficient only for a relatively tiny 8-million-parameter Traditional Chinese-only language model. It is reasonable to surmise that if the BLOOM models could have been pretrained on a dataset with an order of magnitude more Traditional Chinese data, their performance could be meaningfully elevated.
Footnote 3: The statistics are from https://huggingface.co/spaces/bigscience-data/corpus-map.
As _foundation models_, LLMs are now expected to be versatile in virtually any topic that can be documented by text. For this, the training data must include a wide variety of content and styles (Bommasani et al., 2021). We therefore also surmised that we could raise the performance of BLOOM in Traditional Chinese by further pretraining the models over data that are complementary to Wikipedia.
### Evaluation Suite for Traditional Chinese processing and generation
English is the language that enjoys by far the most natural language understanding (NLU) and generation (NLG) benchmarks. Many of these benchmarks were designed to test non-generative behavior, e.g., natural language inference (NLI) and coreference resolution. To evaluate the capability of a generative model, one can convert a non-generative test into a generative one by framing a test sample as a zero-shot or few-shot text continuation question. There are abundant published results for both unmodified benchmarks and modified generative benchmarks.
For the specific case of Traditional Chinese, although one could argue that there are nearly endless language resources and a sizable community of NLU and NLG researchers, the availability of benchmark tests is unfortunately quite scarce. Two well-known tests are the Delta Reading Comprehension Dataset (DRCD) (Shao et al., 2018) and the Formosa Grand Challenge (FGC)4. DRCD is an extractive benchmark proposed for machine reading comprehension. FGC is a passage question answering benchmark created from news articles and government announcements.
Footnote 4: https://scidm.nchc.org.tw/dataset/grandchallenge2020
### Post-pretraining enhancement of a target language
Multilingual language models are usually trained in a manner in which the data from different languages are interleaved before training. The amount of data for different languages can vary a lot. Though one might worry that languages with insufficient data perform poorly, due to the transfer of knowledge and skills from other learned languages, a properly trained multilingual language model can have stronger language capabilities in all languages compared to a monolingual counterpart (Kondratyuk and Straka, 2019; Wu and Dredze, 2019; Devlin et al., 2018; Conneau et al., 2019). Several works sought to take advantage of such a transfer learning effect to benefit non-English and/or non-Simplified-Chinese languages. For the BLOOM model, while only a tiny fraction of the training material was Traditional Chinese, our empirical evaluation shows that the model outperforms all currently available open-source language models in Traditional Chinese language modeling.
It is sometimes the case that one starts from an already pretrained multilingual model and proceeds to train the model to learn some new target language. The goal is not only to learn a new language but also to preserve, or even enhance, the already learned language abilities through transfer learning. When the training data in the target language is sufficient, one option is to extend the pretraining with the language modeling objective over the target language data while taking care to mitigate forgetting. This scenario is referred to as _continual learning_ (Chen et al., 2018; Parisi et al., 2019). In certain low-resource cases, when the language resources are too scarce to warrant adjusting every parameter of the model, one might apply techniques referred to as "adapters" (Houlsby et al., 2019; Stickland and Murray, 2019; Artetxe et al., 2019; Pfeiffer et al., 2020; Yong and Nikoulina, 2022; Yong et al., 2022).
Compared to continual learning, the most popular post-pretraining approach is _fine-tuning_. In fine-tuning, the model is directly trained using task-specific data as well as a task-specific objective, often going over the task-specific data multiple times. This can be done irrespective of whether the target language was pretrained or not. Fine-tuning aims to maximize the capability of the model on the target task. However, it can sacrifice general language modeling capability outside of the task to achieve this goal.
Lastly, we note that, although the research community generally regards the use of transfer learning for lower-resource languages as an all-around positive approach, negative effects have been noticed and are being actively investigated (Muller et al., 2020; Suarez et al., 2019; Conneau et al., 2019).
### Fine-tuning to follow instructions
A pretrained language model can perform extremely poorly on downstream tasks, even though one can be almost certain that the model does possess the knowledge to perform correctly. To unlock the performance of a pretrained model, some post-training optimization is usually applied (Wei et al., 2021). The BLOOMZ models are the result of finetuning the BLOOM models on a select small corpus of instruction data (Muennighoff et al., 2022).
## 3 Methods
In this section, we present the methods with which we extended the pre-training of the 1B1 parameter BLOOM model.
### Models
For the benefit of the reader, the BLOOM model configurations are listed in Table 1. Our BLOOM-zh models share the same configurations.
### Training
To obtain BLOOM-zh, we extend the pretraining of the published BLOOMZ checkpoint, aiming to improve its Traditional Chinese abilities while also maintaining the zero-shot abilities inherited from BLOOMZ. We chose to follow the hyperparameters used to finetune BLOOM into BLOOMZ. We observed that using a lower learning rate can improve training stability and mitigate catastrophic forgetting. The trade-off of enhancing Traditional Chinese against protecting the existing capabilities in 46 natural and 13 programming languages was explored in this study; however, due to space limitations, we only give the setting and the result corresponding to the released model.
For training BLOOM-zh, we used a single-precision computational and storage configuration: all the weights and optimizer states are stored in _float32_ precision, and all the multiply-and-add operations are performed in single precision as well. Selective activation recomputation (Korthikanti et al., 2022) is enabled to reduce the memory consumption of storing activations.
### Infrastructure
Pretraining any large-scale language model efficiently requires very thoughtful engineering. One must judiciously apply the correct combination of data, tensor, and pipeline parallelism, in a way that best suits the training facility.
| Model | Layers | Number Heads | Key/Value Size | \(d_{model}\) | Sequence Length | Vocab. Size |
| --- | --- | --- | --- | --- | --- | --- |
| 1.1B | 24 | 16 | 96 | 1536 | 2048 | 250880 |
| 3.0B | 30 | 32 | 80 | 2560 | 2048 | 250880 |
| 176B | 70 | 112 | 128 | 14336 | 2048 | 250880 |

Table 1: Architecture parameters for various BLOOM models.
Our training codebase is based on Microsoft's _Megatron-DeepSpeed_5 library, which is the _DeepSpeed_ (Rasley et al., 2020) fork of NVIDIA's _Megatron-LM_6 library. _Megatron-LM_ enables data and tensor parallelism for training GPT-like language models (Radford et al., 2018). Augmenting it with _DeepSpeed_ further speeds up the training process by optimizing the memory usage and the pipeline strategies. On top of this framework, we also used BigScience's fork7 to ensure that the model architecture used in our training program exactly matches the published information (Scao et al., 2022).
Footnote 5: https://github.com/microsoft/Megatron-DeepSpeed
Footnote 6: https://github.com/NVIDIA/Megatron-LM
Footnote 7: https://github.com/bigscience-workshop/Megatron-DeepSpeed
We trained the 1B1 configuration of BLOOM-zh over 8 NVIDIA RTX A6000 cards. At this size, an entire model fits in a single GPU card. Therefore, we simply applied data parallelism alone for distributed training.
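As an illustration of this data-parallel-only setup, the sketch below replicates a toy model across GPUs with PyTorch's DistributedDataParallel so that gradients are all-reduced after each backward pass; it is a conceptual stand-in assuming a `torchrun` launch, not our actual Megatron-DeepSpeed configuration, and the toy model and hyperparameters are placeholders.

```python
# Minimal sketch of data-parallel-only training (illustrative only).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK/LOCAL_RANK/WORLD_SIZE; one process per GPU.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda()   # stand-in for the LM
    model = DDP(model, device_ids=[local_rank])  # full replica per GPU
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

    for _ in range(10):
        x = torch.randn(8, 1024, device="cuda")  # each rank sees its own shard
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()   # gradients all-reduced across processes
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with `torchrun --nproc_per_node=8 train.py`, each of the 8 processes holds a full model replica and consumes its own shard of every batch.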
### Training Dataset
Based on the scaling law found by Hoffmann et al. (2022), the small 1B1 and 3B BLOOM models can be regarded as under-parameterized given the 341-billion-token dataset used for training them. It is well known that over-parameterization is a necessary condition for forgetting-free continual learning (Kirkpatrick et al., 2017). Thus, we expect that further training BLOOM 1B1 and 3B on a pure Traditional Chinese dataset would lead to a certain degree of loss of English capability, however careful one might be.
Due to this under-parameterization of the 1B1 and 3B configurations, we prepared a bilingual training dataset containing primarily Traditional Chinese and a smaller portion of English to demonstrate how one can maintain the intended subset of multilingual abilities. For the English part, we subsampled tokens from _The Pile_ (Gao et al., 2020), the English Wikipedia (Foundation), and the instruction dataset _P3_ (Sanh et al., 2021).
Since there is no publicly available Traditional Chinese dataset of the size we need, we curated our own dataset. We acknowledge that, at the time of this writing, this dataset is an order of magnitude smaller in size than _MassiveText_ (Rae et al., 2021). We intend to soon build up a public Traditional Chinese dataset of a size similar to _MassiveText_, with perhaps even better diversity. Our dataset combines existing corpora, such as the corpus of contemporary Taiwanese Mandarin (COCT)8, the Academia Sinica Balanced Corpus of Modern Chinese (ASBC)9, and the Central News Agency of Taiwan (CNA) subset of Chinese Gigaword 5 (Parker et al., 2011). In addition, we curated and filtered our own datasets from the CC-100 web-crawled data (Wenzek et al., 2020), Wikipedia10, abstracts of theses and dissertations11, as well as a Traditional Chinese instruction dataset inspired by xP3 (Muennighoff et al., 2022). The composition of our dataset is given in Table 2. From our Traditional Chinese data and the aforementioned English dataset, we experimented with subsampling data to train BLOOM-zh. For the 1B model, a total of 7.4 billion tokens was used.
Footnote 8: Provided by National Academy for Educational Research
Footnote 9: Provided by Academia Sinica
Footnote 10: https://dumps.wikimedia.org/zhwiki/
Footnote 11: Crawled from https://ndltd.ncl.edu.tw/
#### 3.4.1 Dataset Pipeline
This section describes the data pre-processing pipeline we applied to build our dataset. We mainly followed the approaches outlined in Rae et al. (2021) and Zeng et al. (2021), with subtle customizations to reflect the different characteristics of the original datasets. Our data pre-processing pipeline consists of the following stages.
**Content filtering.** Gigaword5-CNA contains two types of data: the _story_ type, which corresponds to news articles, and the _other_ type, which corresponds to weather forecasts. We regard the _story_ type as appropriate for language model pretraining. As for xP3-zh, we use only the Chinese "zh" subset of the xP3mt dataset12 as published by BigScience. For CC-100, we filtered out all sources that do not have Traditional Chinese as the main language.
Footnote 12: https://huggingface.co/datasets/bigscience/xP3mt/
**Text extraction.** For Gigaword5-CNA, we remove the time stamps in the original documents. The other datasets were already well preprocessed by their original curators.
**Document deduplication.** We use the MinHash algorithm to calculate 1-gram Jaccard similarities to determine which documents are near-duplicates of each other (Lee et al., 2021). After sampling a small subset from all documents, we find that documents whose similarity scores exceed the suggested threshold of 0.8 (Rae et al., 2021) account for only a small percentage. We thus do not perform deduplication at this point.
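The sketch below illustrates this near-duplicate check with the `datasketch` library over 1-gram (character) shingles; the library choice, the toy documents, and `num_perm` are our own assumptions rather than details from our pipeline.

```python
# Near-duplicate detection with MinHash over 1-gram (character) shingles.
from datasketch import MinHash, MinHashLSH

def minhash_of(doc: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for ch in set(doc):               # 1-gram shingles of the document
        m.update(ch.encode("utf-8"))
    return m

docs = {
    "a": "台北今天天氣晴朗,適合出遊。",
    "b": "台北今天天氣晴朗,適合出遊!",
    "c": "完全不同主題的一份文件。",
}
# Jaccard similarity above 0.8 is treated as a near-duplicate (Rae et al., 2021).
lsh = MinHashLSH(threshold=0.8, num_perm=128)
for key, text in docs.items():
    lsh.insert(key, minhash_of(text))

print(lsh.query(minhash_of(docs["a"])))  # expect ['a', 'b'] (order may vary)
```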
**Quality filtering.** For Gigaword5-CNA and ASBC, following the precedent of Zeng et al. (2021), we rule out any document that contains fewer than 150 Chinese characters or has a symbol-to-character ratio greater than 0.4.
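A minimal sketch of this rule-based filter follows; the exact definition of a "symbol" is an assumption on our part.

```python
import re

CJK = re.compile(r"[\u4e00-\u9fff]")  # basic CJK Unified Ideographs range

def passes_quality_filter(doc: str) -> bool:
    """Rule out docs with <150 Chinese characters or symbol ratio > 0.4."""
    if len(CJK.findall(doc)) < 150:
        return False
    # "Symbol" here means non-alphanumeric, non-whitespace characters;
    # the paper does not spell out its exact definition.
    n_symbols = sum(1 for c in doc if not c.isalnum() and not c.isspace())
    return n_symbols / max(len(doc), 1) <= 0.4
```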
The web-crawled dataset CC-100-zht, however, contains documents with low-quality content, such as incomplete sentences and interrupting advertisements, so we apply a quality filter to it. We follow the same perplexity-thresholding strategy that BigScience used to filter OSCAR, also a web-crawled corpus, for their ROOTS corpus (Laurencon et al., 2022). For this, we use the same SentencePiece unigram tokenizer and KenLM 5-gram model that BigScience trained on Wikipedia Chinese articles (including Simplified Chinese) to calculate a perplexity score for each document, and then remove the documents with perplexity scores greater than the cutoff value manually established by BigScience13. By this thresholding, about half of the tokens from CC-100-zht are filtered out.
Footnote 13: See https://github.com/bigscience-workshop/data-preparation
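A sketch of this perplexity thresholding is given below; the tokenizer and model file paths, the cutoff value, and the example documents are placeholders, since the actual artifacts are those published by BigScience.

```python
# Perplexity scoring of a document with a SentencePiece tokenizer and a
# KenLM 5-gram model; file paths and cutoff are placeholders.
import kenlm
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="zh_tokenizer.model")  # placeholder
lm = kenlm.Model("zh_wiki_5gram.arpa")                            # placeholder

def doc_perplexity(doc: str) -> float:
    pieces = sp.encode(doc, out_type=str)
    log10_prob = lm.score(" ".join(pieces))       # total log10 probability
    return 10.0 ** (-log10_prob / max(len(pieces), 1))

CUTOFF = 1e4  # placeholder; BigScience established their cutoff manually
documents = ["第一份待過濾的文件...", "第二份待過濾的文件..."]
kept = [d for d in documents if doc_perplexity(d) <= CUTOFF]
```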
**Repetition removal.** Rae et al. (2021) suggested that an indicator of poor-quality data is excessive repetition of certain words or phrases within a document. The well-curated datasets in our data pool already show high quality in this respect, so we only perform repetition removal on the crawled portion.
**Punctuation conversion (Gigaword5-CNA only).** We convert all halfwidth punctuation marks in Gigaword5-CNA to fullwidth ones using a dictionary mapping, to reflect the usual usage in Traditional Chinese text writing.
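A minimal sketch of such a dictionary mapping follows; the mapping shown covers only a few marks and is illustrative rather than the full table we used.

```python
# Halfwidth-to-fullwidth punctuation conversion via a dictionary mapping.
HALF_TO_FULL = str.maketrans({
    ",": "，", ".": "。", "?": "？", "!": "！",
    ":": "：", ";": "；", "(": "（", ")": "）",
})

def to_fullwidth(text: str) -> str:
    return text.translate(HALF_TO_FULL)

print(to_fullwidth("中央社報導,今日天氣晴朗."))  # -> 中央社報導，今日天氣晴朗。
```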
**Simplified-Traditional Chinese conversion (xP3-zht only).** The "zh" subset from xP3mt originally contains mostly Simplified Chinese content. We use OpenCC14 to convert it into Traditional Chinese, with the option for phrase-level conversion turned on so that our xP3-zht instructions are based on Taiwanese phrases and idioms.
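A minimal sketch of this conversion is shown below, assuming the `opencc` Python package and its "s2twp" configuration, which performs Taiwan-standard conversion with phrase-level substitution.

```python
# Simplified-to-Traditional conversion with OpenCC; "s2twp" targets the
# Taiwan standard with phrase-level substitution enabled.
from opencc import OpenCC  # e.g., the opencc-python-reimplemented package

cc = OpenCC("s2twp")
print(cc.convert("软件和硬件"))  # -> 軟體和硬體 (Taiwanese phrasing)
```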
| | Category | Size (tokens) | Sampling Proportion |
| --- | --- | --- | --- |
| Gigaword5-CNA | Written (news) | 0.8B | 13.6% |
| ASBC | Written (literature) | 0.01B | 0.3% |
| COCT-books | Written (literature) | 0.3B | 14.0% |
| CC-100-zht | General (web) | 2.0B | 20.2% |
| Wikipedia-zht | Written (knowledge) | 0.4B | 7.1% |
| Theses | Written (knowledge) | 0.4B | 7.1% |
| xP3-zht | Instructions | 1.1B | 7.7% |
| All | | 5.2B | 70% |

Table 2: Data composition of our Traditional Chinese data set.
## 4 New Traditional Chinese Benchmarks
Given that there are very few applicable benchmarks for evaluating Traditional Chinese language model performance, we created new benchmark tests, detailed below. We designed these tests to provide a quantitative metric for the ability to continue text in Traditional Chinese and the ability to generate correct responses given instructions.
### Taiwan-specific knowledge benchmark
We introduce TTQA (Taiwanese Trivia Question Answering), a novel evaluation dataset designed to assess the common-sense answering ability of large language models on Taiwan-specific terms. The dataset consists of 64 short passages derived from carefully selected Wikipedia articles covering a wide range of topics such as Taiwanese celebrities, music, food, animals, geography, history, architecture, and more. Each passage is a detailed description of a term that requires the models to comprehend and reason about domain-specific knowledge related to Taiwanese culture.
To provide an example of the complexity of the questions, consider the following passage:
English translation:
It is a popular dim sum in Guangdong, Hong Kong, and Taiwan. It is famous for its "small body, big filling, juicy, delicious, thin skin, and beautiful shape". It is one of the signature dim sum offerings of Din Tai Fung in Taiwan.
The name of the dessert is:
The correct answer is Xiaolongbao, a type of small Chinese steamed bun traditionally prepared in a small bamboo steaming basket. Answering this question requires knowing the famous Taiwanese restaurant Din Tai Fung and recognizing the restaurant's iconic dish from features such as "small" and "thin skin". This dataset allows us to measure answer-generation ability. On this dataset, we calculate the accuracy of exact matches.
### Perplexity benchmark on custom domain-specific materials
In the language modeling context, perplexity measures how closely a language model fits the probabilistic properties of an evaluation corpus. Domain-specific perplexity refers to how well a language model predicts a future token given a _context_, or _prompt_, drawn from a particular topical domain. We curated data for three topical domains: books, web encyclopedia, and general question answering. Examining domain-specific perplexities allows one to understand and predict the behavior of a language model in potential downstream domain-specific applications such as writing assistants, factual question answering, and sentence generation for educational purposes.
For the perplexity in _books_, we use a split of the COCT-books corpus; the books used for perplexity measurement were withheld from the training set. For the perplexity in _web encyclopedia_, we took a small subset of Wiki-zh, which was also withheld from the training set. Finally, for the perplexity in _general question answering_, we reformulated TTQA and the two existing benchmark tasks FGC and DRCD into continuous text by concatenating context, questions, and answers.
Our choices of these domain-specific perplexities enable us to understand the effect of pretraining materials and pretraining procedure on the innate properties of a language model.
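For reference, perplexity over a document can be computed from a causal language model's mean token cross-entropy, as in the following sketch; the HuggingFace model identifier is an assumption for illustration.

```python
# Computing the perplexity of a document under a causal language model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ckip-joint/bloom-1b1-zh")   # assumed id
model = AutoModelForCausalLM.from_pretrained("ckip-joint/bloom-1b1-zh")
model.eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
    return torch.exp(loss).item()

print(perplexity("台灣最高的山是玉山。"))
```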
## 5 Results
We evaluate the BLOOM-zh models on a diverse set of natural language tasks. These tasks cover both natural language understanding and natural language generation.
### English Benchmarks
During our extended pretraining of BLOOM into BLOOM-zh, we kept track of the model's English language behavior throughout the process. Ideally, the extended pretraining should not compromise the existing capabilities of the model.
#### 5.1.1 English perplexity
We evaluate the English perplexity of the models on the WikiText-103 dataset, a subset of the Wikipedia corpus that contains only "good" and "featured" articles (Merity et al., 2016)15. The results presented in Table 3 show an improvement of BLOOM-zh over its predecessors BLOOM and BLOOMZ on WikiText-103, despite it being mainly trained on Traditional Chinese. However, it should be noted that all three models, BLOOM, BLOOMZ, and BLOOM-zh, have seen Wikipedia articles during training. For this reason, we also evaluated the models on the English LAMBADA benchmark.
Footnote 15: See https://en.wikipedia.org/wiki/Wikipedia:Featured_articles for details.
#### 5.1.2 English LAMBADA
The LAnguage Modeling Broadened to Account for Discourse Aspects (LAMBADA) benchmark is an open-ended cloze task (Paperno et al., 2016a). The benchmark consists of about 10,000 passages from BooksCorpus in which a missing target word must be predicted in the last sentence of a passage (Zhu et al., 2015). Careful human examination ensures that the target word is possible to predict given the passage, yet impossible to predict without the previous sentences in the passage. LAMBADA scores are typically presented as accuracy: the percentage of correctly predicted words. The results are shown in Table 3. Here we observe a slight drop in performance of BLOOM-zh compared to BLOOM and BLOOMZ. We believe this drop results from minor forgetting of the pretrained English abilities.
### Traditional Chinese Benchmarks
To demonstrate the Traditional Chinese language capability of BLOOM-zh, we evaluate the models on several benchmarks: perplexity on selected corpora, the existing benchmarks (DRCD, FGC), and the new question answering and perplexity benchmarks proposed in the previous section (TTQA and the domain-specific perplexity scenarios).
#### 5.2.1 Traditional Chinese domain-specific perplexity
For the domain-specific perplexity, we evaluate the models on COCT-books, Wikipedia-zh, DRCD, FGC, and TTQA. The results are given in Table 4. For the small 1B1 model, we observe that BLOOM-zh 1B1 achieves a higher level of proficiency in all domains compared to BLOOM and BLOOMZ 1B1. These scores match the experience we obtained by interacting with those models: while BLOOM and BLOOMZ often generate Simplified Chinese text, BLOOM-zh actually generates Traditional Chinese.
#### 5.2.2 Traditional Chinese Passage Question Answering
DRCD and FGC are reading comprehension benchmarks. In both scenarios, the model answers questions based on a provided context. The questions in DRCD are related to general knowledge, while the questions in FGC are related to Taiwanese news articles and government announcements. Both scenarios measure the natural language understanding ability of a model. The results for prefix exact matches are shown in Table 5. We observed that the BLOOM-zh 1B1 model
| | | WikiText-103 [ppl] | LAMBADA [acc] |
| --- | --- | --- | --- |
| BLOOM | 1B1 | 31.6 | 43.9 |
| BLOOMZ | 1B1 | 34.7 | **46.6** |
| BLOOM-zh | 1B1 | **28.6** | 42.6 |

Table 3: Language modeling performance on English text.
outperforms BLOOM 1B1 on DRCD and FGC. In comparison to BLOOMZ 1B1, we see a slight drop in performance on DRCD, while on FGC it is competitive. Our interpretation is that the dedicated instruction tuning phase of BLOOMZ 1B1 may be advantageous for performing particularly well on reading comprehension tasks. For BLOOM-zh 1B1, we did not apply a dedicated instruction tuning phase but still obtain a model with competitive reading comprehension ability.
#### 5.2.3 Taiwan-specific knowledge benchmark
The results for TTQA are shown in Table 5, where prefix exact match scores are given. TTQA is a question answering task in which the model generates responses from knowledge within its parameters. We observe that the BLOOM-zh 1B1 model outperforms both the BLOOM 1B1 and BLOOMZ 1B1 models, showing that training on Traditional Chinese text increases understanding and knowledge of Taiwan-related terms.
## 6 Toxicity and Bias Analysis
While enhancing the Traditional Chinese capability of language models offers significant benefits, it is also essential to examine the potential harms of these models. In this section, we analyze the behavior of our model concerning toxic outputs and biases.
### Toxicity
Similar to prior work, we evaluate toxicity using the toxicity classifier of the Perspective API16 (Gehman et al., 2020). The Perspective API defines toxicity as a rude, disrespectful, or unreasonable comment that is likely to make someone leave a discussion. Given a sequence of words as input, the Perspective API returns a toxicity score; a score greater than 0.5 can be interpreted as toxic (Gehman et al., 2020).
Footnote 16: Perspective API is created by JIGSAW and Google: https://perspectiveapi.com.
For a systematic analysis of the toxicity of a Traditional Chinese language model, we created two datasets. Each datapoint corresponds to a prompt for a language model. The language model then generates text given this prompt, and the generated text is scored by the Perspective API.
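A sketch of this scoring step, following Google's documented `commentanalyzer` client usage, is shown below; the API key and the example text are placeholders.

```python
# Scoring a continuation with the Perspective API toxicity classifier.
from googleapiclient import discovery

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey="YOUR_API_KEY",  # placeholder
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)
body = {
    "comment": {"text": "模型生成的接續文字"},       # placeholder continuation
    "requestedAttributes": {"TOXICITY": {}},
}
resp = client.comments().analyze(body=body).execute()
score = resp["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print("toxic" if score > 0.5 else "non-toxic", score)  # 0.5 is the cutoff
```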
#### 6.1.1 Data Collection
The two datasets we used for toxicity evaluation are a machine-translated version of the existing toxicity benchmark RealToxicityPrompts (Gehman et al., 2020) and a newly curated dataset of comments crawled from the Taiwanese social forum Dcard17.
| | | Wikipedia-zh | COCT-books | DRCD | FGC | TTQA |
| --- | --- | --- | --- | --- | --- | --- |
| BLOOM | 1B1 | 56.1 | 71.5 | 64.2 | 28.9 | 40.0 |
| BLOOMZ | 1B1 | 67.7 | 81.8 | 74.8 | 34.1 | 47.1 |
| BLOOM-zh | 1B1 | **34.5** | **58.7** | **48.3** | **23.4** | **30.8** |

Table 4: Language modeling performance on domain-specific Traditional Chinese materials, measured as perplexity.
| | | TTQA | DRCD | FGC |
| --- | --- | --- | --- | --- |
| BLOOM | 1B1 | 17.2 | 11.1 | 4.3 |
| BLOOMZ | 1B1 | 14.5 | **65.3** | **30.4** |
| BLOOM-zh | 1B1 | **23.4** | 55.0 | **30.4** |

Table 5: Model performance on reading comprehension (DRCD, FGC) and question answering (TTQA), measured as the accuracy of prefix exact matches.
**Machine-translated RealToxicityPrompts.** We translated the English dataset RealToxicityPrompts (Gehman et al., 2020) into Traditional Chinese using Google Translate. As a sanity check for this translation, we queried the Perspective API and compared the toxicity of the original English version with its Traditional Chinese counterpart. There is essentially no change in toxicity between the translations before and after, as seen in Table 6. We conclude that creating a Chinese toxicity dataset by machine translation is a reliable approach.
**Collection from a Taiwanese social forum.** Using the RealToxicityPrompts translations is a practical and effective way to measure the toxicity of our model. However, because its prompts are derived from Reddit, an American social news and discussion forum, there is a cultural asymmetry between American and Chinese perceptions. For instance, among animal stereotypes, foxes can be clever or cunning; describing a person as "as cunning as a fox" is a positive description in an American context but harmful in a Chinese one. Moreover, there is a substantial cultural difference between American and Taiwanese societies: historical stereotypes, such as those concerning white and black people, do not apply directly to Taiwanese society. Therefore, we take inspiration from Gehman et al. (2020) to create and release TaiwanToxicityPrompts, a dataset of sentence-level prompts and continuations scraped from Traditional Chinese web comments on Dcard (see Figure 1). For this corpus, we collected 387 human-written comments from the Dcard category "trending"18 and divided each comment into a _prompt_ and a _continuation_ at the first occurrence of a Chinese punctuation mark or newline symbol. We discarded comments that did not contain a Chinese punctuation mark or newline symbol, and also removed prompts and continuations shorter than three characters to ensure the quality of the toxicity measurement. After this data cleaning, TaiwanToxicityPrompts contains 231 comments, each split into a paired _prompt_ prefix and _continuation_ postfix.
Footnote 18: https://www.dcard.tw/f/trending
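A minimal sketch of this prompt/continuation split follows; the exact set of Chinese punctuation marks we treat as split points is an assumption.

```python
import re

# Split a comment into prompt/continuation at the first Chinese punctuation
# mark or newline; discard comments that cannot be split or yield short pieces.
SPLIT_AT = re.compile(r"[，。！？；：\n]")

def split_comment(comment: str):
    m = SPLIT_AT.search(comment)
    if m is None:
        return None                              # no split point: discard
    prompt, continuation = comment[:m.end()], comment[m.end():]
    if len(prompt) < 3 or len(continuation) < 3:
        return None                              # too short: discard
    return prompt, continuation

print(split_comment("今天心情很差，因為排隊排了兩小時。"))
```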
#### 6.1.2 Methodology
Following Rae et al. (2021) and Du et al. (2022), we use the RealToxicityPrompts dataset (Gehman et al., 2020) and the Perspective API to analyze the toxicity of our model's generations. To fully observe the results, we use the entire dataset of 99,442 prompt-continuation pairs. First, we obtain the Traditional Chinese prompts from Google Translate. Then, for each Traditional Chinese prompt, we generate continuations with BigScience's BLOOM 1B1 and BLOOMZ 1B1 and with our extended-pretrained BLOOM-zh, producing up to 32 Traditional Chinese tokens per continuation using multinomial sampling via the HuggingFace library. After removing 54 examples with empty generated continuations, 99,388 examples remain in our toxicity analysis. Finally, we use the Perspective API to obtain the toxicity score for all prompts and continuations.
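The generation step can be sketched as follows with the HuggingFace `generate` API; the model identifier and the example prompt are assumptions for illustration.

```python
# Generating a continuation of up to 32 tokens with multinomial sampling.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ckip-joint/bloom-1b1-zh")   # assumed id
model = AutoModelForCausalLM.from_pretrained("ckip-joint/bloom-1b1-zh")

ids = tok("翻譯後的毒性提示文字", return_tensors="pt").input_ids  # placeholder
out = model.generate(ids, do_sample=True, max_new_tokens=32,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```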
#### 6.1.3 Toxicity Results
Figure 2 shows the relationship between the toxicity scores of prompts and continuations in English and Traditional Chinese. To avoid the visual clutter caused by too many data points, we use linear trend lines to represent the relationship between prompts and continuations instead of scatter plots. The blue line represents human-written English prompts and continuations from RealToxicityPrompts. The orange line represents machine-translated Traditional Chinese prompts and continuations from the same source. Finally, the green, purple, and red lines represent model-generated Traditional Chinese continuations from three different models: BigScience's BLOOM 1B1, BLOOMZ 1B1, and our BLOOM-zh 1B1.
Three findings can be observed in our toxicity experiment (Figure 2). First, model-generated continuations are more sensitive to prompt toxicity than human-written continuations, whether in the original English or in machine-translated Traditional Chinese. The model-generated continuations, from BLOOM 1B1, BLOOMZ 1B1, and BLOOM-zh 1B1 alike, start with lower toxicity scores given a low-toxicity prompt but increase sharply as the prompt toxicity rises. This shows that models tend to follow the prompt's toxicity closely: more toxic prompts lead to more toxic continuations, consistent with previous studies (Du et al., 2022; Rae et al., 2021). Second, our extended BLOOM-zh is slightly less sensitive than its unextended counterpart BLOOM 1B1 as the prompt toxicity increases, which we attribute to our large extended training corpus. Third, the average toxicity of the 99,388 continuations from RealToxicityPrompts, machine translation, BigScience's BLOOM 1B1, BLOOMZ 1B1, and our extended BLOOM-zh 1B1 was 0.28, 0.19, 6.74e-2, 8.56e-2, and 6.72e-2, respectively. Our extended BLOOM-zh 1B1 thus achieved lower average toxicity than the human-written English continuations, their machine-translated Zh-TW counterparts, the unextended BLOOM 1B1, and BLOOMZ 1B1.
We conducted an additional analysis of the toxicity of generated continuations by comparing them with human-written text on TaiwanToxicityPrompts. Figure 3 shows the toxicity relationship between prompts and four continuations for each prompt: the human-written text on Dcard and the continuations generated by BigScience's BLOOM 1B1, BLOOMZ 1B1, and our BLOOM-zh 1B1. Consistent with Figure 2, we found that the toxicity of both human-written and model-generated continuations was positively correlated with the toxicity of the prompts. Interestingly, we observed that the human-written text was even more toxic than the model-generated text. We hypothesize that this may be because toxicity in TaiwanToxicityPrompts tends to occur towards the end of comments, which may have influenced human writers to produce more toxic language. It also leads to the following observation: despite using a large extended training corpus, our BLOOM-zh 1B1 showed a slight decrease in toxicity performance compared to the unextended BLOOM 1B1. However, it exhibited behavior that was even closer to that of human-written Zh-TW text,
indicating that our model was able to learn the subtleties of the Traditional Chinese language and its usage in a more nuanced manner.
### Bias
To identify potential harms, we analyze the distributional bias of our BLOOM-zh model. Our goal is to discover whether biases exist with respect to groups, including gender, occupation, and social groups; we leave research on mitigating such bias to future work. For measuring
Figure 3: This figure shows the relationship between the toxicity probability of prompts and continuations on TaiwanToxicityPrompts, a corpus of Traditional Chinese web comments scraped from **Dcard**, a popular community forum in Taiwan. With web scraping, we collected 387 human-written comments from Dcard trending. After data cleaning, TaiwanToxicityPrompts contains 231 comments, divided into prompt and continuation at the first occurrence of a Chinese punctuation mark or newline symbol. The figure compares the toxicity of continuations derived from four sources: human-written (blue) and generated by BigScience's BLOOM 1B1 (green), BLOOMZ 1B1 (yellow), and our BLOOM-zh 1B1 (orange). All toxicity scores were taken from the demo on the Perspective API website.
Figure 2: This figure illustrates how the toxicity probability of a prompt affects the toxicity probability of its continuation. The data come from the RealToxicityPrompts dataset, which contains 99,442 English prompt-continuation pairs (blue). Each pair was translated into Traditional Chinese (orange) and then used as input for three models: BigScience's BLOOM 1B1 (green), BLOOMZ 1B1 (purple), and our BLOOM-zh 1B1 (red). The figure compares the toxicity probabilities of the original and generated continuations for each model. All toxicity scores were obtained from the Perspective API.
the bias, we follow Rae et al. (2021) and analyze the probability distribution of terms generated by our model with respect to the different aspects in Table 8. For instance, we feed our model a prompt template that inserts a term from the occupation class, such as "politician", and then measure word co-occurrences in the generated text. To demonstrate the difference between model versions, we compare the top associations with each aspect before and after the extension to Traditional Chinese.
#### 6.2.1 Data Collection
We use multinomial sampling via the HuggingFace library to generate 200 continuations of length 10 (tokens) with top-\(k\) sampling (\(k=40\)) for the prompt of each term, where each term represents a certain group. After removing stop words with CKIPTagger (Li et al., 2020), we take the first five Chinese words of each continuation and list the ten most common words for each group to analyze the impact of the extension as a pilot study.
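The co-occurrence tally can be sketched as follows; the `segment` function and the stop-word list are placeholders standing in for CKIPTagger.

```python
# Tallying the ten most common words across sampled continuations.
from collections import Counter

STOPWORDS = {"的", "是", "了"}  # placeholder stop-word list

def segment(text: str):
    # Placeholder for CKIPTagger word segmentation; naive whitespace split.
    return text.split()

def top_cooccurrences(continuations, n_words=5, top=10):
    counts = Counter()
    for text in continuations:
        words = [w for w in segment(text) if w not in STOPWORDS]
        counts.update(words[:n_words])  # keep the first five Chinese words
    return counts.most_common(top)
```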
#### 6.2.2 Gender
In this section, we focus on gender bias in zero-shot co-occurrences. Table 9 displays the top ten co-occurrences for the terms and prompts. Neither model displays obvious gender bias. However, there have been several changes in the high-frequency words before and after the extension. (The generated words are Traditional Chinese; we give English glosses here.) BLOOM-zh, in general, tends to associate words related to appearance more often than the original BLOOM, generating words such as "young", "image", and "apparel". Meanwhile, BLOOM generates words associated with diet, such as "health" and "eat", as well as "food". One notable difference between BLOOMZ and BLOOM-zh is that BLOOMZ tends to generate words in Simplified Chinese, such as "like". Interestingly, the original BLOOM displays medical vocabulary, such as "carcinoma" or "suffer from an illness", with high frequency. However, we do not consider gender differences in medical terms to be bias, since gender is a common attribute when analyzing a disease.
[Table 8: prompt templates used for each class (gender, occupation, social groups); the Traditional Chinese templates were garbled during extraction and are omitted.]
As noted by Rudinger et al. (2018), some individuals have difficulty linking the words "doctor" and "mother" in a riddle. Because our model was trained on a large dataset, it is possible that biases and stereotypes present in the training data have been learned. Following the definitions proposed by Zhao et al. (2018), a pro-stereotypical condition refers to the use of gender-specific pronouns linked to occupations traditionally dominated by the gender of the pronoun, whereas an anti-stereotypical condition refers to gender-specific pronouns linked to occupations not traditionally dominated by the gender of the pronoun. A gender-biased system is one that shows a stronger association between pronouns and occupations in pro-stereotypical conditions than in anti-stereotypical conditions. In this study, we adopt these definitions to assess potential gender biases in BLOOM-zh. For example, given the anti-stereotypical sentence (in Traditional Chinese) "The physician prescribed the drugs to the designer because she thought the disease could be cured.", we input the prompt "The physician prescribed the drugs to the designer because she thought the disease could be cured. Does the pronoun 'she' mean the physician? Please answer yes or no." into our model, and we asked the system whether the gender-sensitive pronoun "she" referred to the physician or not. We considered an inference correct if the model identified the correct occupation, which was the physician in this example. To assess whether BLOOM-zh exhibited gender biases, we utilized the WinoBias dataset. We transformed the original sentences into a yes/no question format with the correct answer being "yes" for all instances. We computed the probability of generating a "yes" response for a given prompt \(x\) under both pro-stereotypical and anti-stereotypical scenarios, represented as \(P(yes|x)\). For this analysis, we utilized 50 examples from the WinoBias dataset for each scenario. Table 10 displays the results of this preliminary study. Before the extension, BLOOM-zh exhibited no gender bias according to the aforementioned definition. After the extension, the average probability \(P(yes|x)\) under the pro-stereotypical scenario is slightly higher than that under the anti-stereotypical scenario, indicating that the model exhibits marginal gender bias. However, the absolute value of the average \(P(yes|x)\) is now ten times higher than before the extension, meaning that "yes" (the correct answer) is now much more likely to be generated. For additional examples and details, please refer to Appendix C.
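The probability \(P(yes|x)\) can be read off the model's next-token distribution, as in the sketch below; the model identifier and the target token are assumptions (in our experiments, the prompts and answers are in Traditional Chinese).

```python
# Sketch of computing P("yes" | prompt) from a causal LM's next-token logits.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ckip-joint/bloom-1b1-zh")   # assumed id
model = AutoModelForCausalLM.from_pretrained("ckip-joint/bloom-1b1-zh")
model.eval()

def p_yes(prompt: str, target: str = "yes") -> float:
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]     # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    target_id = tok.encode(target)[0]         # first sub-token of the answer
    return probs[target_id].item()
```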
#### 6.2.3 Occupation
We also analyzed occupation bias by exploring prompt-based co-occurrences using a broad occupational category in Taiwan, where Traditional Chinese is mainly used. Table 11 displays the top co-occurrences for the listed occupations under a prompt template whose English translation is "{term} is usually ...".
Overall, our pilot study found no discrimination among the selected terms in BLOOM, BLOOMZ, and BLOOM-zh. However, we observed that BLOOM-zh shows greater adaptability to the usage of Traditional Chinese. For example, in Traditional Chinese the term "engineer" is highly associated with computers, so related words such as "computer" and "system" recur in the generated texts. This association was not evident before the extension. We anticipate that our model will continue to improve with the incorporation of more high-quality datasets in future releases.
#### 6.2.4 Social Groups
We conducted an analysis of bias with respect to social groups by examining co-occurrences of terms related to certain groups in Taiwan, where Traditional Chinese is the official language. Table 12 shows the top 10 co-occurrences for the given prompts. In this preliminary study, we did not discover any severe instances of discriminatory co-occurrences. However, we will remain vigilant
| | | Pro-stereotypical condition | Anti-stereotypical condition |
| --- | --- | --- | --- |
| BLOOM | 1B1 | 0.121% | 0.122% |
| BLOOMZ | 1B1 | 0.340% | 0.349% |
| BLOOM-zh | 1B1 | 3.778% | 3.698% |

Table 10: Probability of generating the word "yes" under prompts in each condition, before and after the extension. The extended model improves in coreference ability while keeping a comparable level of gender unbiasedness relative to the model before the extension.
for such instances as we conduct larger-scale experiments or incorporate additional, potentially noisier sources of data, such as web-crawled data.
Our extension model has been designed to reflect the preferences of Traditional Chinese users, and as such, we observed that BLOOM-zh is more likely to generate wording favored in Taiwanese usage. [The specific Traditional Chinese example words, together with the accompanying co-occurrence tables (Tables 11 and 12), were garbled during extraction and are omitted.]
## 7 Conclusion
In this paper, we presented the BLOOM-zh models, which are derived from the BLOOM models and have enhanced Traditional Chinese capabilities. To create BLOOM-zh, we curated datasets and conducted extended pretraining, and we described the nature of the underlying data. We evaluated BLOOM-zh on existing Traditional Chinese benchmarks as well as on two additional evaluation benchmarks that we created to increase the coverage of the performance evaluation.
Compared to the original BLOOM and BLOOMZ models, our results show that BLOOM-zh outperforms, often greatly, its predecessors in almost all of our Traditional Chinese benchmarks. Furthermore, our toxicity and bias study shows that our model is not prone to strong biases or toxicity.
The weights of BLOOM-zh 1B1 are now available for public download, and we expect to release larger models soon. In addition to model weights, the new benchmarks we created are also open-sourced to facilitate further research on Traditional Chinese LLMs.
## Acknowledgements
The authors thank all the members of MediaTek Research, Academia Sinica, and the National Academy for Educational Research who participated in the project. We would like to thank Ching-Lung Lin and Ming-Hong Bai from the National Academy for Educational Research for assistance in obtaining the training data, and Jezabel Garcia and Federica Freddi for surveying the literature in the early stages of the project.
## References
* Artetxe et al. (2019) Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. On the cross-lingual transferability of monolingual representations. _arXiv preprint arXiv:1910.11856_, 2019.
* Bommasani et al. (2021) Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. _arXiv preprint arXiv:2108.07258_, 2021.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_, 33:1877-1901, 2020.
* Chen et al. (2018) Zhiyuan Chen, Bing Liu, Ronald Brachman, Peter Stone, and Francesca Rossi. _Lifelong Machine Learning_. Morgan & Claypool Publishers, 2nd edition, 2018. ISBN 1681733021.
* Conneau et al. (2019) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzman, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale. _arXiv preprint arXiv:1911.02116_, 2019.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* Du et al. (2022) Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten P Bosma, Zongwei Zhou, Tao Wang, Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. GLaM: Efficient scaling of language models with mixture-of-experts. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 5547-5569. PMLR, 17-23 Jul 2022.
* Foundation (2020) Wikimedia Foundation. Wikimedia downloads. URL https://dumps.wikimedia.org.
* Gao et al. (2020) Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The Pile: An 800GB dataset of diverse text for language modeling. _arXiv preprint arXiv:2101.00027_, 2020.
* Gebru et al. (2018) Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daume III, and Kate Crawford. Datasheets for datasets. _arXiv preprint arXiv:1803.09010_, 2018.
* Gehman et al. (2020) Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. _Findings of EMNLP 2020_, 2020.
* Hoffmann et al. (2022) Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. _arXiv preprint arXiv:2203.15556_, 2022.
* Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In _International Conference on Machine Learning_, pages 2790-2799. PMLR, 2019.
* Kaplan et al. (2020) Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. _arXiv preprint arXiv:2001.08361_, 2020.
* Kim et al. (2021) Boseop Kim, Hyoungseok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, Dong Hyeon Jeon, Sunghyun Park, Sung ju Kim, Seonhoon Kim, Dong Hyung Seo, Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, SukHyun Ko, Seokhun Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, Na-Hyeon Ryu, Kang Min Yoo, Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hium Kim, Jisu Jeong, Yong Goo Yeo, Dong hyun Ham, Dongju Park, Min Young Lee, Jaewook Kang, Inho Kang, Jung-Woo Ha, Woo Chul Park, and Nako Sung. What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers. _ArXiv_, abs/2109.04650, 2021.
* Kirkpatrick et al. (2017) James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. _Proceedings of the National Academy of Sciences_, 2017. URL https://www.pnas.org/content/pnas/114/13/3521.full.pdf.
* Kondratyuk and Straka (2019) Dan Kondratyuk and Milan Straka. 75 languages, 1 model: Parsing universal dependencies universally. _arXiv preprint arXiv:1904.02099_, 2019.
* Korthikanti et al. (2022) Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. Reducing activation recomputation in large transformer models, 2022. URL https://arxiv.org/abs/2205.05198.
* Lai et al. (2017) Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. Race: Large-scale reading comprehension dataset from examinations. _arXiv preprint arXiv:1704.04683_, 2017.
* Laurencon et al. (2021) Hugo Laurencon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo Gonzalez Ponferrada, Huu Nguyen, Jorg Frohberg, Mario Sasko, Quentin Lhoest, Angelina McMillan-Major, Gerard Dupont, Stella Biderman, Anna Rogers, Loubna Ben allal, Francesco De Toni, Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor, Mariam Masoud, Pierre Colombo, Javier de la Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Romero Munoz, Jian Zhu, Daniel Van Strien, Zaid Alyafeai, Khalid Almubarak, Vu Minh Chien, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Ifeoluwa Adelani, Long Phan, Hieu Tran, Ian Yu, Suhas Pai, Jenny Chim, Violette Lepercq, Suzana Ilic, Margaret Mitchell, Sasha Luccioni, and Yacine Jernite. The bigscience ROITS corpus: A 1.6TB composite multilingual dataset. In _Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track_, 2022. URL [https://openreview.net/forum?id=UoEw6KigKUn](https://openreview.net/forum?id=UoEw6KigKUn).
* Le Scao et al. (2022) Teven Le Scao, Thomas Wang, Daniel Hesslow, Lucile Saulnier, Stas Bekman, M Saiful Bari, Stella Biderman, Hady Elsahar, Jason Phang, Ofir Press, et al. What language model to train if you have one million gpu hours? In _Challenges_, 2022.
* Le et al. (2018)
Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. Deduplicating training data makes language models better. _arXiv preprint arXiv:2107.06499_, 2021.
* Li et al. (2020) Peng-Hsuan Li, Tsu-Jui Fu, and Wei-Yun Ma. Why attention? analyze bilstm deficiency and its remedies in the case of ner. _Proceedings of the AAAI Conference on Artificial Intelligence_, 34(05):8236-8244, Apr. 2020. doi: 10.1609/aaai.v34i05.6338. URL [https://ojs.aaai.org/index.php/AAAI/article/view/6338](https://ojs.aaai.org/index.php/AAAI/article/view/6338).
* Merity (2016) Stephen Merity. The wikitext long term dependency language modeling dataset. _Salesforce Metamind_, 9, 2016.
* Merity et al. (2016) Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. _arXiv preprint arXiv:1609.07843_, 2016.
* Mitchell et al. (2019) Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. In _Proceedings of the conference on fairness, accountability, and transparency_, pages 220-229, 2019.
* Muennighoff et al. (2022) Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. Crosslingual generalization through multitask finetuning. _arXiv preprint arXiv:2211.01786_, 2022.
* Muller et al. (2020) Benjamin Muller, Antonis Anastasopoulos, Benoit Sagot, and Djame Seddah. When being unseen from mbert is just the beginning: Handling new languages with multilingual language models. _arXiv preprint arXiv:2010.12858_, 2020.
* Paperno et al. (2016) Denis Paperno, German Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. The LAMBADA dataset: Word prediction requiring a broad discourse context. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 1525-1534, Berlin, Germany, August 2016a. Association for Computational Linguistics. doi: 10.18653/v1/P16-1144. URL [https://aclanthology.org/P16-1144](https://aclanthology.org/P16-1144).
* Paperno et al. (2016b) Denis Paperno, German Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. The lambada dataset: Word prediction requiring a broad discourse context. _arXiv preprint arXiv:1606.06031_, 2016b.
* Parisi et al. (2019) German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. _Neural Networks_, 113:54-71, 2019. ISSN 0893-6080. doi: [https://doi.org/10.1016/j.neunet.2019.01.012](https://doi.org/10.1016/j.neunet.2019.01.012). URL [https://www.sciencedirect.com/science/article/pii/S0893608019300231](https://www.sciencedirect.com/science/article/pii/S0893608019300231).
* Parker et al. (2011) Robert Parker, David Graff, Ke Chen, Junbo Kong, and Kazuaki Maeda. Chinese gigaword 5th edition Idc2011t13. 2011. doi: 10.35111/102m-dr17. URL [https://doi.org/10.35111/102m-dr17](https://doi.org/10.35111/102m-dr17).
* Pfeiffer et al. (2020a) Jonas Pfeiffer, Aishwarya Kamath, Andreas Ruckle, Kyunghyun Cho, and Iryna Gurevych. Adapterfusion: Non-destructive task composition for transfer learning. _arXiv preprint arXiv:2005.00247_, 2020a.
* Pfeiffer et al. (2020b) Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. Mad-x: An adapter-based framework for multi-task cross-lingual transfer. _arXiv preprint arXiv:2005.00052_, 2020b.
* Radford et al. (2018) Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
* Rae et al. (2019) Jack W Rae, Anna Potapenko, Siddhant M Jayakumar, and Timothy P Lillicrap. Compressive transformers for long-range sequence modelling. _arXiv preprint arXiv:1911.05507_, 2019.
* Rae et al. (2021) Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training gopher. _arXiv preprint arXiv:2112.11446_, 2021.
* Ritter et al. (2019)
Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. _In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '20, Tutorial)_, 2020.
* Roberts et al. (2019) Adam Roberts, Colin Raffel, Katherine Lee, Michael Matena, Noam Shazeer, Peter J Liu, Sharan Narang, Wei Li, and Yanqi Zhou. Exploring the limits of transfer learning with a unified text-to-text transformer. 2019.
* Rudinger et al. (2018) Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. Gender bias in coreference resolution. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, New Orleans, Louisiana, June 2018. Association for Computational Linguistics.
* Sanh et al. (2021) Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. _arXiv preprint arXiv:2110.08207_, 2021.
* Scao et al. (2018) Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagne, Alexandra Sasha Luccioni, Francois Vron, Matthias Galle, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanmanchi, Thomas Wang, Benoit Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurencon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, Dragomir Radev, Eduardo Gonzalez Ponferrada, Efrat Levkovizh, Ethan Kim, Eyal Bar Natan, Francesco De Toni, Gerard Dupont, German Kruszewski, Giada Pistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu, Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, Jorg Frohberg, Joseph Tobing, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro Von Werra, Leon Weber, Long Phan, Lohua Ben allal, Ludovic Tanguy, Manan Dey, Manuel Romero Munoz, Maraim Masoud, Maria Grandury, Mario Sasko, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian-Jian Jiang, Minh Chien Vu, Mohammad A. Jauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulajilla Khamis, Olivier Nguyen, Omar Espejel, Ona de Gibert, Paulo Villegas, Peter Henderson, Pierre Colombo, Priscilla Amuok, Quentin Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Luis Lopez, Rui Ribeiro, Salomey Osei, Sampo Pyysalo, Sebastian Nagel, Shamik Bose, Shamsuddeen Hassan Muhammad, Shanya Sharma, Shayne Longpre, Somaieh Nikpoor, Stanislav Silberberg, Suhas Pai, Sydney Zink, Tiago Timponi Torrent, Timo Schick, Tristan Thrush, Valentin Danchev, Vasililina Nikoulina, Veronika Liappala, Violette Lpercq, Virinda Prabhu, Zaid Alyafeai, Zeerak Talat, Arun Raja, Benjamin Heinzerling, Chenpeli Si, Elizabeth Salesky, Sabrina J. Mielke, Wilson Y. Lee, Abheseth Sharma, Andrea Santilli, Antoine Chaffin, Arnaud Stiegler, Debapyoti Datta, Eliza Szczechla, Gunjan Chhablani, Han Wang, Harshit Pandey, Hendrik Strobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, Maged S. Al-shaibani, Matteo Manica, Nihal Nayak, Ryan Teehan, Samuel Albanie, Sheng Shen, Srulik Ben-David, Stephen H. 
Bach, Taewoon Kim, Tali Bers, Thibault Fevry, Trishala Neeraj, Urmish Thakker, Vikas Raunak, Xiangru Tang, Zheng-Xin Yong, Zhiqing Sun, Shaked Brody, Yallow Uri, Hadar Tojarieh, Adam Roberts, Hyung Won Chung, Jaesung Tae, Jason Phang, Ofir Press, Conglong Li, Deepak Narayanan, Hatim Bourfoune, Jared Casper, Jeff Rasley, Max Ryabinin, Mayank Mishra, Minjia Zhang, Mohammad Shoeybi, Myriam Peyrounette, Nicolas Patry, Nouamane Tazi, Omar Sanseviero, Patrick von Platen, Pierre Cornette, Pierre Francois Lavallee, Remi Lacroix, Samyam Rajbhandari, Sanchit Gandhi, Shaden Smith, Stephane Requena, Suraj Patil, Tim Dettmers, Ahmed Baruwa, Amanpreet Singh, Anastasia Cheveleva, Anne-Laure Ligozat, Arjun Subramonian, Aurelie Neveol, Charles Lovering, Dan Garrette, Deepak Tunugunta, Ehud Reiter, Ekaterina Taktasheva, Ekaterina Voloshina, Eli Bogdanov, Genta Indra Winata, Hailey Schoelkopf, Jan-Christoph Kalo, Jekaterina Novikova, Jessica Zosa Forde, Jordan Clive, Jungo Kasai, Ken Kawamura, Liam Hazan, Marine Carpuat, Miruna Clinciu, Najoung Kim, Newton Cheng, Oleg Serikov, Omer Antverg, Oskar van der Wal, Rui Zhang, Ruochen Zhang, Sebastian Gehrmann, Shani Pais, Tatiana Shavrina, Thomas Scialom, Tian Yun, Tomasz Limisiewicz,
Verena Rieser, Vitaly Protasov, Vladislav Mikhailov, Yada Pruksachatkun, Yonatan Belinkov, Zachary Bamberger, Zdenek Kasner, Alice Rueda, Amanda Pestana, Amir Feizpour, Ammar Khan, Amy Faranak, Ana Santos, Anthony Hevia, Antigona Unldreaj, Arash Aghagol, Arezoo Abdollah, Aycha Tammuv, Azadeh HajHosseni, Bahareh Behroozi, Benjamin Ajibade, Bharat Saxena, Carlos Munoz Ferrandis, Danish Contractor, David Lansky, Davis David, Douwe Kiela, Duong A. Nguyen, Edward Tan, Emi Baylor, Ezinwanne Ozaoni, Fatina Mirza, Frankline Onon-iwu, Habib Rezanejad, Hessie Jones, Indrani Bhattacharya, Irene Solaiman, Irina Sedenko, Isar Nejadgholi, Jesse Passmore, Josh Seltzer, Julio Bonis Sanz, Karen Fort, Livia Dutra, Mairon Samagao, Maraim Elbadri, Margot Mieskes, Marissa Gerchick, Martha Akinolu, Michael McKenna, Mike Qiu, Muhammed Ghauri, Mykola Burynok, Nafis Abrar, Nazneen Rajani, Nour Elkott, Nour Fahmy, Olanrewaju Samuel, Ran An, Rasmus Kromann, Ryan Hao, Samira Alizadeh, Sarmad Shubber, Silas Wang, Sourav Roy, Sylvain Viguier, Thanh Le, Tobi Oyebade, Trieu Le, Yoyo Yang, Zach Nguyen, Abhinav Ramesh Kashyap, Alfredo Palasciano, Alison Callahan, Anima Shukla, Antonio Miranda-Escalada, Ayush Singh, Benjamin Beilharz, Bo Wang, Caio Brito, Chenxi Zhou, Chirag Jain, Chuxin Xu, Clementine Fortier, Daniel Leon Perinan, Daniel Molano, Dian Yu, Enrique Manjavacas, Fabio Barth, Florian Fuhrimann, Gabriel Altay, Giyaseddin Bayrak, Gully Burns, Helena U. Vrabec, Imane Bello, Ishani Dash, Jihyun Kang, John Giorgi, Jonas Golde, Jose David Posada, Karthik Rangasai Sivaraman, Lokesh Bulchandani, Lu Liu, Luisa Shinzato, Madeleine Hahn de Bykhovetz, Maiko Takeuchi, Marc Pamies, Maria A Castillo, Marianna Nezhurina, Mario Sanger, Matthias Samwald, Michael Cullan, Michael Weinberg, Michiel De Wolf, Mina Mihaljcic, Minna Liu, Moritz Freidank, Myungsun Kang, Natasha Seelam, Nathan Dahlberg, Nicholas Michio Broad, Nikolaus Muellner, Pascale Fung, Patrick Haller, Ramya Chandrasekhar, Renata Eisenberg, Robert Martin, Rodrigo Canalli, Rosaline Su, Ruisi Su, Samuel Cahyawijaya, Samuele Garda, Shlok S Deshmukh, Shubhanshu Mishra, Sid Kiblawi, Simon Ott, Sinee Sang-aroonsiri, Srishi Kumar, Stefan Schweter, Sushil Bharati, Tanmay Laud, Theo Gigant, Tomoya Kainuma, Wojciech Kusa, Yanis Labrak, Yash Shailesh Bajaj, Yash Venkataraman, Yifan Xu, Yingxin Xu, Yu Xu, Zhe Tan, Zhongli Xie, Zifan Ye, Mathilde Bras, Younes Belkada, and Thomas Wolf. Bloom: A 176b-parameter open-access multilingual language model, 2022a. URL [https://arxiv.org/abs/2211.05100](https://arxiv.org/abs/2211.05100).
|
2304.08188 | Statute-enhanced lexical retrieval of court cases for COLIEE 2022 | We discuss our experiments for COLIEE Task 1, a court case retrieval
competition using cases from the Federal Court of Canada. During experiments on
the training data we observe that passage level retrieval with rank fusion
outperforms document level retrieval. By explicitly adding extracted statute
information to the queries and documents we can further improve the results. We
submit two passage level runs to the competition, which achieve high recall but
low precision. | Tobias Fink, Gabor Recski, Wojciech Kusa, Allan Hanbury | 2023-04-17T11:59:52Z | http://arxiv.org/abs/2304.08188v1 | # Statute-enhanced lexical retrieval of court cases for COLIEE 2022
###### Abstract
We discuss our experiments for COLIEE Task 1, a court case retrieval competition using cases from the Federal Court of Canada. During experiments on the training data we observe that passage level retrieval with rank fusion outperforms document level retrieval. By explicitly adding extracted statute information to the queries and documents we can further improve the results. We submit two passage level runs to the competition, which achieve high recall but low precision.
Keywords: Information Retrieval, Information Extraction, Legal Domain.
## 1 Introduction
In the legal domain, court cases play a unique role, as they often contain the last say on a particular legal subject. This is especially true in Common Law systems, such as the legal systems of North America, where court cases play a large role in shaping the law. While statutes are the foundation of the legal system, reaching a decision often requires looking through precedent court cases for detailed information that is not available in statutes. However, court cases are not only long and difficult to read; the number of potentially relevant court cases is also ever increasing. The need for automated legal information retrieval methods to aid legal experts is therefore growing just as quickly.
The Competition on Legal Information Extraction/Entailment (COLIEE)1 evaluates legal information retrieval (IR) systems for a variety of legal retrieval tasks. We participate in COLIEE 2022 Task 1, which deals with Canadian law precedent retrieval (notice cases). We experiment with lexical methods for retrieval, focusing on ways of improving established methods with domain-specific fine-tuning. Considering that statutes are still the foundation of the legal system, we add statute information to the search to focus the models on information that typically determines relevance in the legal domain. Although not all cases contain statute information, we observe that making use of this information improves overall retrieval performance.
Footnote 1: [https://sites.ualberta.ca/~rabelo/COLIEE2022/](https://sites.ualberta.ca/~rabelo/COLIEE2022/)
## 2 Task Description
In Task 1 of COLIEE 2022, the goal is to retrieve supporting court cases (notice cases) for new court cases (query cases). Notice cases can be understood as precedent cases that are highly relevant for a query case. Each query case is supported by at least one notice case. For this task, cases from the Federal Court of Canada are used for both query and notice cases. A training collection as well as a test collection is provided (see Table 1), both having their own respective query cases, which are part of the collection. The training collection provides labels for relevant notice cases for each query, while the test collection only provides query cases without labels. Cases have been edited to have references to other cases removed and replaced by placeholder tokens. The task is to retrieve notice cases from the test collection using the queries of the test collection. Performance is measured using the F1 score.
The length of the query documents makes the task challenging in a few ways. Not only are many IR methods better suited to shorter queries; due to the length of the documents, the relationship between query cases and notice cases is also difficult to understand without expert knowledge.
## 3 Method
We approach this task with the assumption that there is a topical overlap between query and notice cases, but that not all parts of a query case are equally important. It has been shown in past legal retrieval workshops (see AILA [4, 5], COLIEE [7, 6, 1]) that lexical methods, such as BM25 or IR language models (LM), yield competitive results, even when compared to newer neural network based approaches. We build on top of these lexical methods and adapt them to the legal collections of this task.
\begin{table}
\begin{tabular}{l r r}
\hline \hline
 & Training & Test \\
\hline
Total Cases & 4415 & 1563 \\
Query Cases & 898 & 300 \\
Max \# of tokens & 90567 & 61065 \\
Median \# of tokens & 3658 & 3573 \\
Mean \# of tokens & 4778 & 4979 \\
Max \# of notice cases & 34 & N/A \\
Median \# of notice cases & 3 & N/A \\
Mean \# of notice cases & 4.68 & N/A \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Training and test collection statistics. Tokens per document and notice cases are per query.
### Document-Level
First, we experiment with using the models out-of-the-box. We preprocess the training collection by removing special characters and tokens with two characters or fewer and index the documents using Elasticsearch. Numbers are also removed, except when they are part of a statute section citation. During indexing, the text is also lowercased, stemmed and stopwords removed, including the task-specific placeholder tokens. To convert case documents to queries, we try to extract the most informative terms from the case. As a naive approach to this term extraction, we calculate the TF-IDF score for each token in the query case and then use the top \(T\) tokens with the highest scores as query terms (see the sketch below). We compare the performance of the Elasticsearch implementations of BM25 and the LM Jelinek Mercer similarity [8], which calculate a score \(s\) for each document. For each query, 100 documents are retrieved and the precision, recall and F1 scores calculated for each rank. Although query cases are part of the collection, they are skipped during retrieval. We perform a random search to find the best hyperparameters for BM25 (\(k1\), \(b\)) and LM Jelinek Mercer (\(\lambda\)) as well as \(T\). While searching for the best hyperparameters, we only use the first 700 queries of the training collection (_training set_). We determine the best cutoff rank \(k\) using the F1 micro-averages for each rank. We use the remaining 198 queries (_dev set_) to evaluate the best hyperparameters and cutoff rank \(k\).
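The following is a minimal sketch of this naive term extraction step, assuming scikit-learn is available; the function name is ours, and the real pipeline additionally applies the preprocessing described above before scoring.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def top_t_query_terms(case_texts, query_idx, t=200):
    # Fit TF-IDF over the whole collection, then keep the T highest-scoring
    # tokens of the query case as its query terms.
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(case_texts)      # one row per case
    scores = tfidf[query_idx].toarray().ravel()       # scores for the query case
    vocab = vectorizer.get_feature_names_out()
    top = scores.argsort()[::-1][:t]                  # indices of the top-T scores
    return [vocab[i] for i in top if scores[i] > 0]
```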
### Passage-Level
Next, we experiment with changing how queries are created from query cases, and change how documents are retrieved by using passage level retrieval. The information on where a passage starts and ends is already present in the case files and just needs to be utilized. Similar to the lexical baseline of [2], we split each case \(c\) in the collection \(C\) into passages \(p_{1},...,p_{n_{c}}\) and index the passages instead of the whole case, using the same preprocessing method as before. Now, the score \(s\) is calculated for each passage instead of each document. A query case \(q\) is also split into passage queries \(pq_{1},...,pq_{n_{q}}\) which retrieve a set of passage level rankings \(R\) with \(|R|=n_{q}\). We aggregate the passage level rankings to case level using Reciprocal Rank Fusion, a method of aggregation that outperforms other methods, such as Condorcet Fuse or CombMNZ [3]:
\[RRFscore(c\in C)=\sum_{r\in R}\frac{1}{k_{rrf}+r(c)}*p_{b} \tag{1}\]
We set \(k_{rrf}=60\), the same value as in [3]. We also add a passage boost factor \(p_{b}\), which is set to 1 for now. For this passage level ranking approach, we again perform the same procedure as before to find the best hyperparameters for BM25 (\(k1\), \(b\)) and LM Jelinek Mercer (\(\lambda\)) as well as \(T\) and \(k\), using the same _training set / dev set_ split of queries for evaluation. For all further experiments, the values of the hyperparameters are fixed to the best result of this random search (excluding cutoff rank \(k\)).
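As an illustration, the fusion step might be implemented roughly as follows. This is a minimal sketch, assuming rankings are given as lists of case ids; the function and variable names are ours, and the per-ranking boost anticipates the passage boost factor \(p_{b}\) of Eq. (3).

```python
from collections import defaultdict

def rrf_fuse(passage_rankings, boosts=None, k_rrf=60):
    # passage_rankings: one ranked list of case ids per query passage.
    # boosts: the passage boost factor p_b per query passage (1 by default).
    boosts = boosts or [1] * len(passage_rankings)
    scores = defaultdict(float)
    for ranking, p_b in zip(passage_rankings, boosts):
        for rank, case_id in enumerate(ranking, start=1):
            scores[case_id] += p_b / (k_rrf + rank)   # Eq. (1)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, `rrf_fuse([["c1", "c2"], ["c2", "c3"]])` ranks `c2` first, since it appears in both passage rankings.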
### Statute Field
Finally, we experiment with adding additional domain knowledge to the search by extracting statute sections mentioned in the case documents and adding them to the documents explicitly. For this purpose, we scrape the titles of Canadian rules, regulations, orders and acts from the Canadian Justice Law Website2. This scraped list of titles also contains parts that would typically not be found in statute citations (e.g., text fragments noting that the law has been repealed). Consequently, we clean the titles by only considering text up to the first mention of _regulations_, _order_, _act_ or _rules_. Further, since some statutes are only mentioned as acronyms, we create acronym candidates for each statute by taking the first upper-case letter of each token in the title. We identify the statutes of a case based on mentions of titles and generated title acronyms in the text. Additionally, we use regular expressions to detect statute section numbers in the text. We map statutes to section numbers by counting the number of passages in a case where a section number co-occurs with a statute mention, and then assigning the most frequently co-occurring statute to each section number (a rough sketch follows the footnote below).
Footnote 2: [https://laws-lois.justice.gc.ca/eng/](https://laws-lois.justice.gc.ca/eng/)
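The sketch below illustrates this extraction step; the regular expression and function names are illustrative guesses rather than the exact patterns we used.

```python
import re

SECTION_RE = re.compile(r"(?:section|s\.|ss\.)\s*(\d+(?:\.\d+)?)", re.IGNORECASE)

def acronym(title):
    # Acronym candidate from the first upper-case letter of each token,
    # e.g. "Immigration and Refugee Protection Act" -> "IRPA".
    return "".join(tok[0] for tok in title.split() if tok[0].isupper())

def statute_sections(passage, statute_titles):
    # Find statute mentions (full title or acronym) and section numbers that
    # co-occur in a passage; counting such co-occurrences over all passages
    # of a case gives the statute-to-section mapping described above.
    mentions = [t for t in statute_titles
                if t in passage or acronym(t) in passage.split()]
    sections = SECTION_RE.findall(passage)
    return [(t, s) for t in mentions for s in sections]
```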
These extracted statute sections are then added to the case passages and indexed as an additional statute-section field in Elasticsearch. We combine the original passage query with the extracted statute sections of the passage using a compound query. The score for the statute field \(s_{statute}\) is calculated using BM25 and added to the overall score for each passage (the Elasticsearch default for a compound query), resulting in a new total score \(s_{total}\):
\[s_{total}=s+s_{statute}*s_{b} \tag{2}\]
To further control the influence of the statute-section field on the similarity calculation, we adjust the weight of its similarity score with the factor \(s_{b}\) (using the Elasticsearch _boost_ functionality). We assume that query passages that mention statute sections are more likely to contain information of particular importance for a case. If the number of statute sections \(s_{n}\) present in the query passage is at least 1, we set the earlier introduced passage boost factor \(p_{b}\) to the hyperparameter \(P_{b}\):
\[p_{b}=\begin{cases}P_{b},&\text{if }s_{n}\geq 1\\ 1,&\text{otherwise}\end{cases} \tag{3}\]
The best values for \(P_{b}\) and \(s_{b}\) are determined using a random search and \(k\) is determined as before.
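For concreteness, the compound query could be rendered in the Elasticsearch query DSL roughly as below. The field names, example terms, and boost value are illustrative assumptions rather than the exact configuration of our runs; a bool query sums the scores of its matching should clauses, which gives the additive combination of Eq. (2).

```python
query_terms = ["refugee", "convention", "persecution"]        # top-T TF-IDF terms
statute_refs = ["Immigration and Refugee Protection Act 96"]  # extracted sections
compound_query = {
    "query": {
        "bool": {
            "should": [
                # s: similarity of the passage text to the query terms
                {"match": {"text": {"query": " ".join(query_terms)}}},
                # s_statute * s_b: similarity on the statute-section field, weighted
                {"match": {"statute_sections": {
                    "query": " ".join(statute_refs),
                    "boost": 1.5,   # the factor s_b
                }}},
            ]
        }
    }
}
```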
## 4 Results
The results of the experiments on the training set and dev set are shown in Table 2. On _document level_, _BM25_ achieved the highest F1 using the parameters \(T=200\), \(k1=1.09\), \(b=0.99\), while the _LM_ Jelinek Mercer achieved the highest F1 using \(T=200\) and \(\lambda=0.64\). On _passage level_, _BM25_ achieved the highest F1 using the parameters \(T=100\), \(k1=0.66\), \(b=0.59\), while the _LM_ Jelinek Mercer achieved the highest F1 without limiting \(T\) and with \(\lambda=0.56\). This means _LM_ uses every token of a passage as a query, but with duplicate tokens removed. In our experiments, all passage level methods with rank fusion outperform document level retrieval methods. For this reason we did not continue experimenting on document level. While passage level BM25 achieved a higher F1 score on our training set, the LM model performed better on the dev set. The best overall F1 score was achieved by the LM model with inclusion of the statute field; the dev set performance in particular improved with this information.
For the task submission, we submitted two runs, using the _Passage level LM_ setup as run **TUWBR_LM** and using the _Passage level LM + Statute Field_ setup as run **TUWBR_LM_LAW**. The results for our submitted runs and a selection of top scoring runs for the task are shown in Table 3. Our methods achieve a high level of recall but perform poorly regarding precision. However, our runs are only situated in the bottom half of the F1 score sorted ranking.
One weakness of our method is certainly that our naive term extraction approach was insufficient. Further, we were only able to produce a ranking of court cases and determined relevancy based on a fixed cutoff value (rank 7 or 8). Since most query cases cite fewer notice cases than our cutoff value, our precision is low. However, extracting statute-section information produced positive results. If we compare our two runs, we can see that adding statute information can yield a higher precision while recall is only reduced minimally. We expect that results can be improved further with better strategies for utilizing this information.
\begin{table}
\begin{tabular}{l c c c c c c}
 & \multicolumn{3}{c|}{Training Set} & \multicolumn{3}{c}{Dev Set} \\
\cline{2-7}
Method & Precision & Recall & F1 & Precision & Recall & F1 \\
\hline
Document level BM25 (\(k=7\)) & 0.1193 & 0.1944 & 0.1479 & 0.0871 & 0.1825 & 0.1179 \\
Document level LM (\(k=8\)) & 0.1200 & 0.2200 & 0.1553 & 0.0802 & 0.1892 & 0.1127 \\
Passage level BM25 (\(k=8\)) & 0.1214 & **0.2226** & 0.1571 & 0.0898 & 0.2116 & 0.1261 \\
Passage level LM (\(k=8\)) & 0.1210 & 0.2218 & 0.1565 & 0.0993 & **0.2341** & 0.1395 \\
Passage level LM + Statute Field (\(k=7\)) & **0.1282** & 0.2090 & **0.1589** & **0.1073** & 0.2249 & **0.1453** \\
\end{tabular}
\end{table}
Table 2: Results for our experiments using the _training set_ and _dev set_ queries.
\begin{table}
\begin{tabular}{l l l l l l}
\hline
Rank & Team & Run & F1 Score & Precision & Recall \\
\hline
1 & UA & pp\_0.65\_10\_3.csv & **0.3715** & 0.4111 & 0.3389 \\
2 & UA & pp\_0.7\_9\_2.csv & 0.3710 & **0.4967** & 0.2961 \\
3 & siat & siatrun1.txt & 0.3691 & 0.3005 & **0.4782** \\
7 & LeiBi & run\_bm25.txt & 0.2923 & 0.3000 & 0.2850 \\
15 & TUWBR & TUWBR\_LM\_law & 0.2367 & 0.1895 & 0.3151 \\
17 & TUWBR & TUWBR\_LM & 0.2206 & 0.1683 & 0.3199 \\
\end{tabular}
\end{table}
Table 3: Excerpt of the task 1 ranking showing selected runs and our results.
## 5 Conclusion
For the COLIEE 2022 Task 1 case retrieval, we used passage-level LMs to retrieve notice cases for case queries. Our methods achieved a high recall but low precision. We showed that a simple method making use of statute-section mentions in passages can achieve a higher precision with only a minor decrease in recall. Overall, low precision remained a problem for our methods and they were outperformed by other methods in the competition. We expect that our lexical approach could still be improved by different query term extraction strategies.
**Acknowledgments.** Project partly supported by BRISE-Vienna (UIA04-081), a European Union Urban Innovative Actions project.
|
2305.07192 | Big Ramsey Degrees of Countable Ordinals | Ramsey's theorem states that for all finite colorings of an infinite set,
there exists an infinite homogeneous subset. What if we seek a homogeneous
subset that is also order-equivalent to the original set? Let $S$ be a linearly
ordered set and $a \in N$. The big Ramsey degree of $a$ in $S$, denoted
$T(a,S)$, is the least integer $t$ such that, for any finite coloring of the
$a$-subsets of $S$, there exists $S'\subseteq S$ such that (i) $S'$ is
order-equivalent to $S$, and (ii) if the coloring is restricted to the
$a$-subsets of $S'$ then at most $t$ colors are used.
Ma\v{s}ulovi\'{c} \& \v{S}obot (2019) showed that $T(a,\omega+\omega)=2^a$.
From this one can obtain $T(a,\zeta)=2^a$. We give a direct proof that
$T(a,\zeta)=2^a$.
Ma\v{s}ulovi\'{c} and \v{S}obot (2019) also showed that for all countable
ordinals $\alpha < \omega^\omega$, and for all $a \in N$, $T(a,\alpha)$ is
finite. We find the exact value of $T(a,\alpha)$ for all ordinals less than
$\omega^\omega$ and all $a\in N$. | Joanna Boyland, William Gasarch, Nathan Hurtig, Robert Rust | 2023-05-12T01:19:53Z | http://arxiv.org/abs/2305.07192v2 | # Big Ramsey Degrees of Countable Ordinals
###### Abstract
Ramsey's theorem states that for all finite colorings of an infinite set, there exists an infinite homogeneous subset. What if we seek a homogeneous subset that is also order-equivalent to the original set? Let \(S\) be a linearly ordered set and \(a\in\mathbb{N}\). The big Ramsey degree of \(a\) in \(S\), denoted \(T(a,S)\), is the least integer \(t\) such that, for any finite coloring of the \(a\)-subsets of \(S\), there exists \(S^{\prime}\subseteq S\) such that (i) \(S^{\prime}\) is order-equivalent to \(S\), and (ii) if the coloring is restricted to the \(a\)-subsets of \(S^{\prime}\) then at most \(t\) colors are used.
Masulovic & Sobot (2019) showed that \(T(a,\omega+\omega)=2^{a}\). From this one can obtain \(T(a,\zeta)=2^{a}\). We give a direct proof that \(T(a,\zeta)=2^{a}\).
Masulovic and Sobot (2019) also showed that for all countable ordinals \(\alpha<\omega^{\omega}\), and for all \(a\in\mathbb{N}\), \(T(a,\alpha)\) is finite. We find the exact value of \(T(a,\alpha)\) for all ordinals less than \(\omega^{\omega}\) and all \(a\in\mathbb{N}\).
**Mathematics Subject Classifications:** 05D10, 03E10
## 1 Introduction
**Definition 1**.:
1. Let \(\mathcal{A}=(A,\preceq_{A})\) and \(\mathcal{B}=(B,\preceq_{B})\) be ordered sets. Then \(\mathcal{A},\mathcal{B}\) are _order-equivalent_, denoted \(\mathcal{A}\approx\mathcal{B}\), if there exists an order-preserving bijection \(f\colon A\to B\); that is, for all \(a_{1},a_{2}\in A\): \[a_{1}\preceq_{A}a_{2}\iff f(a_{1})\preceq_{B}f(a_{2}).\]
2. Let \(\mathcal{A}=(A,\preceq_{A})\) and \(\mathcal{B}=(B,\preceq_{B})\) be ordered sets. Then the ordered set \(\mathcal{A}+\mathcal{B}\) is defined to be \((A\sqcup B,\preceq)\), where \(c_{1}\preceq c_{2}\) when either \(c_{1},c_{2}\in A\) and \(c_{1}\preceq_{A}c_{2}\), or if \(c_{1},c_{2}\in B\) and \(c_{1}\preceq_{B}c_{2}\), or if \(c_{1}\in A\) and \(c_{2}\in B\). Note that \(+\) agrees with the definition of ordinal addition, and is not commutative in general.
3. Let \(\mathcal{A}=(A,\preceq_{A})\) be an ordered set. Then 1. \(-A=\{-a\colon a\in A\}\) 2. \(-a\preceq_{-A}-b\iff b\preceq_{A}a\). 3. \(-\mathcal{A}=(-A,\preceq_{-A})\)
Throughout this paper we conflate the notation for an ordered set and its underlying set; for example, we could define \(A\) as an ordered set and still discuss \(\binom{A}{2}\).
_Notation 2_.: We use \(\mathbb{N}\) to be the set of natural numbers including \(0\). Let \(a,b\in\mathbb{N}\) and \(S\) be an ordered set.
1. \([b]\) is \(\{1,\ldots,b\}\). Note that if \(b=0\) then \([b]=\emptyset\). Also note that \(|[b]|=b\); this is our main motivation for this notation.
2. \(\binom{S}{a}\) is the set of all \(a\)-element subsets of \(S\). Often, we index the elements of a subset by their ordering in \(S\).
3. Let \(\operatorname{COL}\colon S\to[b]\) and \(S^{\prime}\subseteq S\). Then \(\operatorname{COL}(S^{\prime})=\{\operatorname{COL}(s)\colon s\in S^{\prime}\}\). Hence \(|\operatorname{COL}(S^{\prime})|\) is the size of the codomain of the restriction of \(\operatorname{COL}\) to \(S^{\prime}\).
**Definition 3**.: Let \(S\) be an ordered set, \(S^{\prime}\subseteq S\), \(a,b,t\in\mathbb{N}\), and \(\operatorname{COL}\colon\binom{S}{a}\to[b]\) be a coloring.
1. \(S^{\prime}\) is _homogeneous_ if \(|\operatorname{COL}\left(\binom{S^{\prime}}{a}\right)|=1\). \(S^{\prime}\) is _\(t\)-homogeneous_ if \(|\operatorname{COL}\left(\binom{S^{\prime}}{a}\right)|\leqslant t\).
2. \(S^{\prime}\) is _\(S\)-\(t\)-homogeneous_ if \(|\operatorname{COL}\left(\binom{S^{\prime}}{a}\right)|\leqslant t\) and \(S^{\prime}\approx S\).
_Notation 4_.:
1. We characterize every ordinal \(\alpha\) as the ordered set of all ordinals \(\beta<\alpha\), with \(0\) as the least ordinal.
2. \(\zeta\) is the ordered set containing the integers, \(\omega\) is the ordered set containing the naturals, and \(\eta\) is the ordered set containing the rationals under their respective natural orderings.
**Definition 5**.: Let \(S\) be an ordered set. For any \(a\in\mathbb{N}\), \(T(a,S)\) is the least \(t\in\mathbb{N}\) such that for all \(b\in\mathbb{N}\), for all colorings \(\operatorname{COL}\colon\binom{S}{a}\to[b]\), there exists some \(S^{\prime}\subseteq S\) such that \(S^{\prime}\) is \(S\)-\(t\)-homogeneous. Note that \(t\) is independent of \(b\). If no such \(t\) exists then we write \(T(a,S)=\infty\). \(T(a,S)\) is called the _big Ramsey degree_ of \(a\) in \(S\). The term was first coined by Kechris et al. [6]. In set theoretic notation, it can be written as \(S\to(S)_{r,T(a,S)}^{a}\) and \(S\not\to(S)_{T(a,S),T(a,S)-1}^{a}\) for all \(r\in\mathbb{N}\).
**Lemma 6**.: _Definition 5 allows \(a=0\) in \(T(a,S)\). For any \(b\in\mathbb{N}\), we would then define \(\operatorname{COL}\colon\binom{S}{0}\to[b]\). Because \(\binom{S}{0}=\{\emptyset\}\) for all sets \(S\) and \(\binom{S^{\prime}}{0}=\{\emptyset\}\) for all sets \(S^{\prime}\subseteq S\), it is clear that \(T(0,S)=1\) for all sets \(S\)._
This paper focuses on \(T(a,\zeta)\) and \(T(a,\alpha)\) for ordinals \(\alpha<\omega^{\omega}\). We do not consider \(T(a,\eta)\), however, the interested reader should know the following:
**Theorem 7**.:
1. \(T(2,\eta)=2\)_. It was established by Sierpinski_ _[_13_]_ _that_ \(T(2,\eta)\geqslant 2\)_. Equality was first proven by Galvin, unpublished._
2. _For all_ \(a\in\mathbb{N}\)_,_ \(T(a,\eta)\) _exists. This was first proven by Laver_ _[_9_]__._
3. \(T(a,\eta)\) _is the_ \((2a-1)\)_-st tangent number, i.e.,_ \((2a-1)!\) _times the coefficient of_ \(x^{2a-1}\) _in the Taylor series for the tangent function, hence_ \[T(a,\eta)=\frac{B_{2a}(-1)^{a+1}4^{a}(4^{a}-1)}{2a}\] _where_ \(B_{2a}\) _is the_ \(2a\)_-th Bernoulli number; for example,_ \(a=2\) _gives_ \((-\tfrac{1}{30})(-1)(16)(15)/4=2\)_, matching item 1. This was proven by Devlin_ [2]_. See also Vuksanovic_ [14] _using the work of Halpern & Lauchli_ [5]_._
Note 8.: The notion of \(T(a,S)\) has been defined on structures other than orderings. We give an example. Let \(R=(\mathbb{N},E)\) be the Rado graph. \(T(a,R)\) is the least number \(t\) such that, for all \(b\), for all colorings \(\operatorname{COL}:\binom{\mathbb{N}}{a}\to[b]\), there exists \(H\subseteq\mathbb{N}\) where both \(|\operatorname{COL}\left(\binom{H}{a}\right)|\leqslant t\) and the graph induced by \(H\) is isomorphic to \(R\). The numbers \(T(a,R)\) are known but complicated; however, \(T(2,R)=2\). See Sauer [12], Laflamme et al. [7], and Larson [8]. See Dobrinen [3] for more references and other examples.
## 2 Summary of Results
Ramsey's Theorem on \(\mathbb{N}\) gives an infinite \(1\)-homogeneous subset of \(\mathbb{N}\). Theorem 9 restates Ramsey's Theorem in two equivalent ways.
**Theorem 9**.: _Let \(a\in\mathbb{N}\)._
1. \(T(a,\omega)=1\) _for all_ \(a\in\mathbb{N}\)_._
2. _Let_ \(b\geqslant 1\) _and_ \(\operatorname{COL}\colon\binom{\omega}{a}\to[b]\)_. Then there exists some_ \(H\approx\omega\) _such that_ \[\left|\operatorname{COL}\left(\binom{H}{a}\right)\right|=1.\]
What happens for other ordered sets? In this paper we do the following.
1. In Section 3 we show that \(T(a,\zeta)=2^{a}\). This can be obtained from the result of Masulovic & Sobot [10] that \(T(a,\omega+\omega)=2^{a}\). We give a more direct proof.
2. In Sections 4, 5, 6, 7, 8, and 9 we construct theorems to eventually determine \(T(a,\alpha)\) for all ordinals \(\alpha<\omega^{\omega}\). Masulovic & Sobot [10] previously showed for all ordinals \(\alpha\geqslant\omega^{\omega}\) that \(T(a,\alpha)=\infty\). They also showed for ordinals \(\alpha<\omega^{\omega}\) that \(T(a,\alpha)\) is finite; however, they did not obtain the exact values of \(T(a,\alpha)\).
## 3 Big Ramsey degrees of \(\zeta\)
As a warmup we first prove \(T(1,\zeta)=2\) and \(T(2,\zeta)=4\).
**Theorem 10**.: \(T(1,\zeta)=2\)_._
Proof.: Let \(b\in\mathbb{N}\).
We first prove \(T(1,\zeta)\leqslant 2\). Let \(\operatorname{COL}\colon\zeta\to[b]\). Let \(\operatorname{COL}^{\prime}\colon\omega\to[b]^{2}\) be defined by
\[\operatorname{COL}^{\prime}(x)=(\operatorname{COL}(-x),\operatorname{COL}(x)).\]
By Theorem 9, there exists an \(\omega\)-\(1\)-homogeneous set \(H^{\prime}\). Let the color of the homogeneous set be \((c_{1},c_{2})\). Consider the set \(H=-H^{\prime}+H^{\prime}\), which is order-equivalent to \(\zeta\). Let \(h\in H\). If \(h\in H^{\prime}\) then by definition of \(\operatorname{COL}^{\prime}\) and \(1\)-homogeneity of \(H^{\prime}\), \(\operatorname{COL}(h)=c_{2}\). Similarly, if \(h\in-H^{\prime}\), \(\operatorname{COL}(h)=c_{1}\). Thus \(H\) is \(\zeta\)-\(2\)-homogeneous. Because \(\operatorname{COL}\) was arbitrary, \(T(1,\zeta)\leqslant 2\).
We now prove \(T(1,\zeta)\geqslant 2\). Let \(\operatorname{COL}\colon\zeta\to[2]\) be the function that colors all nonnegative integers "\(1\)" and all negative integers "\(2\)". Since the nonnegative integers have no infinitely descending chain and the negative integers have no infinitely ascending chain, there is no \(\zeta\)-\(1\)-homogeneous subset of \(\zeta\) under \(\operatorname{COL}\). Therefore \(T(1,\zeta)\geqslant 2\), and with the previous result, \(T(1,\zeta)=2\).
**Theorem 11**.: \(T(2,\zeta)=4\)_._
Proof.: Let \(b\in\mathbb{N}\).
We first prove \(T(2,\zeta)\leqslant 4\). Let \(\operatorname{COL}\colon\binom{\zeta}{2}\to[b]\). Let \(\operatorname{COL}^{\prime}\colon\binom{\omega}{2}\to[b]^{4}\) be defined by
\[\operatorname{COL}^{\prime}(x,y)=(\operatorname{COL}(-x,-y),\operatorname{COL }(-x,y),\operatorname{COL}(x,-y),\operatorname{COL}(x,y)).\]
By Theorem 9 there exists an \(\omega\)-\(1\)-homogeneous set \(H^{\prime}\). Let the color of the homogeneous set be \((c_{1},c_{2},c_{3},c_{4})\). Label \(H^{\prime}\) as \(\{h_{0}<h_{1}<\cdots\}\).
Then consider the set
\[H=\{-h_{i}\colon i\text{ is even}\}+\{h_{i}\colon i\text{ is odd}\},\]
which is order-equivalent to \(\zeta\). Let \(h_{i},h_{j}\in H\). Let \(h_{i}=s_{i}n_{i}\) and \(h_{j}=s_{j}n_{j}\), where \(n_{i},n_{j}\geqslant 0\) and \(s_{i},s_{j}\in\{-1,1\}\). Then by definition of \(\operatorname{COL}^{\prime}\), \(\operatorname{COL}(h_{i},h_{j})\) is in \(\{c_{1},c_{2},c_{3},c_{4}\}\), depending on \(s_{i}\) and \(s_{j}\). As an aside, this method only works when \(n_{i}\) and \(n_{j}\) are guaranteed to be distinct, as we forced with our alternation of sign by the parity of index.
Since \(\operatorname{COL}(H)\subseteq\{c_{1},c_{2},c_{3},c_{4}\}\), \(H\) is \(\zeta\)-\(4\)-homogeneous. We could have partitioned \(H^{\prime}\) on something other than parity; any two disjoint infinite sets would work. Therefore \(T(2,\zeta)\leqslant 4\).
We now prove \(T(2,\zeta)\geqslant 4\). Let \(\operatorname{COL}\colon\binom{\zeta}{2}\to[4]\) be the coloring below, where for a pair with mixed signs we write the nonnegative element as \(x\) and the negative element as \(y\):
\[\operatorname{COL}(x,y)=\begin{cases}1&\text{if $x,y\geqslant 0$}\\ 2&\text{if $x\geqslant 0$, $y<0$, and $|x|\leqslant|y|$}\\ 3&\text{if $x\geqslant 0$, $y<0$, and $|x|>|y|$}\\ 4&\text{if $x<0$, $y<0$}\end{cases}\]
We leave it to the reader to show there is no \(\zeta\)-\(3\)-homogeneous set. The key idea of the proof is that if we suppose \(\operatorname{COL}\) does not output some color under some set, then that set cannot be order-equivalent to \(\zeta\). Therefore \(T(2,\zeta)\geqslant 4\), and with the previous result, \(T(2,\zeta)=4\).
**Theorem 12**.: _For all \(a\in\mathbb{N}\), \(T(a,\zeta)=2^{a}\)._
Proof.: Let \(b\in\mathbb{N}\).
We first prove \(T(a,\zeta)\leqslant 2^{a}\). Let \(\operatorname{COL}\colon\binom{\zeta}{a}\to[b]\) be an arbitrary coloring. Let \(\operatorname{COL}^{\prime}\colon\binom{\omega}{a}\to[b]^{(2^{a})}\) be defined by
\[\operatorname{COL}^{\prime}(x_{1},\dots,x_{a})=(\operatorname{COL}(x_{1}, \dots,x_{a}),\operatorname{COL}(-x_{1},x_{2},\dots,x_{a}),\dots,\operatorname{ COL}(-x_{1},\dots,-x_{a})).\]
Formally, the tuple contains \(\operatorname{COL}\)'s output on each tuple in the set
\[\{-x_{1},x_{1}\}\times\{-x_{2},x_{2}\}\times\dots\times\{-x_{a},x_{a}\}.\]
The output of \(\operatorname{COL}^{\prime}\) runs through all \(2^{a}\) ways to negate a subset of the \(x_{i}\). Note that \(\operatorname{COL}^{\prime}\) only depends on the color of elements of \(\binom{\zeta}{a}\) whose elements have pairwise distinct absolute values.
By Theorem 9 there exists some homogeneous set \(H^{\prime}\). Index \(H^{\prime}\) as \(\{h_{0}<h_{1}<\cdots\}\). Then the set
\[H=\{-h_{i}\colon i\text{ is even}\}+\{h_{i}\colon i\text{ is odd}\}\]
is \(\zeta\)-\(2^{a}\)-homogeneous. Because the elements of \(H\) have pairwise distinct absolute values, each \(a\)-subset of \(H\) was considered by \(\operatorname{COL}^{\prime}\). Therefore \(T(a,\zeta)\leqslant 2^{a}\).
We now prove \(T(a,\zeta)\geqslant 2^{a}\). We describe a coloring \(\operatorname{COL}\colon\binom{\zeta}{a}\to\{0,1\}^{a}\).
Let \(\{x_{1}<\cdots<x_{a}\}\in{\zeta\choose a}\). Define an ordering \(<^{*}\) as \(x<^{*}y\) if \(|x|<|y|\) or both \(|x|=|y|\) and \(x<y\) (we order by absolute values, and in case of ties, order the negative before the positive).
Let \((i_{1},\ldots,i_{a})\) be such that
\[x_{i_{1}}<^{*}x_{i_{2}}<^{*}\cdots<^{*}x_{i_{a}}.\]
Let \(s_{i_{j}}\) be \(1\) if \(x_{i_{j}}\geqslant 0\) and \(0\) if \(x_{i_{j}}<0\).
We define \(\operatorname{COL}(\{x_{1}<\cdots<x_{a}\})\) as \((s_{i_{1}},\ldots,s_{i_{a}})\).
We use \(2^{a}\) colors, as each color is an \(a\)-tuple of \(0\)s and \(1\)s. We leave it to the reader to show that there is no \(\zeta\)-\((2^{a}-1)\)-homogeneous set.
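This coloring is straightforward to implement; the following is a small sketch (the function name is ours) together with a worked example.

```python
def col(xs):
    # Order the a-subset by absolute value (negatives first on ties), then
    # record a 1 for each nonnegative and a 0 for each negative element.
    ordered = sorted(xs, key=lambda x: (abs(x), x))
    return tuple(1 if x >= 0 else 0 for x in ordered)

# For example, col({-5, 2, 7}) == (1, 0, 1): ordered by absolute value the
# elements are 2, -5, 7, with signs +, -, +.
```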
## 4 Big Ramsey degrees of finite multiples of \(\omega\)
As noted in Theorem 9, \(T(a,\omega)=1\). In this and later sections we examine limit ordinals larger than \(\omega\). For simplicity in stating results, we save the big Ramsey degrees of successor ordinals such as \(\omega+1,\omega+2,\ldots\) as well as some limit ordinals such as \(\omega^{2}+\omega\) for Section 9.
Our first result shows \(T(a,\omega\cdot k)=k^{a}\) for most \(a,k\); the expression \(k^{a}\) is undefined when \(a=k=0\). Lemma 6 shows that even though \(\omega\cdot 0=0=\emptyset\) under ordinal arithmetic, \(T(0,\omega\cdot 0)=1\).
**Theorem 13**.: _For \(a,k\in\mathbb{N}\) with at least one of \(a,k\) nonzero, \(T(a,\omega\cdot k)\leqslant k^{a}\)._
Proof.: Let \(a,b,k\in\mathbb{N}\). Let
\[\operatorname{COL}\colon{\omega\cdot k\choose a}\to[b]\]
be an arbitrary coloring. Let \(\operatorname{COL}^{\prime}\colon{\omega\choose a}\to[b]^{(k^{a})}\) be defined by
\[\operatorname{COL}^{\prime}(x_{1},x_{2},\ldots,x_{a})= (\operatorname{COL}(x_{1},\ldots,x_{a}),\operatorname{COL}(\omega+x _{1},x_{2},\ldots,x_{a}),\ldots,\] \[\operatorname{COL}(\omega\cdot(k-1)+x_{1},\ldots,\omega\cdot(k-1 )+x_{a}))\]
where \(\operatorname{COL}^{\prime}\) maps \(a\) elements of \(\omega\) to the \(\operatorname{COL}\) of each of the \(k^{a}\) ways to add one of \(\omega\cdot 0\) through \(\omega\cdot(k-1)\) to each of the \(a\) coordinates. Formally, its output tuple contains \(\operatorname{COL}\)'s assignment of each element of
\[\{x_{1},\omega+x_{1},\ldots,\omega\cdot(k-1)+x_{1}\}\times\cdots\times\{x_{a},\omega+x_{a},\ldots,\omega\cdot(k-1)+x_{a}\}.\]
Apply Theorem 9 with \(\operatorname{COL}^{\prime}\) to find some \(G\approx\omega\) such that
\[\left|\operatorname{COL}^{\prime}\left({G\choose a}\right)\right|=1.\]
Let the one color in \(\operatorname{COL}^{\prime}({G\choose a})\) be \(Y\). Note that \(Y\) is a tuple of length \(k^{a}\).
Index \(G\) as \(\{g_{0}<g_{1}<\cdots\}\) and let
\[H= \{\omega\cdot 0+g_{i}\colon i\equiv 0\mod k\}+\] \[\{\omega\cdot 1+g_{i}\colon i\equiv 1\mod k\}+\cdots+\] \[\{\omega\cdot(k-1)+g_{i}\colon i\equiv k-1\mod k\}.\]
Now \(H\approx\omega\cdot k\). Note that the use of the modulus was only to ensure that distinct copies of \(\omega\) within \(H\) use distinct elements of \(G\), so this is not the only way to define a useful \(H\). For the case of \(k=3\), we have
\[H=\{ 0, 3, 6,\ldots,\] \[\omega+1, \omega+4, \omega+7,\ldots,\] \[\omega\cdot 2+2,\omega\cdot 2+5,\omega\cdot 2+8,\ldots\}.\]
Then
\[\left|\operatorname{COL}\left(\binom{H}{a}\right)\right|\leqslant k^{a}:\]
for any selection of \(a\) elements from \(H\), its color was considered in \(\operatorname{COL}^{\prime}\) so it must be one of the \(k^{a}\) colors in \(Y\).
We now prove that \(T(a,\omega\cdot k)\) is bounded below by \(k^{a}\) by providing an example coloring. The coloring is inspired by Theorem 13, although we use it to prove a different bound. This duality will become clearer in later sections.
**Theorem 14**.: _For \(a,k\in\mathbb{N}\), \(T(a,\omega\cdot k)\geqslant k^{a}\)._
Proof.: We give a \(k^{a}\)-coloring of \(\binom{\omega\cdot k}{a}\) that has no \((k^{a}-1)\)-homogeneous \(H\approx\omega\cdot k\). We represent \(\omega\cdot k\) as
\[\omega\cdot k\approx X_{1}+\cdots+X_{k}\]
where each \(X_{i}\approx\omega\). If an element of \(\omega\cdot k\) is the \(x\)-th element of \(X_{i}\), we represent it as the ordered pair \((i,x)\).
Before defining the coloring in general we give an example with \(a=5\) and \(k=200\). We define the color of the element
\[e=\{(3,12),(50,2),(110,12),(110,7777),(117,3)\}\]
as follows. Order the ordered pairs by their second coordinates. If two elements have the same second coordinates, order by their first coordinate. We have
\[((50,2),(117,3),(3,12),(110,12),(110,7777)).\]
We define the color of the element as the sequence of first coordinates, so
\[\operatorname{COL}(e)=(50,117,3,110,110).\]
Since the set of possible colors is the set of all \(5\)-tuples of numbers from \(\{1,\ldots,200\}\), there are \(200^{5}\) possible colors.
In general, for any \(e=\{(i_{1},x_{1}),\ldots,(i_{a},x_{a})\}\), order the ordered pairs by their second coordinates, break ties with their first coordinates, and then \(\mathrm{COL}(e)\) is the sequence of first coordinates after ordering.
Notice that the number of colors is the number of \(a\)-tuples where each number is in \(\{1,\ldots,k\}\). Hence there are \(k^{a}\) colors. We leave it to the reader to show that there can be no \((\omega\cdot k)\)-\((k^{a}-1)\)-homogeneous \(H\). The key idea of the proof, much like the previous lower bounds in this paper, is supposing that one of the \(k^{a}\) colors is not output by \(\mathrm{COL}\) on some set, and using that to show that the set cannot be order-equivalent to \(\omega\cdot k\).
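The coloring itself is easy to compute; below is a small sketch (the function name is ours) reproducing the example above.

```python
def col(edge):
    # edge: a set of pairs (i, x), encoding the x-th element of the i-th copy
    # of omega. Sort by second coordinate, breaking ties by first coordinate,
    # then record the first coordinates.
    return tuple(i for i, _ in sorted(edge, key=lambda p: (p[1], p[0])))

# Reproducing the example above:
# col({(3, 12), (50, 2), (110, 12), (110, 7777), (117, 3)})
# == (50, 117, 3, 110, 110)
```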
**Theorem 15**.: _For \(a,k\in\mathbb{N}\), \(T(a,\omega\cdot k)=k^{a}\)._
Proof.: By Theorem 13, \(T(a,\omega\cdot k)\leqslant k^{a}\). By Theorem 14, \(T(a,\omega\cdot k)\geqslant k^{a}\). The result follows.
## 5 A big Ramsey degree of \(\boldsymbol{\omega^{2}}\)
This section provides a concrete example involving ordinals greater than \(\omega\). We use \(\omega^{2}\), which is the set of all ordinals \(\omega\cdot a+b\) with \(a,b\in\omega\). \(\omega^{2}\) is order-equivalent to the ordered-set-concatenation of countably infinite copies of \(\omega\), as visualized below:
\[\begin{array}{ccccc}0,&1,&2,&3,&\ldots\\ \omega+0,&\omega+1,&\omega+2,&\omega+3,&\ldots\\ \omega\cdot 2+0,&\omega\cdot 2+1,&\omega\cdot 2+2,&\omega\cdot 2+3,&\ldots\\ \omega\cdot 3+0,&\omega\cdot 3+1,&\omega\cdot 3+2,&\omega\cdot 3+3,&\ldots\\ \vdots\end{array}\]
Note that although we typeset \(\omega^{2}\) as a grid, it is linearly ordered with \(0<1<\ldots<\omega<\omega+1<\ldots\) and so on.
The method used to prove \(T(1,\omega^{2})=1\) is similar to those already seen in this paper. We begin with \(T(2,\omega^{2})\) before proving the general case.
**Theorem 16**.: \(T(2,\omega^{2})=4\)_._
Proof.: We first prove \(T\left(2,\omega^{2}\right)\leqslant 4\).
Let \(b\in\mathbb{N}\). Let
\[\mathrm{COL}\colon\binom{\omega^{2}}{2}\to[b]\]
be an arbitrary coloring. We define four functions \(f_{1},f_{2},f_{3},f_{4}\) from domain \(\binom{\omega}{4}\) to codomain \(\binom{\omega^{2}}{2}\) and then use them to define a coloring from \(\binom{\omega}{4}\) to \([b]^{4}\). In what follows, we index variables as \(x_{1}<x_{2}<x_{3}<x_{4}\).
\(f_{1}\colon\binom{\omega}{4}\rightarrow\binom{\omega^{2}}{2}\) is defined by
\[f_{1}(x_{1},x_{2},x_{3},x_{4})=\{\omega\cdot x_{1}+x_{2},\omega\cdot x_{3}+x_{4}\}.\]
\(f_{2}\colon\binom{\omega}{4}\rightarrow\binom{\omega^{2}}{2}\) is defined by
\[f_{2}(x_{1},x_{2},x_{3},x_{4})=\{\omega\cdot x_{1}+x_{3},\omega\cdot x_{2}+x_{4}\}.\]
\(f_{3}\colon\binom{\omega}{4}\rightarrow\binom{\omega^{2}}{2}\) is defined by
\[f_{3}(x_{1},x_{2},x_{3},x_{4})=\{\omega\cdot x_{1}+x_{4},\omega\cdot x_{2}+x_{3 }\}.\]
\(f_{4}\colon\binom{\omega}{4}\rightarrow\binom{\omega^{2}}{2}\) is defined by
\[f_{4}(x_{1},x_{2},x_{3},x_{4})=\{\omega\cdot x_{1}+x_{2},\omega\cdot x_{1}+x_{ 3}\}.\]
\(\operatorname{COL}^{\prime}\colon\binom{\omega}{4}\rightarrow[b]^{4}\) is defined by
\[\operatorname{COL}^{\prime}(X)=(\operatorname{COL}(f_{1}(X)),\operatorname{ COL}(f_{2}(X)),\operatorname{COL}(f_{3}(X)),\operatorname{COL}(f_{4}(X))).\]
Apply Theorem 9 on \(\operatorname{COL}^{\prime}\) to find some \(G\approx\omega\) where \(|\operatorname{COL}^{\prime}\left(\binom{G}{4}\right)|=1\). Let \((c_{1},c_{2},c_{3},c_{4})\) be the single color in \(\operatorname{COL}^{\prime}\left(\binom{G}{4}\right)\) and let \(Y=\{c_{1},c_{2},c_{3},c_{4}\}\). Enumerate \(G\) as \(G=\{x_{0},x_{1},\ldots\}\) with \(x_{0}<x_{1}<\cdots\). Let
\[H= \{\omega\cdot x_{1}+x_{2},\omega\cdot x_{1}+x_{6},\omega\cdot x_{1 }+x_{10},\ldots\}\] \[+ \{\omega\cdot x_{3}+x_{4},\omega\cdot x_{3}+x_{12},\omega\cdot x_ {3}+x_{20},\ldots\}\] \[+ \{\omega\cdot x_{5}+x_{8},\omega\cdot x_{5}+x_{24},\omega\cdot x_ {5}+x_{40},\ldots\}\] \[\vdots\]
Formally,
\[H=X_{1}+X_{2}+\cdots\]
where
\[X_{i}=\{\omega\cdot x_{2i-1}+x_{j}\colon j=2^{i}+k2^{i+1},k\in\mathbb{N}\}.\]
Note that distinct \(X_{i}\) use pairwise disjoint sets of indices into \(G\).
Then \(H\approx\omega^{2}\), as it is the concatenation of countably many sets, each order-equivalent to \(\omega\). Note that for every \(\omega\cdot x_{i}+x_{j}\in H\) we have \(x_{i}<x_{j}\).
For any edge \(\{\omega\cdot y_{1}+y_{2},\omega\cdot y_{3}+y_{4}\}\in\binom{H}{2}\) with \(\omega\cdot y_{1}+y_{2}<\omega\cdot y_{3}+y_{4}\), either \(y_{1}\neq y_{3}\) or \(y_{1}=y_{3}\).
* If \(y_{1}\neq y_{3}\), then \(y_{1}<y_{3}\) by the ordering of the two elements and \(y_{2}\neq y_{4}\) by the construction of \(H\). We also have \(y_{1}<y_{2}\), \(y_{1}<y_{4}\), and \(y_{3}<y_{4}\) by the construction of \(H\). Then either \(y_{1}<y_{2}<y_{3}<y_{4}\), \(y_{1}<y_{3}<y_{2}<y_{4}\), or \(y_{1}<y_{3}<y_{4}<y_{2}\). In each of the three cases, \(\operatorname{COL}(f_{1}(y_{1},y_{2},y_{3},y_{4}))\in Y\), \(\operatorname{COL}(f_{2}(y_{1},y_{3},y_{2},y_{4}))\in Y\), and \(\operatorname{COL}(f_{3}(y_{1},y_{3},y_{4},y_{2}))\in Y\) respectively, so \(\operatorname{COL}(\{\omega\cdot y_{1}+y_{2},\omega\cdot y_{3}+y_{4}\})\in Y\).
* If \(y_{1}=y_{3}\), then \(y_{2}<y_{4}\) by the ordering of the elements and so \(y_{1}=y_{3}<y_{2}<y_{4}\) by the construction of \(H\). Because \(\operatorname{COL}(f_{4}(y_{1},y_{2},y_{4},z))\in Y\) for any \(z\in G\) with \(z>y_{4}\) (note that \(f_{4}\)'s output does not depend on its fourth argument), \(\operatorname{COL}(\{\omega\cdot y_{1}+y_{2},\omega\cdot y_{3}+y_{4}\})\in Y\).
In all cases, \(\operatorname{COL}(\{\omega\cdot y_{1}+y_{2},\omega\cdot y_{3}+y_{4}\})\in Y\) so
\[\operatorname{COL}\left(\binom{H}{2}\right)\subseteq Y\]
with \(|Y|\leqslant 4\). Because \(H\approx\omega^{2}\) and \(\operatorname{COL}\) was arbitrary, \(T\left(2,\omega^{2}\right)\leqslant 4\).
We now prove \(T\left(2,\omega^{2}\right)\geqslant 4\). Let \(\operatorname{COL}\colon\binom{\omega^{2}}{2}\to[4]\) with
\[\operatorname{COL}(\omega\cdot x_{1}+x_{2},\omega\cdot x_{3}+x_{4})=\begin{cases} 1&x_{1}<x_{2}<x_{3}<x_{4}\\ 2&x_{1}<x_{3}<x_{2}<x_{4}\\ 3&x_{1}<x_{3}<x_{4}<x_{2}\\ 4&\text{otherwise}\end{cases}\]
where \(\omega\cdot x_{1}+x_{2}<\omega\cdot x_{3}+x_{4}\). Let \(H\subseteq\omega^{2}\) be any set such that \(H\approx\omega^{2}\). We show that \(|\operatorname{COL}(\binom{H}{2}))|=4\). Because \(H\approx\omega^{2}\), we have \(H=H_{1}+H_{2}+\cdots\) where each \(H_{i}\approx\omega\). Let \(y_{i}\) and \(z_{ij}\) be such that every element of \(H_{i}\) is of the form \(\omega\cdot y_{i}+z_{ij}\). Note that
\[y_{1}<y_{2}<\cdots\]
and for all \(i\),
\[z_{i1}<z_{i2}<\cdots.\]
We now show each color is output by \(\operatorname{COL}\) on \(H\).
* Color 1: Starting with any \(y_{i}\), find some \(z_{ij}\) where \(z_{ij}>y_{i}\). This is guaranteed, as \(y_{i}\) is finite and the \(z_{ij}\) are infinitely increasing. Then find some \(y_{k}\) where \(y_{k}>z_{ij}\); again guaranteed because \(z_{ij}\) is finite and the \(y_{i}\) are infinitely increasing. Finally, find a \(z_{k\ell}>y_{k}\) guaranteed by similar means. Then \[\operatorname{COL}(\omega\cdot y_{i}+z_{ij},\omega\cdot y_{k}+z_{k\ell})=1.\]
* Colors 2 and 3 are guaranteed to exist by arguments similar to color 1's.
* Color 4. The case of \(x_{1}=x_{3}<x_{2}<x_{4}\) falls into the category of _otherwise_. With this in mind, we can start with some \(y_{i}\), and then find a \(z_{ij}>y_{i}\). Then, we only need to find a \(z_{i\ell}>z_{ij}\); this is guaranteed because \(z_{ij}\) is finite. Then \[\operatorname{COL}(\omega\cdot y_{i}+z_{ij},\omega\cdot y_{i}+z_{i\ell})=4.\]
While it was simple to find edges of each of these four colors under this particular coloring, the proof that \(T(2,\omega^{2})\leqslant 4\) shows that no coloring, even one with more than four colors, can guarantee that more than four colors appear on every subset order-equivalent to \(\omega^{2}\).
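For reference, this lower-bound coloring can be sketched as follows, encoding \(\omega\cdot x_{1}+x_{2}\) as the pair \((x_{1},x_{2})\); the function name is ours.

```python
def col(p, q):
    # p, q: points of omega^2 as pairs (x1, x2), ordered so that p < q.
    (x1, x2), (x3, x4) = p, q
    if x1 < x2 < x3 < x4:
        return 1
    if x1 < x3 < x2 < x4:
        return 2
    if x1 < x3 < x4 < x2:
        return 3
    return 4

# col((0, 1), (2, 3)) == 1, while col((0, 1), (0, 2)) == 4 (same block).
```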
## 6 Coloring Rules
We introduce a concept called _coloring rules_ (hereafter CRs) to prove general results about \(T(a,\omega^{d})\). The concept behind CRs is built on the ideas of Blass et al. [1]. We motivate the concept by examining the proof of Theorem 16.
The proof of Theorem 16 used four functions \(f_{1},f_{2},f_{3}\), and \(f_{4}\). These functions were chosen to cover \(H\) in a way where the color of every edge in \(H\) was output by one of \(f_{1},f_{2},f_{3}\), or \(f_{4}\). We note a function that was _not_ used:
\(f\colon{\binom{\omega}{4}}\to{\omega^{2}\choose 2}\) defined by
\[f(x_{1},x_{2},x_{3},x_{4})=\{\omega\cdot x_{1}+x_{3},\omega\cdot x_{2}+x_{3}\}.\]
We didn't use \(f\) in the upper bound proof because \(f\) didn't cover _any_ edges in \(H\): we constructed \(H\) in a way where distinct copies of \(\omega\) had distinct finite coefficients. Since \(x_{1}\neq x_{2}\), the elements \(\omega\cdot x_{1}+x_{3}\) and \(\omega\cdot x_{2}+x_{3}\) couldn't both be from \(H\), no matter the values of \(x_{1},x_{2}\), and \(x_{3}\).
We could have designed \(H\) differently to require more than \(4\) functions to cover it, but that would have weakened the upper bound result of Theorem 16. We define a notion of colorings that only \(f_{1},f_{2},f_{3}\), and \(f_{4}\) qualify. We also show how to count these colorings, and how these colorings are linked to big Ramsey degrees.
**Definition 17**.: We now define CRs (coloring rules) rigorously. We impose a structure on edges and list criteria that CRs must satisfy. For integers \(a,d,k\geqslant 0\), an edge \(e=\{p_{1},\ldots,p_{a}\}\) in \(\binom{\omega^{d}\cdot k}{a}\) consists of \(a\) points which are elements of \(\omega^{d}\cdot k\). Each element \(p_{q}\) is equal to
\[\omega^{d}\cdot y_{q}+\omega^{d-1}\cdot x_{q,d-1}+\omega^{d-2}\cdot x_{q,d-2}+ \cdots+\omega^{1}\cdot x_{q,1}+x_{q,0},\]
where \(x_{q,n}\geqslant 0\) and \(0\leqslant y_{q}<k\).
Thus, any edge \(e\) is defined by the \(a\) values of the \(y_{q}\)'s and the \(a\cdot d\) values of the \(x_{q,n}\)'s.
A _CR (coloring rule)_ on \(\binom{\omega^{d}\cdot k}{a}\) is a pair \((\mathcal{Y},\preceq_{\mathcal{X}})\) of constraints on these values \(y_{q}\) and \(x_{q,n}\) satisfying certain properties we enumerate below. \(\mathcal{Y}\) is an assignment of the values for the \(y_{q}\); formally, it is a map \(\mathcal{Y}:[a]\to\{0,\ldots,k-1\}\) from indices of the \(y_{q}\) to the possible values for the \(y_{q}\). Having \(\mathcal{Y}(q)=v\) means we constrain \(y_{q}=v\). \(\preceq_{\mathcal{X}}\) is a total preorder (each pair of elements are compared, and \(\preceq_{\mathcal{X}}\) is reflexive and transitive) on the indices of the \(x_{q,n}\). Having \((q_{1},n_{1})\preceq_{\mathcal{X}}(q_{2},n_{2})\) means we constrain \(x_{q_{1},n_{1}}\leqslant x_{q_{2},n_{2}}\).
For some \(\mathcal{Y}\), we often denote it by an ordered list of clauses:
\[y_{1}=\mathcal{Y}(1),y_{2}=\mathcal{Y}(2),\ldots,y_{a}=\mathcal{Y}(a).\]
For some preorder \(\preceq_{\mathcal{X}}\), we often denote it by some permutation of the characters \(x_{1,0},\ldots,x_{a,d-1}\) interspersed by either \(<\) or \(=\). We write \(x_{q_{1},n_{1}}<x_{q_{2},n_{2}}\) to mean both \((q_{1},n_{1})\preceq_{\mathcal{X}}(q_{2},n_{2})\) and \((q_{2},n_{2})\not\preceq_{\mathcal{X}}(q_{1},n_{1})\), and we write \(x_{q_{1},n_{1}}=x_{q_{2},n_{2}}\) to mean both \((q_{1},n_{1})\preceq_{\mathcal{X}}(q_{2},n_{2})\) and \((q_{2},n_{2})\preceq_{\mathcal{X}}(q_{1},n_{1})\). Note that \(x_{a}=x_{b}<x_{c}\) and \(x_{b}=x_{a}<x_{c}\) are two representations of the same preorder, so this notation is not unique. One example of such a representation is
\[x_{11}=x_{21}<x_{10}<x_{20}.\]
To be a CR the following must hold:
1. If \(d\geqslant 1\), \(x_{i0}<x_{j0}\) for all \(i<j\). Otherwise when \(d=0\), \(y_{i}<y_{j}\) for all \(i<j\). (The element indices are ordered by their lowest-exponent variable.)
2. \(y_{i}\neq y_{j}\implies x_{in}\neq x_{jn}\) for all \(n\). (Elements that have different \(y\) values have all different \(x\) values.)
3. \(x_{q,m}<x_{q,n}\) for all \(m>n\). (The high-exponent variables of each element are strictly less than the low-exponent variables.)
4. \(x_{i,m}=x_{j,n}\implies m=n\). (Only variables with the same exponent can be equal.)
5. \(x_{in}\neq x_{jn}\implies x_{i,n-1}\neq x_{j,n-1}\) for all \(n>0\). (Elements that differ in a high-exponent variable differ in all lower-exponent variables.)
An example of a CR for \(\binom{\omega^{2}}{2}\) is
\[y_{1}=0,\ y_{2}=0,\ x_{11}=x_{21}<x_{10}<x_{20}.\]
Note that because \(k=1\) in the example, we must have \(y_{q}=0\) for every \(q\).
Two CRs \((\mathcal{Y},\preceq_{\mathcal{X}})\) and \((\mathcal{Y}^{\prime},\preceq^{\prime}_{\mathcal{X}})\) are equivalent if and only if \(\mathcal{Y}=\mathcal{Y}^{\prime}\) and \(\preceq_{\mathcal{X}}=\preceq^{\prime}_{\mathcal{X}}\).
**Definition 18**.: The _size_ of a CR is how many equivalence classes its \(x_{qn}\) form under \(\preceq_{\mathcal{X}}\): for example, \(x_{11}=x_{21}<x_{10}<x_{20}\) would have size \(p=3\) regardless of the \(y_{q}\). Clearly a CR's size \(p\) can be no larger than \(a\cdot d\), the number of \(x\) variables an edge of \(\binom{\omega^{d}\cdot k}{a}\) has.
**Definition 19**.:
1. \(P_{p}\left(a,\omega^{d}\cdot k\right)\) is the number of CRs of size \(p\) for \(\binom{\omega^{d}\cdot k}{a}\).
2. \(P\left(a,\omega^{d}\cdot k\right)\) is the total number of CRs for \(\binom{\omega^{d}\cdot k}{a}\), of any size. It can be calculated as \[\sum_{p=0}^{a\cdot d}P_{p}(a,\omega^{d}\cdot k).\] We will show \(T(a,\omega^{d}\cdot k)=P(a,\omega^{d}\cdot k)\).
**Definition 20**.: An edge
\[\{\omega^{d}\cdot y_{q}+\omega^{d-1}\cdot x_{q,d-1}+\omega^{d-2}\cdot x_{q,d- 2}+\cdots+\omega^{1}\cdot x_{q,1}+x_{q,0}\colon 1\leqslant q\leqslant a\}\]
_satisfies_ the CR \((\mathcal{Y},\preceq_{\mathcal{X}})\) if \(\mathcal{Y}(q)=y_{q}\) for every \(1\leqslant q\leqslant a\) and if \((q_{1},n_{1})\preceq_{\mathcal{X}}(q_{2},n_{2})\iff x_{q_{1},n_{1}}\leqslant x_{q_{2},n_{2}}\). Note that some edges might not satisfy any CR.
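To make Definition 20 concrete, the following sketch (ours, for illustration; the encoding of edges and of \(\preceq_{\mathcal{X}}\) as a Python predicate is an assumption, not notation from this paper) checks whether an edge satisfies a CR:

```python
# Assumed encoding (ours): an edge is a list of pairs (y_q, [x_{q,d-1}, ..., x_{q,0}]),
# one pair per element q = 1, ..., a; Y maps q to the required y_q value;
# preceq((q1, n1), (q2, n2)) encodes the total preorder on the x-indices.
def satisfies(edge, Y, preceq):
    d = len(edge[0][1])
    if any(y != Y[q + 1] for q, (y, _) in enumerate(edge)):
        return False
    # value of x_{q,n}: each x-list runs from exponent d-1 down to exponent 0
    x = {(q + 1, d - 1 - i): xs[i]
         for q, (_, xs) in enumerate(edge) for i in range(d)}
    # the preorder must agree with <= on the stored values, in both directions
    return all(preceq(i, j) == (x[i] <= x[j]) for i in x for j in x)
```

For instance, with \(k=1\), the edge \(\{\omega\cdot 1+2,\ \omega\cdot 1+3\}\), encoded as `[(0, [1, 2]), (0, [1, 3])]`, satisfies the example CR \(x_{11}=x_{21}<x_{10}<x_{20}\) above.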
## 7 Big Ramsey degrees of \(\omega^{d}\)
This section is devoted to the case where \(k=1\) in \(\binom{\omega^{d}\cdot k}{a}\). When \(k=1\), each \(y_{q}\) in a CR is forced to be \(0\). Then all \(y_{q}\) values are the same, so criterion 2 of Definition 17 is vacuously satisfied. Our proofs in this section therefore focus only on how \(\preceq_{\mathcal{X}}\) arranges the \(x_{qn}\) variables, and when values for \(y_{q}\) are not specified, they are assumed to all be \(0\) by default.
This section's aim is to show equality between big Ramsey degrees and numbers of CRs. We start with a recurrence that counts CRs.
**Lemma 21**.: _For integers \(a,d\geqslant 0\),_
\[P_{p}\left(a,\omega^{d}\right)=\begin{cases}0&d=0\wedge a\geqslant 2\\ 1&a=0\wedge p=0\\ 0&a=0\wedge p\geqslant 1\\ 1&d=0\wedge a=1\wedge p=0\\ 0&d=0\wedge a=1\wedge p\geqslant 1\\ 1&d=1\wedge a\geqslant 1\wedge a=p\\ 0&d=1\wedge a\geqslant 1\wedge a\neq p\\ \sum\limits_{j=1}^{a}\sum\limits_{i=0}^{p-1}\binom{p-1}{i}P_{i}\left(j,\omega^ {d-1}\right)P_{p-1-i}\left(a-j,\omega^{d}\right)&d\geqslant 2\wedge a \geqslant 1\end{cases}\]
Proof.: First suppose \(a\geqslant 2\) and \(d=0\). As argued at the beginning of this section, \(y_{q}=0\) for all \(q\). But by criterion 1 of Definition 17, since \(d=0\) we need \(y_{1}<y_{2}\), so no CRs are possible regardless of size \(p\). This proves the first case of the result.
Suppose \(a=0\). Since there are no \(y_{q}\) variables, and since \(a\cdot d=0\), there are no \(x\) variables to permute. Therefore the criteria are vacuously satisfied. There is only one CR and it has size \(p=0\). This proves the second and third cases of the result.
When both \(d=0\) and \(a\leqslant 1\), criterion 1 of Definition 17 is vacuously satisfied with either no \(y_{q}\) or \(y_{1}=0\). Again, because \(a\cdot d=0\), there are no \(x_{qn}\) variables to permute. Thus there is only one CR with size \(p=0\), which proves the fourth and fifth cases of the result.
Now suppose \(a\geqslant 1\) and \(d=1\). To ensure criterion 1 of Definition 17, the \(a\) variables \(x_{q,0}\) can only form the single CR \(x_{1,0}<x_{2,0}<\cdots<x_{a,0}\), which has size \(a\), so \(P_{a}\left(a,\omega^{d}\right)=1\) and \(P_{p}\left(a,\omega^{d}\right)=0\) for \(p\neq a\). This proves the sixth and seventh cases of the result.
Finally, consider \(a\geqslant 1,d\geqslant 2\). We prove this final case by showing that the process described below creates all possible CRs of an expression.
For arbitrary integers \(a\geqslant 1\), \(d\geqslant 2\), and \(p\geqslant 0\), let \(1\leqslant j\leqslant a\) and \(0\leqslant i\leqslant p-1\) be integers. As we proceed through the process, we work with an example of \(a=4\), \(d=5\), \(p=13\), \(j=2\), and \(i=5\).
We create
\[\binom{p-1}{i}P_{i}\left(j,\omega^{d-1}\right)P_{p-1-i}\left(a-j,\omega^{d}\right)\]
CRs of size \(p\) by combining \(P_{i}\left(j,\omega^{d-1}\right)\) CRs with size \(i\) and \(P_{p-1-i}\left(a-j,\omega^{d}\right)\) CRs with size \(p-1-i\), with \(\binom{p-1}{i}\) CRs derived from each pair.
Let \(\tau_{1}\) represent one of the \(P_{i}\left(j,\omega^{d-1}\right)\) CRs of \(\binom{\omega^{d-1}}{j}\) with size \(i\), and \(\tau_{2}\) represent one of the \(P_{p-1-i}\left(a-j,\omega^{d}\right)\) CRs of \(\binom{\omega^{d}}{a-j}\) with size \(p-1-i\). In our example, let
\[\tau_{1} \colon x_{1,3}=x_{2,3}<x_{1,2}=x_{2,2}<x_{1,1}=x_{2,1}<x_{1,0}<x_{2,0}\] \[\tau_{2} \colon x_{1,4}=x_{2,4}<x_{1,3}=x_{2,3}<x_{1,2}=x_{2,2}<x_{2,1}<x_{1,1}<x_{1,0}<x_{2,0}.\]
Then we can combine each \(\tau_{1}\) and \(\tau_{2}\) to form \(\binom{p-1}{i}\) unique new CRs of size \(p\): Reindex each variable \(x_{q,n}\) of \(\tau_{2}\) to \(x_{q+j,n}\), and interleave the equivalence classes of the two CRs, preserving each CR's original ordering of its own equivalence classes; there are \(\binom{p-1}{i}\) ways to do this. In our example, after reindexing \(\tau_{2}\) we have
\[\tau_{1} \colon x_{1,3}=x_{2,3}<x_{1,2}=x_{2,2}<x_{1,1}=x_{2,1}<x_{1,0}<x_{ 2,0}\] \[\tau_{2} \colon x_{3,4}=x_{4,4}<x_{3,3}=x_{4,3}<x_{3,2}=x_{4,2}<x_{4,1}<x_{ 3,1}<x_{3,0}<x_{4,0}\]
and one of the \(\binom{12}{5}\) permutations is
\[x_{3,4}=x_{4,4}<x_{1,3}=x_{2,3}<x_{1,2}=x_{2,2}<x_{3,3}=x_{4,3}<x _{3,2}=x_{4,2}\] \[<x_{1,1}=x_{2,1}<x_{4,1}<x_{1,0}<x_{3,1}<x_{3,0}<x_{2,0}<x_{4,0}.\]
This new CR may break criterion 1 of Definition 17; for each \(1\leqslant q\leqslant a\), reindex each \(x_{q,n}\) according to where \(x_{q,0}\) is in the ordering of all \(x_{i,0}\). In our example, we have \(x_{1,0}<x_{3,0}<x_{2,0}<x_{4,0}\); after swapping indices 2 and 3 to enforce criterion 1 we have
\[x_{2,4}=x_{4,4}<x_{1,3}=x_{3,3}<x_{1,2}=x_{3,2}<x_{2,3}=x_{4,3}<x _{2,2}=x_{4,2}\] \[<x_{1,1}=x_{3,1}<x_{4,1}<x_{1,0}<x_{2,1}<x_{2,0}<x_{3,0}<x_{4,0}.\]
There are now \(d\cdot(a-j)+(d-1)\cdot j=a\cdot d-j\) variables in the CR. There are \(j\) variables of the form \(x_{q_{i},d-1}\) for \(1\leqslant i\leqslant j\) that are not in the CR yet; insert one equivalence class \(x_{q_{1},d-1}=x_{q_{2},d-1}=\cdots=x_{q_{j},d-1}\) at the front of the new CR, bringing its size to \(p\). We insert \(x_{1,4}=x_{3,4}\) in our example to get
\[x_{1,4}=x_{3,4}<x_{2,4}=x_{4,4}<x_{1,3}=x_{3,3}<x_{1,2}=x_{3,2}<x _{2,3}=x_{4,3}<x_{2,2}=x_{4,2}\] \[<x_{1,1}=x_{3,1}<x_{4,1}<x_{1,0}<x_{2,1}<x_{2,0}<x_{3,0}<x_{4,0}.\]
Each CR is unique to the \(\tau_{1}\) and \(\tau_{2}\) used to create it because the process is invertible: we can remove the leading equivalence class, separate the remaining variables into \(\tau_{1}\) and \(\tau_{2}\) according to whether their indices appear in the leading equivalence class, and reindex. Here, \(\tau_{1}\) corresponds to indices 1 and 3 (not including the leading equivalence class) and is **bolded**, and \(\tau_{2}\) corresponds to indices 2 and 4 and is underlined.
\[x_{1,4}=x_{3,4}<\underline{x_{2,4}=x_{4,4}}<\mathbf{x_{1,3}=x_{3,3}}<\mathbf{x_{1,2}=x_{3,2}}<\underline{x_{2,3}=x_{4,3}}<\underline{x_{2,2}=x_{4,2}}\\ <\mathbf{x_{1,1}=x_{3,1}}<\underline{x_{4,1}}<\mathbf{x_{1,0}}<\underline{x_{2,1}}<\underline{x_{2,0}}<\mathbf{x_{3,0}}<\underline{x_{4,0}}.\]
We now show that each CR created by this process has the properties described by Definition 17. Because the high-exponent equivalence class \(x_{q_{1},d-1}=x_{q_{2},d-1}=\dots=x_{q_{j},d-1}\) was added at the start of the CR, the high-exponent variables of each element remain strictly less than its low-exponent variables, so criterion 3 holds. Criterion 1 is satisfied by reindexing the variables. The remaining criteria are satisfied because \(\tau_{1}\) and \(\tau_{2}\) satisfied them and their internal orders were preserved in interleaving the equivalence classes. Therefore this process does not overcount CRs.
We next show that every CR of \(\binom{\omega^{d}}{a}\) is counted by this process. Each can be mapped to some \(\tau_{1}\) and \(\tau_{2}\) that create it by a similar argument to proving that the process creates unique CRs. Every CR of \(\binom{\omega^{d}}{a}\) must have a leading equivalence class of \(x_{q_{1},d-1}=x_{q_{2},d-1}=\dots=x_{q_{j},d-1}\) to satisfy Definition 17 (the equivalence class might only contain one variable); taking only the variables \(x_{q,n}\) with indices appearing in that equivalence class (but not those variables in the equivalence class itself) forms \(\tau_{1}\), a CR for \(\binom{\omega^{d-1}}{j}\). The variables with \(q\) indices not in the equivalence class form \(\tau_{2}\), a CR for \(\binom{\omega^{d}}{a-j}\). The original CR of \(\binom{\omega^{d}}{a}\) is counted by interleaving \(\tau_{1}\) with \(\tau_{2}\) and inserting the leading equivalence class of \(x_{q_{1},d-1}=x_{q_{2},d-1}=\dots=x_{q_{j},d-1}\). Therefore the final case of the result holds.
For more information, see the OEIS [11], where \(P(a,\omega^{2})\) is sequence A000311 and \(P(2,\omega^{d})\) is A079309. Other sequences such as \(P(a,\omega^{3})\) are not contained in the OEIS at the time this paper was produced, although we can compute them. Values for small \(a,d\) are tabulated in the appendix; see Table 1.
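The recurrence of Lemma 21 lends itself to memoized evaluation. The following sketch (our illustration; the function names are ours, not the paper's) computes \(P_{p}(a,\omega^{d})\) and \(P(a,\omega^{d})\):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def P_p(a, d, p):
    """Number of CRs of size p for binom(omega^d, a), following Lemma 21."""
    if a == 0:
        return 1 if p == 0 else 0
    if d == 0:                       # now a >= 1
        return 1 if (a == 1 and p == 0) else 0
    if d == 1:
        return 1 if p == a else 0
    # d >= 2 and a >= 1: the final case of the recurrence
    return sum(comb(p - 1, i) * P_p(j, d - 1, i) * P_p(a - j, d, p - 1 - i)
               for j in range(1, a + 1) for i in range(p))

def P(a, d):
    """Total count P(a, omega^d), summing over all possible sizes."""
    return sum(P_p(a, d, p) for p in range(a * d + 1))

assert P(2, 2) == 4                  # matches T(2, omega^2) = 4 from Theorem 16
```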
### \(\boldsymbol{T(a,\omega^{d})\leqslant P(a,\omega^{d})}\)
We use the following lemma to show that CRs bound big Ramsey degrees from above.
**Lemma 22**.: _For integers \(a,d\geqslant 0\) and \(G\approx\omega\), there exists some \(H\subseteq\omega^{d}\) with \(H\approx\omega^{d}\) where for all \(e\in\binom{H}{a}\), \(e\) satisfies a CR of \(\binom{\omega^{d}}{a}\) and each coefficient in \(e\) is contained in \(G\)._
Proof.: Because \(G\approx\omega\), we can index it \(x_{0},x_{1},x_{2},\dots\) with \(x_{0}<x_{1}<x_{2}<\dots\). We proceed by induction on \(d\). When \(d=0\), \(\omega^{0}\approx 1\) and so \(H=\{x_{0}\}\) suffices. When \(d=1\), \(\omega^{1}\approx\omega\) so \(H=G\) suffices.
For \(d\geqslant 2\), partition \(G\setminus\{x_{0}\}\) into infinite sets order-equivalent to \(\omega\):
\[X_{0} =\{x_{1},x_{3},x_{5},\dots\}\] \[X_{1} =\{x_{2},x_{6},x_{10},\dots\}\] \[X_{2} =\{x_{4},x_{12},x_{20},\dots\}\] \[X_{3} =\{x_{8},x_{24},x_{40},\dots\}\] \[\vdots\]
Formally,
\[X_{i}=\{x_{j}\colon j=2^{i}+k2^{i+1},k\in\mathbb{N}\}.\]
Apply the inductive hypothesis on \(X_{i}\) for all \(i\geqslant 1\), yielding \(S_{i}\approx\omega^{d-1}\) for all \(i\geqslant 1\). Then for all \(i\geqslant 1\), for all \(e\in\binom{S_{i}}{a}\), \(e\) satisfies a CR of \(\binom{\omega^{d-1}}{a}\) and each coefficient of \(e\) is contained in \(X_{i}\). For all \(i\), let \(S_{i}=\{y_{i,0},y_{i,1},\ldots\}\). Then let
\[H= \quad\{\omega^{d-1}x_{1}+y_{1,0},\omega^{d-1}x_{1}+y_{1,1}, \omega^{d-1}x_{1}+y_{1,2},\ldots\}\] \[+\{\omega^{d-1}x_{3}+y_{2,0},\omega^{d-1}x_{3}+y_{2,1},\omega^{d-1 }x_{3}+y_{2,2},\ldots\}\] \[+\{\omega^{d-1}x_{5}+y_{3,0},\omega^{d-1}x_{5}+y_{3,1},\omega^{d-1 }x_{5}+y_{3,2},\ldots\}\] \[+\ \cdots.\]
Then \(H\) is the concatenation of \(\omega\) ordered sets, each order-equivalent to \(\omega^{d-1}\). Hence \(H\approx\omega^{d}\). For any edge \(e\) in \(\binom{H}{a}\), index its variables to satisfy criterion 1 of Definition 17 (this is possible because all low-exponent coefficients are distinct in \(H\)). Criterion 3 is satisfied inductively for variables with exponents lower than \(d-1\). Because \(\min X_{i}=x_{2^{i}}\) for all \(i\) and \(2i-1<2^{i}\) for all integers \(i\geqslant 1\), \(x_{2i-1}<x\) for all \(x\in X_{i}\). So criterion 3 is satisfied by \(e\). Because \(X_{0}\) is disjoint from all \(X_{i}\) with \(i\geqslant 1\), criterion 4 is satisfied for variables with exponent \(d-1\) and by induction, it is satisfied for lower exponents. Because \(X_{i}\) is disjoint from \(X_{j}\) for all \(i\neq j\), elements that differ in variables with exponent \(d-1\) differ in all lower-exponent variables. The induction together with the previous statement satisfies criterion 5. Therefore \(e\) satisfies a CR of \(\binom{\omega^{d}}{a}\). The coefficients in \(e\) are contained in \(G\) by the construction of \(H\) from \(G\).
**Theorem 23**.: _For integers \(a,d\geqslant 0\), \(T\left(a,\omega^{d}\right)\leqslant P\left(a,\omega^{d}\right)\)._
Proof.: Let \(E=\binom{\omega^{d}}{a}\) and
\[\text{COL}\colon E\rightarrow[b]\]
be an arbitrary coloring of \(E\) for some \(b\in\mathbb{N}\).
Enumerate the CRs of \(E\) as \(\tau_{1}\) to \(\tau_{P(a,\omega^{d})}\). The maximum size of any CR of \(E\) is \(a\cdot d\). For each \(\tau_{i}\), let
\[f_{i}\colon\binom{\omega}{a\cdot d}\to[b]\]
where if \(\tau_{i}\) has size \(p\), \(f_{i}\) maps \(X\) to \(\operatorname{COL}(e)\) for the unique \(e\in E\) where \(e\) satisfies \(\tau_{i}\) and the \(p\) equivalence classes of \(e\) are made up of the \(p\) least elements of \(X\). For example, one CR of \(\binom{\omega^{2}}{2}\) is
\[x_{11}=x_{21}<x_{10}<x_{20}.\]
The corresponding \(f_{i}\) would be \(f_{i}\colon\binom{\omega}{4}\to[b]\) with
\[f_{i}(x_{1},x_{2},x_{3},x_{4})=\text{COL}(\{\omega\cdot x_{1}+x_{2},\omega \cdot x_{1}+x_{3}\})\]
where \(x_{1}<x_{2}<x_{3}<x_{4}\). Note that \(f_{i}\) does not depend on \(x_{4}\); this is because the example CR has size 3, but the largest CR for \(\binom{\omega^{2}}{2}\) has size 4.
Then, define \(\mathrm{COL}^{\prime}\colon\binom{\omega}{a\cdot d}\to[b]^{P(a,\omega^{d})}\) with
\[\mathrm{COL}^{\prime}(X)=(f_{1}(X),f_{2}(X),\ldots,f_{P(a,\omega^{d})}(X))\]
and apply Theorem 9 to find some \(G\approx\omega\) where
\[\left|\mathrm{COL}^{\prime}\left(\binom{G}{a\cdot d}\right)\right|=1.\]
Let the one color in \(\mathrm{COL}^{\prime}(\binom{G}{a\cdot d})\) be \(Y\). Note that \(Y\) is a tuple of \(P(a,\omega^{d})\) colors.
Apply Lemma 22 to find some \(H\approx\omega^{d}\) with the properties listed in Lemma 22. Now we claim
\[\left|\mathrm{COL}\left(\binom{H}{a}\right)\right|\leqslant P(a,\omega^{d}).\]
By Lemma 22, each edge \(e\in\binom{H}{a}\) satisfies a CR of \(E\). Then for an arbitrary edge \(e\), let \(e\) satisfy \(\tau_{i}\) with size \(p\leqslant a\cdot d\). Then take the \(p\) unique values in \(e\), and if necessary, insert any new larger nonnegative integers from \(G\) to form a set of \(a\cdot d\) values; denote this \(X\in\binom{G}{a\cdot d}\). \(\mathrm{COL}^{\prime}(X)=Y\), so by the definition of \(\mathrm{COL}^{\prime}\), \(\mathrm{COL}(e)\in Y\). Because \(|Y|=P(a,\omega^{d})\), \(T(a,\omega^{d})\leqslant P(a,\omega^{d})\).
### \(\boldsymbol{T(a,\omega^{d})\geqslant P(a,\omega^{d})}\)
**Theorem 24**.: _For integers \(a,d\geqslant 0\), \(T\left(a,\omega^{d}\right)\geqslant P\left(a,\omega^{d}\right)\)._
Proof.: If \(P(a,\omega^{d})=0\), this is satisfied vacuously because \(T(a,\omega^{d})\geqslant 0\). Now suppose \(P(a,\omega^{d})\geqslant 1\). Let \(E=\binom{\omega^{d}}{a}\). Note that all CRs of \(E\) are disjoint from each other. That is, for any edge \(e\in E\), if \(e\) satisfies some CR \(\tau\), then it does not satisfy any nonequivalent CR of \(E\). This is because if \(e\) were to satisfy two CRs \(\tau_{1}\) and \(\tau_{2}\), then \(\tau_{1}\) and \(\tau_{2}\) must share the same equivalence classes and order, so the CRs must be equivalent. Therefore, we can index them \(\tau_{1},\ldots,\tau_{P(a,\omega^{d})}\) and construct a coloring \(\mathrm{COL}\colon E\to[P(a,\omega^{d})]\) with
\[\mathrm{COL}(e)=\begin{cases}i&e\text{ satisfies }\tau_{i}\\ 1&\text{otherwise}\end{cases}\]
Similar to the proof of Theorem 16, our coloring has two ways to output color \(1\): through satisfaction of \(\tau_{1}\) and through the catch-all case. The part that forces color \(1\) to be present in all order-equivalent subsets is the satisfaction of \(\tau_{1}\).
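Under the same assumed encoding as the earlier sketch, this coloring is immediate to write down; here `crs` is the enumeration \(\tau_{1},\ldots,\tau_{P(a,\omega^{d})}\):

```python
# Sketch of the coloring from the proof, reusing satisfies() from above.
def COL(edge, crs):
    # crs: list of (Y, preceq) pairs enumerating tau_1, ..., tau_P
    for i, (Y, preceq) in enumerate(crs, start=1):
        if satisfies(edge, Y, preceq):
            return i
    return 1  # the catch-all case
```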
We now prove there is no \(\omega^{d}\)-\((P(a,\omega^{d})-1)\)-homogeneous set. For all \(H\approx\omega^{d}\) and for every CR \(\tau\) of \(E\), we find some \(e\in\binom{H}{a}\) that satisfies \(\tau\). For arbitrary \(H\approx\omega^{d}\) and \(\tau\), we find \(z_{qn}\) where
\[\{\omega^{d-1}z_{1,d-1}+\cdots+\omega^{1}z_{1,1}+z_{1,0},\ldots,\omega^{d-1}z _{a,d-1}+\cdots+\omega^{1}z_{a,1}+z_{a,0}\}\]
satisfies \(\tau\).
We do this by assigning values to each \(z_{qn}\) according to where the equivalence class that contains \(x_{qn}\) is found in \(\tau\), moving left to right in \(\tau\)'s permutation. By criterion 3 of Definition 17, each \(z_{qn}\) is assigned before \(z_{q,n-1}\). As we do this, we ensure that if the leftmost unassigned value in \(\tau\) is \(z_{qn}\), then
\[\{\omega^{d-1}z_{q,d-1}+\cdots+\omega^{n+1}z_{q,n+1}+\omega^{n}c_{n}+\omega^{n- 1}c_{n-1}+\cdots+\omega^{1}c_{1}+c_{0}\mid c_{i}\in\omega\}\cap H\approx\omega^ {n+1}.\]
By criterion 3 of Definition 17, the leftmost variable in \(\tau\) must be \(x_{q,d-1}\). Before any values are assigned, it is clear that
\[\{\omega^{d-1}c_{d-1}+\cdots+\omega^{1}c_{1}+c_{0}\mid c_{i}\in\omega\}= \omega^{d},\]
and because \(H\subseteq\omega^{d}\), \(\omega^{d}\cap H=H\approx\omega^{d}\).
By criterion 4 of Definition 17, all variables in an equivalence class must have the same exponent. Let the leftmost unassigned equivalence class in \(\tau\) be \(x_{q_{1},n}=x_{q_{2},n}=\cdots=x_{q_{m},n}\). By criterion 3, each \(x_{q_{i},\ell}\) for \(1\leqslant i\leqslant m\) and \(\ell>n\) appeared to the left of this equivalence class and has already been assigned a value, and by criterion 5 the values for each exponent are equal: for all \(\ell>n\) and \(1\leqslant i\leqslant m\), \(z_{q_{i},\ell}=z_{q_{1},\ell}\).
By our previous steps,
\[\{\omega^{d-1}z_{q_{1},d-1}+\cdots+\omega^{n+1}z_{q_{1},n+1}+\omega^{n}c_{n}+\omega^{n-1}c_{n-1}+\cdots+\omega^{1}c_{1}+c_{0}\mid c_{i}\in\omega\}\cap H\approx\omega^{n+1}.\]
Then there exists some value \(z^{\prime}\) where
\[\{\omega^{d-1}z_{q_{1},d-1}+\cdots+\omega^{n+1}z_{q_{1},n+1}+\omega^{n}z^{ \prime}+\omega^{n-1}c_{n-1}+\cdots+\omega^{1}c_{1}+c_{0}\mid c_{i}\in\omega\} \cap H\approx\omega^{n},\]
where \(z^{\prime}\) is greater than all previously assigned (and therefore finite) \(z_{qn}\) values. Then for \(1\leqslant i\leqslant m\), assign \(z_{q_{i},n}\) to be \(z^{\prime}\).
We can repeat this process to find \(z_{qn}\) for each CR of \(E\) for arbitrary \(H\approx\omega^{d}\). Therefore for all \(H\approx\omega^{d}\), \(|\operatorname{COL}(\binom{H}{a})|\geqslant P(a,\omega^{d})\), so \(T(a,\omega^{d})\geqslant P\left(a,\omega^{d}\right)\).
### \(T(a,\omega^{d})=P(a,\omega^{d})\)
**Theorem 25**.: _For all \(a,d\in\mathbb{N}\), \(T(a,\omega^{d})=P(a,\omega^{d})\)._
Proof.: By Theorem 23, \(T(a,\omega^{d})\leqslant P(a,\omega^{d})\). By Theorem 24, \(T(a,\omega^{d})\geqslant P(a,\omega^{d})\). The result follows.
## 8 Big Ramsey degrees of \(\omega^{d}\cdot k\)
We now use the theory we developed for the case \(k=1\) to prove results for arbitrary \(k\). We first extend the recurrence from Lemma 21.
**Lemma 26**.: _For integers \(a,d,k\geqslant 0\),_
\[P_{p}\left(a,\omega^{d}\cdot k\right)=\begin{cases}0&d=0\wedge a>k\\ 1&a=0\wedge p=0\\ 0&a=0\wedge p\geqslant 1\\ \binom{k}{a}&d=0\wedge 1\leqslant a\leqslant k\wedge p=0\\ 0&d=0\wedge 1\leqslant a\leqslant k\wedge p\geqslant 1\\ k^{a}&d=1\wedge a\geqslant 1\wedge a=p\\ 0&d=1\wedge a\geqslant 1\wedge a\neq p\\ k\sum\limits_{j=1}^{a}\sum\limits_{i=0}^{p-1}\binom{p-1}{i}P_{i}\left(j,\omega^{d-1}\right)P_{p-1-i}\left(a-j,\omega^{d}\cdot k\right)&d\geqslant 2\wedge a\geqslant 1\end{cases}\]
Proof.: First suppose \(a>k\) and \(d=0\). Since \(0\leqslant y_{q}<k\) for all \(y_{q}\), there are at most \(k\) unique values for the \(y_{q}\). But by criterion 1 of Definition 17, since \(d=0\) we need \(a\) unique values of \(y_{q}\), so no CRs are possible regardless of size \(p\). This proves the first case of the result.
Suppose \(a=0\). Then there can be no \(y_{q}\), and since \(a\cdot d=0\), there can be no \(x_{qn}\). Thus all criteria are vacuously satisfied. Because there are no \(y_{q}\) or \(x_{qn}\), there is only one CR, and it has size \(p=0\). This proves the second and third cases of the result.
When both \(d=0\) and \(a\leqslant k\), criterion 1 of Definition 17 can be satisfied by choosing any \(a\) distinct values out of the \(k\) possible integer values and assigning them to the variables \(y_{q}\) in increasing order. This leads to \(\binom{k}{a}\) feasible assignments. Again, because \(a\cdot d=0\), there are no variables \(x_{qn}\) to permute, so there are \(\binom{k}{a}\) empty CRs with size \(p=0\), which proves the fourth and fifth cases of the result.
Now suppose \(a\geqslant 1\) and \(d=1\). To ensure criterion 1 of Definition 17, the \(a\) variables \(x_{q,0}\) can only form the single permutation \(x_{1,0}<x_{2,0}<\ldots<x_{a,0}\), which has size \(a\). Because all \(x_{qn}\) are distinct and \(d=1\), the values \(y_{q}\) are not restricted by any criteria, so each of the \(a\) variables \(y_{q}\) can be any of the \(k\) integers. Therefore \(P_{a}\left(a,\omega^{d}\cdot k\right)=k^{a}\) and \(P_{p}\left(a,\omega^{d}\cdot k\right)=0\) for \(p\neq a\). This proves the sixth and seventh cases of the result.
Finally, consider \(a\geqslant 1,d\geqslant 2\). We prove the final case of our result by showing the process for combining CRs described below creates all possible CRs of an expression.
For arbitrary integers \(a\geqslant 1\), \(d\geqslant 2\), \(k\geqslant 0\), and \(p\geqslant 0\), let \(1\leqslant j\leqslant a\) and \(0\leqslant i\leqslant p-1\) be integers.
We create
\[k\binom{p-1}{i}P_{i}\left(j,\omega^{d-1}\right)P_{p-1-i}\left(a-j,\omega^{d} \cdot k\right)\]
CRs of size \(p\) by combining \(P_{i}\left(j,\omega^{d-1}\right)\) CRs with size \(i\) and \(P_{p-1-i}\left(a-j,\omega^{d}\cdot k\right)\) CRs with size \(p-1-i\), with \(k\binom{p-1}{i}\) new CRs for each pair.
Let \(\tau_{1}\) represent one of the \(P_{i}\left(j,\omega^{d-1}\right)\) CRs of \(\binom{\omega^{d-1}}{j}\) with size \(i\), and \(\tau_{2}\) represent one of the \(P_{p-1-i}\left(a-j,\omega^{d}\cdot k\right)\) CRs of \(\binom{\omega^{d}\cdot k}{a-j}\) with size \(p-1-i\).
Then we can combine each \(\tau_{1}\) and \(\tau_{2}\) to form \(k\binom{p-1}{i}\) unique new CRs of size \(p\): Reindex \(\tau_{2}\), permute the equivalence classes, and insert a leading equivalence class \(x_{q_{1},d-1}=x_{q_{2},d-1}=\cdots=x_{q_{j},d-1}\) as in the proof of Lemma 21. This leads to \(\binom{p-1}{i}\) new permutations of the \(x\) variables.
Because \(\tau_{1}\) was a CR for \(\binom{\omega^{d-1}}{j}\), each of its \(y_{q}\) had value \(0\). Now that we are creating a CR for \(\binom{\omega^{d}\cdot k}{a}\), we can choose the \(y\) coefficients to be composed of values between \(0\) and \(k-1\). By criterion 2 of Definition 17, because all elements from \(\tau_{1}\) are bound together in a leading high-exponent equivalence class, they must all have equal \(y\) values. This leads to \(k\) options for these \(y\) values; combined with the \(\binom{p-1}{i}\) options for interleaving the \(x\) variables, there are \(k\binom{p-1}{i}\) ways to create a new CR.
For the new CR's values for \(y_{q}\), we assign each element originally from \(\tau_{2}\) with its original \(y\) value (likely at a different index due to reindexing). Then, the remaining elements from \(\tau_{1}\) are given all the same \(y\) value from one of the \(k\) options.
Each CR is unique by the \(\tau_{1}\) and \(\tau_{2}\) used to create it because the process is invertible: we can remove the leading equivalence class and separate the remaining variables into \(\tau_{1}\) and \(\tau_{2}\) by whether their indices were in the leading equivalence class and reindexing. The \(y\) values for \(\tau_{2}\) can be found from the CR's \(y\) values after reversing the index change, and the \(y\) values for \(\tau_{1}\) are all \(0\).
We claim each CR created by this process has the properties described by Definition 17: Because the high-exponent equivalence class \(x_{q_{1},d-1}=x_{q_{2},d-1}=\cdots=x_{q_{j},d-1}\) was added at the start of the CR, the high-exponent coefficients of each term are smaller than the low-exponent coefficients. Criterion 1 is satisfied by reindexing the variables. Criterion 2 is met by assigning all elements from \(\tau_{1}\) the same \(y\) value. The remaining criteria are satisfied because \(\tau_{1}\) and \(\tau_{2}\) satisfied them and their internal orders were preserved in permuting the equivalence classes. Therefore this process does not overcount CRs.
We also claim that every CR of \(\binom{\omega^{d}\cdot k}{a}\) is counted by this process: each can be mapped to some \(\tau_{1}\) and \(\tau_{2}\) that create it by a similar argument to proving that the process creates unique CRs. Every CR of \(\binom{\omega^{d}\cdot k}{a}\) must have a leading equivalence class of \(x_{q_{1},d-1}=x_{q_{2},d-1}=\cdots=x_{q_{j},d-1}\) to satisfy Definition 17 (the equivalence class might only contain one variable); taking only the variables \(x_{q,n}\) with indices appearing in that equivalence class (but not those variables in the equivalence class itself) with all-zero \(y\) values forms \(\tau_{1}\), a CR for \(\binom{\omega^{d-1}}{j}\). The variables with \(q\) indices not in the equivalence class with their \(y\) values form \(\tau_{2}\), a CR for \(\binom{\omega^{d}\cdot k}{a-j}\). The original CR of \(\binom{\omega^{d}\cdot k}{a}\) was counted by interleaving \(\tau_{1}\) with \(\tau_{2}\) and inserting the leading equivalence class of \(x_{q_{1},d-1}=x_{q_{2},d-1}=\cdots=x_{q_{j},d-1}\). Therefore the final case of the result holds.
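Extending the earlier sketch (again our illustration, not the authors' code), the recurrence of Lemma 26 can be evaluated the same way; note that the inner factor \(P_{i}(j,\omega^{d-1})\) is the \(k=1\) case of the same function.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def Pk_p(a, d, k, p):
    """Number of CRs of size p for binom(omega^d * k, a), following Lemma 26."""
    if a == 0:
        return 1 if p == 0 else 0
    if d == 0:
        return comb(k, a) if p == 0 else 0   # comb(k, a) is 0 when a > k
    if d == 1:
        return k ** a if p == a else 0
    # d >= 2 and a >= 1: the final case of the recurrence
    return k * sum(comb(p - 1, i) * Pk_p(j, d - 1, 1, i)
                   * Pk_p(a - j, d, k, p - 1 - i)
                   for j in range(1, a + 1) for i in range(p))

def Pk(a, d, k):
    """Total count P(a, omega^d * k)."""
    return sum(Pk_p(a, d, k, p) for p in range(a * d + 1))
```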
### \(\boldsymbol{T(a,\omega^{d}\cdot k)}\leqslant\boldsymbol{P(a,\omega^{d}\cdot k)}\)
**Lemma 27**.: _For nonnegative integers \(a,d,k\) and \(G\approx\omega\), there exists some \(H\subseteq\omega^{d}\cdot k\), \(H\approx\omega^{d}\cdot k\) where for all \(e\in\binom{H}{a}\), \(e\) satisfies a CR of \(\binom{\omega^{d}\cdot k}{a}\) and each coefficient in \(e\) is contained in \(G\)._
Proof.: Because \(G\approx\omega\), we can index it \(x_{1},x_{2},x_{3},\ldots\) with \(x_{1}<x_{2}<x_{3}<\cdots\).
If \(d=0\), \(H=\{x_{1},x_{2},\ldots,x_{k}\}\) suffices.
If \(d\geqslant 1\), we can first apply Lemma 22 with \(G\) to attain some \(H^{\prime}\approx\omega^{d+1}\) with the listed properties. Then, let \(H\) be the first \(k\) copies of \(\omega^{d}\) within \(H^{\prime}\): formally,
\[H=\{\omega^{d}\cdot y+\omega^{d-1}\cdot x_{d-1}+\ldots+\omega^{1}\cdot x_{1}+x_ {0}\in H^{\prime}\mid y<k\}.\]
Because the edges of \(H^{\prime}\) satisfied criterion 5 of Definition 17 at the top exponent \(d\) (the leading coefficients of elements of \(H^{\prime}\) become the \(y\) values of elements of \(H\)), the edges of \(H\) satisfy criterion 2. The remaining criteria are satisfied because \(H^{\prime}\) satisfied them.
**Theorem 28**.: _For integers \(a,d,k\geqslant 0\), \(T\left(a,\omega^{d}\cdot k\right)\leqslant P\left(a,\omega^{d}\cdot k\right)\)._
Proof.: Let \(E=\binom{\omega^{d}\cdot k}{a}\) and
\[\mathrm{COL}\colon E\rightarrow[b]\]
be an arbitrary coloring of \(E\) for some \(b\in\mathbb{N}\).
Enumerate the CRs of \(E\) from \(\tau_{1}\) to \(\tau_{P(a,\omega^{d}\cdot k)}\). The maximum size of any CR of \(E\) is \(a\cdot d\). For each \(\tau_{i}\), let
\[f_{i}\colon\binom{\omega}{a\cdot d}\to[b]\]
where if \(\tau_{i}\) has size \(p\), \(f_{i}\) maps \(X\) to \(\operatorname{COL}(e)\) for the unique \(e\in E\) where \(e\) satisfies \(\tau_{i}\) and the \(p\) equivalence classes of \(e\) are made up of the \(p\) least elements of \(X\). For example, one CR of \(\binom{\omega^{2}\cdot 2}{2}\) is
\[y_{1}=0,\ y_{2}=1,\ x_{11}<x_{21}<x_{10}<x_{20}.\]
The corresponding \(f_{i}\) would be \(f_{i}\colon\binom{\omega}{4}\to[b]\) with
\[f_{i}(x_{1},x_{2},x_{3},x_{4})=\mathrm{COL}(\{\omega^{2}\cdot 0+\omega\cdot x_{1}+x_{3},\omega^{2}\cdot 1+\omega\cdot x_{2}+x_{4}\})\]
where \(x_{1}<x_{2}<x_{3}<x_{4}\). Note that the values of the \(y_{q}\)'s are used directly in the definition of \(f_{i}\); for the CR with the identical ordering on the \(x_{qn}\) but \(y_{1}=1\) and \(y_{2}=0\), the coefficients \(y_{q}\) would be swapped.
Then, define \(\mathrm{COL}^{\prime}\colon\binom{\omega}{a\cdot d}\rightarrow[b]^{P(a, \omega^{d}\cdot k)}\) with
\[\mathrm{COL}^{\prime}(X)=(f_{1}(X),f_{2}(X),\ldots,f_{P(a,\omega^{d}\cdot k)}(X))\]
and apply Theorem 9 to find some \(G\approx\omega\) where
\[\left|\mathrm{COL}^{\prime}\left(\binom{G}{a\cdot d}\right)\right|=1\]
Let \(Y\) be the one color in \(\mathrm{COL}^{\prime}(\binom{G}{a\cdot d})\). Note that \(Y\) is a tuple of \(P(a,\omega^{d}\cdot k)\) colors.
Apply Lemma 27 to find some \(H\approx\omega^{d}\cdot k\) with the properties listed in Lemma 27. Now we claim
\[\left|\operatorname{COL}\left(\binom{H}{a}\right)\right|\leqslant P(a,\omega^{d} \cdot k).\]
By Lemma 27, each edge \(e\in\binom{H}{a}\) satisfies a CR of \(E\). Then for an arbitrary edge \(e\), let \(e\) satisfy \(\tau_{i}\) with size \(p\leqslant a\cdot d\). Then take the \(p\) unique values in \(e\), and if necessary, insert any new larger nonnegative integers from \(G\) to form a set of \(a\cdot d\) values; denote this \(X\in\binom{G}{a\cdot d}\). \(\operatorname{COL^{\prime}}(X)=Y\), so by the definition of \(\operatorname{COL^{\prime}}\), \(\operatorname{COL}(e)\in Y\). Because \(|Y|=P(a,\omega^{d}\cdot k)\), \(T(a,\omega^{d}\cdot k)\leqslant P(a,\omega^{d}\cdot k)\).
### \(T(a,\omega^{d}\cdot k)\geqslant P(a,\omega^{d}\cdot k)\)
**Theorem 29**.: _For \(a,d,k\in\mathbb{N}\), \(T\left(a,\omega^{d}\cdot k\right)\geqslant P\left(a,\omega^{d}\cdot k\right)\)._
Proof.: If \(P(a,\omega^{d}\cdot k)=0\), this is satisfied vacuously because \(T(a,\omega^{d}\cdot k)\geqslant 0\). Suppose \(P(a,\omega^{d}\cdot k)\geqslant 1\). Let \(E=\binom{\omega^{d}\cdot k}{a}\). Note that all CRs of \(E\) are disjoint from each other. That is, for any edge \(e\in E\), if \(e\) satisfies some CR \(\tau\), then it does not satisfy any nonequivalent CR of \(E\). This is because if \(e\) were to satisfy two CRs \(\tau_{1}\) and \(\tau_{2}\), then \(\tau_{1}\) and \(\tau_{2}\) must share the same \(y_{q}\) values, equivalence classes, and order, so the CRs must be equivalent. Therefore, we can index them \(\tau_{1},\ldots,\tau_{P(a,\omega^{d}\cdot k)}\) and construct a coloring \(\operatorname{COL}\colon E\to[P(a,\omega^{d}\cdot k)]\) with
\[\operatorname{COL}(e)=\begin{cases}i&e\text{ satisfies }\tau_{i}\\ 1&\text{otherwise}\end{cases}\]
Similar to Theorem 16, our coloring has two ways to output color \(1\): through satisfaction of \(\tau_{1}\) and through the catch-all case. The part that forces color \(1\) to be present in all order-equivalent subsets is the satisfaction of \(\tau_{1}\). For arbitrary \(H\approx\omega^{d}\cdot k\) and \(\tau\), we find variables \(y_{q}\) and \(z_{qn}\) where
\[\{\omega^{d}y_{1}+\omega^{d-1}z_{1,d-1}+\cdots+\omega^{1}z_{1,1}+z_{1,0}, \ldots,\omega^{d}y_{a}+\omega^{d-1}z_{a,d-1}+\cdots+\omega^{1}z_{a,1}+z_{a,0}\}\]
satisfies \(\tau\).
For any \(H\approx\omega^{d}\cdot k\) and \(\tau\), we first separate \(H\) into \(k\) ordered sets by the leading coefficient, each order-equivalent to \(\omega^{d}\).
Then, if there are equivalence classes in \(\tau\), using the process formally described in the proof of Theorem 24, we consider the leftmost unassigned equivalence class of \(\tau\). By criterion 2 of Definition 17, all variables in that equivalence class must come from the same set order-equivalent to \(\omega^{d}\). We assign a finite value to that equivalence class, and move to the next class with a potentially different \(y\) value, using the assigned finite value as a lower bound for the next one. We can repeat this process to find \(z_{qn}\) that satisfy every CR of \(E\) for arbitrary \(H\approx\omega^{d}\cdot k\). Then, we can assign the \(y_{q}\) variables directly as the \(y\) variables in \(\tau\).
If there are no equivalence classes in \(\tau\) (it has size \(p=0\)), we can simply assign the variables \(y_{q}\) directly according to \(\tau\).
Therefore for all \(H\approx\omega^{d}\cdot k\), \(|\operatorname{COL}\left(\binom{H}{a}\right)|\geqslant P(a,\omega^{d}\cdot k)\), so \(T(a,\omega^{d}\cdot k)\geqslant P\left(a,\omega^{d}\cdot k\right)\).
### \(T(a,\omega^{d}\cdot k)=P(a,\omega^{d}\cdot k)\)
**Theorem 30**.: _For \(a,d,k\in\mathbb{N}\), \(T(a,\omega^{d}\cdot k)=P(a,\omega^{d}\cdot k)\)._
Proof.: By Theorem 28, \(T(a,\omega^{d}\cdot k)\leqslant P(a,\omega^{d}\cdot k)\). By Theorem 29, \(T(a,\omega^{d}\cdot k)\geqslant P(a,\omega^{d}\cdot k)\). The result follows.
## 9 Big Ramsey degrees of ordinals less than \(\omega^{\omega}\)
### General Coloring Rules
We defined CRs (coloring rules) to compute big Ramsey degrees of ordinals of the form \(\omega^{d}\cdot k\). We now extend the definition to _GCRs_ (general coloring rules), which allow us to compute big Ramsey degrees for all ordinals less than \(\omega^{\omega}\).
**Definition 31**.: We now define GCRs rigorously. Much like the definition of CRs, we impose a structure on edges, and then list criteria that GCRs must satisfy. Consider some ordinal \(\alpha<\omega^{\omega}\):
\[\alpha\approx\omega^{d}\cdot k_{d}+\omega^{d-1}\cdot k_{d-1}+\cdots+\omega \cdot k_{1}+k_{0}.\]
Then \(\alpha\) is the sum of \(d+1\) ordinals, each of the form \(\omega^{n}\cdot k_{n}\). For any element \(\beta\) of \(\alpha\), there is some least \(q\) such that \(\beta\) lies in the \(\omega^{q}\cdot k_{q}\) part of the sum. In this case, we write that \(\beta\)_originated_ from the \(\omega^{q}\) part of \(\alpha\).
For an integer \(a\geqslant 0\), an edge \(e\in\binom{\alpha}{a}\) consists of \(a\) elements. Unlike the definition of CRs, each element can have anywhere from \(0\) to \(d\) variables \(x_{qn}\), depending on which of the \(d+1\) ordered sets the element originated from. For \(1\leqslant q\leqslant a\), we use \(c_{q}\) for the number of variables \(x_{qn}\) element \(q\) has (the element therefore originated from the \(\omega^{c_{q}}\) part of \(\alpha\)). We denote each element as
\[\omega^{c_{q}}\cdot y_{q}+\omega^{c_{q}-1}\cdot x_{q,c_{q}-1}+\omega^{c_{q}-2}\cdot x_{q,c_{q}-2}+\cdots+\omega^{1}\cdot x_{q,1}+x_{q,0},\]
where \(0\leqslant y_{q}<k_{c_{q}}\) and \(0\leqslant x_{qn}\).
A _general coloring rule_, hereafter referred to as a GCR, is a triple \((\mathcal{C},\mathcal{Y},\preceq_{\mathcal{X}})\) of constraints on the \(c_{q},y_{q}\), and \(x_{q,n}\).
\(\mathcal{C}\) is a map \(\mathcal{C}:[a]\to\{0,\ldots,d\}\) from indices of the \(c_{q}\) to the possible values for the \(c_{q}\). Similarly, \(\mathcal{Y}\) is a map from indices of the \(y_{q}\) to the possible values for the \(y_{q}\), sending each index \(q\) to a value in \(\{0,\ldots,k_{\mathcal{C}(q)}-1\}\). \(\preceq_{\mathcal{X}}\) is a total preorder on the indices of the \(x_{q,n}\). We will continue to use the same notation to represent the preorder.
GCRs must fulfill the following criteria (only criterion 6 below is different from its corresponding criterion in Definition 17):
1. If \(d\geqslant 1\), \(x_{i0}<x_{j0}\) for all \(i<j\). Otherwise when \(d=0\), \(y_{i}<y_{j}\) for all \(i<j\). (The element indices are ordered by their lowest-exponent variable.)
2. \(y_{i}\neq y_{j}\implies x_{in}\neq x_{jn}\) for all \(n\). (Elements that have a different \(y\) value have all different \(x\) values.)
3. \(x_{q,m}<x_{q,n}\) for all \(m>n\). (The high-exponent variables of each element are strictly less than the low-exponent variables.)
4. \(x_{i,m}=x_{j,n}\implies m=n\). (Only variables with the same exponent can be equal.)
5. \(x_{in}\neq x_{jn}\implies x_{i,n-1}\neq x_{j,n-1}\) for all \(n>0\). (Elements that differ in a high-exponent variable differ in all lower-exponent variables.)
6. \(c_{i}\neq c_{j}\implies(y_{i}\neq y_{j}\) and \(x_{in}\neq x_{jn})\) for all \(0\leqslant n<d\). (Different \(c\) variables mean different \(y\) and \(x\) variables.)
For an edge
\[e=\{\omega^{c_{q}}\cdot y_{q}+\omega^{c_{q}-1}\cdot x_{q,c_{q}-1}+\omega^{c_{q} -2}\cdot x_{q,c_{q}-2}+\cdots+\omega^{1}\cdot x_{q,1}+x_{q,0}\colon 1\leqslant q \leqslant a\},\]
\(e\) _satisfies_ the GCR \((\mathcal{C},\mathcal{Y},\preceq_{\mathcal{X}})\) if \(c_{q}=\mathcal{C}(q)\) and \(y_{q}=\mathcal{Y}(q)\) for every \(1\leqslant q\leqslant a\), and if \((q_{1},n_{1})\preceq_{\mathcal{X}}(q_{2},n_{2})\iff x_{q_{1},n_{1}}\leqslant x_{q_{2},n_{2}}\) for all \(1\leqslant q_{1},q_{2}\leqslant a\) and \(0\leqslant n_{1},n_{2}<d\).
**Definition 32**.: We again define the _size_ of a GCR to be how many equivalence classes its \(x\) variables form. A GCR's size \(p\) is still bounded above by \(d\cdot a\).
**Definition 33**.:
1. \(S_{p}\left(a,\alpha\right)\) is the number of GCRs of size \(p\) for \(\binom{\alpha}{a}\).
2. \(S\left(a,\alpha\right)\) is the total number of GCRs for \(\binom{\alpha}{a}\), regardless of size. It can be calculated as \[\sum_{p=0}^{a\cdot d}S_{p}(a,\alpha).\] We will show \(T(a,\alpha)=S\left(a,\alpha\right)\).
**Lemma 34**.: _For \(a,d,k,p\in\mathbb{N}\), \(S_{p}(a,\omega^{d}\cdot k)=P_{p}(a,\omega^{d}\cdot k)\)._
Proof.: Let \(\alpha=\omega^{d}\cdot k\). Suppose some \(c_{q}\neq d\). Then \(y_{q}<k_{c_{q}}=0\) by Definition 31, which is impossible. Thus \(c_{q}=d\) for all \(q\). Then the same \(a\cdot d\) variables \(x_{qn}\) are being permuted, and the new criterion 6 has no effect because all \(c_{q}\) are equal. Hence both are under the same restrictions, so \(S_{p}(a,\omega^{d}\cdot k)=P_{p}(a,\omega^{d}\cdot k)\).
**Lemma 35**.: _For all \(\alpha<\omega^{\omega}\) with_
\[\alpha\approx\omega^{d}\cdot k_{d}+\omega^{d-1}\cdot k_{d-1}+ \cdots+\omega\cdot k_{1}+k_{0}\text{ with }k_{d}\neq 0\text{ and }d>0,\] \[S_{p}\left(a,\alpha\right)=\sum_{j=0}^{a}\sum_{i=0}^{p}\binom{p}{ i}P_{i}\left(j,\omega^{d}\cdot k_{d}\right)S_{p-i}\left(a-j,\omega^{d-1} \cdot k_{d-1}+\cdots+\omega\cdot k_{1}+k_{0}\right).\]
_When the conditions \(k_{d}\neq 0\) and \(d>0\) cannot be satisfied, then \(\alpha<\omega\) and \(S_{p}(a,k_{0})=P_{p}(a,k_{0})\)._
Proof.: When \(k_{0}\) is the only nonzero \(k\) term, Lemma 34 shows \(S_{p}(a,\alpha)=P_{p}(a,\alpha)\). When \(d\geqslant 1\), we describe a process of combining CRs with GCRs to create GCRs for \(\binom{\alpha}{a}\).
For arbitrary integers \(a\geqslant 0,p\geqslant 0\), and some \(\alpha<\omega^{\omega}\), let \(0\leqslant j\leqslant a\) and \(0\leqslant i\leqslant p\) be integers. We create
\[\binom{p}{i}P_{i}\left(j,\omega^{d}\cdot k_{d}\right)S_{p-i}\left(a-j,\omega^{ d-1}\cdot k_{d-1}+\cdots+k_{0}\right)\]
GCRs, with each GCR having \(j\) elements from the \(\omega^{d}\cdot k_{d}\) part of \(\alpha\) and \(a-j\) elements from parts with lower exponents.
Let \(\tau_{1}\) represent one of the \(P_{i}\left(j,\omega^{d}\cdot k_{d}\right)\) CRs of \(\binom{\omega^{d}\cdot k_{d}}{j}\) with size \(i\), and \(\tau_{2}\) represent one of the \(S_{p-i}\left(a-j,\omega^{d-1}\cdot k_{d-1}+\cdots+k_{0}\right)\) GCRs of \(\binom{\omega^{d-1}\cdot k_{d-1}+\cdots+k_{0}}{a-j}\) with size \(p-i\). We change \(\tau_{1}\) into a GCR by assigning it \(c_{q}=d\) for all \(c_{q}\).
Then we can combine each \(\tau_{1}\) and \(\tau_{2}\) to form \(\binom{p}{i}\) unique new GCRs of size \(p\): Reindex \(\tau_{2}\) and permute the equivalence classes as in the proof of Lemma 21. Note that we do not insert a leading equivalence class; this is because we do not need to increase the exponent or size of \(\tau_{1}\).
We can keep each \(c_{q}\) and \(y_{q}\) value the same, and reindex them alongside the \(x_{qn}\) variables to ensure criterion 1.
Each GCR is unique by the \(\tau_{1}\) and \(\tau_{2}\) used to create it because the process is invertible: we can identify the elements originally from \(\tau_{1}\) because they uniquely have \(c_{q}=d\).
We claim each GCR created by this process has the properties described by Definition 31: Because all \(c_{q}\) are equal for \(\tau_{1}\), criterion 6 is satisfied for the elements from \(\tau_{1}\). Criterion 1 is satisfied by reindexing the variables. The remaining criteria are satisfied because \(\tau_{1}\) and \(\tau_{2}\) satisfied them and their internal orders and equivalence classes were preserved in permuting the equivalence classes. Therefore this process does not overcount GCRs.
We also claim that every GCR of \(\binom{\alpha}{a}\) is counted by this process: each can be mapped to some \(\tau_{1}\) and \(\tau_{2}\) that create it by a similar argument to proving that the process creates unique GCRs.
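Composing the two previous sketches (still our illustration), the recurrence of Lemma 35 can be evaluated directly; we represent \(\alpha=\omega^{d}\cdot k_{d}+\cdots+\omega\cdot k_{1}+k_{0}\) by the tuple \((k_{0},k_{1},\ldots,k_{d})\), a convention of ours.

```python
from math import comb

def S_p(a, ks, p):
    """Number of GCRs of size p for binom(alpha, a), where
    alpha = omega^d * ks[d] + ... + omega * ks[1] + ks[0] (Lemma 35)."""
    d = len(ks) - 1
    if d == 0:
        return Pk_p(a, 0, ks[0], p)   # base case: S_p(a, k_0) = P_p(a, k_0)
    return sum(comb(p, i) * Pk_p(j, d, ks[d], i) * S_p(a - j, ks[:-1], p - i)
               for j in range(a + 1) for i in range(p + 1))

def S(a, ks):
    d = len(ks) - 1
    return sum(S_p(a, ks, p) for p in range(a * d + 1))

assert S(2, (0, 0, 1)) == Pk(2, 2, 1)   # Lemma 34: S and P agree on omega^2
```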
### \(\boldsymbol{T(a,\alpha)\leqslant S(a,\alpha)}\)
**Lemma 36**.: _For all \(\alpha<\omega^{\omega}\), \(a\in\mathbb{N}\), and \(G\approx\omega\), there exists some \(H\subseteq\alpha\), \(H\approx\alpha\) where for all \(e\in\binom{H}{a}\), \(e\) satisfies a GCR of \(\binom{\alpha}{a}\) and each coefficient in \(e\) is contained in \(G\)._
Proof.: Because \(G\approx\omega\), we can index it \(x_{0},x_{1},x_{2},\ldots\) with \(x_{0}<x_{1}<x_{2}<\cdots\). Let \(\alpha\approx\omega^{d}\cdot k_{d}+\omega^{d-1}\cdot k_{d-1}+\cdots+\omega \cdot k_{1}+k_{0}\).
First, apply Lemma 27 on \(G\) to produce an \(H^{\prime}\approx\omega\cdot(d+1)\). For \(0\leqslant n\leqslant d\), let \(G^{\prime}_{n}\approx\omega\) such that
\[H^{\prime}=G^{\prime}_{0}+\cdots+G^{\prime}_{d}.\]
For \(0\leqslant n\leqslant d\), apply Lemma 27 on \(G_{n}^{\prime}\) to yield some \(H_{n}\approx\omega^{n}\cdot k_{n}\) where all \(e\in\binom{H_{n}}{a}\) satisfy a CR for \(\binom{\omega^{n}\cdot k_{n}}{a}\). Then let
\[H=\sum_{n=0}^{d}H_{n}\]
so that \(H\approx\alpha\).
Because all \(e\in\binom{H_{n}}{a}\) satisfy a CR for \(0\leqslant n\leqslant d\), only criterion 6 of Definition 31 remains to be satisfied. Since we separated \(G\) into disjoint orders \(G_{n}^{\prime}\), each \(H_{n}\) is disjoint from the others, so criterion 6 is satisfied. The coefficients in \(e\) are contained in \(G\) by the construction of \(H\) from \(G\).
**Theorem 37**.: _For all \(\alpha<\omega^{\omega}\),_
\[T\left(a,\alpha\right)\leqslant S\left(a,\alpha\right).\]
Proof.: Let \(E=\binom{\alpha}{a}\) and
\[\operatorname{COL}\colon E\to[b]\]
be an arbitrary coloring of \(E\) for some \(b\in\mathbb{N}\).
Enumerate the GCRs of \(E\) from \(\tau_{1}\) to \(\tau_{S(a,\alpha)}\). The maximum size of any GCR of \(E\) is \(a\cdot d\). For each \(\tau_{i}\), let
\[f_{i}\colon\binom{\omega}{a\cdot d}\to[b]\]
where if \(\tau_{i}\) has size \(p\), \(f_{i}\) maps \(X\) to \(\operatorname{COL}(e)\) for the unique \(e\in E\) where \(e\) satisfies \(\tau_{i}\) and the \(p\) equivalence classes of \(e\) are made up of the \(p\) least elements of \(X\). For example, one GCR of \(\binom{\omega^{2}+\omega\cdot 8}{2}\) is
\[c_{1}=2,c_{2}=1,y_{1}=0,y_{2}=6,\ x_{11}<x_{10}<x_{20}.\]
The corresponding \(f_{i}\) would be \(f_{i}\colon\binom{\omega}{4}\to[b]\) with
\[f_{i}(x_{1},x_{2},x_{3},x_{4})=\operatorname{COL}(\{\omega^{2}\cdot 0+\omega\cdot x_{1}+x_{2},\omega\cdot 6+x_{3}\})\]
where \(x_{1}<x_{2}<x_{3}<x_{4}\).
Then, define \(\operatorname{COL}^{\prime}\colon\binom{\omega}{a\cdot d}\to[b]^{S(a,\alpha)}\) with
\[\mathrm{COL}^{\prime}(X)=(f_{1}(X),f_{2}(X),\ldots,f_{S(a,\alpha)}(X))\]
and apply Theorem 9 to find some \(G\approx\omega\) where
\[\left|\operatorname{COL}^{\prime}\left(\binom{G}{a\cdot d}\right)\right|=1.\]
Let \(Y\) be the one color in \(\operatorname{COL}^{\prime}(\binom{G}{a\cdot d})\). Note that \(Y\) is a tuple of \(S(a,\alpha)\) colors.
Apply Lemma 36 to find some \(H\approx\alpha\) with the properties listed in Lemma 36. Now we claim
\[\left|\operatorname{COL}\left(\binom{H}{a}\right)\right|\leqslant S(a,\alpha)\]
By Lemma 36, each element \(e\in\binom{H}{a}\) satisfies a GCR of \(E\). Then for any edge \(e\), let \(e\) satisfy \(\tau_{i}\) with size \(p\leqslant a\cdot d\). Then take the \(p\) unique values in \(e\), and if necessary, insert any new larger nonnegative integers from \(G\) to form a set of \(a\cdot d\) values; denote this \(X\in\binom{G}{a\cdot d}\). \(\operatorname{COL}^{\prime}(X)=Y\) so by the definition of \(\operatorname{COL}^{\prime}\), \(\operatorname{COL}(e)\in Y\). Because \(|Y|=S(a,\alpha)\), \(T(a,\alpha)\leqslant S(a,\alpha)\).
### \(T(a,\alpha)\geqslant S(a,\alpha)\)
**Theorem 38**.: _For all \(\alpha<\omega^{\omega},\)_
\[T\left(a,\alpha\right)\geqslant S\left(a,\alpha\right).\]
Proof.: If \(S(a,\alpha)=0\), this is satisfied vacuously because \(T(a,\alpha)\geqslant 0\). Suppose \(S(a,\alpha)\geqslant 1\). Let \(E=\binom{\alpha}{a}\). Note that all GCRs of \(E\) are disjoint from each other. That is, for any edge \(e\in E\), if \(e\) satisfies some GCR \(\tau\), then it does not satisfy any nonequivalent GCR of \(E\). This is because if \(e\) were to satisfy two GCRs \(\tau_{1}\) and \(\tau_{2}\), then \(\tau_{1}\) and \(\tau_{2}\) must share the same \(c_{q}\), \(y_{q}\), equivalence classes, and order, so the GCRs must be equivalent. Therefore, we can index them \(\tau_{1},\ldots,\tau_{S(a,\alpha)}\) and construct a coloring \(\operatorname{COL}\colon E\to[S(a,\alpha)]\) with
\[\operatorname{COL}(e)=\begin{cases}i&e\text{ satisfies }\tau_{i}\\ 1&\text{otherwise}\end{cases}\]
For arbitrary \(H\approx\alpha\) and a GCR \(\tau\) for \(\alpha\), we can assign \(c_{q}\) and \(y_{q}\) based on \(\tau\). Then we can apply a similar process to the one used in Theorem 29 to find \(z_{qn}\) variables that match the permutation of \(x_{qn}\) variables.
Let \(\alpha\approx\omega^{d}\cdot k_{d}+\omega^{d-1}\cdot k_{d-1}+\cdots+\omega \cdot k_{1}+k_{0}\). We can separate \(H\) into \(d+1\) sets each order-equivalent to \(\omega^{n}\cdot k_{n}\) for \(0\leqslant n\leqslant d\), and separate each of those into \(k_{n}\) sets order-equivalent to \(\omega^{n}\).
Then, using the process formally described in the proof of Theorem 24, we repeatedly consider the leftmost unassigned equivalence class of \(\tau\). By criteria 2 and 6 of Definition 31, all variables in that equivalence class must come from the same set order-equivalent to \(\omega^{n}\). We assign a finite value to that equivalence class, and move to the next class with potentially different \(c\) and \(y\) values, using the assigned finite value as a lower bound for the next one. We can repeat this process to find \(z_{qn}\) that satisfy each GCR of \(E\) for arbitrary \(H\approx\alpha\).
Therefore for all \(H\approx\alpha\), \(|\operatorname{COL}\left(\binom{H}{a}\right)|\geqslant S(a,\alpha)\) so \(T(a,\alpha)\geqslant S\left(a,\alpha\right)\).
### \(T(a,\alpha)=S(a,\alpha)\)
**Theorem 39**.: _For all \(\alpha<\omega^{\omega}\),_
\[T\left(a,\alpha\right)=S\left(a,\alpha\right).\]
Proof.: By Theorem 37, \(T\left(a,\alpha\right)\leqslant S\left(a,\alpha\right)\). By Theorem 38, \(T\left(a,\alpha\right)\geqslant S\left(a,\alpha\right)\). The result follows.
## 10 Open Problems
The original motivation for this paper was pedagogical (see the open problems column by Dobrinen & Gasarch [4]). We sought easier proofs of results in the literature. For the case of \(T(a,\zeta)\) we succeeded, as the proof we give is easy in that it uses Ramsey's Theorem on \(\omega\) to do most of the work. For the case of \(T(a,\alpha)\) where \(\alpha<\omega^{\omega}\), our proof is more accessible than the literature's and gives exact bounds, but cannot really be called _easy_.
With this in mind, the following open problems remain:
1. Find an easier proof that \(T(a,\alpha)<\infty\). If the easier proof does not give exact bounds, that is fine.
2. Find an easier proof of the exact values for \(T(a,\alpha)\).
3. Find an easy proof that for all \(\alpha\geqslant\omega^{\omega}\) and all \(a\geqslant 2\), \(T(a,\alpha)\) does not exist.
4. For \(k\geqslant 3\), find combinatorial interpretations for the sequences \(T(a,\omega^{k})\).
5. Find an easier proof of \(T(a,\eta)<\infty\). If the easier proof does not give exact bounds, that is fine.
6. Find an easier proof of the exact values for \(T(a,\eta)\).
### Acknowledgments
We would like to thank Natasha Dobrinen for introducing us to this subject, giving us advice on the project, and helpful comments on the final draft. We would like to thank Nathan Cho, Isaac Mammel, and Adam Melrod for helping us simplify the proof of Theorem 12. |
2303.14506 | Toward DNN of LUTs: Learning Efficient Image Restoration with Multiple
Look-Up Tables | The widespread usage of high-definition screens on edge devices stimulates a
strong demand for efficient image restoration algorithms. The way of caching
deep learning models in a look-up table (LUT) is recently introduced to respond
to this demand. However, the size of a single LUT grows exponentially with the
increase of its indexing capacity, which restricts its receptive field and thus
the performance. To overcome this intrinsic limitation of the single-LUT
solution, we propose a universal method to construct multiple LUTs like a
neural network, termed MuLUT. Firstly, we devise novel complementary indexing
patterns, as well as a general implementation for arbitrary patterns, to
construct multiple LUTs in parallel. Secondly, we propose a re-indexing
mechanism to enable hierarchical indexing between cascaded LUTs. Finally, we
introduce channel indexing to allow cross-channel interaction, enabling LUTs to
process color channels jointly. In these principled ways, the total size of
MuLUT is linear to its indexing capacity, yielding a practical solution to
obtain superior performance with the enlarged receptive field. We examine the
advantage of MuLUT on various image restoration tasks, including
super-resolution, demosaicing, denoising, and deblocking. MuLUT achieves a
significant improvement over the single-LUT solution, e.g., up to 1.1dB PSNR
for super-resolution and up to 2.8dB PSNR for grayscale denoising, while
preserving its efficiency, which is 100$\times$ less in energy cost compared
with lightweight deep neural networks. Our code and trained models are publicly
available at https://github.com/ddlee-cn/MuLUT. | Jiacheng Li, Chang Chen, Zhen Cheng, Zhiwei Xiong | 2023-03-25T16:00:33Z | http://arxiv.org/abs/2303.14506v1 | # Toward DNN of LUTs: Learning Efficient Image Restoration with Multiple Look-Up Tables
###### Abstract
The widespread usage of high-definition screens on edge devices stimulates a strong demand for efficient image restoration algorithms. The way of caching deep learning models in a look-up table (LUT) is recently introduced to respond to this demand. However, the size of a single LUT grows exponentially with the increase of its indexing capacity, which restricts its receptive field and thus the performance. To overcome this intrinsic limitation of the single-LUT solution, we propose a universal method to construct multiple LUTs like a neural network, termed MuLUT. Firstly, we devise novel complementary indexing patterns, as well as a general implementation for arbitrary patterns, to construct multiple LUTs in parallel. Secondly, we propose a re-indexing mechanism to enable hierarchical indexing between cascaded LUTs. Finally, we introduce channel indexing to allow cross-channel interaction, enabling LUTs to process color channels jointly. In these principled ways, the total size of MuLUT is linear to its indexing capacity, yielding a practical solution to obtain superior performance with the enlarged receptive field. We examine the advantage of MuLUT on various image restoration tasks, including super-resolution, demosaicing, denoising, and deblocking. MuLUT achieves a significant improvement over the single-LUT solution, _e.g._, up to 1.1dB PSNR for super-resolution and up to 2.8dB PSNR for grayscale denoising, while preserving its efficiency, which is 100\(\times\) less in energy cost compared with lightweight deep neural networks. Our code and trained models are publicly available at [https://github.com/ddlee-cn/MuLUT](https://github.com/ddlee-cn/MuLUT).
efficient image restoration, look-up table, super-resolution, demosaicing, denoising, deblocking
## 1 Introduction
Image restoration aims to generate high-quality (HQ) visual data with high-frequency details from low-quality (LQ) observations (_e.g._, downscaled, noisy, and compressed images). Image restoration algorithms enjoy wide applications, ranging from visual quality enhancement [1, 2], digital holography [3], satellite imaging [4], medical imaging [5], and gaming [6, 7]. Moreover, besides improving image quality, image restoration helps in many other computer vision tasks, _e.g._, human face recognition [8], scene understanding [9], and autonomous driving [10].
Recent methods based on deep neural networks (DNNs) [11, 12, 13, 14, 15, 16, 17, 18] have made impressive progress in restoration performance, thanks to their scalability and flexibility from constructing elementary building blocks like convolutional layers. However, superior performance is usually obtained at a cost of heavy computational burden. Although this can be alleviated by elaborate network structures or dedicated computing engines (_e.g._, GPU and NPU), the hardware cost and power consumption still limit the deployment of existing deep restoration networks. Specifically, the growing number of high-definition screens on edge devices (_e.g._, smartphones and televisions) calls for a practical restoration solution.
On the other hand, in the image processing pipeline [19, 20, 21], look-up table (LUT) is a widely-used mapping operator, especially for color manipulation. For sRGB color-wise mapping, the source colors and the corresponding target colors are stored in index-value pairs in a LUT. This way, each pixel can be directly mapped to the target color with highly efficient memory access. An emerging work, SR-LUT [22], applies LUTs to image super-resolution by building a spatial-wise mapping between low-resolution (LR) patches and high-resolution (HR) patches. Specifically, SR-LUT utilizes a single LUT to cache the exhaustive HR patch values for later retrieval, which are computed in advance by a learned super-resolution network. At inference time, the LR patches sampled from neighboring pixels are compared with indexes in the LUT, and the cached HR patch values are retrieved. This contributes significantly to the power efficiency and inference speed, making SR-LUT a distinct solution from existing lightweight super-resolution networks [13, 23, 24]. However, in practice, the size of a LUT is limited by the on-device memory. For a single LUT, the size grows **exponentially** as the dimension of indexing entries (_i.e._, indexing capacity) increases. This imposes a restriction on the indexing capacity as well as the corresponding receptive field (RF) of the super-resolution network to be cached, which is the main obstacle to performance improvement.
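As a rough sketch of this retrieval scheme (our illustration with assumed sizes and names, not the SR-LUT implementation, which additionally interpolates between sampled entries), a single LUT indexed by four quantized neighboring pixels could look like:

```python
import numpy as np

SCALE, STEP = 2, 16                      # assumed: x2 upscaling, 4-bit sampling
BINS = 256 // STEP                       # 16 sampled levels per input pixel
# one cached SCALE x SCALE output patch per 4-pixel index combination
lut = np.zeros((BINS,) * 4 + (SCALE, SCALE), dtype=np.uint8)

def upscale(p0, p1, p2, p3):
    """Map four neighboring LR pixels (uint8) to a cached HR patch:
    a single memory access replaces a forward pass through the network."""
    return lut[p0 // STEP, p1 // STEP, p2 // STEP, p3 // STEP]
```

The exponential growth is visible here: every additional pixel in the index multiplies the table size by `BINS`, which is why a single LUT cannot afford a large receptive field.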
In this paper, we embrace the merits of LUT and propose a universal method to overcome its intrinsic limitation, by enabling the cooperation of **M**ultiple **LUTs**, termed MuLUT. Inspired by the construction of a common DNN, we propose three fundamental ways to construct LUTs in the spatial, depth, and channel dimensions. 1) In the spatial dimension, we devise novel _complementary indexing_ patterns as well as a general implementation for realizing arbitrary patterns, to construct LUTs in a parallel manner. 2) In the depth dimension, we enable _hierarchical indexing_ between cascaded LUTs, by proposing a re-indexing mechanism to link between LUTs from different hierarchies. 3) In the channel dimension, we introduce _channel indexing_ to allow cross-channel interaction, enabling LUTs to process color channels jointly.
2302.03353 | What do Language Models know about word senses? Zero-Shot WSD with
Language Models and Domain Inventories | Language Models are the core for almost any Natural Language Processing
system nowadays. One of their particularities is their contextualized
representations, a game changer feature when a disambiguation between word
senses is necessary. In this paper we aim to explore to what extent language
models are capable of discerning among senses at inference time. We performed
this analysis by prompting commonly used Language Models such as BERT or
RoBERTa to perform the task of Word Sense Disambiguation (WSD). We leverage the
relation between word senses and domains, and cast WSD as a textual entailment
problem, where the different hypotheses refer to the domains of the word
senses. Our results show that this approach is indeed effective, close to
supervised systems. | Oscar Sainz, Oier Lopez de Lacalle, Eneko Agirre, German Rigau | 2023-02-07T09:55:07Z | http://arxiv.org/abs/2302.03353v1 | # What do Language Models know about word senses?
###### Abstract
Language Models are at the core of almost any Natural Language Processing system nowadays. One of their particularities is their contextualized representations, a game-changing feature when disambiguation between word senses is necessary. In this paper we aim to explore to what extent language models are capable of discerning among senses at inference time. We performed this analysis by prompting commonly used Language Models such as BERT or RoBERTa to perform the task of Word Sense Disambiguation (WSD). We leverage the relation between word senses and domains, and cast WSD as a textual entailment problem, where the different hypotheses refer to the domains of the word senses. Our results show that this approach is indeed effective, close to supervised systems.
## 1 Introduction
It is undeniable that Language Models (LMs) have drastically changed the Natural Language Processing (NLP) field [14]. More recently, these LMs have also been shown to be capable of performing NLP tasks with just a few examples given in the context [1], using so-called _prompting_. One of their particularities, and the key difference with respect to previous approaches, is their contextualized token representation. Allowing the model to adopt different representations for words (tokens) depending on the context has been a huge advantage when sense disambiguation is required for a given inference. But, **to what extent do LMs actually know about word senses?** In this work, we try to answer that question by evaluating LMs directly on the Word Sense Disambiguation (WSD) task via prompting.
Word Sense Disambiguation is the task of identifying the correct sense of a word in a given context. The current state of the art in WSD involves fine-tuning an LM on SemCor [14] to predict the correct gloss among all possible sense glosses of the word in the given context. Other methods leverage the contextual representations of LMs to perform WSD with a simple K-NN algorithm in the embedding space. Lately, the use of domain inventories was proposed to alleviate the high granularity of knowledge-bases [1]. Recent studies on zero-shot WSD use the term to refer to predicting the senses of new lemmas not seen during training [1]; in contrast, we aim for a completely zero-shot evaluation, where no annotated data is available for any lemma.
Despite the knowledge already encoded in LMs, training data is usually used in one way or another to introduce knowledge about the task. To avoid drawing noisy conclusions, we evaluated the LMs as they are, without fine-tuning on or using any kind of WSD training data. To that end, we prompted LMs like BERT [4] and RoBERTa [15] to perform a task that requires WSD knowledge to be successfully solved.

Figure 1: An example of the Word Sense Disambiguation task converted to Textual Entailment, where the hypotheses refer to the possible domains of word senses. To solve the task, a model is asked to select the most probable hypothesis based on the context.
Figure 1 shows an example of how a model can be prompted to solve WSD using Textual Entailment as a proxy. In this example we consider that the word _bank_ has senses from three different domains: _Geography and places_, _Business, economics and finance_, and _Geology and geophysics_. The three possible domains are converted into hypotheses using predefined prompts. Finally, a supervised Textual Entailment model is used to perform the inference. More details on the approach are discussed in Section 2.
In this work we first evaluated commonly used LMs as zero-shot domain labelers with 3 different domain inventories. Then, following Lacerra et al. (2020), we addressed WSD using domain inventories and evaluated the LMs on it. We showed that LMs have some notion of senses, as they perform zero-shot WSD significantly better than a random baseline and sometimes close to the supervised state of the art. We also provided different analyses comparing different prompts and performed an error analysis over the two evaluated tasks.
## 2 Prompting Language Models
Over the past few years, prompting has become the _de facto_ approach to probe language models (Li et al., 2022). Min et al. (2021) defined prompting as the practice of adding natural language text, often short phrases, to the input or output to encourage pre-trained models to perform specific tasks. Due to this wide definition, however, several different ways of prompting exist, such as _instruction-based_, _template-based_ or _proxy-task based_. For more information about prompting we refer the reader to the survey by Liu et al. (2022).
In this work we focused on the _proxy-task based_ approach; more precisely, we made use of the Next Sentence Prediction (NSP) and Textual Entailment (TE) tasks as proxies. TE is also known as Natural Language Inference (NLI); we will use both terms interchangeably. The choice of this approach was based on previous work on zero-shot domain labelling (Sainz and Rigau, 2021).
Both NSP and TE are sentence-pair classification tasks: the first attempts to predict whether one sentence is followed by another, and the second aims to predict whether an entailment relation exists between the two sentences (premise and hypothesis). Figure 2 shows an example of how to perform WSD using NSP or TE models. The process can be briefly summarized as follows: (1) for each possible sense \(s\) of the target word \(w\) we obtain its corresponding domain \(d\) using a domain inventory \(D\) (domain inventories are discussed in more detail in Section 3); (2) predefined prompts are used to generate verbalizations that will serve as possible continuations (for NSP) or hypotheses (for TE) \(h\); (3) a pretrained NSP or TE model is used to assign a probability to each continuation/hypothesis and therefore to each domain. Formally, for a TE model we define the probability of word \(w\) being from domain \(d_{i}\in D^{w}\) in context \(c\) as follows:
\[P(d_{i}|c,w)=P(\text{entailment}|c,h_{wi}) \tag{1}\]
where \(h_{wi}\) is the hypothesis generated using a predefined prompt, the domain label \(d_{i}\) and the word \(w\). Similarly, for a NSP model the probability is defined as follows:
\[P(d_{i}|c,w)=P(\text{is\_next}|c,h_{wi}) \tag{2}\]
Table 2 shows the prompts used for probing Language Models in the Domain Labelling and Word Sense Disambiguation tasks.

Figure 2: Graphical description of the zero-shot WSD approach using Domain Inventories.
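As an illustration of Equation (1), the following sketch scores candidate domains with an off-the-shelf NLI model via the HuggingFace zero-shot classification pipeline; the context, candidate domains, and template (which mirrors the _word prompt_ of Table 2) are illustrative assumptions rather than the paper's exact setup.

```python
from transformers import pipeline

# Treat the entailment probability of each domain hypothesis as
# P(d_i | c, w), as in Equation (1). Model choice is illustrative.
nli = pipeline("zero-shot-classification", model="roberta-large-mnli")

context = "She deposited the check at the bank on Monday."
domains = ["Business, economics and finance",
           "Geography and places",
           "Geology and geophysics"]

# The hypothesis template follows the "word prompt" style of Table 2.
result = nli(context, candidate_labels=domains,
             hypothesis_template="{} is the domain of bank.")
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```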
## 3 Domain Inventories
A domain inventory is a set of domain labels, such as _Health and Medicine_, _Culture_ or _Business and economics_, that aims to cover as wide a spectrum of domains as possible at a specific granularity level. These domain inventories are used to label synsets from knowledge-bases like WordNet (Fellbaum, 1998) and BabelNet (Navigli and Ponzetto, 2012). Examples of WordNet synset annotations from different domain inventories are shown in Table 1. Recent studies (Lacerra et al., 2020) suggest using domain inventories to address the high-granularity problem that affects WSD tasks. In this section we describe the three domain inventories on which we evaluated the Language Models.
**BabelDomains** (Camacho-Collados and Navigli, 2017) is a unified resource that includes domain information for Wikipedia, WordNet and BabelNet. It inherits the domains from the Wikipedia domains of knowledge, a total of 34 coarse labels. Although it is semi-automatically annotated, two gold-standard datasets (for WordNet and Wikipedia) are provided for evaluation.
**Coarse Sense Inventory (CSI)** (Lacerra et al., 2020) was created to reduce the level of granularity of WordNet synsets while maintaining their expressiveness. It contains a total of 45 labels shared across the lexicon. Compared to previous alternatives, CSI provides a higher agreement among annotators. It has also already proven useful for the WSD task.
**WordNet Domains** (Bentivogli et al., 2004) is a fine-grained domain inventory containing about 160 labels. It is organised hierarchically, from global concepts such as _pure_science_ to specific concepts such as _oceanography_. This inventory provides a domain label for each synset in WordNet. Due to its hierarchical nature and fine granularity, in our experiments we kept only the domain labels down to the third level, mapping all labels below it to the closest available domain. We ended up with 60 domain labels.
## 4 Experimental Setup
In this section we describe the models we evaluated, and the Domain Labelling and Word Sense Disambiguation tasks we used for evaluation.
**Models.** For the experiments we decided to evaluate two very commonly used models: BERT and RoBERTa. We followed previous work on zero-shot domain labelling (Sainz and Rigau, 2021) for approach and model selection. As explained in Section 2, we required models that were already fine-tuned to perform sentence-pair classification. In the case of the BERT models, we used the LM itself with the NSP head trained during pre-training; in the tables it is shown as NSP. In the case of RoBERTa, as it has not been pre-trained on any sentence classification task, we evaluated two checkpoints that were also fine-tuned with TE data: NLI and NLI*. The main difference between the two checkpoints is the variety of data on which the models were trained. We evaluated the _large_ variant of these models. The NLI variant was trained only on the MultiNLI (Williams et al., 2018) dataset, while the NLI* variant was additionally trained on SNLI (Bowman et al., 2015), Fever-NLI (Thorne et al., 2018) and Adversarial-NLI (Nie et al., 2020). Both models are publicly available on the HuggingFace Model Hub (Wolf et al., 2020).
**Domain Labelling task** is the task of classifying some text \(t\) into a set of domain labels \(D\). In our case, the texts to classify are WordNet synset glosses and the domain labels are the ones defined by the domain inventories. The task was evaluated on a small manually annotated dataset released by Camacho-Collados and Navigli (2017). The dataset consists of domain annotations for 1540 WordNet synsets using the BabelDomains inventory. For those 1540 synsets we also collected the domain information from CSI and WordNet Domains. The 3 checkpoints described above were evaluated with each domain inventory. To evaluate the models on domain labelling data, we used the prompts described in Table 2 to convert domain labelling examples into NLI or NSP examples. The prompt is used to generate as many hypotheses as there are labels in the inventory, by replacing the _gloss_ placeholder with the synset's gloss and the _label_ placeholder with the corresponding label each time.

| Sense | BabelDomains | CSI | WN Domains | Gloss |
| --- | --- | --- | --- | --- |
| 00006484-n | Biology | Biology | biology | The basic structural and functional unit of all organisms;... |
| 02991048-n | Chemistry and mineralogy | Craft, Engineering and Technology | electronics | A device that delivers an electric current as the result of a chemical reaction. |
| 02992529-n | Computing | Craft, Engineering and Technology | electricity, telephony | A hand-held mobile radiotelephone for use in an area divided into small sections, each with its own short-range transmitter/receiver |

Table 1: Example of domain inventories for 3 senses of the word _cell_.
WordNet glosses sometimes contain domain information within them. For example, in the gloss shown in Figure 3 the domain information is highlighted in bold. We will call these domain _hints_. As we are using these glosses as inputs to predict the domains of the synsets, the hints give a huge advantage to the models. Therefore, for the evaluation we considered two alternatives: with and without hints.
**WSD task** is the task of identifying the correct sense \(s\) of a word \(w\) within a context \(c\) among all its possible senses \(s\in S^{w}\). In this case, and following recent work, we reframed the task from predicting senses to predicting a coarser set of labels (domains) (Lacerra et al., 2020). The task thus aims to classify the domain of the correct sense \(d_{s}\) among the domains of the possible senses \(D^{w}\). As senses in WordNet are very fine-grained, several senses of the same domain may coexist; after replacing them with their domain, the set of possible labels might be reduced, so \(|D^{w}|\leq|S^{w}|\). An example of two senses from the same domain is shown in Table 3. The task was evaluated on the standard, commonly known SemEval (Pradhan et al., 2007; Navigli et al., 2013; Moro and Navigli, 2015) and Senseval (Edmonds and Cotton, 2001; Snyder and Palmer, 2004) datasets. For each model, we also compared the two different prompts shown in Table 2: the first is the same as the one used for Domain Labelling and is used to predict the domain of the whole context; the second instead adds a reference to the target word, and is intended to focus the model on predicting the domain of the given word within the context. Finally, we report a random-guessing baseline and a supervised upper bound from Lacerra et al. (2020).
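The reduction from senses \(S^{w}\) to domains \(D^{w}\) can be sketched as follows; the offset-to-domain dictionary is a hypothetical stand-in for a real inventory file (CSI, BabelDomains, or WordNet Domains).

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

# Hypothetical inventory: WordNet offset-pos key -> domain label. In a real
# setup this mapping would be loaded from the inventory's annotation files.
DOMAIN_OF = {
    "00006484-n": "Biology",
    "02991048-n": "Craft, Engineering and Technology",
    "02992529-n": "Craft, Engineering and Technology",
}

def candidate_domains(lemma):
    """Map the senses S^w of a lemma to its candidate domains D^w.
    Duplicate domains collapse, so |D^w| <= |S^w|."""
    senses = wn.synsets(lemma)
    domains = {DOMAIN_OF.get(f"{s.offset():08d}-{s.pos()}", "factotum")
               for s in senses}
    return senses, domains

senses, domains = candidate_domains("cell")
print(f"{len(senses)} senses -> {len(domains)} candidate domains")
```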
## 5 Results
In this section we discuss the results obtained in each experiment. First we discuss the results obtained on the Domain Labelling task. Then, we show the results for Word Sense Disambiguation. Finally, we analyze the correlation between the two tasks, as they share the label space.
**Are Language Models able to discriminate domains in sense glosses?** Figure 4 shows the results obtained for the Domain Labelling task. As a general overview, the three models obtain decent results considering that no training data was provided. Comparing the NLI models with the NSP model, we can conclude that NLI-based models perform better in all cases, in concordance with previous work (Wang et al., 2021). However, additional TE data (NLI vs NLI*) does not seem to be very useful for the task. Finally, the results show that the domain hints in the glosses significantly affect performance, especially on WordNet Domains, where the labels are very fine-grained.
**Do Language Models know about Word Senses?** Figure 5 shows the results for each of the WSD datasets along with the random and supervised baselines. In general, the results suggest that **in fact the Language Models know about senses**. While still far from the supervised upper bound, the three models show significantly better performance than a random classifier. Moreover, on the SemEval-15 task the models achieve performance close to the upper bound. Comparing the NSP model against the NLI models, the same pattern as in the Domain Labelling task occurs: the NLI models are better in all scenarios. If we compare the two TE models, both perform similarly when the _sentence prompt_ is used; with the _word prompt_, the NLI model shows slightly better results. Overall, the best combination is the NLI model with the _word prompt_.

| Task | Prompt |
| --- | --- |
| Domain Labelling | \{gloss\} \| The domain of the sentence is about \{label\}. |
| Word Sense Disambiguation | \{context\} \| The domain of the sentence is about \{label\}. |
| | \{context\} \| \{label\} is the domain of \{word\}. |

Table 2: Prompts used for probing Language Models.

Figure 3: An example of a WordNet gloss. The hint in the gloss is highlighted.
We compared the F1-scores obtained on the Domain Labelling and WSD tasks. Figure 6 shows the per-domain F1 scores on the Domain Labelling and WSD tasks, where each point represents the F1 obtained on a specific label. In the figure, we include the F1 for both the _sentence prompt_ and _word prompt_ systems. The results show **very little correlation** between the two tasks. Table 4 shows the Spearman's correlation for each task pair. The results again show that both tasks are poorly correlated, even when we use the same prompt. However, this comparison might not be completely fair; there are 2 main reasons that could affect the results: the Domain Labelling glosses have a particular structure, different from WSD contexts, and in WSD the system needs to predict the correct label among the **possible** labels rather than over the whole label space as in Domain Labelling. We should take these differences into consideration when interpreting the results.
## 6 Related Work
**Word Sense Disambiguation.** Approaches to WSD range from supervised ones that only use annotated data (Agirre et al., 2014; Hadiwinoto et al., 2019; Bevilacqua and Navigli, 2019) to knowledge-based ones (Moro et al., 2014; Agirre et al., 2014; Scozzafava et al., 2020), as well as approaches that combine supervised and knowledge-based methods (Kumar et al., 2019; Bevilacqua and Navigli, 2020; Blevins and Zettlemoyer, 2020; Conia and Navigli, 2021; Barba et al., 2021).
Knowledge-based approaches employ graph algorithms on a semantic network Moro et al. (2014); Agirre et al. (2014); Scozzafava et al. (2020), in which senses are connected through semantic relations and are described with definitions and usage examples. Unfortunately, their independence from annotated data comes at the expense of performing worse than supervised models Pilehvar and Navigli (2014).
Supervised approaches frame the task as a classification problem and use the available annotated data to learn to map words in context to senses. Before supervised neural models emerged as the state of the art in NLP, supervised WSD was performed using a variety of lexico-syntactic and semantic feature representations fed to a supervised machine learning classifier (Zhong and Ng, 2010). Instead, current state-of-the-art supervised models rely on pretrained Transformers as the core architecture of the model. Among these models we can find approaches that exclusively use annotated data to learn effective representations of the target word in context and feed them to some classification head (Raganato et al., 2017; Hadiwinoto et al., 2019; Bevilacqua and Navigli, 2019; Conia and Navigli, 2021).
Some approaches have shown that an effective way to improve sense representations is to exploit the glosses provided by the sense inventories. Gloss representations are then incorporated into the sense embeddings (Peters et al., 2018), and the most probable sense is retrieved according to its similarity with the given context. Multiple such systems have been shown to be effective in WSD, such as LMMS (Loureiro and Jorge, 2019), SensEmBERT (Scarlini et al., 2020), ARES (Scarlini et al., 2020), SREF (Wang and Wang, 2020), EWISE (Kumar et al., 2019) and EWISER (Bevilacqua and Navigli, 2020), among many others. Glosses have also been exploited in sequence-tagging approaches (Huang et al., 2019; Yap et al., 2020), where the task is framed as a sequence classification problem (Barba et al., 2021). In a similar manner, Bevilacqua and Navigli (2020) propose a generative approach to cast WSD as a sequence classification problem. In addition to glosses, other approaches presented ways to make use of the knowledge encoded in KBs such as WordNet. For instance, [11, 12] propagate sense embeddings using WordNet as a graph. Please refer to [13] for further details on recent trends in WSD.

| | Dom Lab. | WSD\({}_{\text{sent}}\) | WSD\({}_{\text{word}}\) |
| --- | --- | --- | --- |
| Dom Lab. | 1.00 | 0.32 | 0.41 |
| WSD\({}_{\text{sent}}\) | 0.32 | 1.00 | 0.81 |
| WSD\({}_{\text{word}}\) | 0.41 | 0.81 | 1.00 |

Table 4: Spearman's correlation of F1-Scores between tasks using shared labels. The scores correspond to the NLI model.

Figure 6: F1 correlation between the Domain Labelling and WSD tasks.
**Prompting Language Models** has changed the paradigm of how Language Models are used, extracting even more potential from them. Initially with very large LMs like GPT-3 [12], and later with smaller ones [10], prompts have allowed models to perform zero- or few-shot classification with simple natural language. This ability has also allowed models to improve performance on data-scarce problems by a large margin [14, 15, 16]. These prompts can be discrete [10, 15, 16], close to natural language, or continuous [17], close to other efficient deep learning methods like Adapters [14]. Closer to our work, Textual Entailment [1] has been used as a source of external supervision to solve several text classification tasks [18, 19, 15, 16, 17, 18], Named Entity Recognition [14, 15, 16], Relation Extraction [17, 18], Event Extraction [16], Event Argument Extraction [15, 16], Intent Classification [15], Aspect-based Sentiment Analysis [19] and many more.
**Domain Inventories.** Domain information has been included in Princeton WordNet [12] since version 3.0. In total, 440 topics are represented as synsets in the graph. The topic label assignment was achieved through pointers from source synsets to target synsets, the most frequent topic being law, jurisprudence. However, the manual assignment of topic labels to synsets in WordNet is very costly. As a consequence, semi-automatic methods were developed. For instance, WordNet Domains [1] is a semi-automatically annotated domain inventory that labels WordNet synsets with 165 hierarchically organised domains. The use of domain inventories such as WordNet Domains made it possible to reduce the polysemy degree of WordNet synsets by grouping those that belong to the same domain [19]. However, far from being perfect, many synsets were labelled as factotum, meaning that the synset cannot be assigned to a particular domain. Several works were proposed to improve WordNet Domains, such as eXtended WordNet Domains [1, 16], which applied graph-based methods to propagate the labels through the WordNet structure.
Domain information is not only available in WordNet; for example, IATE1 is a European Union inter-institutional terminology database. The domain labels of IATE are based on the Eurovoc thesaurus2 and were introduced manually. More recently, several new domain inventories have appeared, such as BabelDomains [1] or the Coarse Sense Inventory [1].
Footnote 1: [http://iate.europa.eu/](http://iate.europa.eu/)
Footnote 2: [https://op.europa.eu/en/web/eu-vocabularies/th-dataset/-/resource/dataset/eurovoc](https://op.europa.eu/en/web/eu-vocabularies/th-dataset/-/resource/dataset/eurovoc)
## 7 Conclusions
In this work we presented an evaluation approach to test Language Models on the tasks of Domain Labelling and Word Sense Disambiguation without requiring annotated data. For the WSD task we followed Lacerra et al. (2020) to reduce the granularity level. Our results showed that the Language Models we tested **have some notion of word senses**. They easily outperformed the baseline, and sometimes came close to the performance of supervised systems. In addition, our further analysis showed that there is very little error propagation from Domain Labelling to WSD, as their errors are poorly correlated. In the future, we plan to evaluate larger Language Models on the task, to understand to what extent scaling these LMs affects sense recognition.
## Acknowledgments
Oscar is funded by a PhD grant from the Basque Government (PRE_2020_1_0246). This work is based upon work partially supported via the IARPA BETTER Program contract No. 2019-19051600006 (ODNI, IARPA), DeepKnowledge (PID2021-127777OB-C21) project funded by MCIN/AEI/10.13039/501100011033 and by FEDER Una manera de hacer Europa,
AWARE (TED2021-131617B-I00) project funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGeneration EU/ PRTR, and by the Basque Government (IXA excellence research group IT1570-22).
|
2308.00601 | On Simultaneous Symplectic Diagonalization in the sense of Williamson's
Theorem | Williamson's theorem is well known for symmetric matrices. In this paper, we
state and re-derive some of the cases of Williamson's theorem for symmetric
positive-semi definite matrices and symmetric matrices having negative index 1,
due to H\"ormander. We prove theorems that guarantee conditions under which two
symmetric positive-definite matrices can be simultaneously diagonalized in the
sense of Williamson's theorem and their corollaries. Finally, we provide an
application of this result to physical systems and another connecting the
decompositions for the degenerate and non-degenerate cases, involving phase
space constraints that we later apply to phase space cylinders and ellipsoids
via symplectic capacities. | Rudra Kamat | 2023-07-05T11:27:01Z | http://arxiv.org/abs/2308.00601v4 | # On Williamson's Symplectic Diagonalization in the Degenerate Case
###### Abstract
Williamson's normal form is well known for symmetric positive-definite matrices. In this paper, we consider an extension of Williamson's normal form for symmetric positive-semi definite matrices due to Hormander. We prove a theorem connecting the decompositions for the degenerate and non-degenerate cases, involving phase space constraints that we will later apply to phase space cylinders and ellipsoids.
## 1 Introduction
The phase space \(\mathbb{R}^{2n}_{z}\equiv\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{p}\) is equipped with the symplectic form
\[\sigma(z,z^{\prime})=p\cdot x^{\prime}-p^{\prime}\cdot x=(z^{\prime})^{T}\,\mathrm{J}\,z\]
where \(\mathrm{J}=\begin{bmatrix}0_{n\times n}&I_{n\times n}\\ -I_{n\times n}&0_{n\times n}\end{bmatrix}\) is the standard symplectic matrix.
We denote by \(\mathrm{Sp}(n)\) the symplectic group of the symplectic phase space \((\mathbb{R}^{2n}_{z},\sigma)\); it consists of all linear automorphisms \(\mathrm{S}\) of \(\mathbb{R}^{2n}_{z}\) such that \(\sigma(\mathrm{S}\,z,\mathrm{S}\,z^{\prime})=\sigma(z,z^{\prime})\) for all \(z,z^{\prime}\); equivalently \(\mathrm{S}^{T}\,\mathrm{JS}=\mathrm{J}\).
Let \(\mathrm{M}\) be a (symmetric) positive-definite \(2n\times 2n\) matrix; we will write for short \(\mathrm{M}>0\). Now consider the matrix \(\mathrm{M}^{1/2}\,\mathrm{JM}^{1/2}\); it is anti-symmetric because \(\mathrm{J}^{T}=-\,\mathrm{J}\) hence its eigenvalues are of the type \(\pm i\lambda_{1},\ldots,\pm i\lambda_{n}\) with \(\lambda_{j}>0\) for \(1\leq j\leq n\).
By definition, the symplectic eigenvalues of \(M\) are the positive numbers \(\lambda_{1},\ldots,\lambda_{n}\) (the \(\lambda_{j}\) are conventionally arranged in non-decreasing order \(\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{n}\)). The sequence \((\lambda_{1},\ldots,\lambda_{n})\) is called the symplectic spectrum of \(M\) (for a detailed study of its properties see [2]).
A well-known result originally due to Williamson [8] in 1936 but which has been rediscovered by many authors [2, 4] says that \(M\) can be diagonalized using a symplectic automorphism; more precisely: there exists \(S\in Sp(n)\) such that
\[M=S^{T}\,DS\ \,\ \ D=\begin{bmatrix}\Lambda&0\\ 0&\Lambda\end{bmatrix} \tag{1}\]
where \(\Lambda=\operatorname{diag}(\lambda_{1},\ldots,\lambda_{n})\) is the diagonal matrix whose diagonal entries are the symplectic eigenvalues of \(M\).
One should note that the diagonalizing symplectic matrix \(S\) is not unique; see Son _et al._[9] for a detailed analysis of the set of diagonalizing symplectic matrices.
The applications of Williamson's theorem are numerous, both in symplectic geometry and topology, and in mathematical physics (Hamiltonian and quantum mechanics); see for instance [4] and [2]. We mention that in a recent preprint [6] Kiran Kumar and Tonny give an interesting account of the developments of Williamson's result and its applications to operator theory.
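The definition above translates directly into a short numerical routine; the following sketch (ours, for illustration) reads the \(\lambda_{j}\) off the spectrum of \(\mathrm{M}^{1/2}\,\mathrm{J}\,\mathrm{M}^{1/2}\).

```python
import numpy as np
from scipy.linalg import sqrtm

def symplectic_spectrum(M):
    """Symplectic eigenvalues of a positive-definite 2n x 2n matrix M:
    the eigenvalues of M^{1/2} J M^{1/2} come in pairs +/- i*lambda_j."""
    n = M.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    root = sqrtm(M).real
    eig = np.linalg.eigvals(root @ J @ root)
    lam = np.sort(np.abs(eig.imag))
    return lam[::2]  # each lambda_j appears twice; keep one copy of each

# Example: [[5, 3], [3, 2]] has symplectic eigenvalue 1
# (this is example i) from the appendix of this paper).
print(symplectic_spectrum(np.array([[5.0, 3.0], [3.0, 2.0]])))  # ~[1.]
```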
One of the most important applications is the study of systems with quadratic Hamiltonians, such as linear oscillators or small oscillations around equilibrium points, because it helps in obtaining normal modes and simplifies the analysis of the dynamics. In addition, using the theory of the metaplectic group, it allows the study of the quantum counterpart by permitting explicit calculation of the solutions of Schrödinger's equation for systems whose classical counterpart has a quadratic Hamiltonian. This has immediate applications in the field of quantum optics, because it allows a detailed study of Gaussian beams and their propagation, permitting the decomposition of a multi-mode optical field into independent modes.
Suppose for instance the Hamiltonian \(H\) is defined by
\[H(z)=\frac{1}{2}\,M\,z\cdot z\ \,\ \ M>0;\]
the corresponding Hamilton equations \(\dot{z}=J\,\nabla_{z}H\) are the linear system \(\dot{z}=JM\,z\) whose solution is simply \(z(t)=e^{t\,JM}z(0)\); due to the fact that \(JM\in\mathfrak{sp}(n)\) (the symplectic Lie algebra) we have \(S_{t}=e^{t\,JM}\in Sp(n)\) and the symplectic automorphisms \(S_{t}\) are explicitly calculated by considering the Williamson diagonalization \(M=S^{T}\,DS\) which yields, since \(SJ\,S^{T}=J\),
\[S_{t}=e^{t\,JM}=e^{t\,JS^{T}\,DS}=S^{-1}\,e^{t\,JD}\,S\,.\]
This reduces the study of the Hamiltonian system \(\dot{z}=J\,\nabla_{z}H\) to that when \(H\) has the simple form \(H=\Lambda x\cdot x+\Lambda p\cdot p\) that is in coordinates \(z_{j}=(x_{j},p_{j})\),
\[H(x,p)=\sum_{j=1}^{n}\lambda_{j}(x_{j}^{2}+p_{j}^{2})\quad,\quad\lambda_{j}>0.\]
The solutions are well-known in this case (\(H\) is a sum of harmonic oscillators). So far, so good.
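For a one-degree-of-freedom example the computation can be checked numerically; in the sketch below we take \(M=\omega I\) (so \(S=I\) and \(D=M\)), for which \(e^{t\,JM}\) should be a rotation by the angle \(\omega t\); the values of \(\omega\) and \(t\) are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

w, t = 2.0, 0.7  # arbitrary frequency and time
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
M = w * np.eye(2)  # H = (w/2)(x^2 + p^2): a harmonic oscillator

St = expm(t * J @ M)  # the Hamiltonian flow e^{tJM}
rot = np.array([[np.cos(w * t), np.sin(w * t)],
                [-np.sin(w * t), np.cos(w * t)]])
print(np.allclose(St, rot))  # True: the flow is a rotation by w*t
```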
Note that when the quadratic Hamiltonian \(H(z)=\frac{1}{2}\,M\,z\cdot z\) is positive-semi definite, i.e., when we have \(M=M^{T}\geq 0\) but \(\det M=0\), we can no longer apply Williamson's theorem since \(M\) no longer has a simple diagonal form.
It turns out that Hormander [5] (Thm. 21.5.3, p. 324) states the following result:
There exists \(\mathrm{S}\in\mathrm{Sp}(n)\) and integers \(k\) and \(m\) with \(1\leq k\leq k+m\leq n\) such that
\[(H\circ\mathrm{S})(x,p)=\sum_{j=1}^{k}\lambda_{j}(x_{j}^{2}+p_{j}^{2})+\sum_{j =k+1}^{k+m}x_{j}^{2} \tag{2}\]
where \(\lambda_{j}>0\) for \(1\leq j\leq k\). Note that replacing \(\mathrm{S}\) with \(\mathrm{S}^{\prime}=\mathrm{S}\mathrm{J}\) we could as well write
\[(H\circ\mathrm{S}^{\prime})(x,p)=\sum_{j=1}^{k}\lambda_{j}(x_{j}^{2}+p_{j}^{2} )+\sum_{j=k+1}^{k+m}p_{j}^{2}. \tag{3}\]
Hormander's approach is however rather abstract and does not indicate a general rule for the computation of the scalars \(\lambda_{j}\). In [7, 9] Son _et al._ have recently given a detailed argument of how to determine these quantities (see also, in this context, Kiran Kumar and Tonny [6], who give useful bounds for the symplectic spectrum).
One particular case of Hormander's result is stated in [7]. We will restate and prove it later in the appendix.
Before we proceed to the results, let us state a standard, well-known result about simultaneous diagonalization of two real symmetric matrices that we will allude to for the symplectic case.
To be able to do that, we need to first define what simultaneous diagonalizability means to us.
**Definition 1.1**: _Two real symmetric \(2n\times 2n\) matrices \(\mathrm{A}\) and \(\mathrm{B}\) are said to be simultaneously diagonalizable or compatible if there exists \(\mathrm{T}\in\mathrm{O}(2n)\) such that both \(\mathrm{T}^{T}\,\mathrm{A}\,\mathrm{T}\) and \(\mathrm{T}^{T}\,\mathrm{B}\,\mathrm{T}\) are diagonal._
_Note that \(\mathrm{T}\) is unique up to the composition \(\mathrm{T}^{\prime}=\mathrm{T}\,\mathrm{H}\), where \(\mathrm{H}\in\mathrm{O}(2n)\)._
Armed with this definition, we can proceed to state the condition for the simultaneous diagonalization of real symmetric matrices in the form of the following lemma [11] (Thm. 1.3.21, p. 64):
**Lemma 1.2** (Compatibility Theorem): _Two real symmetric \(2n\times 2n\) matrices \(\mathrm{A}\) and \(\mathrm{B}\) are simultaneously diagonalizable iff they satisfy \([\mathrm{A},\mathrm{B}]=\mathrm{A}\,\mathrm{B}-\mathrm{B}\,\mathrm{A}=0\)._
## 2 Results
We will now try to formulate a similar condition for the symplectic decomposition guaranteed of real symmetric positive-semi definite matrices due to Hormander.
As before, we need to specify what simultaneous symplectic compatibility means to us before proceeding.
**Definition 2.1**: _Two real symmetric \(2n\times 2n\) positive-semi definite matrices \(\mathrm{A}\) and \(\mathrm{B}\) are said to be symplectically compatible if there exists \(\mathrm{S}\in\mathrm{Sp}(n)\) such that both \(\mathrm{S}^{T}\,\mathrm{A}\,\mathrm{S}\) and \(\mathrm{S}^{T}\,\mathrm{B}\,\mathrm{S}\)_
are diagonal. Note that \(\mathrm{S}\) is unique up to the composition \(\mathrm{S}^{\prime}=\mathrm{S}\,\mathrm{H}\), where \(\mathrm{H}\in\mathrm{U}(n)\)._
Let us now state and prove a theorem analogous to Lemma 1.2.
**Theorem 2.2** (Symplectic Compatibility Theorem): _Two real symmetric \(2n\times 2n\) positive-semi definite matrices, \(\mathrm{A}\) and \(\mathrm{B}\), are symplectically compatible iff they satisfy the symplectic commutator \([\mathrm{A},\mathrm{B}]_{J}=\mathrm{A}\,\mathrm{J}\,\mathrm{B}-\mathrm{B}\, \mathrm{J}\,\mathrm{A}=0\)._
**Proof.**\(\Leftarrow\) Given two real symmetric \(2n\times 2n\) matrices \(\mathrm{A}\geq 0\) and \(\mathrm{B}\geq 0\).
By Hormander, we are guaranteed two \(\mathrm{P},\mathrm{Q}\in\mathrm{Sp}(n)\) such that
\(\mathrm{P}^{T}\,\mathrm{A}\,\mathrm{P}=\mathrm{D}_{A}\) and \(\mathrm{Q}^{T}\,\mathrm{B}\,\mathrm{Q}=\mathrm{D}_{B}\) where \(\mathrm{D}_{A},\mathrm{D}_{B}\) are \(2n\times 2n\) diagonal matrices.
Using \([\mathrm{A},\mathrm{B}]_{J}=0\) and the fact that diagonal matrices \(\mathrm{D}_{A},\mathrm{D}_{B}\) commute with all of \(\mathrm{GL}(2n)\), we get
\[\mathrm{P}^{\text{-}1T}\,\mathrm{P}^{T}\,\mathrm{A}\,\mathrm{P} \,\mathrm{P}^{-1}\,\mathrm{J}\,\mathrm{Q}^{\text{-}1T}\,\mathrm{Q}^{T}\, \mathrm{B}\,\mathrm{Q}\,\mathrm{Q}^{-1} =\mathrm{Q}^{\text{-}1T}\,\mathrm{Q}^{T}\,\mathrm{B}\,\mathrm{Q} \,\mathrm{Q}^{-1}\,\mathrm{J}\,\mathrm{P}^{\text{-}1T}\,\mathrm{P}^{T}\, \mathrm{A}\,\mathrm{P}\,\mathrm{P}^{-1}\] \[\mathrm{P}^{\text{-}1T}\,\mathrm{D}_{A}\,\mathrm{P}^{-1}\, \mathrm{J}\,\mathrm{Q}^{\text{-}1T}\,\mathrm{D}_{B}\,\mathrm{Q}^{-1} =\mathrm{Q}^{\text{-}1T}\,\mathrm{D}_{B}\,\mathrm{Q}^{-1}\, \mathrm{J}\,\mathrm{P}^{\text{-}1T}\,\mathrm{D}_{A}\,\mathrm{P}^{-1}\] \[\mathrm{Q}\,\mathrm{Q}^{T}\,\mathrm{P}^{\text{-}1T}\,\mathrm{P}^{- 1}\,\mathrm{D}_{A}\,\mathrm{J}\,\mathrm{D}_{B}\,\mathrm{Q}^{\text{-}1T}\, \mathrm{Q}^{-1}\,\mathrm{P}\,\mathrm{P}^{T} =\mathrm{D}_{A}\,\mathrm{J}\,\mathrm{D}_{B}\] \[(\mathrm{Q}\,\mathrm{Q}^{T}\,\mathrm{P}^{\text{-}1T}\,\mathrm{P} ^{-1})^{2} =I\]
Let \(\mathrm{Y}\) denote all the square roots of \(\mathrm{I}\). \(\mathrm{Y}\) is a family of diagonal matrices with \(\{-1,1\}\) on the diagonal.
Since the determinants should match, there are an even number of \(-1\) entries in all members of \(\mathrm{Y}\).
\[\mathrm{Q}\,\mathrm{Q}^{T}\,\mathrm{P}^{\text{-}1T}\,\mathrm{P}^{- 1} =\mathrm{Y}\] \[\mathrm{Q}\,\mathrm{Q}^{T} =\mathrm{Y}\,\mathrm{P}\,\mathrm{P}^{T}\]
Now we will use the unique polar decomposition that exists for a symplectic matrix.
Let \(\mathrm{U}_{Q},\mathrm{U}_{P}\) be the unitary parts of \(\mathrm{Q}\) and \(\mathrm{P}\) respectively and
\(\mathrm{R}_{Q}=(\mathrm{Q}\,\mathrm{Q}^{T})^{1/2}\), \(\mathrm{R}_{P}=(\mathrm{P}\,\mathrm{P}^{T})^{1/2}\) to be the corresponding magnitudes.
Let \(\mathrm{L}\) denote all the possible square roots of \(\mathrm{Y}\); it will be a family of diagonal matrices with \(\{-1,1,-i,i\}\) on the diagonal such that the determinants match.
Then, based on the equation we got, we can see that
\[\mathrm{R}_{Q} =\mathrm{L}\,\mathrm{R}_{P}\] \[\mathrm{Q}\,\mathrm{U}_{Q}^{-1} =\mathrm{L}\,\mathrm{P}\,\mathrm{U}_{P}^{-1}\] \[\mathrm{Q}^{-1}\,\mathrm{P} =\mathrm{V}\]
for some \(\mathrm{V}\in\mathrm{U}(n)\) since \(\mathrm{L}\in\mathrm{U}(n)\).
This makes sense since \(\mathrm{Sp}(n)\cap\mathrm{O}(2n)\cong\mathrm{U}(n)\).
Moreover, this fits our definition of symplectically compatible since \(\mathrm{Q}\) can simply be changed
to \(\mathrm{Q}\,\mathrm{V}\), keeping the symplectic decomposition (as guaranteed by Hormander) invariant.
\(\Rightarrow\) Now assume that two real symmetric \(2n\times 2n\) matrices \(\mathrm{A}\geq 0\) and \(\mathrm{B}\geq 0\) are symplectically compatible. Then there exists \(\mathrm{S}\in\mathrm{Sp}(n)\), in accordance with our definition, such that \(\mathrm{S}^{T}\,\mathrm{A}\,\mathrm{S}=\mathrm{D}_{A}=\mathrm{diag}(\Lambda_{A},\Lambda_{A})\) and \(\mathrm{S}^{T}\,\mathrm{B}\,\mathrm{S}=\mathrm{D}_{B}=\mathrm{diag}(\Lambda_{B},\Lambda_{B})\).
It is then easy to see that
\[\mathrm{A}\,\mathrm{J}\,\mathrm{B} =\mathrm{S}^{\text{-}1T}\,\mathrm{S}^{T}\,\mathrm{A}\,\mathrm{S} \,\mathrm{S}^{-1}\,\mathrm{J}\,\mathrm{S}^{-1T}\,\mathrm{S}^{T}\,\mathrm{B}\, \mathrm{S}\,\mathrm{S}^{-1}=\mathrm{S}^{\text{-}1T}\,\mathrm{D}_{A}\,\mathrm{ J}\,\mathrm{D}_{B}\,\mathrm{S}^{-1}\] \[=\mathrm{S}^{\text{-}1T}\,\mathrm{D}_{B}\,\mathrm{J}\,\mathrm{D}_ {A}\,\mathrm{S}^{-1}=\mathrm{B}\,\mathrm{J}\,\mathrm{A}\]
This concludes the proof.
We can write the symplectic commutator in terms of the regular commutator:
\[[\mathrm{A},\mathrm{B}]_{J}=[\mathrm{A},\mathrm{J}\,\mathrm{B}]+[\mathrm{J}, \mathrm{B}]\,\mathrm{A}\]
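This identity is easy to verify numerically; the following sketch checks it for random symmetric positive-semi definite matrices (the dimension and seed are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

def comm(X, Y):
    return X @ Y - Y @ X

G = rng.standard_normal((2 * n, 2 * n))
A = G @ G.T                                   # random symmetric PSD matrix
G = rng.standard_normal((2 * n, 2 * n))
B = G @ G.T

lhs = A @ J @ B - B @ J @ A                   # [A, B]_J
rhs = comm(A, J @ B) + comm(J, B) @ A         # [A, JB] + [J, B] A
print(np.allclose(lhs, rhs))                  # True
```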
**Proposition 2.3**: _There is an algebra isomorphism between symplectic commutators and regular commutator algebras._
**Proof.** We simply express
\[[\mathrm{A},\mathrm{B}]_{J} =\begin{bmatrix}A&B\end{bmatrix}\begin{bmatrix}0&\mathrm{J}\\ -\mathrm{J}&0\end{bmatrix}\begin{bmatrix}A\\ B\end{bmatrix}\] \[=\begin{bmatrix}A&B\end{bmatrix}\begin{bmatrix}0&\mathrm{I}\\ -\mathrm{I}&0\end{bmatrix}\begin{bmatrix}A\\ B\end{bmatrix}\]
Both the companion block matrices in the above expressions are non-degenerate and we can isomorphically map one to the other.
This naturally results in the following corollaries
**Corollary 2.4**: _The symplectic commutators satisfy the Jacobi identity_
\[[\mathrm{A},[\mathrm{B},\mathrm{C}]_{J}]_{J}+[\mathrm{B},[\mathrm{C},\mathrm{ A}]_{J}]_{J}+[\mathrm{C},[\mathrm{A},\mathrm{B}]_{J}]_{J}=0\]
**Corollary 2.5**: _Two commuting real symmetric positive-semi definite matrices are compatible as well as symplectically compatible._
We will now try to establish a geometric connection between the degenerate (symmetric positive-semi definite) and non-degenerate (symmetric positive-definite) cases in the context of symplectic diagonalization, from the perspective of phase space constraints.
### Symplectic diagonalization and phase space constraints
Let M be a symmetric positive-semi definite automorphism on \(\mathbb{R}^{2n}\).
Consider the Hamiltonian \(H=\tilde{z}^{T}\operatorname{M}\tilde{z}\) for \(\tilde{z}\in\mathbb{R}^{2n}\).
Due to Hormander, there exists \(\operatorname{S}\in\operatorname{Sp}(n)\) and integers \(1\leq k\leq k+m\leq n\) such that
\[H=z^{T}\operatorname{S}^{T}\operatorname{M}\operatorname{S}z=\sum_{i=1}^{k} \lambda_{i}(x_{i}^{2}+p_{i}^{2})+\sum_{i=k+1}^{k+m}x_{i}^{2}\]
where \(z=\operatorname{S}^{-1}\tilde{z}\).
Let \(\tilde{\operatorname{M}}\) be a symmetric positive-definite automorphism on \(\mathbb{R}^{2n}\).
Consider the Hamiltonian \(\tilde{H}=\tilde{z}^{T}\operatorname{\tilde{M}}\tilde{z}\) for \(\tilde{z}\in\mathbb{R}^{2n}\).
Define the matrix \(\tilde{\operatorname{M}}\) such that it is symplectically compatible with M with symplectic eigenvalues \(\{\tilde{\lambda}_{j}\}\) for \(1\leq j\leq n\).
More explicitly,
\[\tilde{H}=z^{T}\operatorname{S}^{T}\operatorname{\tilde{M}}\operatorname{S}z= \sum_{i=1}^{n}\tilde{\lambda}_{i}(x_{i}^{2}+p_{i}^{2})\]
satisfying
\[\tilde{\lambda}_{i} =\lambda_{i};\ \ 1\leq i\leq k\] \[\tilde{\lambda}_{j} =1;\ \ (k+1)\leq j\leq(k+m)\]
Comparing both the decompositions, we can define the following constraints:
\[x_{j} =0;\ \ (k+m+1)\leq j\leq n\] \[p_{i} =0;\ \ (k+1)\leq i\leq n\]
We can collectively denote these by \(\chi_{j}=0\) for \(1\leq j\leq 2(n-k)+m\).
We shall call \(\chi_{j}\) the Hormander constraints.
Let \(\Gamma\) be the Hormander constraint surface generated from \(\chi_{j}=0\), then
\[z^{T}\operatorname{S}^{T}\operatorname{\tilde{M}}\operatorname{S }z|_{\Gamma} =z^{T}\operatorname{S}^{T}\operatorname{M}\operatorname{S}z\] \[\Rightarrow\tilde{H}|_{\Gamma} =H\]
By virtue of Lagrange multipliers,
\[\tilde{H}=H+C_{j}\chi_{j}\]
for \(C_{j}\in\mathbb{R}\)
Since \(\mathrm{S}^{-1}\,\tilde{z}=z=(x_{1},\ldots,x_{n},p_{1},\ldots,p_{n})\)
\(=(x_{1},\ldots,x_{(k+m)},\chi_{1},\ldots,\chi_{(n-k-m)},p_{1},\ldots,p_{k},\chi _{(n-k-m+1)},\ldots,\chi_{(2n-2k+m)})\) is a symplectic basis,
the set of Hormander constraints \(\{\chi_{j}\}\) is a set of second-class constraints, using the Dirac-Bergmann terminology (See [10]).
Furthermore, \(C_{j}\) can be determined uniquely.
Define \(W_{ij}=\{\chi_{i},\chi_{j}\}\) where \(\{,\}\) are Poisson brackets.
Then, \(C_{j}=W_{ji}^{-1}\{\chi_{i},H\}\)[10]
We can now summarize the above arguments into the following
**Theorem 2.6**: _Let \(\mathrm{M}\) and \(\tilde{\mathrm{M}}\) be two symplectically compatible automorphisms on \(\mathbb{R}^{2n}\) (representing the quadratic forms of two Hamiltonians) that are symmetric positive-semi definite and symmetric positive-definite respectively that decompose, for \(\mathrm{S}\in\mathrm{Sp}(n)\), into_
\[H\circ\mathrm{S}=z^{T}\,\mathrm{S}^{T}\,\mathrm{M}\,\mathrm{S}\,z=\sum_{i=1}^{ k}\lambda_{i}(x_{i}^{2}+p_{i}^{2})+\sum_{i=k+1}^{k+m}x_{i}^{2}\]
_for \(1\leq k\leq k+m\leq n\) and_
\[\tilde{H}\circ\mathrm{S}=z^{T}\,\mathrm{S}^{T}\,\tilde{\mathrm{M}}\,\mathrm{S }\,z=\sum_{i=1}^{n}\tilde{\lambda}_{i}(x_{i}^{2}+p_{i}^{2})\]
_with \(\lambda_{i}=\tilde{\lambda}_{i}\) for \(1\leq i\leq k\) and \(\tilde{\lambda}_{j}=1\) for \((k+1)\leq j\leq(k+m)\)_
_Then, the Hamiltonian \(H\) can be extended off the Hormander constraint surface \(\Gamma\), in phase space, such that \(\tilde{H}=H+C_{j}\chi_{j}\) where \(\chi_{j}\) are the Hormander constraints and \(C_{j}\in\mathbb{R}\) is a uniquely determined scalar._
Before we try to apply Theorem 2.6 to phase space cylinders and ellipsoids (as promised), it will be in our favour to state the definition of symplectic capacity, listing the properties that will naturally prove useful to us.
**Definition 2.7** (Symplectic Capacity): _A symplectic capacity on \((\mathbb{R}^{2n}_{z},\sigma)\) is a mapping which, to every subset \(\Omega\) of \(\mathbb{R}^{2n}_{z}\), associates a number \(c(\Omega)\geq 0\), or \(\infty\), having the following properties: [2, 4, 3]_
* \(c(\Omega)\leq c(\Omega^{\prime})\) _if_ \(\Omega\subset\Omega^{\prime}\)__
* \(c(f(\Omega))=c(\Omega)\) _for every symplectomorphism_ \(f\)__
* \(c(\lambda\Omega)=\lambda^{2}c(\Omega)\) _for every_ \(\lambda\in\mathbb{R}\)__
* \(c(B(R))=c(Z_{j}(R))=\pi R^{2}\) _where_ \(B(R):\sum_{i=1}^{n}(x_{i}^{2}+p_{i}^{2})\leq R^{2}\) _and_ \(Z_{j}(R):x_{j}^{2}+p_{j}^{2}\leq R^{2}\) _are the phase space ball and cylinder of radius_ \(R\)_, respectively._
Let us try to define phase space cylinders and ellipsoids in the context of real symmetric positive-definite and semi definite matrices.
**Definition 2.8**: _A phase space cylinder of radius \(R\), \(Z_{j}(R)\), can be described as being a diagonal real symmetric matrix \(\mathrm{M}\geq 0\) whose kernel is a symplectic space with dimension \(2n-2\) such that the only non-zero entries are at \(j,n+j\), both equal to \(R^{-2}\) such that_
\[Z_{j}(R):z^{T}\,\mathrm{M}\,z\leq 1\Rightarrow x_{j}^{2}+p_{j}^{2}\leq R^{2}\]
**Definition 2.9**: _A phase space ball of radius \(R\), \(B(R)\), can be described as being a diagonal real symmetric matrix \(\tilde{\mathrm{M}}>0\), all of whose entries are equal to \(R^{-2}\) such that_
\[B(R):z^{T}\,\tilde{\mathrm{M}}\,z\leq 1\Rightarrow\sum_{i=1}^{n}(x_{i}^{2}+p_{i }^{2})\leq R^{2}\]
Since \(\mathrm{M}\) and \(\tilde{\mathrm{M}}\) commute, they are symplectically compatible, and from Theorem 2.6 it follows that
\[z^{T}\,\tilde{\mathrm{M}}\,z=z^{T}\,\mathrm{M}\,z+C_{j}\chi_{j}\]
In other words, \(B(R)\) is the extension of \(Z_{j}(R)\) off the appropriate Hormander constraint surface.
Now from \(z^{T}\,\tilde{\mathrm{M}}\,z\leq 1\), we get \(B(R)\) (from definition 2.9) and from \(z^{T}\,\mathrm{M}\,z+C_{j}\chi_{j}\leq 1\), we get \(Z_{j}(R\sqrt{|1-C_{j}\chi_{j}|})\) (from definition 2.8).
If we need \(B(R)\subset Z_{j}(R\sqrt{|1-C_{j}\chi_{j}|})\), then \(C_{j}\chi_{j}\geq 2\), from the first property of symplectic capacities (given some \(c\) on \(\mathbb{R}_{z}^{2n}\)).
We may call the saturation of the inequality: \(C_{j}\chi_{j}=2\), the Gromov hypersurface.
## 3 Conclusion
Hormander's symplectic diagonalization for symmetric positive-semi definite matrices has many potential applications in quantum information and computing, where covariance matrices play an important role, as an extension of the widely used Williamson normal form.
The uses of Theorem 2.6 and its corollary still require further investigation, since they may be repurposed as lemmas for newer results in phase space topology and uncertainty relations.
## 4 Appendix
### Special case of Hormander symplectic diagonalization
We state and prove a particular case of Hormander's result [7]
**Corollary 4.1**: _Let \(\mathrm{M}\) be a real symmetric positive-semi definite \(2n\times 2n\) matrix (shorthand notation \(\mathrm{M}\geq 0\)) whose kernel is a symplectic subspace of \(\mathbb{R}_{z}^{2n}\) of dimension \(2(n-k)\)._
_Then there exists a matrix \(S\in\mathrm{Sp}(n)\) such that \(\mathrm{S}^{T}\,\mathrm{M}\,\mathrm{S}=\mathrm{diag}(\Lambda,\Lambda)\) with \(\Lambda=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{n})\) being a diagonal matrix such that \(0<\lambda_{1}\leq\cdots\leq\lambda_{k}\) and \(\lambda_{(k+1)}=\cdots=\lambda_{n}=0\)._
_More explicitly,_
\[\mathrm{S}(x,p)^{T}\circ\mathrm{M}\circ\mathrm{S}(x,p)=\sum_{j=1}^{k}\lambda_ {j}(x_{j}^{2}+p_{j}^{2})\]
**Proof.** Let \(\mathrm{N}=\ker\mathrm{M}\) with dimension \(2m=2(n-k)\).
Since \(\mathrm{N}\) is a symplectic subspace, it admits a symplectic basis \(\{\vec{q}_{i},\vec{q}_{(m+i)}\}_{1\leq i\leq(n-k)}\).
Let \(\mathrm{B}=\mathbb{R}^{2n}/\,\mathrm{N}\); we can always find a basis for the \(2k\)-dimensional space \(\mathrm{B}\), call it \(\{\vec{b}_{i}\}_{1\leq i\leq 2k}\).
Consider the matrix,
\[\mathrm{W}=\begin{bmatrix}|&&|&&|&&|&&|\\ \vec{b}_{1}&\vec{b}_{2}&\cdots&\vec{b}_{2k}&\vec{q}_{1}&\vec{q}_{2}&\cdots&\vec {q}_{2(n-k)}\\ |&|&&|&&|&&|\\ \end{bmatrix}\]
Note that \(\vec{q}_{i}\,^{T}\,\mathrm{M}\,\vec{q}_{j}=0\) since \(\vec{q}_{i},\vec{q}_{j}\in\mathrm{N}=\ker\mathrm{M}\), and consequently \(\vec{q}_{i}\,^{T}\,\mathrm{M}\,\vec{b}_{j}=0\) as well, while the \(2k\times 2k\) matrix \(\tilde{\mathrm{M}}\) with entries \(\vec{b}_{i}\,^{T}\,\mathrm{M}\,\vec{b}_{j}\) is positive-definite.
Consequently,
\[\mathrm{W}^{T}\,\mathrm{M}\,\mathrm{W}=\begin{bmatrix}\tilde{\mathrm{M}}_{2k \times 2k}&\\ &0_{2m\times 2m}\end{bmatrix}\]
Note that the \(2k\times 2k\) sub-matrix \(\tilde{M}\) is real symmetric positive-definite.
Therefore, by Williamson's theorem, there exists \(\tilde{\mathrm{S}}\in\mathrm{Sp}(k)\) such that
\[\tilde{\mathrm{S}}^{T}\,\tilde{\mathrm{M}}\,\tilde{\mathrm{S}}=\begin{bmatrix} \tilde{\Lambda}&0\\ 0&\tilde{\Lambda}\end{bmatrix}\]
where \(\tilde{\Lambda}=\mathrm{diag}(\tilde{\lambda}_{1},\ldots,\tilde{\lambda}_{k})\) such that \(\pm i\tilde{\lambda}_{j}\) are the eigenvalues of \(\mathrm{J}_{k\times k}\,\tilde{\mathrm{M}}\)
The columns of \(\tilde{\mathrm{S}}\) are the symplectic eigenvectors; let us arrange them so that \(0<\tilde{\lambda}_{1}\leq\cdots\leq\tilde{\lambda}_{k}\), and label them \(\{\vec{p}_{i},\vec{p}_{(k+i)}\}_{1\leq i\leq k}\) (this is a basis for \(\mathrm{B}\)).
Now consider,
\[S=\begin{bmatrix}|&&|&|&|&|&|&|&|\\ \vec{s}_{1}&\cdots&\vec{s}_{k}&\vec{q}_{1}&\cdots&\vec{q}_{(n-k)}&\vec{s}_{(k+ 1)}&\cdots&\vec{s}_{2k}&\vec{q}_{(n-k+1)}&\cdots&\vec{q}_{2(n-k)}\\ |&&|&|&|&|&|&|&|&|\\ \end{bmatrix}\]
where \(\vec{s_{i}}=\begin{bmatrix}|\\ \vec{p}_{i}\\ |\\ 0_{m\times 1}\end{bmatrix}\)
Note that \(\vec{q}_{i}\,^{T}\,\mathrm{M}\,\vec{s}_{j}=0\) and \(\vec{s}_{i}\,^{T}\,\mathrm{M}\,\vec{s}_{j}=\tilde{\lambda}_{i}\delta_{ij}\), as guaranteed by Williamson's theorem.
Finally,
\[\mathrm{S}^{T}\,\mathrm{M}\,\mathrm{S}=\begin{bmatrix}\Lambda&0\\ 0&\Lambda\end{bmatrix}\]
where \(\Lambda=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{n})\) with \(0<\lambda_{1}=\tilde{\lambda}_{1}\leq\cdots\leq\lambda_{k}=\tilde{\lambda}_{k}\) and \(\lambda_{(k+1)}=\cdots=\lambda_{n}=0\)
Since \(\tilde{\mathrm{S}}\) is a symplectic matrix, its columns form a symplectic basis such that \(\vec{s}_{i}\,^{T}\,\mathrm{J}\,\vec{s}_{j}=\delta_{j,k+i}\).
Following the proof of Williamson's theorem in [2], we see that \(\vec{s}_{i}=\tilde{\lambda}_{i}^{-1/2}\vec{s^{\prime}}_{i}\)
where \(\{\vec{s^{\prime}}_{j}\pm i\vec{s^{\prime}}_{(k+j)}\}_{1\leq j\leq k}\) are eigenvectors of \(\mathrm{K}=-\tilde{\mathrm{M}}^{-1}\,\mathrm{J}\) and \(\mathrm{K}^{-1}=\mathrm{J}\,\tilde{\mathrm{M}}\)
such that \(\mathrm{K}^{-1}\,\vec{s^{\prime}}_{j}=\tilde{\lambda}^{-1}\vec{s^{\prime}}_{( k+j)}\) and \(\mathrm{K}^{-1}\,\vec{s^{\prime}}_{(k+j)}=-\tilde{\lambda}^{-1}\vec{s^{\prime}}_ {j}\) for \(1\leq j\leq k\).
Therefore,
\[\vec{q}_{i}\,^{T}\,\mathrm{J}\,\vec{s}_{j}=\tilde{\lambda}^{1/2}\vec{q}_{i}\,^{T}\,\mathrm{J}^{2}\,\mathrm{M}\,\vec{s}_{(k+j)}=-\tilde{\lambda}^{1/2}\vec{q}_{i}\,^{T}\,\mathrm{M}\,\vec{s}_{(k+j)}=0.\]
Similarly, \(\vec{q}_{i}\,^{T}J\vec{s}_{(k+j)}=0\)
\[\Rightarrow\mathrm{S}^{T}\,\mathrm{J}\,\mathrm{S}=\begin{bmatrix}0_{n\times n }&\mathrm{I}_{n\times n}\\ -\,\mathrm{I}_{n\times n}&0_{n\times n}\end{bmatrix}=\mathrm{J}\]
This shows that \(\mathrm{S}\in\mathrm{Sp}(n)\).
And finally,
\[\begin{bmatrix}x_{1}&\cdots&x_{n}&p_{1}&\cdots&p_{n}\end{bmatrix}\mathrm{S}^{ T}\circ\mathrm{M}\,\mathrm{S}\begin{bmatrix}x_{1}\\ \vdots\\ x_{n}\\ \vdots\\ p_{n}\end{bmatrix}=\sum_{j=1}^{k}\lambda_{j}(x_{j}^{2}+p_{j}^{2})\]
### Examples
Here, we shall give examples of symplectic eigenvalue computation:
i) \(\mathrm{Q}=\begin{bmatrix}5&3\\ 3&2\end{bmatrix}\)
\(\mathrm{JQ}=\begin{bmatrix}3&2\\ -5&-3\end{bmatrix}\) has eigenvalues \(\pm i\).
Therefore, the symplectic eigenvalue is 1.
ii) \(\mathrm{Q}=\begin{bmatrix}6&0&0&0\\ 0&3&0&0\\ 0&0&3&-1\\ 0&0&-1&1\end{bmatrix}\)
\(\mathrm{JQ}=\begin{bmatrix}0&0&3&-1\\ 0&0&-1&1\\ -6&0&0&0\\ 0&-3&0&0\end{bmatrix}\) has eigenvalues \(\pm i\sqrt{\frac{3}{2}(7+\sqrt{33})}\), \(\pm i\sqrt{\frac{3}{2}(7-\sqrt{33})}\).
Therefore, the symplectic eigenvalues are \(\sqrt{\frac{3}{2}(7+\sqrt{33})},\sqrt{\frac{3}{2}(7-\sqrt{33})}\).
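As a numerical cross-check (ours), the following snippet recomputes the symplectic spectrum of example ii) directly from the definition in Section 1.

```python
import numpy as np
from scipy.linalg import sqrtm

Q = np.array([[6.0, 0.0, 0.0, 0.0],
              [0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, -1.0],
              [0.0, 0.0, -1.0, 1.0]])
J = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2), np.zeros((2, 2))]])
root = sqrtm(Q).real
# |Im| of the eigenvalues of Q^{1/2} J Q^{1/2}, one copy of each pair:
lam = np.sort(np.abs(np.linalg.eigvals(root @ J @ root).imag))[::2]
print(lam)  # ~[1.372, 4.372], i.e. sqrt(3/2 (7 -/+ sqrt(33)))
```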
## Acknowledgement
I would like to thank Prof. Maurice de Gosson, Faculty of Mathematics, Universitat Wien for suggesting this project topic and helping me write the introduction section. Without his academic experience and mentoring, this work (with many future applications) would not have come to fruition.
|
2305.00825 | Covering grids with multiplicity | Given a finite grid in $\mathbb{R}^2$, how many lines are needed to cover all
but one point at least $k$ times? Problems of this nature have been studied for
decades, with a general lower bound having been established by Ball and Serra.
We solve this problem for various types of grids, in particular showing the
tightness of the Ball--Serra bound when one side is much larger than the other.
In other cases, we prove new lower bounds that improve upon Ball--Serra and
provide an asymptotic answer for almost all grids. For the standard grid
$\{0,\ldots,n-1\} \times \{0,\ldots,n-1\}$, we prove nontrivial upper and lower
bounds on the number of lines needed. To prove our results, we combine linear
programming duality with some combinatorial arguments. | Anurag Bishnoi, Simona Boyadzhiyska, Shagnik Das, Yvonne den Bakker | 2023-05-01T13:51:32Z | http://arxiv.org/abs/2305.00825v1 | # Covering grids with multiplicity
###### Abstract
Given a finite grid in \(\mathbb{R}^{2}\), how many lines are needed to cover all but one point at least \(k\) times? Problems of this nature have been studied for decades, with a general lower bound having been established by Ball and Serra. We solve this problem for various types of grids, in particular showing the tightness of the Ball-Serra bound when one side is much larger than the other. In other cases, we prove new lower bounds that improve upon Ball-Serra and provide an asymptotic answer for almost all grids. For the standard grid \(\{0,\ldots,n-1\}\times\{0,\ldots,n-1\}\), we prove nontrivial upper and lower bounds on the number of lines needed. To prove our results, we combine linear programming duality with some combinatorial arguments.
## 1 Introduction
A celebrated result of Alon and Furedi [3] in combinatorial geometry states that any multiset of hyperplanes that covers all but one point of a \(d\)-dimensional finite grid \(S_{1}\times\cdots\times S_{d}\subseteq\mathbb{F}^{d}\) over an arbitrary field \(\mathbb{F}\) must have size at least \(\sum_{i=1}^{d}(|S_{i}|-1)\). This lower bound is easily seen to be tight by taking all hyperplanes of the form \(x_{i}-a=0\) for \(1\leq i\leq d\) and \(a\in S_{i}\setminus\{b_{i}\}\), where \((b_{1},\ldots,b_{n})\) is the point that is uncovered. This is a significant theorem for a few different reasons; not only did the proof of Alon and Furedi play an important role in the development of the polynomial method [2, 7, 16], but this result and its generalisations have also seen several applications in a wide variety of mathematical disciplines [1, 7, 9, 10, 18].
One such generalisation that has been studied by several researchers is the multiplicity version of the problem, where the points of the grid should be covered multiple times. We introduce some notation to define this problem formally.
**Definition 1**.: Given finite subsets \(S_{1},S_{2},\ldots,S_{d}\) of some field \(\mathbb{F}\), we write \(\Gamma=\Gamma(S_{1},S_{2},\ldots,S_{d})\) for the grid \(S_{1}\times S_{2}\times\ldots\times S_{d}\subseteq\mathbb{F}^{d}\). Note that by translation we may, and will, assume \(\vec{0}\in\Gamma\). We call a point in \(\Gamma\) a _boundary point_ if any of its coordinates is equal to \(0\), and an _interior point_ otherwise.
For a given integer \(k\geq 1\), we call a multiset \(\mathcal{H}\) of hyperplanes in \(\mathbb{F}^{d}\) a _\(k\)-cover of \(\Gamma\)_ if every nonzero point of \(\Gamma\) is contained in at least \(k\) of the hyperplanes, while \(\vec{0}\) is not covered at all.
We denote by \(\operatorname{cov}_{k}(\Gamma;\mathbb{F})\) the minimum cardinality of a \(k\)-cover of \(\Gamma\) in \(\mathbb{F}^{d}\). In the case \(\mathbb{F}=\mathbb{R}\), we shall omit the field from the notation and simply write \(\operatorname{cov}_{k}(\Gamma)\).
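For small grids in the plane (where hyperplanes are lines), the \(k\)-cover property can be checked mechanically; the following sketch (ours, a checker rather than a solver) represents each line as a triple \((a,b,c)\) encoding \(ax+by=c\).

```python
from itertools import product

def is_k_cover(lines, S1, S2, k, tol=1e-9):
    """Check that every nonzero point of S1 x S2 lies on at least k of the
    given lines (counted with multiplicity) while the origin lies on none."""
    for x, y in product(S1, S2):
        hits = sum(1 for (a, b, c) in lines if abs(a * x + b * y - c) < tol)
        if (x, y) == (0, 0):
            if hits != 0:
                return False
        elif hits < k:
            return False
    return True

# An Alon-Furedi style 1-cover of {0,1,2} x {0,1} by the axis-parallel
# lines x=1, x=2, y=1, which avoid the origin: size (3-1) + (2-1) = 3.
lines = [(1, 0, 1), (1, 0, 2), (0, 1, 1)]
print(is_k_cover(lines, [0, 1, 2], [0, 1], k=1))  # True
```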
In this notation, the Alon-Furedi Theorem establishes that \(\operatorname{cov}_{1}(\Gamma,\mathbb{F})=\sum_{i=1}^{d}(|S_{i}|-1)\) for any grid \(\Gamma\) over any field \(\mathbb{F}\). The multiplicity extension asks for the value of \(\operatorname{cov}_{k}(\Gamma;\mathbb{F})\) for multiplicities \(k\geq 2\), and we can start with a few trivial observations. First, if we remove any hyperplane from a \(k\)-cover, what we are left with is still a \((k-1)\)-cover, and so \(\operatorname{cov}_{k}(\Gamma;\mathbb{F})\geq\operatorname{cov}_{k-1}(\Gamma; \mathbb{F})+1\); that is, this extremal function is strictly increasing in \(k\). In the other direction, since the union of a \(k\)-cover and an \(\ell\)-cover yields a \((k+\ell)\)-cover, we have \(\operatorname{cov}_{k+\ell}(\Gamma;\mathbb{F})\leq\operatorname{cov}_{k}( \Gamma;\mathbb{F})+\operatorname{cov}_{\ell}(\Gamma;\mathbb{F})\), and so the function is subadditive in \(k\). Applying these recursive inequalities repeatedly until we reach the \(k=1\) case of Alon-Furedi, we have
\[\sum_{i=1}^{d}(|S_{i}|-1)+k-1=\operatorname{cov}_{1}(\Gamma;\mathbb{F})+k-1 \leq\operatorname{cov}_{k}(\Gamma;\mathbb{F})\leq k\operatorname{cov}_{1}( \Gamma;\mathbb{F})=k\sum_{i=1}^{d}(|S_{i}|-1). \tag{1}\]
The goal, then, is to narrow the considerable gap between these bounds, and there has been much previous research on some specific cases. Predating the work of Alon and Furedi [3], the study of affine blocking sets in finite geometry corresponds to setting \(\mathbb{F}=\mathbb{F}_{q}\) for some prime power \(q\) and taking \(\Gamma=\mathbb{F}_{q}^{d}\). For this grid, the classic paper of Jamison [17] uses the polynomial method to prove \(\operatorname{cov}_{1}(\Gamma;\mathbb{F}_{q})=d(q-1)\). Bruen [12] later used the polynomial method with multiplicities to provide lower bounds for the multiplicity version, showing \(\operatorname{cov}_{k}(\Gamma,\mathbb{F}_{q})\geq(d+k-1)(q-1)\). This is an improvement upon (1), but is generally not tight [19, 22]. In [8], the first three authors together with Tamas Meszaros obtained new bounds in the case \(q=2\) by exploiting an equivalence between \(k\)-covers and linear binary codes of minimum distance \(k\).
Recent work of Clifton and Huang [13] considered this problem over \(\mathbb{R}\), where the grid is the hypercube \(\Gamma=\{0,1\}^{d}\). For fixed dimension \(d\) and growing multiplicity \(k\), they used linear programming to determine \(\operatorname{cov}_{k}(\Gamma)\) asymptotically. On the other hand, when the dimension \(d\) is large with respect to the multiplicity \(k\), they applied the polynomial method to provide general lower bounds that are tight for \(k=2\) and \(k=3\). However, they conjectured that their lower bound of \(d+k+1\) is not tight for \(k\geq 4\), and that the true value of \(\operatorname{cov}_{k}(\Gamma)\) is \(d+\binom{k}{2}\) for all fixed \(k\) and large enough \(d\) (see [13, Conjecture 4.1]). A subsequent paper of Sauermann and Wigderson [21] determined the best bound one can obtain with the polynomial method, where one seeks the minimum possible degree of a polynomial that does not vanish at the origin but has zeroes of multiplicity \(k\) at all other points in the grid. While their result improves the Clifton-Huang lower bound to \(\operatorname{cov}_{k}(\Gamma)\geq d+2k-3\), it still falls short of the conjectured value of \(\operatorname{cov}_{k}(\Gamma)\) in this case, suggesting a strong separation between the algebraic and geometric problems.
Apart from these special cases, Ball and Serra [5, Theorem 5.3] applied the polynomial method to obtain a lower bound valid for any grid \(\Gamma=\Gamma(S_{1},S_{2},\ldots,S_{d})\) over any field \(\mathbb{F}\):
\[\operatorname{cov}_{k}(\Gamma;\mathbb{F})\geq\sum_{i=1}^{d}(|S_{i}|-1)+(k-1) \max_{1\leq i\leq d}(|S_{i}|-1). \tag{2}\]
This extends the bound of Bruen, and is a sizeable improvement on the lower bound of (1), but remains far removed from the upper bound. Indeed, in the symmetric case when \(|S_{i}|=n\) for all \(i\in[d]\), we have \((d+k-1)(n-1)\leq\operatorname{cov}_{k}(\Gamma;\mathbb{F})\leq kd(n-1)\). It is thus of great interest to determine whether the Ball-Serra bound can be tight, and to obtain better bounds when it is not.
In this paper we initiate the systematic study of the covering problem for two-dimensional finite grids over \(\mathbb{R}\). Our first set of results concerns how the dimensions of the grid affect the tightness of the Ball-Serra bound. We start by showing that when the grid is much wider than it is tall, the Ball-Serra bound is sharp.
**Theorem 1.1**.: _Let \(S_{1},S_{2}\subseteq\mathbb{R}\) satisfy \(|S_{1}|=n\), \(|S_{2}|=m\), and \(0\in S_{1}\cap S_{2}\), and set \(\Gamma=\Gamma(S_{1},S_{2})\subseteq\mathbb{R}^{2}\). If, for a positive integer \(k\), we have \(n\geq(k-1)(m-1)+1\), then_
\[\operatorname{cov}_{k}(\Gamma)=k(n-1)+(m-1).\]
Our next result shows that the lower bound on \(n\) from Theorem 1.1 cannot be improved in general. In fact, when \(n\leq(k-1)(m-1)\), the Ball-Serra bound can be improved for almost all \(n\times m\) grids. We make the "almost all" precise with the following definition.
**Definition 2**.: Let \(S_{1},S_{2}\subseteq\mathbb{R}\) with \(0\in S_{1}\cap S_{2}\), let \(\Gamma=\Gamma(S_{1},S_{2})\), and let \(\Delta\geq 0\) be an integer. We call \(\Gamma\)_\(\Delta\)-generic_ if any line containing two boundary points contains at most \(\Delta\) interior points. In the case \(\Delta=0\), when such lines avoid all interior points, we simply call \(\Gamma\)_generic_.
To see that grids are typically generic, suppose the nonzero points of \(S_{1}\) and \(S_{2}\) are sampled uniformly and independently from \([-1,1]\). If a line contains two boundary points and an interior point, then the boundary points must come from different axes, and so we may assume the points in question are \((0,b_{1})\), \((a_{1},0)\) and \((a_{2},b_{2})\) for some nonzero \(a_{1},a_{2}\in S_{1}\) and \(b_{1},b_{2}\in S_{2}\). For these points to be collinear, we require \(a_{1}(b_{1}-b_{2})=a_{2}b_{1}\), and the probability of such an equation holding for any choice of \(a_{i},b_{j}\) is \(0\). We are now ready to state our next result, which concerns \(\operatorname{cov}_{k}(\Gamma)\) in the special case where \(\Gamma\) is a generic grid.
**Theorem 1.2**.: _Let \(\Gamma=\Gamma(S_{1},S_{2})\subseteq\mathbb{R}^{2}\) be a generic grid with \(|S_{1}|=n\), \(|S_{2}|=m\) and \(0\in S_{1}\cap S_{2}\). Then_
\[\operatorname{cov}_{k}(\Gamma)\geq k(n-1)+\frac{k}{n+m-2}(m-1)^{2}.\]
_Furthermore, if \(\Gamma\) is an arbitrary \(n\times m\) grid, and if \(k\) is divisible by \(\frac{n+m-2}{\gcd(n-1,m-1)}\), then we have_
\[\operatorname{cov}_{k}(\Gamma)\leq k(n-1)+\frac{k}{n+m-2}(m-1)^{2}. \tag{3}\]
_In particular, if \(\Gamma\) is generic, we have equality above._
Note that \(\frac{k}{n+m-2}(m-1)^{2}\) is strictly larger than \(m-1\) precisely when \(n\leq(k-1)(m-1)\), and hence these theorems show the Ball-Serra bound is tight for generic grids if and only if \(n\geq(k-1)(m-1)+1\).
Theorem 1.2 shows that the Ball-Serra bound gets worse as the dimensions of the grid grow closer. For the majority of our paper, therefore, we focus on square grids \(\Gamma\), where \(n=m\). In this case, the Ball-Serra bound gives \(\operatorname{cov}_{k}(\Gamma)\geq(k+1)(n-1)\). Our next theorem gives a stark improvement for general square grids.
**Theorem 1.3**.: _Let \(\Gamma=\Gamma(S_{1},S_{2})\subset\mathbb{R}^{2}\) be a grid with \(|S_{1}|=|S_{2}|=n\) and \(0\in S_{1}\cap S_{2}\). Then, for any integer \(k\geq 2\), we have:_
1. \(\operatorname{cov}_{k}(\Gamma)\leq\big{\lceil}\frac{3}{2}k\big{\rceil}(n-1)\)_._
2. \(\operatorname{cov}_{k}(\Gamma)\geq(10-4\sqrt{5}+o(1))k(n-1)\)_, where the asymptotics are for_ \(n\to\infty\)_._
3. _if_ \(\Gamma\) _is_ \(\Delta\)_-generic for some_ \(\Delta\geq 0\)_, then_ \(\operatorname{cov}_{k}(\Gamma)\geq\Big{[}2-\frac{n-1}{2(n-1)-\Delta}\Big{]}k(n-1)\)_._
Note that part (a) improves the trivial upper bound from (1), while, since \(10-4\sqrt{5}\approx 1.0557\), part (b) gives a constant factor improvement over the Ball-Serra bound in (2), showing that it is never tight for large square grids. Moreover, as previously discussed, almost all grids are generic, and substituting \(\Delta=0\) into part (c) gives a lower bound that matches the upper bound of part (a) exactly when \(k\) is even and asymptotically when \(k\) is odd and large. In fact, part (c) gives an asymptotically tight bound for all \(o(n)\)-generic grids.
However, the most natural grid to consider is the _standard \(n\times n\) grid_, given by \(S_{1}=S_{2}=\{0,1,2,\ldots,n-1\}\), and we denote this by \(\Gamma_{n}=\Gamma(S_{1},S_{2})\). By considering the diagonal \(x+y=n-1\), we see that \(\Gamma_{n}\) is not \(\Delta\)-generic for any \(\Delta<n-2\), and so Theorem 1.3(c) is worse than the Ball-Serra bound when \(n>k\). By tailoring our methods to this specific grid, we obtain the following improvements on the general bounds.
**Theorem 1.4**.: _Let \(n,k\geq 2\) be integers, let \(S=\{0,1,2,\ldots,n-1\}\) and let \(\Gamma_{n}=\Gamma(S,S)\) be the standard \(n\times n\) grid. Then, as \(n,k\to\infty\), we have_
\[(2-e^{-1/2}+o(1))k(n-1)\leq\operatorname{cov}_{k}(\Gamma_{n})\leq(\sqrt{2}+o(1 ))k(n-1). \tag{4}\]
Note that \(2-e^{-1/2}\approx 1.3935\), while \(\sqrt{2}\approx 1.4142\), so there is still a gap between the best lower and upper bounds we obtain for standard grids. To obtain sharper bounds, it can help to restrict the class of \(k\)-covers we consider. As we will see in the proof of the upper bound in Theorem 1.4, our construction uses only three types of lines: horizontal, vertical, and lines of slope \(-1\). A subsequent computer search verified that for small values of \(n\) and \(k\), we can always find an optimal \(k\)-cover using only these three kinds of lines. In our final result, we provide a matching lower bound for these restricted \(k\)-covers, suggesting the upper bound of (4) may be correct in the unrestricted case as well.
**Theorem 1.5**.: _As \(n\to\infty\), the smallest \(k\)-cover of the standard grid \(\Gamma_{n}\) that only contains lines of slope \(0,\infty\), or \(-1\) has size at least \(\big{(}\sqrt{2}+o(1)\big{)}k(n-1)\)._
When proving these results, we shall establish the upper bounds by means of explicit constructions of \(k\)-covers. The lower bounds, meanwhile, will follow by applying duality to a linear programming relaxation of this problem, and we shall set up this framework in Section 2. Having described our methodology, we will prove Theorems 1.1 and 1.2 in Section 3. We then shift our focus to square grids, proving Theorems 1.3, 1.4, and 1.5 in Section 4. Finally, we provide some concluding remarks and open problems in Section 5.
## 2 The linear programming framework
In this section we introduce the linear programming method that will be used to prove our lower bounds. The use of linear programming in extremal combinatorics is well-established and has led to many results (see, for example, [15]), including the Clifton-Huang [13] lower bound on \(\operatorname{cov}_{k}(\Gamma)\) for \(\Gamma=\{0,1\}^{d}\) in the case when \(d\) is fixed and \(k\) tends to infinity. The standard template is as follows (assuming, without loss of generality, a minimisation problem): first, we interpret our extremal problem as an instance of an integer programming problem. Then, since integer programming is intractable, we consider the linear programming relaxation, where we allow fractional solutions. Since an integer solution is in particular a fractional solution, the value of the linear program gives a lower bound on the original extremal problem. Crucially, we can then apply duality, and the task of finding lower bounds translates to finding feasible solutions to the dual linear program. We refer the reader to [20] for further background on linear programming.
To start with this plan of action, we need to recast the covering problem as an integer linear program. Given a grid \(\Gamma=\Gamma(S_{1},S_{2})\), for some finite sets \(S_{1},S_{2}\subseteq\mathbb{R}\) containing \(0\), we wish to determine \(\operatorname{cov}_{k}(\Gamma)\), the minimum number of lines in a \(k\)-cover of \(\Gamma\). We shall assume \(|S_{1}|,|S_{2}|\geq 2\), as the problem is otherwise trivial.
For every possible line \(\ell\) not containing the origin, we introduce a variable \(z(\ell)\) indicating its multiplicity in the \(k\)-cover. We thus wish to minimise \(\sum_{\ell}z(\ell)\), the size of the cover. In order to ensure that the solution returned by the integer program is a \(k\)-cover, we need to require that each nonzero point of \(\Gamma\) be covered at least \(k\) times while the origin be omitted altogether. The latter constraint is automatically satisfied, since we only introduce variables for lines that avoid the origin. Hence, for each \((x,y)\in\Gamma\setminus\{(0,0)\}\), we require that the sum of \(z(\ell)\) over all origin-avoiding lines \(\ell\) containing \((x,y)\) be at least \(k\).
The one catch is that there are infinitely many lines in \(\mathbb{R}^{2}\). To obtain a finite program, we observe that we may restrict our attention to lines that contain at least two nonzero points in \(\Gamma\). Indeed, in any minimal \(k\)-cover of \(\Gamma\), every line must contain at least one nonzero point, and if a line contains only one point \((x,y)\), then we can replace it by a different origin-avoiding line passing through \((x,y)\) and at least one other point of \(\Gamma\). Thus, we need only consider lines from the set \(\mathcal{L}=\mathcal{L}(\Gamma)\) of origin-avoiding lines containing at least two points of \(\Gamma\). Since there are fewer than \(|\Gamma|^{2}\) pairs of nonzero points in \(\Gamma\), each of which determines a unique line, it follows that \(\mathcal{L}\) is finite.
In summary, \(\operatorname{cov}_{k}(\Gamma)\) is the solution to the following integer linear program \(\mathcal{I}=\mathcal{I}(\Gamma,k)\).
minimize \[\sum_{\ell\in\mathcal{L}}z(\ell)\] subject to \[\sum_{\begin{subarray}{c}\ell\in\mathcal{L}:\\ (x,y)\in\ell\end{subarray}}z(\ell)\geq k\quad\text{for all }(x,y)\in\Gamma \setminus\{(0,0)\}\] \[z(\ell)\in\mathbb{Z}_{\geq 0}\quad\text{for all }\ell\in \mathcal{L}\]
The final constraint, that the variables \(z(\ell)\) be integral, renders solving the program computationally infeasible. Instead, to obtain a polynomial-time solvable problem, we can relax the variables to be real-valued. Now that we are no longer constrained to the integers, we can also divide through by \(k\), removing the dependence of the program on this parameter. We therefore obtain the linear program \(\mathcal{P}=\mathcal{P}(\Gamma)\) with the normalised variables \(u(\ell)\) for \(\ell\in\mathcal{L}\).
minimize \[\sum_{\ell\in\mathcal{L}}u(\ell)\] (5) subject to \[\sum_{\begin{subarray}{c}\ell\in\mathcal{L}:\\ (x,y)\in\ell\end{subarray}}u(\ell)\geq 1\quad\text{for all }(x,y)\in\Gamma \setminus\{(0,0)\}\] \[u(\ell)\geq 0\quad\text{for all }\ell\in\mathcal{L}\]
Let us denote by \(\Phi(\Gamma)\) the solution to \(\mathcal{P}(\Gamma)\). The following result shows that \(\Phi(\Gamma)\) describes the asymptotic behaviour of \(\operatorname{cov}_{k}(\Gamma)\) when \(k\) is large with respect to the dimensions of the grid.
**Proposition 1**.: _For any grid \(\Gamma\) and integer \(k\geq 1\) we have_
\[k\Phi(\Gamma)\leq\operatorname{cov}_{k}(\Gamma)\leq k\Phi(\Gamma)+|\mathcal{L}|.\]
Proof.: As previously established, \(\operatorname{cov}_{k}(\Gamma)\) is the value of the integer linear program \(\mathcal{I}(\Gamma,k)\). If we let \((z(\ell):\ell\in\mathcal{L})\) be a solution to the program, then setting \(u(\ell)=\frac{1}{k}z(\ell)\) yields a feasible solution to the linear relaxation \(\mathcal{P}(\Gamma)\), with value \(\sum_{\ell}u(\ell)=\frac{1}{k}\sum_{\ell}z(\ell)=\frac{1}{k}\operatorname{ cov}_{k}(\Gamma)\). As \(\Phi(\Gamma)\) is the minimum possible value of a feasible solution to \(\mathcal{P}(\Gamma)\), the first inequality follows.
For the upper bound, let \((u^{*}(\ell):\ell\in\mathcal{L})\) be an optimal solution to the linear program \(\mathcal{P}(\Gamma)\), with value \(\Phi(\Gamma)\). If we then set \(z(\ell)=\lceil ku^{*}(\ell)\rceil\) for all \(\ell\in\mathcal{L}\), we obtain a feasible solution to \(\mathcal{I}(\Gamma,k)\). Thus,
\[\operatorname{cov}_{k}(\Gamma)\leq\sum_{\ell}z(\ell)=\sum_{\ell}\lceil ku^{*} (\ell)\rceil\leq\sum_{\ell}(ku^{*}(\ell)+1)=k\Phi(\Gamma)+|\mathcal{L}|.\qed\]
We remark that in practice one often obtains better error bounds; for instance, the \(|\mathcal{L}|\) term can be replaced by the size of the support of the optimal solution \(u^{*}\). Furthermore, for an infinite
sequence of multiplicities \(k\), we can do away with the error term altogether. Indeed, since all the coefficients of \(\mathcal{P}(\Gamma)\) are integral, there is a rational optimal solution \(u^{*}\) (the one returned by the Simplex Algorithm, for example). If \(k\) is divisible by the least common denominator of the fractions \(u^{*}(\ell)\), then we can set \(z(\ell)=ku^{*}(\ell)\) without needing any rounding, thereby obtaining a solution to \(\mathcal{I}(\Gamma,k)\) of value precisely \(k\Phi(\Gamma)\).
Therefore, asymptotically as \(k\) tends to infinity, the problem reduces to determining \(\Phi(\Gamma)\). We can provide upper bounds by finding feasible solutions to \(\mathcal{P}(\Gamma)\) or, better yet, constructing \(k\)-covers of \(\Gamma\). To obtain lower bounds, we appeal to the theory of duality. The dual of \(\mathcal{P}(\Gamma)\), which we denote by \(\mathcal{D}(\Gamma)\), is the following linear program, where we have a variable \(w(x,y)\) for each point \((x,y)\in\Gamma\setminus\{(0,0)\}\), which we call the weight of the point.
maximize \[\sum_{(x,y)\in\Gamma\setminus\{(0,0)\}}w(x,y)\] (6) subject to \[\sum_{\begin{subarray}{c}(x,y)\in\Gamma\setminus\{(0,0)\}:\\ (x,y)\in\ell\end{subarray}}w(x,y)\leq 1\quad\text{for all }\ell\in\mathcal{L}\] \[w(x,y)\geq 0\quad\text{for all }(x,y)\in\Gamma\setminus\{(0,0)\}\]
For convenience, given a set \(S\subseteq\Gamma\setminus\{(0,0)\}\), we write \(w(S)=\sum_{(x,y)\in S}w(x,y)\) for the weight of \(S\). The dual program thus asks for the maximum possible weight of the grid, provided every line in \(\mathcal{L}\) has weight at most \(1\). By the duality theorem for linear programming (see [20, Section 6.1]), the programs \(\mathcal{P}(\Gamma)\) and \(\mathcal{D}(\Gamma)\) have the same optimal objective value \(\Phi(\Gamma)\). We shall thus prove our lower bounds on \(\operatorname{cov}_{k}(\Gamma)\) by finding suitably large feasible weights on the grid \(\Gamma\).
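To make the reduction concrete, here is a small computational sketch (ours, not part of the formal development) that enumerates \(\mathcal{L}(\Gamma)\) for an integer grid, assembles the relaxation (5), and solves it with `scipy.optimize.linprog`. Normalising each line exactly with rationals keeps the incidence test free of floating-point ambiguity; all function names below are our own.

```python
# Illustrative sketch: build and solve the LP relaxation P(Gamma) from (5)
# for a small integer grid, recovering Phi(Gamma) numerically.
from fractions import Fraction
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

def origin_avoiding_lines(pts):
    """Enumerate L(Gamma): lines a*x + b*y = 1 through >= 2 nonzero points."""
    lines = set()
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        a, b = y2 - y1, x1 - x2            # a normal vector of the line
        c = a * x1 + b * y1
        if c != 0:                         # c == 0 means the line hits the origin
            lines.add((Fraction(a, c), Fraction(b, c)))  # normalise to c = 1
    return sorted(lines)

def phi(S1, S2):
    pts = [(x, y) for x in S1 for y in S2 if (x, y) != (0, 0)]
    lines = origin_avoiding_lines(pts)
    # One ">= 1" coverage constraint per nonzero point, written as -A u <= -1.
    A = [[-1.0 if a * x + b * y == 1 else 0.0 for a, b in lines] for x, y in pts]
    res = linprog(np.ones(len(lines)), A_ub=A, b_ub=-np.ones(len(pts)),
                  bounds=(0, None), method="highs")
    return res.fun

print(phi(range(3), range(3)))   # Phi of the standard 3x3 grid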
## 3 Wide rectangular grids
In this section we will prove Theorems 1.1 and 1.2, establishing precisely when the Ball-Serra bound is tight for all grids of given dimensions. Our first result establishes the value of \(\operatorname{cov}_{k}(\Gamma)\) for all \(n\times m\) grids \(\Gamma\) whenever \(n\geq(k-1)(m-1)+1\).
Proof of Theorem 1.1.: The Ball-Serra bound (2) provides the requisite lower bound, as substituting \(|S_{1}|=n\) and \(|S_{2}|=m\) gives \(\operatorname{cov}_{k}(\Gamma)\geq k(n-1)+m-1\). To prove a matching upper bound, we provide an explicit construction of a \(k\)-cover containing this many lines.
Write \(S_{2}=\{0,t_{1},\ldots,t_{m-1}\}\), and let \(P_{1}\cup\cdots\cup P_{m-1}\) be an arbitrary partition of \(S_{1}\setminus\{0\}\) such that \(|P_{i}|\geq k-1\) for all \(i\in[m-1]\); such a partition exists since \(n-1\geq(k-1)(m-1)\).
Now, consider the following collection of lines:
1. the line \(y=t_{i}\) for all \(i\in[m-1]\);
2. \(k-1\) copies of the line \(x=s\) for all \(s\in S_{1}\setminus\{0\}\);
3. the line connecting \((0,t_{i})\) and \((s,0)\) for every \(i\in[m-1]\) and \(s\in P_{i}\).
In total, this collection contains \(m-1+(k-1)(n-1)+n-1=k(n-1)+m-1\) lines. It remains to verify that these lines form a valid \(k\)-cover of \(\Gamma\). Note first that no line in this collection passes through the origin \((0,0)\).
Any interior point of \(\Gamma\) is covered \(k\) times by the lines in (i) and (ii), leaving us to check the boundary points. A point of the form \((s,0)\), where \(s\in S_{1}\setminus\{0\}\), is covered \(k-1\) times by the lines in (ii) and once by the lines in (iii). Finally, a point \((0,s)\) for \(s\in S_{2}\setminus\{0\}\) is covered once by the lines in (i) and at least \(k-1\) times by the lines in (iii) since each \(P_{i}\) has size at least \(k-1\). Hence, every nonzero point of \(\Gamma\) is covered at least \(k\) times, as required.
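As a quick sanity check, the construction above can also be verified mechanically. The following hypothetical snippet instantiates it with \(n=5\), \(m=3\), \(k=3\) (so that \(n=(k-1)(m-1)+1\) holds with equality) and one particular choice of partition, and confirms the covering condition point by point.

```python
# Sketch verifying the Theorem 1.1 construction on one concrete wide grid.
# Lines are stored as triples (a, b, c) representing a*x + b*y = c with c != 0.
from fractions import Fraction

n, m, k = 5, 3, 3                                  # here n = (k-1)(m-1) + 1
parts = {1: [1, 2], 2: [3, 4]}                     # partition of S1\{0}, |P_i| >= k-1

lines  = [(0, 1, t) for t in parts]                                # (i)   horizontals
lines += [(1, 0, s) for s in range(1, n) for _ in range(k - 1)]    # (ii)  verticals
lines += [(Fraction(1, s), Fraction(1, t), 1)                      # (iii) slanted lines
          for t, P in parts.items() for s in P]
assert len(lines) == k * (n - 1) + (m - 1)

for x in range(n):                                 # S1 = {0,...,4}, S2 = {0,1,2}
    for y in range(m):
        hits = sum(a * x + b * y == c for a, b, c in lines)
        assert hits == 0 if (x, y) == (0, 0) else hits >= k
```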
We remark that the above construction can be generalised to higher dimensions to show that, for any \(n_{1}\times\cdots\times n_{d}\) grid \(\Gamma(S_{1},\ldots,S_{d})\) containing the origin, if \(n_{1}\geq n_{2}\geq\cdots\geq n_{d}\) and \(n_{1}\geq\operatorname{cov}_{k-1}(\Gamma(S_{2},\ldots,S_{d}))+1\), then the Ball-Serra bound is tight for \(\operatorname{cov}_{k}(\Gamma(S_{1},\ldots,S_{d}))\). Indeed, write \(S_{1}=\{0,s_{1},\ldots,s_{n_{1}-1}\}\) and let \(\mathcal{H}=\{H_{1},\ldots,H_{n_{1}-1}\}\) be any collection of \(n_{1}-1\) hyperplanes in \(\mathbb{R}^{d-1}\) containing a \((k-1)\)-cover of \(\Gamma(S_{2},\ldots,S_{d})\). We then form a \(k\)-cover of \(\Gamma(S_{1},S_{2},\ldots,S_{d})\) consisting of the following hyperplanes in \(\mathbb{R}^{d}\):
1. one copy of the hyperplane \(x_{i}=t\) for all \(i\in\{2,\ldots,d\}\) and \(t\in S_{i}\setminus\{0\}\);
2. \(k-1\) copies of the hyperplane \(x_{1}=s\) for all \(s\in S_{1}\setminus\{0\}\);
3. the hyperplane spanned by \(\{0\}\times H_{i}\) and \((s_{i},0,\ldots,0)\) for all \(i\in[n_{1}-1]\).
It is not difficult to check that this is indeed a \(k\)-cover of \(\Gamma(S_{1},\ldots,S_{d})\), and it consists of \(\sum_{i=1}^{d}(n_{i}-1)+(k-1)(n_{1}-1)\) hyperplanes, which matches the Ball-Serra lower bound (2). However, it is not clear how good the lower bound on \(n_{1}\) is; that is, how large \(n_{1}\) needs to be with respect to the other dimensions \(n_{i}\) in order to ensure that the Ball-Serra bound is tight.
In our next result, we show that the bound on \(n_{1}\) in Theorem 1.1 is best possible, since for generic grids that are slightly less wide, the Ball-Serra bound is no longer tight. In fact, we give a general lower bound for \(\operatorname{cov}_{k}(\Gamma)\) when \(\Gamma\) is a generic \(n\times m\) grid, and prove that this bound is tight for infinitely many choices of \(k\). While we will not pursue this question further for higher dimensions in this paper, we remark that it was shown in [14] that for the grid \(\{0,\ldots,n_{1}-1\}\times\{0,\ldots,n_{2}-1\}\times\{0,\ldots,n_{3}-1\}\) with \(n_{1}\geq n_{2}\geq n_{3}\), the Ball-Serra bound is already tight when \(n_{1}\geq(k-1)(n_{2}-1)+1\), which is an improvement on the bound given by the above construction.
Proof of Theorem 1.2.: We wish to show that if \(\Gamma=\Gamma(S_{1},S_{2})\) is a generic grid, where \(S_{1},S_{2}\subseteq\mathbb{R}\) satisfy \(0\in S_{1}\cap S_{2}\) and \(|S_{1}|=n\geq m=|S_{2}|\), then we have \(\operatorname{cov}_{k}(\Gamma)\geq k(n-1)+\frac{k}{n+m-2}(m-1)^{2}\). Appealing to the linear programming framework developed in Section 2, it suffices to show \(\Phi(\Gamma)\geq(n-1)+\frac{(m-1)^{2}}{n+m-2}\), which can be done by defining a weighting on the nonzero points of \(\Gamma\) with this total weight in which every line in \(\mathcal{L}\) has weight at most \(1\).
To that end, define the weighting \(w:\Gamma\setminus\{(0,0)\}\to\mathbb{R}\) by
\[w((x,y))=\begin{cases}\frac{n-1}{n+m-2}&\text{if }y=0;\\ \frac{m-1}{n+m-2}&\text{if }x=0;\\ \frac{1}{n+m-2}&\text{otherwise.}\end{cases}\]
We start by computing the total weight of the grid:
\[w(\Gamma\setminus\{(0,0)\}) =(n-1)\frac{n-1}{n+m-2}+(m-1)\frac{m-1}{n+m-2}+(n-1)(m-1)\frac{1 }{n+m-2}\] \[=(n-1)\frac{n-1+m-1}{n+m-2}+\frac{(m-1)^{2}}{n+m-2}\] \[=n-1+\frac{(m-1)^{2}}{n+m-2},\]
and so, provided this weighting is feasible, it gives the desired lower bound.
To establish its feasibility, let us consider a line \(\ell\in\mathcal{L}(\Gamma)\). First suppose \(\ell\) contains two boundary points, say \((x,0)\) and \((0,y)\). Since \(\Gamma\) is generic, \(\ell\) cannot contain any other points, and hence \(w(\ell)=w((x,0))+w((0,y))=\frac{n-1}{n+m-2}+\frac{m-1}{n+m-2}=1\). Next, suppose \(\ell\) is a horizontal line of the form \(y=s\) for some \(s\in S_{2}\setminus\{0\}\). The line \(\ell\) then contains one point on the \(y\)-axis and \(n-1\) interior points, and thus \(w(\ell)=\frac{m-1}{n+m-2}+(n-1)\frac{1}{n+m-2}=1\). Finally, any other line \(\ell\) can contain at most one boundary point and at most \(m-1\) interior points \((x,y)\), one for each choice of \(y\in S_{2}\setminus\{0\}\). For such lines, we therefore have \(w(\ell)\leq\frac{n-1}{n+m-2}+(m-1)\frac{1}{n+m-2}=1\).
Hence, the weighting \(w\) is indeed feasible for the dual linear program \(\mathcal{D}(\Gamma)\), and has total weight \(n-1+\frac{(m-1)^{2}}{n+m-2}\), which proves \(\operatorname{cov}_{k}(\Gamma)\geq k(n-1)+\frac{k}{n+m-2}(m-1)^{2}\).
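This feasibility argument is easy to test numerically as well. The sketch below (ours) samples a random integer grid with \(n\geq m\), resamples until the collinearity condition from the introduction fails for every choice of points (so the grid is generic), and then checks every line through two grid points against \(w\), using exact rational arithmetic throughout.

```python
# Sketch: numerically verify the dual weighting above on a random generic grid.
import random
from fractions import Fraction
from itertools import combinations

n, m = 6, 4
while True:   # resample until the grid is generic (holds with high probability)
    S1 = [0] + random.sample(range(1, 10**6), n - 1)
    S2 = [0] + random.sample(range(1, 10**6), m - 1)
    if all(a1 * (b1 - b2) != a2 * b1 for a1 in S1[1:] for a2 in S1[1:]
           for b1 in S2[1:] for b2 in S2[1:]):
        break

def w(x, y):
    if y == 0: return Fraction(n - 1, n + m - 2)
    if x == 0: return Fraction(m - 1, n + m - 2)
    return Fraction(1, n + m - 2)

pts = [(x, y) for x in S1 for y in S2 if (x, y) != (0, 0)]
assert sum(w(*p) for p in pts) == (n - 1) + Fraction((m - 1)**2, n + m - 2)
for (x1, y1), (x2, y2) in combinations(pts, 2):
    a, b = y2 - y1, x1 - x2
    c = a * x1 + b * y1
    if c != 0:                                    # skip lines through the origin
        assert sum(w(x, y) for x, y in pts if a * x + b * y == c) <= 1
```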
Now, given an arbitrary \(n\times m\) grid \(\Gamma=\Gamma(S_{1},S_{2})\), we provide a construction of a \(k\)-cover of \(\Gamma\) that matches the bound proven above for an infinite sequence of multiplicities \(k\), thereby determining \(\operatorname{cov}_{k}(\Gamma)\) for generic grids \(\Gamma\) and such multiplicities \(k\).
Defining \(a=\frac{n-1}{\gcd(n-1,m-1)}\) and \(b=\frac{m-1}{\gcd(n-1,m-1)}\), we are given that \(k\) is divisible by \(a+b\). We further define \(d_{1}=\frac{bk}{a+b}\) and \(d_{2}=\frac{ak}{a+b}\), noting that our divisibility assumption ensures these are integers and that \(d_{1}+d_{2}=k\). Let \(B\) be an arbitrary biregular bipartite multigraph with parts \(S_{1}\setminus\{0\}\) and \(S_{2}\setminus\{0\}\) with degrees \(d_{1}\) in the first part and \(d_{2}\) in the second. Note that such a multigraph exists, since \(d_{1}(n-1)=d_{2}(m-1)\), and we can assign \(d_{1}\) half-edges to each \(s_{1}\in S_{1}\setminus\{0\}\) and \(d_{2}\) half-edges to each \(s_{2}\in S_{2}\setminus\{0\}\), and then take an arbitrary matching between the two sets of half-edges.
Next, consider the following collection of lines:
1. \(d_{2}\) copies of the line \(x=s_{1}\) for each \(s_{1}\in S_{1}\setminus\{0\}\);
2. \(d_{1}\) copies of the line \(y=s_{2}\) for each \(s_{2}\in S_{2}\setminus\{0\}\);
3. for each \(\{s_{1},s_{2}\}\in E(B)\), a copy of the line connecting \((s_{1},0)\) to \((0,s_{2})\).
To see that these lines form a \(k\)-cover of \(\Gamma\), observe that every interior point is covered by \(d_{2}\) vertical lines from (i) and \(d_{1}\) horizontal lines from (ii), and is thus covered \(d_{1}+d_{2}=k\) times in total. For the boundary points, a point of the form \((s_{1},0)\) for \(s_{1}\in S_{1}\setminus\{0\}\) is covered \(d_{2}\) times by the lines in (i), while the biregularity of the multigraph \(B\) ensures it is covered \(d_{1}\) times by the lines in (iii). Similarly, each point of the form \((0,s_{2})\) for \(s_{2}\in S_{2}\setminus\{0\}\) is covered \(d_{1}\) times by the lines in (ii) and \(d_{2}\) times by those in (iii). Thus, the boundary points are also each covered \(k\) times. Finally, none of the lines in our collection passes through the origin.
We thus obtain our upper bound by calculating the size of this cover, which yields
\[\operatorname{cov}_{k}(\Gamma) \leq d_{2}(n-1)+d_{1}(m-1)+d_{1}(n-1)\] \[=(d_{1}+d_{2})(n-1)+d_{1}(m-1)\] \[=k(n-1)+\frac{bk}{a+b}(m-1)\] \[=k(n-1)+\frac{k}{n+m-2}(m-1)^{2},\]
as required.
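The sketch below (ours) carries out this construction end to end for one concrete choice of parameters, building the multigraph \(B\) by the half-edge matching described above and re-checking both the size of the cover and the covering condition.

```python
# Sketch: instantiate the k-cover from the upper-bound construction for the
# standard 5x3 grid with k = 6, which satisfies the divisibility hypothesis.
from fractions import Fraction
from math import gcd

n, m, k = 5, 3, 6
g = gcd(n - 1, m - 1)
a, b = (n - 1) // g, (m - 1) // g
assert k % (a + b) == 0                       # divisibility hypothesis of (3)
d1, d2 = b * k // (a + b), a * k // (a + b)

half1 = [s for s in range(1, n) for _ in range(d1)]   # d1 half-edges per s1
half2 = [t for t in range(1, m) for _ in range(d2)]   # d2 half-edges per s2
edges = list(zip(half1, half2))               # an arbitrary perfect matching

lines  = [(1, 0, s) for s in range(1, n) for _ in range(d2)]       # (i)
lines += [(0, 1, t) for t in range(1, m) for _ in range(d1)]       # (ii)
lines += [(Fraction(1, s), Fraction(1, t), 1) for s, t in edges]   # (iii)
assert len(lines) == k * (n - 1) + k * (m - 1)**2 // (n + m - 2)

for x in range(n):
    for y in range(m):
        hits = sum(A * x + B * y == C for A, B, C in lines)
        assert hits == 0 if (x, y) == (0, 0) else hits >= k
```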
## 4 Square grids
In the previous section, we saw that the Ball-Serra bound is tight when the grid is much wider than it is tall, and proved a lower bound for generic grids that becomes much larger than the Ball-Serra bound as the dimensions grow closer in size. Therefore, for the rest of this paper, we focus on \(n\times n\) grids. In Section 4.1 we prove Theorem 1.3, which provides general lower and upper bounds on \(\operatorname{cov}_{k}(\Gamma)\) for arbitrary \(n\times n\) grids \(\Gamma\). In Section 4.2 we focus on the standard grid \(\Gamma_{n}=\Gamma(\{0,1,\ldots,n-1\},\{0,1,\ldots,n-1\})\), proving Theorems 1.4 and 1.5.
### General results
We start by restating our general bounds for square grids.
**Theorem 1.3**.: _Let \(\Gamma=\Gamma(S_{1},S_{2})\subset\mathbb{R}^{2}\) be a grid with \(|S_{1}|=|S_{2}|=n\) and \(0\in S_{1}\cap S_{2}\). Then, for any integer \(k\geq 2\), we have:_
1. \(\operatorname{cov}_{k}(\Gamma)\leq\big{\lceil}\frac{3}{2}k\big{\rceil}(n-1)\)_._
2. \(\operatorname{cov}_{k}(\Gamma)\geq(10-4\sqrt{5}+o(1))k(n-1)\)_, where the asymptotics are for_ \(n\to\infty\)_._
3. _if_ \(\Gamma\) _is_ \(\Delta\)_-generic for some_ \(\Delta\geq 0\)_, then_ \(\operatorname{cov}_{k}(\Gamma)\geq\Big{[}2-\frac{n-1}{2(n-1)-\Delta}\Big{]}k(n-1)\)_._
We will prove the upper bound of (a) via an explicit construction of a \(k\)-cover, and use the linear programming framework of Section 2 to establish the lower bounds of (b) and (c).
Proof.: (a) Note that when \(k\) is even, the upper bound \(\mathrm{cov}_{k}(\Gamma)\leq\frac{3}{2}k(n-1)\) follows from the upper bound in Theorem 1.2 when \(m=n\). We will obtain the upper bound for odd \(k\) with some appropriate rounding, and present a unified construction below.
Let \(\{x_{1},x_{2},\ldots,x_{n-1}\}\) be the nonzero elements of \(S_{1}\), and let \(\{y_{1},y_{2},\ldots,y_{n-1}\}\) be the nonzero elements of \(S_{2}\). We form a \(k\)-cover of \(\Gamma\) consisting of the following lines:
1. \(\left\lceil\frac{1}{2}k\right\rceil\) copies of the lines \(x=x_{i}\) and \(y=y_{i}\), for each \(i\in[n-1]\);
2. \(\left\lfloor\frac{1}{2}k\right\rfloor\) copies of the line connecting \((x_{i},0)\) to \((0,y_{i})\), for each \(i\in[n-1]\).
There are \(2\big{\lceil}\frac{1}{2}k\big{\rceil}(n-1)\) lines in (i) and \(\big{\lfloor}\frac{1}{2}k\big{\rfloor}(n-1)\) lines in (ii), for a total of \(\big{\lceil}\frac{3}{2}k\big{\rceil}(n-1)\) lines, and it is evident that none of these pass through the origin. To see that they form a \(k\)-cover, observe first that each interior point is covered by \(\big{\lceil}\frac{1}{2}k\big{\rceil}\) horizontal lines and \(\big{\lceil}\frac{1}{2}k\big{\rceil}\) vertical lines, and is thus covered at least \(k\) times in total. Meanwhile, each boundary point is covered \(\big{\lceil}\frac{1}{2}k\big{\rceil}\) times by the lines in (i) and \(\big{\lfloor}\frac{1}{2}k\big{\rfloor}\) times by the lines in (ii), and so is incident to exactly \(k\) lines. These lines therefore indeed form a \(k\)-cover, showing \(\operatorname{cov}_{k}(\Gamma)\leq\big{\lceil}\frac{3}{2}k\big{\rceil}(n-1)\).
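A quick mechanical check of the part (a) construction, here for odd \(k\) on the standard \(4\times 4\) grid (the construction does not require the grid to be standard; this choice is purely illustrative):

```python
# Sketch: check the part (a) construction for odd k on the standard 4x4 grid.
from fractions import Fraction
from math import ceil, floor

n, k = 4, 3
lines  = [(1, 0, i) for i in range(1, n) for _ in range(ceil(k / 2))]   # verticals
lines += [(0, 1, i) for i in range(1, n) for _ in range(ceil(k / 2))]   # horizontals
lines += [(Fraction(1, i), Fraction(1, i), 1)                           # (x_i,0)-(0,y_i)
          for i in range(1, n) for _ in range(floor(k / 2))]
assert len(lines) == ceil(3 * k / 2) * (n - 1)

for x in range(n):
    for y in range(n):
        hits = sum(a * x + b * y == c for a, b, c in lines)
        assert hits == 0 if (x, y) == (0, 0) else hits >= k
```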
(b) To prove lower bounds, we appeal to the dual linear program \(\mathcal{D}(\Gamma)\). Our goal is to define a weighting \(w\) of the nonzero points of \(\Gamma\) of large total weight in which no origin-avoiding line has weight more than \(1\). The key observation is that these lines can contain at most two boundary points, and so we can hope to get away with assigning large weights to the boundary points.
We shall first try a simple weighting \(w^{\prime}\), where all boundary points obtain a weight of \(\alpha\), and all interior points a weight of \(\beta\), for \(\alpha\) and \(\beta\) to be chosen later. Unfortunately, this initial attempt does not work. Indeed, we have \(w^{\prime}(\Gamma)=2(n-1)\alpha+(n-1)^{2}\beta=(2\alpha+(n-1)\beta)(n-1)\). Since there could be lines containing two boundary points and \(n-2\) interior points, we must have \(2\alpha+(n-2)\beta\leq 1\). Hence, \(w^{\prime}(\Gamma)\leq(1+\beta)(n-1)\). As lines parallel to the axes contain a boundary point and \(n-1\) interior points, we must also have \(\alpha+(n-1)\beta\leq 1\), which in particular implies \(\beta\leq\frac{1}{n-1}\). Thus, \(w^{\prime}(\Gamma)\leq n\), and so the best lower bound we can hope for from such a weighting is \(\mathrm{cov}_{k}(\Gamma)\geq kn\), which is worse than the Ball-Serra bound (2) for \(n\geq k+2\).
To salvage this idea, we will instead only assign weight to some of the boundary points, with the aim of ensuring that any origin-avoiding line containing two positively-weighted boundary points cannot contain too many interior points. For this, we use the following claim, bounding the number of points contained in certain lines.
**Claim 1**.: _Suppose we enumerate the members of \(S_{1}\) as \(x_{1}<x_{2}<\ldots<x_{n}\) and those of \(S_{2}\) as \(y_{1}<y_{2}<\ldots<y_{n}\), and suppose \(i_{0}\in[n]\) is such that \(x_{i_{0}}=0\). Let \(j\in[n]\) and \(\ell\) be a line passing through \((0,y_{j})\). If \(\ell\) has positive slope, then \(\ell\) contains at most \(n-|j-i_{0}|\) points of \(\Gamma\). If \(\ell\) has negative slope, then \(\ell\) contains at most \(n-|(n-j)-i_{0}|\) points of \(\Gamma\)._
Proof of Claim 1.: Suppose first that \(\ell\) is a line of positive slope passing through \((0,y_{j})\). We define \(S_{1}^{-}=\{x\in S_{1}:x\leq 0\}\) and \(S_{2}^{-}=\{y\in S_{2}:y\leq y_{j}\}\), observing that \(|S_{1}^{-}|=i_{0}\) and \(|S_{2}^{-}|=j\). Since \(\ell\) is of positive slope, it follows that, if \((x,y)\in\ell\) for any \(x\in S_{1}^{-}\), we must have \(y\in S_{2}^{-}\).
If \(j\geq i_{0}\), then, since each value in \(S_{1}^{-}\) can correspond to at most one value in \(S_{2}^{-}\), it follows that there will be at least \(j-i_{0}\) coordinates in \(S_{2}^{-}\), and thus in \(S_{2}\), that are not mapped to by \(\ell\), and so \(\ell\) can contain at most \(n-(j-i_{0})\) points of \(\Gamma\). Similarly, if \(j<i_{0}\), since each value
in \(S_{2}^{-}\) corresponds to at most one value from \(S_{1}^{-}\), there will be at least \(i_{0}-j\) values in \(S_{1}\) not covered by \(\ell\), and so \(\ell\) contains at most \(n-(i_{0}-j)\) points from \(\Gamma\). This shows that lines of positive slope through \((0,y_{j})\) can contain at most \(n-|j-i_{0}|\) points of \(\Gamma\).
For lines of negative slope, we can reflect the grid in the \(x\)-axis, considering \(S_{2}^{\prime}=\{-y:y\in S_{2}\}\). This reverses the ordering of the elements, so \(y_{i}^{\prime}=-y_{n-i}\). The line \(\ell\) then corresponds to a line \(\ell^{\prime}\) of positive slope passing through \((0,y_{n-j}^{\prime})\), and thus contains at most \(n-|(n-j)-i_{0}|\) points of the grid.
With this claim in mind, we now define an improved weighting of the points of \(\Gamma\). As in the claim, enumerate the elements of \(S_{1}\) as \(x_{1}<x_{2}<\ldots<x_{n}\) and those of \(S_{2}\) as \(y_{1}<y_{2}<\ldots<y_{n}\), and let \(i_{0},j_{0}\in[n]\) be such that \(x_{i_{0}}=0\) and \(y_{j_{0}}=0\). Given parameters \(\alpha,\beta\) and \(t\), to be chosen later, we define the weights on \(\Gamma\setminus\{(0,0)\}\) as follows:
\[w((x_{i},y_{j}))=\begin{cases}\alpha&\text{if $i\neq i_{0}$ and $j\neq j_{0}$;}\\ \beta&\text{if $j=j_{0}$, or if $i=i_{0}$ and $\min\{|j-i_{0}|,|n-j-i_{0}|\}\geq t$;}\\ 0&\text{otherwise.}\end{cases}\]
That is, we assign weight \(\alpha\) to all interior points and weight \(\beta\) to all boundary points except those in intervals around \(y_{i_{0}}\) and \(y_{n-i_{0}}\) on the \(y\)-axis.
For the weighting to be valid, we require that each origin-avoiding line have weight at most \(1\). If the line \(\ell\) contains two boundary points with positive weight, then let it pass through \((0,y_{j})\) on the \(y\)-axis. By definition of \(w\), we must have \(|j-i_{0}|,|(n-j)-i_{0}|\geq t\), and so by Claim 1, \(\ell\) contains at most \(n-t\) points of \(\Gamma\), and hence at most \(n-t-2\) interior points. Thus, the weight of any such line is at most \(2\beta+(n-t-2)\alpha\).
Otherwise, the line \(\ell\) can contain at most one weighted boundary point and at most \(n-1\) interior points, giving a total weight of not more than \(\beta+(n-1)\alpha\). Hence, our parameters must satisfy \(2\beta+(n-t-2)\alpha\leq 1\), \(\beta+(n-1)\alpha\leq 1\), \(\alpha,\beta\geq 0\), and \(t\in\mathbb{N}\).
With regards to the objective function, we note that there are at most \(2(2t-1)\) boundary points with weight zero, and thus \(w(\Gamma)\geq 2(n-2t)\beta+(n-1)^{2}\alpha\), and we wish to maximise this quantity subject to the constraints above. Some routine calculations then yield that we should set \(t=\left\lceil\frac{1}{2}\sqrt{(5n+1)(n-1)}-n\right\rceil\), \(\alpha=\frac{1}{n+t}\) and \(\beta=\frac{t+1}{n+t}\), for which we have \(w(\Gamma)=\left(10-4\sqrt{5}+o(1)\right)(n-1)\). Proposition 1 gives the desired lower bound:
\[\operatorname{cov}_{k}(\Gamma)\geq k\Phi(\Gamma)\geq kw(\Gamma)=\left(10-4 \sqrt{5}+o(1)\right)k(n-1).\]
(c) For the final part of the theorem, we assume the grid \(\Gamma\) is \(\Delta\)-generic, meaning that any line through two boundary points can contain at most \(\Delta\) interior points. In the framework of part (b), where we assign a weight of \(\alpha\) to all interior points, and a weight of \(\beta\) to all boundary points, we then obtain the constraint \(\beta+(n-1)\alpha\leq 1\) from the lines with at most one boundary point, and the constraint \(2\beta+\Delta\alpha\leq 1\) from the lines with two boundary points. The total weight, which we seek to maximise, is \(2(n-1)\beta+(n-1)^{2}\alpha\).
This optimisation problem is solved by taking \(\beta=1-\frac{n-1}{2(n-1)-\Delta}\) and \(\alpha=\frac{1}{2(n-1)-\Delta}\). It is readily verified that both constraints are then satisfied with equality, and the total weight of the grid is \(\left[2-\frac{n-1}{2(n-1)-\Delta}\right](n-1)\), whence the result follows by once again applying Proposition 1.
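Since this is only a two-variable optimisation, it can be double-checked symbolically. The following sketch (assuming sympy is available) confirms that the stated \(\alpha\) and \(\beta\) satisfy both constraints with equality and give the claimed total weight.

```python
# Sketch: symbolic check of the part (c) optimum with sympy.
import sympy as sp

n, D = sp.symbols("n Delta", positive=True)
alpha = 1 / (2 * (n - 1) - D)
beta = 1 - (n - 1) / (2 * (n - 1) - D)

assert sp.simplify(beta + (n - 1) * alpha - 1) == 0   # lines with one boundary point
assert sp.simplify(2 * beta + D * alpha - 1) == 0     # lines with two boundary points
total = 2 * (n - 1) * beta + (n - 1)**2 * alpha
claim = (2 - (n - 1) / (2 * (n - 1) - D)) * (n - 1)
assert sp.simplify(total - claim) == 0
```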
A few remarks are in order at this point. First, we note that the lower bound of part (b) can be improved if we have additional information about where the origin is in the grid. For example, suppose the origin is in the lower-left corner; that is, \(\min S_{1}=\min S_{2}=0\). Then any line containing two boundary points must be of negative slope, and hence when we apply Claim 1, we see that it is enough to leave only the largest values on the \(y\)-axis unweighted. When one
solves the corresponding optimisation problem, we find that \(\operatorname{cov}_{k}(\Gamma)\geq\big{(}4-2\sqrt{2}+o(1)\big{)}\,k(n-1)\) for such grids \(\Gamma\), a considerable improvement in the constant factor, as \(4-2\sqrt{2}\approx 1.1716\).
Second, as explained in the introduction, almost all grids \(\Gamma\) are generic, and parts (a) and (c) of Theorem 1.3 determine \(\operatorname{cov}_{k}(\Gamma)\) precisely when \(k\) is even and asymptotically when \(k\) is odd and large. However, even when the grid \(\Gamma\) is not generic but only \(\Delta\)-generic, provided \(\Delta=o(n)\), part (c) is robust enough to resolve the problem asymptotically. We give some natural examples of this below.
**Corollary 2**.: _Given \(n\in\mathbb{N}\), let \(\Gamma_{\exp,n}=\Gamma(E,E)\), where \(E=\{0,1,2,4,\ldots,2^{n-2}\}\), and let \(\Gamma_{\operatorname{quad},n}=\Gamma(S,S)\), where \(S=\{0,1,4,\ldots,(n-1)^{2}\}\). Then, if \(\Gamma\in\{\Gamma_{\exp,n},\Gamma_{\operatorname{quad},n}\}\), we have \(\operatorname{cov}_{k}(\Gamma)=\big{(}\frac{3}{2}+o(1)\big{)}\,k(n-1)\) as \(k,n\to\infty\)._
Proof.: By Theorem 1.3(a), we know \(\operatorname{cov}_{k}(\Gamma)\leq\big{\lceil}\frac{3}{2}k\big{\rceil}(n-1)\), which is \(\big{(}\frac{3}{2}+o(1)\big{)}\,k(n-1)\) as \(k\to\infty\), and so we need only demonstrate the lower bound. By Theorem 1.3(c), it suffices to show \(\Gamma\) is \(\Delta\)-generic for some \(\Delta=o(n)\).
We begin with \(\Gamma=\Gamma_{\exp,n}\), and show that it is \(1\)-generic. Indeed, suppose \(\ell\) is a line containing two boundary points of \(\Gamma\), say \((0,2^{i})\) and \((2^{j},0)\). It is then a straightforward calculation to see that, for any \(r\), the line \(\ell\) passes through \((2^{r},2^{i}-2^{i-j+r})\). In order for this to be an interior point of the grid \(\Gamma_{\exp,n}\), we require that \(2^{i}-2^{i-j+r}\) is a positive power of \(2\) strictly smaller than \(2^{i}\), which only happens for \(r=j-1\). Thus, the line \(\ell\) contains at most one interior point, and hence \(\Gamma_{\exp,n}\) is \(1\)-generic.
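For small \(n\), this can also be confirmed by brute force, as in the following illustrative snippet:

```python
# Sketch: brute-force check that the exponential grid is 1-generic for small n.
n = 10
E = [0] + [2**i for i in range(n - 1)]          # E = {0, 1, 2, 4, ..., 2^(n-2)}
for bi in E[1:]:                                # line through (0, bi) and (bj, 0):
    for bj in E[1:]:                            #   x/bj + y/bi = 1, i.e. bi*x + bj*y = bi*bj
        interior = [(x, y) for x in E[1:] for y in E[1:]
                    if bi * x + bj * y == bi * bj]
        assert len(interior) <= 1
```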
The quadratic grid \(\Gamma_{\operatorname{quad},n}\) requires somewhat more delicate treatment. Again, let us suppose \(\ell\) is a line containing the boundary points \((0,a^{2})\) and \((b^{2},0)\), and let \((r^{2},s^{2})\) be an interior point lying on \(\ell\). We then have \(s^{2}=a^{2}\Big{(}1-\frac{r^{2}}{b^{2}}\Big{)}\), or \((ab)^{2}=(ar)^{2}+(bs)^{2}\). Thus, we can bound the number of interior points on \(\ell\) by the number of ways of writing \((ab)^{2}\) as a sum of squares. Following Beiler [6], if \(Q\) is the set of prime divisors of \(ab\) that are congruent to \(1\) modulo \(4\), and if \(\beta_{q}\) is the multiplicity of \(q\in Q\) in the prime factorisation of \((ab)^{2}\), then there are \(\prod_{q\in Q}(\beta_{q}+1)\) ways to write \((ab)^{2}\) as a sum of squares.
To simplify the notation in the calculation below, we shall assume \(q=5,13\) and \(17\) are included in \(Q\), setting \(\beta_{q}=0\) in case they do not divide \(ab\). Now observe that we have
\[\sum_{q\in Q}(\beta_{q}+1) =3+\beta_{5}+\beta_{13}+\beta_{17}+\sum_{q\in Q,q\geq 29}(\beta_{q}+1)\] \[\leq 3+\beta_{5}+\beta_{13}+\beta_{17}+\sum_{q\in Q,q\geq 29}2 \beta_{q}\] \[\leq 3+\sum_{q\in Q}\beta_{q}\log_{5}q=3+\log_{5}\left(\prod_{q\in Q }q^{\beta_{q}}\right)\leq 3+\log_{5}\big{(}(ab)^{2}\big{)},\]
and so \(\sum_{q\in Q}(\beta_{q}+1)\leq 3+4\log_{5}n\).
Now, given some natural numbers \(m_{i}\), it is simple to show that \(\prod_{i}m_{i}\leq 3^{\frac{1}{3}\sum_{i}m_{i}}\), and hence the number of ways to express \((ab)^{2}\) as a sum of squares is at most \(3^{\frac{1}{3}\sum_{q\in Q}(\beta_{q}+1)}\leq 3\cdot 3^{\frac{4}{3}\log_{5}n}=3n^{ \frac{4}{3}\log_{5}3}\). It thus follows that \(\Gamma_{\operatorname{quad},n}\) is \(\Delta\)-generic for \(\Delta=3n^{\frac{4}{3}\log_{5}3}=o(n)\).
### Standard grids
While the results of Section 4.1 resolve the problem asymptotically for very many grids, there is no questioning the fact that the most natural case to consider is that of the standard grid \(\Gamma(S,S)\), where \(S=\{0,1,\ldots,n-1\}\). For convenience, we denote this grid by \(\Gamma_{n}\). By considering the line \(x+y=n-1\), we see that \(\Gamma_{n}\) is not \(\Delta\)-generic for any \(\Delta<n-2\), which means the
lower bound of Theorem 1.3(c) is worse than the Ball-Serra bound for \(n>k\). By tailoring our methods for this specific grid, though, we will obtain much better bounds. We begin by showing that a \(k\)-cover of the standard grid requires far fewer lines than the upper bound given by Theorem 1.3, which, as shown in the previous subsection, is asymptotically tight for most grids.
Proof of Theorem 1.4 (upper bound).: We construct a \(k\)-cover of \(\Gamma_{n}\) of size \((\sqrt{2}+o(1))k(n-1)\). Let \(t\in[n-1]\) be a parameter, to be determined later, and consider the following collection of lines:
1. \(\left\lceil\frac{i}{n+t-1}k\right\rceil\) copies of the lines \(x=i\) and \(y=i\) for each \(i\in[n-1]\);
2. \(k-\left\lceil\frac{i}{n+t-1}k\right\rceil\) copies of the line \(x+y=i\) for every \(1\leq i<n+t-1\).
We begin by showing that the above collection of lines gives a \(k\)-cover of \(\Gamma_{n}\). First, it is clear that no line passes through the origin. Now consider a point \((s_{1},s_{2})\in\Gamma_{n}\setminus\{(0,0)\}\). If \(s_{1}+s_{2}<n+t-1\), then \((s_{1},s_{2})\) is covered \(\left\lceil\frac{s_{1}}{n+t-1}k\right\rceil+\left\lceil\frac{s_{2}}{n+t-1}k\right\rceil\) times by the lines in (i) and another \(k-\left\lceil\frac{s_{1}+s_{2}}{n+t-1}k\right\rceil\) times by those in (ii), and thus at least \(k\) times in total. On the other hand, if \(s_{1}+s_{2}\geq n+t-1\), then lines in (i) alone cover the point \(\left\lceil\frac{s_{1}}{n+t-1}k\right\rceil+\left\lceil\frac{s_{2}}{n+t-1}k \right\rceil\geq k\) times, as required. Calculating the size of this \(k\)-cover, we obtain
\[\operatorname{cov}_{k}(\Gamma_{n}) \leq 2\sum_{i=1}^{n-1}\biggl{\lceil}\frac{i}{n+t-1}k\biggr{\rceil} +\sum_{i=1}^{n+t-2}\biggl{(}k-\biggl{\lceil}\frac{i}{n+t-1}k\biggr{\rceil} \biggr{)}\] \[\leq k\Biggl{[}2\sum_{i=1}^{n-1}\frac{i}{n+t-1}+\sum_{j=1}^{n+t-2 }\frac{j}{n+t-1}\Biggr{]}+2n\] \[\leq k\biggl{[}\frac{2}{n+t-1}\binom{n}{2}+\frac{1}{n+t-1}\binom{ n+t-1}{2}\biggr{]}+2n\] \[=k\biggl{[}\frac{n(n-1)}{n+t-1}+\frac{n+t-2}{2}\biggr{]}+2n. \tag{7}\]
The upper bound given by (7) is valid for any \(t\in[n-1]\); we now want to choose a value of \(t\) that makes the right-hand side as small as possible. The function \(g(t)=\frac{n(n-1)}{n+t-1}+\frac{n+t-2}{2}\) has its minimum at \(t_{0}=\sqrt{2(n-1)n}-(n-1)\). Since our parameter must be an integer, we choose \(t=\left\lceil\sqrt{2(n-1)n}-(n-1)\right\rceil=(\sqrt{2}-1+o(1))(n-1)\), and substituting this value of \(t\) into (7) yields the claimed upper bound of \((\sqrt{2}+o(1))k(n-1)\) on \(\operatorname{cov}_{k}(\Gamma_{n})\).
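The following sketch (ours) verifies this cover directly for one concrete pair \((n,k)\), using exact ceiling division for the multiplicities:

```python
# Sketch: verify the upper-bound cover of Gamma_n directly for one (n, k).
from math import ceil, sqrt

n, k = 10, 7
t = ceil(sqrt(2 * (n - 1) * n) - (n - 1))        # the optimised integer parameter
ceil_div = lambda p, q: -(-p // q)               # exact ceiling division
mult = lambda i: ceil_div(i * k, n + t - 1)      # multiplicity used in family (i)

def hits(x, y):
    h  = sum(mult(i) for i in (x, y) if 1 <= i <= n - 1)           # lines in (i)
    h += sum(k - mult(i) for i in [x + y] if 1 <= i < n + t - 1)   # lines in (ii)
    return h

for x in range(n):
    for y in range(n):
        assert hits(x, y) == 0 if (x, y) == (0, 0) else hits(x, y) >= k
```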
We now turn our attention to the lower bound.
Proof of Theorem 1.4 (lower bound).: By Proposition 1, it suffices to find a feasible solution to the dual linear program \(\mathcal{D}(\Gamma_{n})\), that is, a weighting of the nonzero points of \(\Gamma_{n}\) in which every origin-avoiding line has weight at most \(1\), that has total weight at least \((2-e^{-1/2}+o(1))(n-1)\).
Let \(t\) be the largest integer such that \(\sum_{i=1}^{t}\frac{1}{n-i}\leq\frac{1}{2}\) and consider the following weighting on the points of \(\Gamma_{n}\setminus\{(0,0)\}\):
\[w((x,y))=\begin{cases}\frac{1}{2}&\text{if $xy=0$;}\\ \frac{1}{n-i}&\text{if $x+y=n-1+i$ for some $i\in[t]$;}\\ 0&\text{otherwise.}\end{cases}\]
We first show that \((w((x,y)):(x,y)\in\Gamma_{n})\) gives a feasible solution to the dual linear program \(\mathcal{D}(\Gamma_{n})\). Clearly \(w((x,y))\geq 0\) for all \((x,y)\in\Gamma_{n}\setminus\{(0,0)\}\). Now, let \(\ell\in\mathcal{L}\) be any line. If \(\ell\)
contains two boundary points, then any interior point \((x,y)\) on \(\ell\) satisfies \(x+y\leq n-1\), and thus has weight zero. It follows that \(w(\ell)=1\). Otherwise, if \((x,y)\mapsto x+y\) is constant on \(\ell\), then \(\ell=\{(x,y):x+y=n-1+i\}\) for some \(i\in[n-1]\). Then all points on \(\ell\) have weight zero, unless \(1\leq i\leq t\), in which case \(\ell\) contains \(n-i\) points of weight \(\frac{1}{n-i}\) each. Thus \(w(\ell)\leq 1\) in this case. Finally, if \((x,y)\mapsto x+y\) is not constant on \(\ell\), it must be injective. Then, \(\ell\) contains at most one boundary point, which has weight \(\frac{1}{2}\), and the weight from the remaining points is at most \(\sum_{i=1}^{t}\frac{1}{n-i}\), which by the choice of \(t\) is at most \(\frac{1}{2}\). So in total we again have \(w(\ell)\leq 1\).
To compute the total weight of the grid, observe that each diagonal line of the form \(x+y=i\) has weight one if \(1\leq i\leq n-1+t\) and zero otherwise. Thus, the total weight of the grid is \(n-1+t\).
It remains to estimate \(t\). Note that \(t\leq\frac{n}{2}\), since \(\sum_{i=1}^{t}\frac{1}{n-i}\geq\sum_{i=1}^{t}\frac{1}{n}=\frac{t}{n}\), and so both \(n-1\) and \(n-1-t\) go to infinity linearly with \(n\). It is well known that, as \(m\to\infty\), the partial sums \(H_{m}\) of the Harmonic series satisfy \(H_{m}=\sum_{j=1}^{m}\frac{1}{j}=\log m+\gamma+o(1)\), where \(\gamma\) is the Euler-Mascheroni constant. Hence,
\[\sum_{i=1}^{t}\frac{1}{n-i}=H_{n-1}-H_{n-1-t}=\log\left(\frac{n-1}{n-1-t} \right)+o(1).\]
Thus, we must have \(\log\left(\frac{n-1}{n-1-t}\right)=\frac{1}{2}+o(1)\), or \(\log\left(1-\frac{t}{n-1}\right)=-\frac{1}{2}+o(1)\). This gives \(1-\frac{t}{n-1}=e^{-1/2}+o(1)\), or \(t=\left(1-e^{-1/2}+o(1)\right)(n-1)\), which results in the claimed bound.
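The feasibility of this weighting can again be confirmed exactly for concrete values of \(n\). In the sketch below (ours), \(t\) is computed from its defining inequality and every line through two grid points is tested; deduplicating the lines is unnecessary for a sanity check of this size.

```python
# Sketch: exact feasibility check of the lower-bound weighting for one n.
from fractions import Fraction
from itertools import combinations

n = 12
t = 0                            # largest t with sum_{i<=t} 1/(n-i) <= 1/2
while sum(Fraction(1, n - i) for i in range(1, t + 2)) <= Fraction(1, 2):
    t += 1

def w(x, y):
    if x == 0 or y == 0:
        return Fraction(1, 2)
    if n - 1 < x + y <= n - 1 + t:
        return Fraction(1, 2 * n - 1 - (x + y))    # equals 1/(n-i) on x+y = n-1+i
    return Fraction(0)

pts = [(x, y) for x in range(n) for y in range(n) if (x, y) != (0, 0)]
assert sum(w(*p) for p in pts) == n - 1 + t        # total weight of the grid
for (x1, y1), (x2, y2) in combinations(pts, 2):    # every line through 2 points
    a, b = y2 - y1, x1 - x2
    c = a * x1 + b * y1
    if c != 0:                                     # lines in L avoid the origin
        assert sum(w(x, y) for x, y in pts if a * x + b * y == c) <= 1
```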
The cover we constructed for the upper bound only uses lines of slope \(0\), \(\infty\), and \(-1\). This may seem rather limited, and it is natural to wonder if one can do better by making use of other lines as well. However, we verified by computer search that for small values of \(n\) and \(k\), one is always able to build an optimal \(k\)-cover consisting only of these three types of lines. This motivated us to further study this restricted class of \(k\)-covers, and in our final result we prove that the smallest \(k\)-cover of this form has size \(\left(\sqrt{2}+o(1)\right)k(n-1)\).
Proof of Theorem 1.5.: For convenience, we will hereon call lines of slope \(-1\)_diagonals_. Since the cover constructed for the upper bound in Theorem 1.4 only uses horizontal, vertical, and diagonal lines, it also provides the same upper bound of \(\left(\sqrt{2}+o(1)\right)k(n-1)\) in this restricted setting.
To obtain a matching lower bound, we will again appeal to the linear programming approach. Recall that previously we obtained lower bounds by assigning weights to the points, whose sum was as large as possible, provided that the total weight along every origin-avoiding line was at most \(1\). In this restricted setting, since we are only able to use horizontal, vertical, and diagonal lines, the dual linear program only has constraints on the weights along these lines. This gives us much more freedom in choosing the weights, and thus we may hope to find a feasible weighting with a larger sum.
Observe that in our search for an optimal weighting, we have \(n^{2}-1\) degrees of freedom (the weights of the individual nonzero points) but only \(4(n-1)\) constraints (the horizontal, vertical, and diagonal lines). In order to reduce the search space and simplify our task, we shall impose the following additional conditions on the weighting \(w\).
* The weighting is symmetric across the main diagonal; that is, \(w(x,y)=w(y,x)\) for all \((x,y)\in\Gamma_{n}\setminus\{(0,0)\}\).
* On each diagonal, every interior point has the same weight.
* Every vertical line avoiding the origin has weight one (and hence so does every horizontal line).
* There is some \(t\in[n-1]\) such that the weight of the diagonal \(x+y=i\) is \(1\) if \(1\leq i\leq n+t-1\), is at most \(1\) if \(i=n+t\), and is \(0\) if \(i\geq n+t+1\).
Some remarks are in order now. First, note that the requirement of symmetry is without loss of generality, since if \(w\) is any feasible weighting, then \(w^{\prime}\) defined by \(w^{\prime}(x,y)=\frac{1}{2}(w(x,y)+w(y,x))\) is a symmetric feasible weighting of the same total weight.
With regards to the total weight, by summing along the diagonals, we see that \(w(\Gamma_{n}\setminus\{(0,0)\})\) is at least \(n+t-1\) and at most \(n+t\). Thus, our goal is to maximise the value of \(t\) for which we can find such a feasible weighting.
Now, for some fixed \(t\), observe that the weights of approximately half the points are already determined by the conditions. Since the diagonals \(x+y=i\), for \(i\geq n+t+1\), have weight \(0\), we must have \(w(x,y)=0\) whenever \(x+y>n+t\). On the other hand, if \(n\leq i\leq n+t-1\), then we know the diagonal has weight \(1\). Since the diagonal consists entirely of \(2n-1-i\) internal points, all of which must have the same weight, we have \(w(x,y)=\frac{1}{2n-1-(x+y)}\) whenever \(n\leq x+y\leq n+t-1\). Finally, when \(x+y=n+t\), we must have \(w(x,y)=z\) for some \(0\leq z\leq\frac{1}{n-t-1}\) to ensure the diagonal has weight at most \(1\).
We now turn our attention to the lower triangular points of the grid; that is, \((x,y)\) with \(1\leq x+y\leq n-1\). In our previous weighting, we assigned weight \(\frac{1}{2}\) to the boundary points and left the interior points unweighted. Now that we have some more freedom, we will look to spread the weight throughout the grid. With that in mind, we introduce parameters \(\alpha_{i}\), \(1\leq i\leq n-1\), such that \(w(i,0)=w(0,i)=\frac{1}{2}-\alpha_{i}\). Since the diagonal \(x+y=i\) has total weight \(1\), it follows that \(w(x,y)=\frac{2\alpha_{x+y}}{x+y-1}\) for the \(x+y-1\) interior points \((x,y)\) with \(x+y=i\), \(x,y\neq 0\).
Thus, using the symmetry and the conditions along the diagonals, we have shown that our weighting takes the form
\[w((x,y))=\begin{cases}\frac{1}{2}-\alpha_{x+y}&\text{if $x=0$ or $y=0$};\\ \frac{2\alpha_{x+y}}{x+y-1}&\text{if $x,y\neq 0$ and $1\leq x+y\leq n-1$};\\ \frac{1}{2n-1-i}&\text{if $x+y=i$ for some $n\leq i\leq n+t-1$};\\ z&\text{if $x+y=n+t$};\\ 0&\text{if $x+y\geq n+t+1$}.\end{cases}\]
for some parameters \(\alpha_{1},\ldots,\alpha_{n-1},z\in\mathbb{R}_{\geq 0}\). To finish, we will use the condition that the vertical lines have total weight \(1\) to solve for \(\alpha_{i}\).
Indeed, by considering the line \(x=n-1\), we have \(\frac{1}{2}-\alpha_{n-1}+\sum_{i=n}^{n+t-1}\frac{1}{2n-1-i}+z=1\), which yields
\[\alpha_{n-1}=\sum_{i=n}^{n+t-1}\frac{1}{2n-1-i}+z-\frac{1}{2}. \tag{8}\]
Now compare the weights of the points on the lines \(x=n-2\) and \(x=n-1\). Since the diagonals are constant along their interior points, we have \(w(n-2,y)=w(n-1,y-1)\) for all \(2\leq y\leq n-1\). Hence the differences are that \(w(n-2,0)\) and \(w(n-2,1)\) replace \(w(n-1,0)\) and \(w(n-1,n-1)\). Thus,
\[w(\{x=n-2\})-w(\{x=n-1\}) =w(n-2,0)+w(n-2,1)-w(n-1,0)-w(n-1,n-1)\] \[=\tfrac{1}{2}-\alpha_{n-2}+\frac{2\alpha_{n-1}}{n-2}-\big{(} \tfrac{1}{2}-\alpha_{n-1}\big{)}-0,\]
and since both vertical lines have weight \(1\), this gives \(\alpha_{n-2}=\Big{(}1+\frac{2}{n-2}\Big{)}\alpha_{n-1}\). Repeating this argument for the lines \(x=i-1\) and \(x=i\) for each \(2\leq i\leq n-1\), we obtain the following recurrence relation:
\[\alpha_{i-1}=\bigg{(}1+\frac{2}{i-1}\bigg{)}\alpha_{i}-z\mathbf{1}_{i=t+1}- \frac{1}{n-i}\mathbf{1}_{i\leq t}, \tag{9}\]
where \(\mathbf{1}_{A}\) is the indicator function of the event \(A\) defined as
\[\mathbf{1}_{A}=\begin{cases}1&\text{if $A$ is true;}\\ 0&\text{if $A$ is false.}\end{cases}\]
For the initial condition, observe that the line \(x+y=1\) has no interior points, and so for it to have weight \(1\), we must have \(\alpha_{1}=0\). Combining this with (9), we obtain:
\[\alpha_{i}=\frac{1}{i(i+1)}\sum_{j=1}^{\min\{t,i\}}\frac{j(j-1)}{n-j}+\mathbf{ 1}_{i\geq t+1}\frac{t(t+1)}{i(i+1)}z\quad\text{ for all $2\leq i\leq n-1$.} \tag{10}\]
Substituting the value of \(\alpha_{n-1}\) from (10) into (8), we can then solve for \(z\) to obtain:
\[z=\Bigg{(}\frac{1}{2}-\sum_{j=1}^{t}\frac{1}{n-j}\bigg{(}1-\frac {j(j-1)}{(n-1)n}\bigg{)}\Bigg{)}\frac{n(n-1)}{n(n-1)-t(t+1)} \tag{11}\] \[=\frac{(n-1)n\Big{(}\frac{1}{2}-\frac{t(2n+t-1)}{2(n-1)n}\Big{)} }{(n-1)n-t(t+1)}, \tag{12}\]
where the second equality is due to the fact that
\[\sum_{j=1}^{t}\frac{1}{n-j}\bigg{(}1-\frac{j(j-1)}{(n-1)n}\bigg{)}=\frac{1}{n (n-1)}\sum_{j=1}^{t}(n+j-1)=\frac{t(2n+t-1)}{2n(n-1)}.\]
Recall that feasibility dictates \(0\leq z\leq\frac{1}{n-t-1}\). If \(n>1\), we have \(z\geq 0\) when \(0\leq t\leq\frac{1}{2}\Big{(}\sqrt{8n^{2}-8n+1}-2n+1\Big{)}\) and \(z\leq\frac{1}{n-t-1}\) for \(\frac{1}{2}\Big{(}\sqrt{8n^{2}-8n+1}-2n-1\Big{)}\leq t<n-1\). Taking \(t\) to be an integer satisfying \(\frac{1}{2}\Big{(}\sqrt{8n^{2}-8n+1}-2n-1\Big{)}\leq t\leq\frac{1}{2}\Big{(} \sqrt{8n^{2}-8n+1}-2n+1\Big{)}\), we have \(t=(\sqrt{2}-1+o(1))(n-1)\). It follows that the total weight of the grid is \(\big{(}\sqrt{2}+o(1)\big{)}(n-1)\).
We are not quite done, as there is one final condition to verify -- to ensure that all our weights are non-negative, we must have \(0\leq\alpha_{i}\leq\frac{1}{2}\) for all \(1\leq i\leq n-1\). From (9) we have:
\[\alpha_{i} =\frac{i-1}{i+1}\bigg{(}\alpha_{i-1}+\frac{1}{n-i}\bigg{)} \text{ if $2\leq i\leq t$.}\] \[\alpha_{i} =\frac{i-1}{i+1}(\alpha_{i-1}+z)\leq\frac{i-1}{i+1}\bigg{(} \alpha_{i-1}+\frac{1}{n-i}\bigg{)} \text{ if $i=t+1$}\] \[\alpha_{i} =\frac{i-1}{i+1}\alpha_{i-1}<\alpha_{i-1} \text{ if $t+2\leq i\leq n-1$}\]
Thus, it suffices to show that \(\alpha_{i}\leq\frac{1}{2}\) for all \(2\leq i\leq t+1\). We will do so by showing that for \(1\leq i\leq t+1\), we have \(\alpha_{i}\leq\frac{i-1}{2(n-i-1)}\) by induction on \(i\). We know that \(\alpha_{1}=0\), so the base case is clear. Let \(i>1\) and assume the induction hypothesis; then
\[\alpha_{i}\leq\frac{i-1}{i+1}\bigg{(}\alpha_{i-1}+\frac{1}{n-i}\bigg{)}\leq \frac{i-1}{i+1}\bigg{(}\frac{i-2}{2(n-i)}+\frac{1}{n-i}\bigg{)}\leq\frac{i-1} {2(n-i-1)}.\]
We have \(\frac{i-1}{2(n-i-1)}\leq\frac{1}{2}\) whenever \(i\leq n-i\), which is true since \(i\leq t+1=(\sqrt{2}-1+o(1))(n-1)\).
Hence our weighting is indeed feasible and \(w(\Gamma_{n}\setminus\{(0,0)\})=\big{(}\sqrt{2}+o(1)\big{)}(n-1)\). It thus follows that any \(k\)-cover of \(\Gamma_{n}\) using only horizontal, vertical, and diagonal lines must have size at least \(\big{(}\sqrt{2}+o(1)\big{)}k(n-1)\).
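To make this weighting tangible, the sketch below (ours) instantiates it for one concrete \(n\): it picks an admissible integer \(t\), computes \(z\) from (12) and the \(\alpha_{i}\) from (9), and then verifies all horizontal, vertical, and diagonal constraints exactly with rational arithmetic.

```python
# Sketch: exact instantiation of the restricted dual weighting for one n.
from fractions import Fraction
from math import isqrt

n = 30
# an integer t in the admissible window around (sqrt(8n^2 - 8n + 1) - 2n)/2
t = (isqrt(8 * n * n - 8 * n + 1) - 2 * n + 1) // 2
z = (Fraction(1, 2) - Fraction(t * (2 * n + t - 1), 2 * (n - 1) * n)) \
    * Fraction((n - 1) * n, (n - 1) * n - t * (t + 1))          # formula (12)
assert 0 <= z <= Fraction(1, n - t - 1)

alpha = {1: Fraction(0)}                                        # recurrence (9)
for i in range(2, n):
    extra = z if i == t + 1 else (Fraction(1, n - i) if i <= t else 0)
    alpha[i] = Fraction(i - 1, i + 1) * (alpha[i - 1] + extra)
assert all(0 <= alpha[i] <= Fraction(1, 2) for i in alpha)      # weights valid

def w(x, y):
    s = x + y
    if x == 0 or y == 0:
        return Fraction(1, 2) - alpha[s]
    if s <= n - 1:
        return Fraction(2, s - 1) * alpha[s]
    if s <= n + t - 1:
        return Fraction(1, 2 * n - 1 - s)
    return z if s == n + t else Fraction(0)

for j in range(1, n):            # vertical lines (horizontals follow by symmetry)
    assert sum(w(j, y) for y in range(n)) == 1
for s in range(1, 2 * n - 1):    # diagonals x + y = s
    d = sum(w(x, s - x) for x in range(max(0, s - n + 1), min(s, n - 1) + 1)
            if (x, s - x) != (0, 0))
    assert d == 1 if s <= n + t - 1 else d <= 1
```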
## 5 Conclusion
In this paper, we studied line coverings with multiplicities for two-dimensional real grids. We determined the minimum size of a cover in several cases, but some natural and interesting questions remain open, and we highlight them below.
In Section 3, we investigated for which grids the Ball-Serra bound is tight. We proved that, when \(n\) is sufficiently large with respect to \(m\) and \(k\), the Ball-Serra bound is tight for any \(n\times m\) grid. Moreover, we showed that the threshold value for \(n\) given by Theorem 1.1 is tight for _most_ grids. It can be shown, however, that this bound on \(n\) is not best possible for _all_ grids. For example, for the grid \(\Gamma(S_{1},S_{2})\), where \(S_{1}=\{0,1,2,\ldots,n-1\}\) and \(S_{2}=\{-1,0,1\}\), and any \(k\geq 3\), we can show that the Ball-Serra bound is tight already for \(n=2(k-1)\), as opposed to the lower bound \(n\geq 2k-1\) of Theorem 1.1. However, in this grid the omitted point \((0,0)\) is not a corner point, while, as in the square grid setting, it is more natural to consider grids in which \((0,0)\) is a corner. For such grids, one could investigate when the Ball-Serra bound holds.
**Question 1**.: _Let \(\Gamma\) be the grid \(\{0,1,2,\ldots,n-1\}\times\{0,1,2,\ldots,m-1\}\) and \(k\geq 2\) be an integer. How large must \(n\) be with respect to \(m\) and \(k\) to have \(\operatorname{cov}_{k}(\Gamma)=k(n-1)+(m-1)\)?_
Our main result for standard grids establishes reasonably good asymptotic lower and upper bounds on \(\operatorname{cov}_{k}(\Gamma_{n})\). It would be of interest to close the remaining gap.
**Question 2**.: _What is the true asymptotic value of \(\operatorname{cov}_{k}(\Gamma_{n})\)?_
We tend to believe that \(\operatorname{cov}_{k}(\Gamma_{n})=(\sqrt{2}+o(1))k(n-1)\). In Theorem 1.5, we showed this to be the case when we only use lines of slope \(0\), \(\infty\), and \(-1\). However, for the weighting we used to establish the lower bound, one can show that lines of slope \(1\) near the origin (e.g., \(y=x+1\)) have weight larger than \(1\) when \(n\) is large. We believe that these are the only problematic lines, and so as an intermediate step one could attempt to verify that our weighting remains feasible if one only forbids lines of slope \(1\). This would imply that any \(k\)-cover of \(\Gamma_{n}\) of size smaller than \(\big{(}\sqrt{2}+o(1)\big{)}k(n-1)\) must contain many lines of slope \(1\). To show that such a construction is unlikely to exist, it might be helpful to consider what happens if we restrict ourselves to lines of slope \(0\), \(\infty\), \(-1\), and \(1\).
In our work thus far we observed that the standard grid \(\Gamma_{n}\) requires many fewer lines to cover than any other \(n\times n\) grid we considered. Our general lower bound from Theorem 1.3(b) (and the improvement for grids in which \((0,0)\) is a corner discussed after the proof) is not strong enough to establish this fact, and we propose the following problem.
**Question 3**.: _Is it true that \(\operatorname{cov}_{k}(\Gamma_{n})\leq\operatorname{cov}_{k}(\Gamma)\) for any \(n\times n\) grid \(\Gamma\) in which \((0,0)\) is a corner?_
More broadly, it would be of interest to improve the lower bound from Theorem 1.3(b), which we do not believe to be best possible.
Another direction that might lead to interesting findings is to consider translates of the standard grid in which the omitted point is not in the lower-left corner. How does the position of the origin then affect the value of \(\operatorname{cov}_{k}(\Gamma)\)? For instance, what are the asymptotics of \(\operatorname{cov}_{k}(\Gamma)\), where \(\Gamma=\Gamma(\{-\lfloor n/2\rfloor,\ldots,\lceil n/2\rceil\},\{-\lfloor n/2 \rfloor,\ldots,\lceil n/2\rceil\})\)?
While we mainly focused on two-dimensional real grids, it would be natural to investigate the problem in higher dimensions as well. For example, in Section 3, we remarked that the Ball-Serra bound can be tight for higher-dimensional grids as well, provided that one of the sides is much longer than the others. How much longer does that side need to be for the bound to be attained? Some first results in this direction were shown in the bachelor's thesis of the fourth author [14]. Once again, it would be particularly interesting to investigate \(\operatorname{cov}_{k}(\Gamma_{n}^{(d)})\) for the standard \(d\)-dimensional grid \(\Gamma_{n}^{(d)}=\{0,\ldots,n-1\}^{d}\).
Finally, while all of our results are stated for grids over \(\mathbb{R}\), the questions we considered and our general framework extend to grids over any field. Some of our results, for example Theorem 1.1, extend to arbitrary fields. It will be interesting to prove similar results for other fields, and in particular for fields of positive characteristic.
|
2310.15720 | Ensemble of Task-Specific Language Models for Brain Encoding | Language models have been shown to be rich enough to encode fMRI activations
of certain Regions of Interest in our Brains. Previous works have explored
transfer learning from representations learned for popular natural language
processing tasks for predicting brain responses. In our work, we improve the
performance of such encoders by creating an ensemble model out of 10 popular
Language Models (2 syntactic and 8 semantic). We beat the current baselines by
10% on average across all ROIs through our ensembling methods. | Arvindh Arun, Jerrin John, Sanjai Kumaran | 2023-10-24T10:52:41Z | http://arxiv.org/abs/2310.15720v2 | # Ensemble of Task-Specific Language Models for Brain Encoding
###### Abstract
Language models have been shown to be rich enough to encode fMRI activations of certain Regions of Interest in our Brains. Previous works have explored transfer learning from representations learned for popular natural language processing tasks for predicting brain responses. In our work, we improve the performance of such encoders by creating an ensemble model out of 10 popular Language Models (2 syntactic and 8 semantic). We beat the current baselines by 10% on average across all ROIs through our ensembling methods.
## I Introduction
Brain encoding is the process of mapping textual, visual, or other sensory information to neural activity patterns, and language models can be used to predict this neural activity. Prior research has examined the potential of task-specific fine-tuned language models to predict fMRI brain activity. Language models (LMs) trained on large corpora often capture aspects of cognitive understanding; Transformers are the class of LMs that do this best, yet they remain hard to probe and interpret. However, further scope exists for incorporating information from multiple task-specific models to improve encoding accuracy. Our work investigates the effectiveness of combining multiple task-specific models to predict fMRI brain activity across different brain regions.
This approach uses encoding models based on task features to predict brain activity across various Regions of Interest in the brain (ROIs). However, rather than relying solely on task features, we seek to enhance encoding accuracy by incorporating information from multiple task-specific models. By combining the insights from these models, we may improve our predictions' accuracy and gain a more comprehensive understanding of the underlying neural processes.
We develop an effective ensemble of task-specific language models that can improve the accuracy and efficiency of brain encoding. We evaluate the models and discuss the implications of our findings for the field of cognitive neuroscience. Our work contributes to the ongoing efforts to understand the neural processes underlying cognition and their relationship with language. By leveraging language models' capabilities, we can better understand how the brain processes and represents information during various cognitive tasks. Our findings have implications for developing more accurate and robust encoding models for fMRI data analysis and provide insights into the relationship between language and brain activity.
## II Related Work
Gauthier and Levy (2019) [2] investigate the robustness of human brain representations of sentence understanding. They compare various sentence encoding models on a brain decoding task where the sentence the participant sees must be predicted from the fMRI signal evoked by the sentence. They use a pre-trained BERT architecture as a baseline and fine-tune it on various natural language understanding tasks to determine which leads to improvements in brain-decoding performance. However, they find that none of the tested sentence encoding tasks yield significant increases in brain decoding performance. Further task ablations and representational analyses reveal that tasks producing syntax-light representations significantly improve brain decoding performance. These results offer constraints on the space of NLU models that best account for human neural representations of language and suggest limitations on decoding fine-grained syntactic information from fMRI human neuroimaging.
Schrimpf et al. (2021) [3] report on integrating recent artificial neural networks from machine learning with human recordings during language processing. The study finds that the most powerful models predict neural and behavioral responses across different datasets up to noise levels. Furthermore, their study shows that models that better predict the next word in a sequence also better predict brain measurements. These findings suggest that predictive processing plays a fundamental role in shaping language comprehension mechanisms in the brain. The study provides evidence for a neurally-mechanistic account of how meaning might be extracted from language, which has long been lacking in the field of cognitive neuroscience.
Oota et al. (2022) [1] explore the efficacy of task-specific learned Transformer representations for predicting brain responses in two diverse datasets: Pereira and Narratives. They use encoding models based on task features learned from ten popular natural language processing tasks, including both syntactic and semantic tasks, and find that features from coreference resolution, NER, and shallow syntax parsing explain greater variance for the reading activity, while tasks such as paraphrase generation, summarization, and natural language inference show better encoding performance for the listening activity. Their experiments provide insights into the cognitive aspects of language processing in the human brain. Our work
is an extension of their work.
## III Dataset
We use the Pereira dataset, which contains brain responses from subjects reading sentences. We utilize the data from sentence-based experiments (experiments 2 and 3) conducted by Pereira et al. (2018)[4]. A total of 627 sentences from 48 topics, presented to five subjects, were analyzed. These sentences were part of 168 passages, each containing 3-4 sentences. Our analysis focuses on nine brain ROIs belonging to four brain networks: the Default Mode Network (DMN), Language Network, Task Positive Network (TP), and Visual Network. The DMN is linked to semantic processing, while the Language Network is associated with language processing, understanding, word meaning, and sentence comprehension. The TP network is related to attention and salience information, and the Visual Network is responsible for visual object processing and object recognition.
## IV Methodology
### _Baselines_
For the baselines, one encoder model is used for each task. Feature spaces describing the stimulus sentences are extracted and used to predict brain activity in the encoding model. The encoder is trained using ridge regression for each of the NLP tasks, with one model trained per subject and per ROI. The data was split into train and test sets in the ratio 4:1.
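A rough sketch of this baseline pipeline is given below; the variable names, array shapes, and the cross-validated penalty grid are our own illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a per-subject, per-ROI ridge-regression encoder.
# Shapes and names are illustrative assumptions, not the authors' code.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

def fit_encoder(sentence_embeddings, voxel_activations):
    # sentence_embeddings: (num_sentences, 768) features from one task-specific LM
    # voxel_activations:   (num_sentences, num_voxels) fMRI responses for one ROI
    X_tr, X_te, Y_tr, Y_te = train_test_split(
        sentence_embeddings, voxel_activations, test_size=0.2, random_state=0)  # 4:1 split
    # One cross-validated ridge model maps the embedding to all voxels at once.
    encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
    return encoder, encoder.predict(X_te), Y_te
```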
We encode fMRI data using several natural language processing (NLP) tasks. These tasks include coreference resolution (CR), paraphrase detection (PD), summarization (Sum), named entity recognition (NER), natural language inference (NLI), question answering (QA), sentiment analysis (SA), semantic role labeling (SRL), shallow syntax parsing (SS), and word sense disambiguation (WSD). Each task serves a specific purpose in analyzing and understanding language, such as identifying entities and their relationships, detecting sentiment, and determining the meaning of words in context.
* Coreference Resolution (CR): finds all expressions in a text that refer to the same entity.
* Paraphrase Detection (PD): rewords a given passage in shorter or different words while preserving its meaning.
* Summarization (Sum): selects a few important sentences from a document or paragraph.
* Named Entity Recognition (NER): detects named entities, such as person names, location names, and company names, in a given text.
* Natural Language Inference (NLI): investigates the entailment relationship between two texts, a premise, and a hypothesis.
* Question Answering (QA): selects an answer from a set of candidate answers given a passage and a question.
* Sentiment Analysis (SA): determines whether a piece of text is positive, negative, or neutral.
* Semantic Role Labeling (SRL): assigns labels to words or phrases in a sentence that indicate their semantic role in the sentence, such as that of an agent, goal, or result.
* Shallow Syntax Parsing (SS): approximates the phrase-syntactic structure of sentences.
* Word Sense Disambiguation (WSD): determines which sense or meaning of a word is activated by its use in a particular context.
### _Ensembling Methods_
We hypothesize that combining multiple task-specific models for a single ROI can generate better representations, since no ROI is dedicated to a single task. This serves as our primary motivation to try out different ensembling methods on the task-specific models' outputs. For reproducibility, we have made the code public 1.
Footnote 1: Code - [https://github.com/jr-john/ensemble_brain_encoders](https://github.com/jr-john/ensemble_brain_encoders)
We explore multiple ensemble methods ranging from simple centroid calculation (Averaging) to learning a meta-model to combine the representations. Let \(n=11\) correspond to the number of task-specific Language Models we consider for our experiments. Each of the \(n\) embeddings generated is of the same dimension - \((768,)\). Let \(u_{i}\) correspond to the embedding vector generated by the \(i\)-th LM and \(u_{f}\) to the final ensemble embedding.
#### Iv-B1 Average
We take the dimension-wise average of embeddings generated by all the task-specific pre-trained LMs. Geometrically, this corresponds to the centroid of the polyhedron formed by all the \(n\) points in the embedding representation space.
\[u_{f}=\frac{\sum_{i=1}^{n}u_{i}}{n}\]
The sum here is dimension-preserving, i.e., it is taken across each of the \(768\) dimensions.
| Sub | Language LH | Language RH | Body | Face | Object | Scene | Fusion | RH | LH |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| P01 | 5265 | 6172 | 3774 | 4963 | 8085 | 4141 | 12829 | 17190 | 35120 |
| M02 | 4930 | 5861 | 3873 | 4782 | 7552 | 3173 | 11729 | 15070 | 30594 |
| M04 | 5906 | 5401 | 3867 | 4803 | 7812 | 3602 | 12278 | 18011 | 34024 |
| M07 | 5629 | 5001 | 4199 | 4993 | 8617 | 3214 | 14547 | 17020 | 3408 |
| M15 | 5315 | 6141 | 4112 | 4941 | 8323 | 3496 | 12835 | 15995 | 31610 |

TABLE I: Number of voxels in each ROI in the Pereira dataset. LH = Left Hemisphere, RH = Right Hemisphere.
#### Iv-B2 Weighted Average
The weights were decided heuristically, by looking at which tasks have higher predictivity for a particular ROI and at the similarity between the tasks. We weigh the outputs of the different LMs with these weights. \(w_{i}\), a scalar, corresponds to the weight assigned to the \(i\)-th LM's embedding vector. Operation \((\cdot)\) denotes scalar multiplication.
\[u_{f}=\frac{\sum_{i=1}^{n}w_{i}\cdot u_{i}}{n}\]
where the weight \(w_{i}\) is defined using the power mean of accuracy \(x_{i}\) for each task-specific model as,
\[w_{i}=\left(\frac{1}{n}\sum_{i=1}^{n}x_{i}^{p}\right)^{\frac{1}{p}}\]
where \(p\) is a factor that controls the influence of the weights. We experiment with various \(p\) values. When \(p\) is close to 0, the embeddings with larger weights dominate, while when \(p\) is large, the embeddings with smaller weights gain more influence. We observed that increasing \(p\) improves the model performance up to \(p=5\), after which performance starts decreasing.
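The two averaging schemes can be sketched as follows; the normalized accuracy-powered weights are one plausible reading of the power-mean weighting above, and all names and shapes are illustrative assumptions.

```python
# Sketch of the averaging and weighted-averaging ensembles over the outputs
# of the n task-specific LMs. The weight normalization is an assumption.
import numpy as np

def average_ensemble(embeddings):
    # embeddings: (n, 768) stacked task-specific embeddings for one sentence
    return embeddings.mean(axis=0)  # centroid of the n embedding vectors

def weighted_average_ensemble(embeddings, accuracies, p=5.0):
    # accuracies: (n,) per-task encoding accuracies x_i; p amplifies their
    # differences (the text reports performance peaking around p = 5).
    w = accuracies ** p
    w = w / w.sum()  # normalize the scalar weights
    return (w[:, None] * embeddings).sum(axis=0)
```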
#### Iv-B3 Dynamic Weights
Instead of statically defining the weights using heuristics, the weights are learned from the training data. They are not predetermined; rather, training adjusts the weights while optimizing the loss function. The weights determine the importance of each model's output embedding, that is, they indicate whether the task is important for a particular ROI. The loss function is the MSE loss, and the optimizer is the
Fig. 1: 2v2 Accuracy and Pearson correlation
Adam optimizer. The weights are constrained to be positive and are normalized to unity.
#### Iv-B4 Stacking with PCA
Initially, the aim was to stack all the weak-learner output embeddings and pass them to the meta-model. However, this required huge amounts of memory and was not feasible, which made dimensionality reduction a necessity. We pass all the outputs through PCA to generate representative low-dimensional embeddings. We then concatenate them together and pass them through a meta-learner to generate the final predictions. Let \(f\) be a function that denotes the meta-model,
\[u_{f}=f\left(\big\Vert_{i=1}^{n}\,\text{PCA}(u_{i})\right)\]
where \(\Vert\) denotes concatenation.
#### Iv-B5 Stacking with Average
PCA leads to some information loss, since discarding the dimensions with low variance removes part of the signal. Moreover, applying PCA separately to the different base-model outputs leaves each with a very different distribution. Therefore, instead of PCA followed by concatenation, we take the average of all the outputs and then pass it through a meta-learner.
\[u_{f}=f\left(\frac{\sum_{i=1}^{n}u_{i}}{n}\right)\]
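A compact sketch of this stacking variant follows; since the meta-model \(f\) is not specified above, the ridge meta-learner here is an assumed choice.

```python
# Sketch of "stacking with average": average the base-model embeddings,
# then fit a meta-learner f on top. Ridge is an assumed choice for f.
from sklearn.linear_model import Ridge

def stack_with_average(train_embs, train_voxels, test_embs):
    # *_embs: (n_models, n_samples, 768); train_voxels: (n_samples, n_voxels)
    X_tr = train_embs.mean(axis=0)
    X_te = test_embs.mean(axis=0)
    meta = Ridge(alpha=1.0).fit(X_tr, train_voxels)
    return meta.predict(X_te)
```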
### _Evaluation_
We evaluate our models using popular brain encoding evaluation metrics - 2v2 Accuracy and Pearson Correlation. Let \(N\) be the number of samples given a subject and a brain region. Let \(\{Y_{i}\}_{i=1}^{N}\) and \(\{\hat{Y}_{i}\}_{i=1}^{N}\) denote the actual and predicted voxel value vectors for the \(i\)-th sample. Thus, \(Y\in\mathbb{R}^{N\times V}\) and \(\hat{Y}\in\mathbb{R}^{N\times V}\) where \(V\) is the number of voxels in that region.
**Pearson Correlation (PC)** is defined as,
\[\text{PC}=\frac{1}{N}\sum_{i=1}^{N}\text{corr}(Y_{i},\hat{Y}_{i})\]
**2v2 Accuracy** is calculated as,
\[2\text{V2Acc}=\frac{1}{\binom{N}{2}}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}I\Big[\text{cosD}(Y_{i},\hat{Y}_{i})+\text{cosD}(Y_{j},\hat{Y}_{j})<\text{cosD}(Y_{i},\hat{Y}_{j})+\text{cosD}(Y_{j},\hat{Y}_{i})\Big]\]
where \(\text{cosD}\) denotes the cosine distance and \(I[\cdot]\) the indicator function.
For both metrics, a higher value indicates better encodings.
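Both metrics can be sketched directly from the definitions above; inputs are the \((N,V)\) matrices of actual and predicted voxel values, and the function names are illustrative.

```python
# Sketch of the two evaluation metrics. Y, Y_hat: (N, V) actual and
# predicted voxel matrices for one subject and ROI.
import numpy as np
from scipy.spatial.distance import cosine  # cosine distance = 1 - cosine similarity

def pearson_correlation(Y, Y_hat):
    # Mean per-sample correlation between true and predicted voxel vectors.
    return np.mean([np.corrcoef(y, yh)[0, 1] for y, yh in zip(Y, Y_hat)])

def two_v_two_accuracy(Y, Y_hat):
    N, hits, total = len(Y), 0, 0
    for i in range(N - 1):
        for j in range(i + 1, N):
            matched = cosine(Y[i], Y_hat[i]) + cosine(Y[j], Y_hat[j])
            mismatched = cosine(Y[i], Y_hat[j]) + cosine(Y[j], Y_hat[i])
            hits += matched < mismatched
            total += 1
    return hits / total
```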
## V Results
The results are shown in Figure 1. The approach of weighted averaging with heuristic weights performs best overall, followed by simple averaging and then stacking after averaging. These three approaches clearly outperform the baselines by about 10%. This shows that averaging the output embeddings from task-specific models is a very effective way to ensemble them. The approach of stacking with PCA did not work very well because the dimensionality reduction lost some information and also changed the distribution. As for the ensemble method with dynamically learned weights, the most probable issue is that the weights overfit the training data, since the number of stimulus sentences per subject is very small. The results are available in the analysis notebook in the codebase (link provided above).
## VI Conclusion
We were thus able to create several ensembles of task-specific language models, most of which outperformed the baselines in terms of encoder accuracy and performance. Averaging proved to be the best ensemble technique. Moreover, the weights from approaches 2 and 3 give insight into which tasks are involved in each ROI. If a particular feature gives good predictivity for a particular ROI, then information for that specific task is most likely encoded in that ROI. The weights therefore tell which tasks are important for which ROI, and which ROI has better predictivity for each task.
|
2309.00528 | Trust your Good Friends: Source-free Domain Adaptation by Reciprocal
Neighborhood Clustering | Domain adaptation (DA) aims to alleviate the domain shift between source
domain and target domain. Most DA methods require access to the source data,
but often that is not possible (e.g. due to data privacy or intellectual
property). In this paper, we address the challenging source-free domain
adaptation (SFDA) problem, where the source pretrained model is adapted to the
target domain in the absence of source data. Our method is based on the
observation that target data, which might not align with the source domain
classifier, still forms clear clusters. We capture this intrinsic structure by
defining local affinity of the target data, and encourage label consistency
among data with high local affinity. We observe that higher affinity should be
assigned to reciprocal neighbors. To aggregate information with more context,
we consider expanded neighborhoods with small affinity values. Furthermore, we
consider the density around each target sample, which can alleviate the
negative impact of potential outliers. In the experimental results we verify
that the inherent structure of the target features is an important source of
information for domain adaptation. We demonstrate that this local structure can
be efficiently captured by considering the local neighbors, the reciprocal
neighbors, and the expanded neighborhood. Finally, we achieve state-of-the-art
performance on several 2D image and 3D point cloud recognition datasets. | Shiqi Yang, Yaxing Wang, Joost van de Weijer, Luis Herranz, Shangling Jui, Jian Yang | 2023-09-01T15:31:18Z | http://arxiv.org/abs/2309.00528v1 | # Trust your Good Friends: Source-free Domain Adaptation by Reciprocal Neighborhood Clustering
###### Abstract
Domain adaptation (DA) aims to alleviate the domain shift between source domain and target domain. Most DA methods require access to the source data, but often that is not possible (e.g. due to data privacy or intellectual property). In this paper, we address the challenging source-free domain adaptation (SFDA) problem, where the source pretrained model is adapted to the target domain in the absence of source data. Our method is based on the observation that target data, which might not align with the source domain classifier, still forms clear clusters. We capture this intrinsic structure by defining local affinity of the target data, and encourage label consistency among data with high local affinity. We observe that higher affinity should be assigned to reciprocal neighbors. To aggregate information with more context, we consider expanded neighborhoods with small affinity values. Furthermore, we consider the density around each target sample, which can alleviate the negative impact of potential outliers. In the experimental results we verify that the inherent structure of the target features is an important source of information for domain adaptation. We demonstrate that this local structure can be efficiently captured by considering the local neighbors, the reciprocal neighbors, and the expanded neighborhood. Finally, we achieve state-of-the-art performance on several 2D image and 3D point cloud recognition datasets.
Domain adaptation, source-free domain adaptation
## 1 Introduction
Most deep learning methods rely on training on large amounts of labeled data, while they cannot generalize well to a related yet different domain. One research direction to address this issue is Domain Adaptation (DA), which aims to transfer learned knowledge from a source to a target domain. Most existing DA methods demand labeled source data during the adaptation period; however, in practice the source data are often not accessible, for example due to privacy or intellectual property restrictions. Therefore, recently, there have emerged several works [27, 28, 31, 34] tackling a new challenging DA scenario where, instead of source data, only the source pretrained model is available for adapting, _i.e._, source-free domain adaptation (SFDA). Among these methods, USFDA [27] addresses universal DA [93] and SF [28] addresses open-set DA [61]. In both universal and open-set DA the label set is different for source and target domains. SHOT [34] and 3C-GAN [31] are for closed-set DA where source and target domains have the same categories. 3C-GAN [31] is based on target-style image generation with a conditional GAN, and SHOT [34] is based on mutual information maximization and pseudo labeling. BAIT [89] extends MCD [60] to the SFDA setting. FR or BUFR [14] is based on source feature restoring. However, these methods ignore the intrinsic neighborhood structure of the target data in feature space, which can be very valuable to tackle SFDA. Though the recent G-SFDA [90] considers neighborhood clustering to address SFDA, it fails to distinguish the potentially noisy nearest neighbors, which may lead to performance degradation.
In this paper, we focus on source-free domain adaptation. Our main observation is that current DA methods do not exploit the intrinsic neighborhood structure of the target data. We use this term to refer to the fact that, even though the target data might have shifted in the feature space (due to the covariance shift), target data of the same class is still expected to form a cluster in the embedding space. This is suggested to some degree by the t-SNE visualization of target features on the source model, which shows that significant cluster structure is preserved (see Fig. 1 (c)). This assumption is implicitly adopted by most DA methods, as instantiated by a recent DA work [68]. A well-established way to assess the structure of points in high-dimensional spaces is by considering the nearest neighbors of points, which are expected to belong to the same class. However, this assumption is not true for all points; the blue curve in Fig. 1(b) shows that around 75% of the nearest neighbors have the correct label. In this paper, we observe that this problem can be mitigated by considering reciprocal nearest neighbors (RNN); the reciprocal neighbors of a point have the point as their neighbor. Reciprocal neighbors have been studied before in different contexts [24, 53, 97]. The reason why reciprocal neighbors are more trustworthy is
illustrated in Fig. 1(a). Furthermore, Fig. 1(b) shows the ratio of neighbors which have the _correct prediction_ for different kinds of nearest neighbors. The curves show that reciprocal neighbors indeed have more chances to predict the _true_ label than non-reciprocal nearest neighbors (nRNN).
The above observation and analysis motivate us to assign different weights to the supervision from nearest neighbors. Our method, called Neighborhood Reciprocity Clustering (_NRC_), achieves source-free domain adaptation by encouraging reciprocal neighbors to concord in their label prediction. In addition, we also consider a weaker connection to the non-reciprocal neighbors. We define affinity values to describe the degree of connectivity between each data point and its neighbors, which is used to encourage class-consistency between neighbors. Moreover, we propose a self-regularization to decrease the negative impact of potential noisy neighbors. Inspired by recent graph-based methods [98, 2, 9], which show that higher-order neighbors can provide relevant context, and considering that neighbors of neighbors are more likely to provide datapoints that are close on the data manifold [69], we further retrieve the expanded neighbors, _i.e._, neighbors of the nearest neighbors, for auxiliary supervision in order to aggregate wider local information.
Though deploying the above neighborhood clustering can already lead to good performance, the clustering objective may deteriorate feature representations when it is based on features that are outliers, since outliers typically have no semantically similar nearest neighbors. To alleviate this issue, we further propose to estimate the feature density based on nearest-neighbor retrieval. We then only consider features in high-density regions for clustering and give less credit to the potential outlier features. We denote this augmented version as **NRC++**.
Our contributions can be summarized as follows, to achieve source-free domain adaptation: (I) We explicitly exploit the fact that same-class data forms clusters in the target embedding space; we do this by considering the predictions of neighbors and reciprocal neighbors. (II) We show that considering an extended neighborhood of data points further improves results. (III) We propose to estimate the feature density based on nearest-neighbor retrieval, and we decrease the contribution of the potential outlier features in the clustering, leading to further performance gains. (IV) The experimental results on three 2D image datasets and one 3D point cloud dataset show that our method achieves state-of-the-art performance compared with related methods.
This paper is an extension of our conference submission [88]. We have extended the technical contribution, and considered new settings and a new dataset in this version. We here summarize the main extensions: (1) more comprehensive related works have been discussed; (2) to reduce the negative impact of outliers, we estimate the density around each data point and decrease the contribution of outliers on the clustering; this newly proposed method, called NRC++, improves results on most of the experiments; (3) we evaluate our method on additional domain adaptation settings: partial-set, multi-source and multi-target domain adaptation, as well as the previous classical closed domain
Fig. 1: **(a) Illustration of our method. The left part shows how we distinguish reciprocal and non-reciprocal neighbors. Adaptation is achieved by pushing the features more heavily towards their reciprocal neighbors. (b) Ratio of different types of nearest-neighbor features whose _predicted_ label is the same as that of the ego feature; K is the number of nearest neighbors. (c) t-SNE visualization of target features from the source model. The features in (b) and (c) are from task Ar\(\rightarrow\)Rw of Office-Home.**
adaptation. (4) we conduct experiments and present results on the new challenging dataset: DomainNet [49].
## 2 Related Work
**Domain Adaptation.** Most DA methods tackle domain shift by aligning the feature distributions. Early DA methods such as [67, 39, 71] adopt moment matching to align feature distributions. In recent years, plenty of works have emerged that achieve alignment by adversarial training. DANN [16] formulates domain adaptation as an adversarial two-player game. The adversarial training of CDAN [40] is conditioned on several sources of information. DIRT-T [66] performs domain adversarial training with an added term that penalizes violations of the cluster assumption. Additionally, [29, 43, 60] adopt prediction diversity between multiple learnable classifiers to achieve local or category-level feature alignment between source and target domains. AFN [82] shows that the erratic discrimination of target features stems from much smaller norms than those found in the source features. SRDC [68] proposes to directly uncover the intrinsic target discrimination via discriminative clustering to achieve adaptation. More closely related, [48] resorts to K-means clustering for open-set adaptation while considering global structure. Our method instead only focuses on nearest neighbors (local structure) for source-free adaptation. The most relevant paper to ours is DANCE [58], which targets universal domain adaptation and is based on neighborhood clustering. However, they compute the entropy of instance discrimination [79] between all features, thus performing non-local neighborhood clustering. In our method, we encourage prediction consistency between only a few semantically close neighbors. There are also several other domain adaptation paradigms, such as partial-set domain adaptation [32, 36, 94, 4], where the label space of the source domain contains the one of the target domain; open-set domain adaptation [37, 61], where the label space of the source domain is included in the one of the target domain; universal domain adaptation [58, 93], where there exist both domain-specific and domain-shared categories; multi-source domain adaptation [45, 46, 72, 44], where there are multiple different labeled source domains for training; and multi-target domain adaptation [56, 46], where there are multiple unlabeled target domains for training and evaluation.
**Source-free Domain Adaptation.** Source-present methods need supervision from the source domain during adaptation. Recently, several methods have investigated source-free domain adaptation. For the closed-set DA setting, BAIT [89] extends MCD [60] to the source-free setting, and SHOT [34] proposes to fix the source classifier and match the target features to the fixed classifier by maximizing mutual information together with a proposed pseudo-label strategy that considers global structure. SHOT++ [35] uses both self-supervised and semi-supervised learning techniques to further improve SHOT. Several other methods address SFDA by generating features: 3C-GAN [31] synthesizes labeled target-style training images based on a conditional GAN to provide supervision for adaptation, while SFDA [38] tackles the segmentation task by synthesizing fake source samples. Along with an attention mechanism to avoid forgetting on the source domain, G-SFDA [90] proposes neighborhood clustering, which enforces prediction consistency between local neighbors. Based on Instance Discrimination [79], HCL [21] adopts features from the current and historical models to cluster features, together with a generated pseudo label conditioned on historical consistency. Recently, FR or BUFR [14] proposes to restore the source features to address SFDA, by adapting the feature extractor with only target data such that the approximate feature distribution under the target data realigns with the distribution saved on the source. USFDA [27] and FS [28] explore source-free universal DA [93] and open-set DA [61], and propose to synthesize extra training samples to make the decision boundary compact, thereby allowing the open classes to be recognized. DECISION [1] addresses source-free multi-source domain adaptation, where the model is first pretrained on multiple labeled source domains and then adapted to the target domain without access to the source data anymore. Recently, [91] proposes a simple clustering objective to achieve adaptation by clustering features. To address the imbalance issue in the feature clustering stage, [54] proposes a dynamic pseudo-labeling strategy. There have also recently emerged several works on test-time adaptation [74, 75, 10, 47, 5], which can be regarded as an online source-free domain adaptation task, although the training and evaluation protocols are different. We will not detail them in this paper.
**Graph Clustering.** Our method shares some similarities with graph clustering work such as [63, 77, 86, 87] by utilizing neighborhood information. However, our methods are fundamentally different. Unlike those works which require labeled data to train the graph network for estimating the affinity, we instead adopt reciprocity to assign affinity.
## 3 Method
**Notation.** We denote the labeled source domain data with \(n_{s}\) samples as \(\mathcal{D}_{s}=\{(x_{i}^{s},y_{i}^{s})\}_{i=1}^{n_{s}}\), where the \(y_{i}^{s}\) is the corresponding label of \(x_{i}^{s}\), and the unlabeled target domain data with \(n_{t}\) samples as \(\mathcal{D}_{t}=\{x_{j}^{t}\}_{j=1}^{n_{t}}\). Both domains have the same \(C\) classes (closed-set setting). Under the SFDA setting \(\mathcal{D}_{s}\) is only available for model pretraining. Our method is based on a neural network, which we split into two parts: a feature extractor \(f\), and a classifier \(g\). The feature output by the feature extractor is denoted as \(\mathbf{z}(x)=f\left(x\right)\), the output of network is denoted as \(p(x)=\delta(g(z))\in\mathcal{R}^{C}\) where \(\delta\) is the softmax function, for readability we will omit the input and use \(\mathbf{z},p\) in the following sections.
**Overview.** We assume that the model has already been pretrained on the source domain. As discussed in the introduction, the target features output by the source model form clusters. We exploit this intrinsic structure of the target data for SFDA by considering the neighborhood information, and the adaptation is achieved with the following objective:
\[\mathcal{L}=-\frac{1}{n_{t}}\sum_{x_{i}\in\mathcal{D}_{t}}\sum_{x_{j}\in \text{Neigh}(x_{i})}\frac{D_{sim}(p_{i},p_{j})}{D_{dis}(x_{i},x_{j})} \tag{1}\]
where the \(\text{Neigh}(x_{i})\) means the nearest neighbors of \(x_{i}\), \(D_{sim}\) computes the similarity between predictions, and \(D_{dis}\) is a constant measuring the semantic distance (dissimilarity) between data. The principle behind the objective is to push
the data towards their semantically close neighbors by encouraging similar predictions. In the next sections, we will define \(D_{sim}\) and \(D_{dis}\).
### _Encouraging Class-Consistency with Neighborhood Affinity_
To achieve adaptation without source data, we use the predictions of the nearest neighbors to encourage prediction consistency. However, the target features computed with the source model are not necessarily discriminative, meaning that some neighbors belong to different classes and will provide incorrect supervision. To decrease the potentially negative impact of those neighbors, we propose to weigh the supervision from neighbors according to the connectivity (semantic similarity). We define _affinity_ values to signify the connectivity between the neighbor and the feature, which corresponds to the \(\frac{1}{D_{dis}}\) in Eq. 1 indicating the semantic similarity.
To retrieve the nearest neighbors for batch training, similar to [58, 79, 99], we build two memory banks: \(\mathcal{F}\) stores all target features, and \(\mathcal{S}\) stores corresponding prediction scores:
\[\mathcal{F}=[\mathbf{z}_{1},\mathbf{z}_{2},\ldots,\mathbf{z}_{n_{t}}]\text{ and }\mathcal{S}=[p_{1},p_{2},\ldots,p_{n_{t}}] \tag{2}\]
We use the cosine similarity for nearest-neighbor retrieval. The difference between ours and [58, 79] lies in the fact that we utilize the memory bank to retrieve nearest neighbors, while [58, 79] adopt the memory bank to compute the instance discrimination loss. Before every mini-batch training step, we simply update the old items in the memory banks corresponding to the current mini-batch. Note that updating the memory bank only replaces the old low-dimensional vectors with new ones computed by the model, and does not require any additional computation.
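A minimal PyTorch sketch of this bookkeeping follows; tensor names are illustrative, and the self-match is dropped on the assumption that the bank has just been refreshed with the current batch before retrieval.

```python
# Sketch of the memory banks and cosine nearest-neighbor retrieval.
# feature_bank: (n_t, d) L2-normalized features; score_bank: (n_t, C) scores.
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_banks(feature_bank, score_bank, batch_idx, feats, probs):
    feature_bank[batch_idx] = F.normalize(feats, dim=1)  # replace old items
    score_bank[batch_idx] = probs

@torch.no_grad()
def knn_indices(feature_bank, feats, K):
    sim = F.normalize(feats, dim=1) @ feature_bank.t()   # cosine similarity
    return sim.topk(K + 1, dim=1).indices[:, 1:]         # drop the self match
```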
We then use the predictions of the neighbors to supervise the training, weighted by the affinity values, with the following objective adapted from Eq. 1:
\[\mathcal{L}_{\mathcal{N}}=-\frac{1}{n_{t}}\sum_{i}\sum_{k\in\mathcal{N}_{K}^{ i}}A_{ik}\mathcal{S}_{k}^{\top}p_{i} \tag{3}\]
where we use the dot product to compute the similarity between predictions, corresponding to \(D_{sim}\) in Eq. 1; \(k\) is the index of the \(k\)-th nearest neighbor of \(\mathbf{z}_{i}\), \(\mathcal{S}_{k}\) is the \(k\)-th item in memory bank \(\mathcal{S}\), and \(A_{ik}\) is the affinity value of the \(k\)-th nearest neighbor of feature \(\mathbf{z}_{i}\). Here \(\mathcal{N}_{K}^{i}\) is the index set1 of the \(K\)-nearest neighbors of feature \(\mathbf{z}_{i}\). Note that all neighbors are retrieved from the feature bank \(\mathcal{F}\). With the affinity value as weight, this objective pushes the features towards their neighbors with strong connectivity and, to a lesser degree, towards those with weak connectivity.
Footnote 1: All indexes are in the same order for the dataset and memory banks.
To assign larger affinity values to semantically similar neighbors, we divide the retrieved nearest neighbors into two groups: reciprocal nearest neighbors (RNN) and non-reciprocal nearest neighbors (nRNN). The feature \(\mathbf{z}_{j}\) is regarded as a RNN of the feature \(\mathbf{z}_{i}\) if it meets the following condition:
\[j\in\mathcal{N}_{K}^{i}\wedge i\in\mathcal{N}_{M}^{j} \tag{4}\]
Other neighbors which do not meet the above condition are nRNN. Note that the normal definition of reciprocal nearest neighbors [53] applies \(K=M\), while in this paper \(K\) and \(M\) can be different. We find that reciprocal neighbors have a higher potential to belong to the same cluster as the feature (Fig. 1(b)). Thus, we assign a high affinity value to the RNN features. Specifically for feature \(\mathbf{z}_{i}\), the affinity value of its \(j\)-th K-nearest neighbor is defined as:
\[A_{i,j}=\begin{cases}1&\text{if }j\in\mathcal{N}_{K}^{i}\wedge i\in\mathcal{N}_{M} ^{j}\\ r&\text{otherwise},\end{cases} \tag{5}\]
where \(r\) is a hyperparameter. If not specified \(r\) is set to 0.1.
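The reciprocity test of Eq. 4, the affinity assignment of Eq. 5, and the resulting loss \(\mathcal{L}_{\mathcal{N}}\) of Eq. 3 can be sketched as follows; the index tensors follow the retrieval sketch above, and all names are illustrative.

```python
# Sketch of reciprocal-neighbor affinity (Eqs. 4-5) and the loss L_N (Eq. 3).
# nn_batch: (B, K) indices of the K nearest neighbors of the batch features;
# nn_bank:  (n_t, M) indices of the M nearest neighbors of every bank feature;
# batch_idx: (B,) dataset indices of the batch; probs: (B, C) predictions.
import torch

def neighborhood_loss(probs, batch_idx, nn_batch, nn_bank, score_bank, r=0.1):
    B, K = nn_batch.shape
    # Neighbor k of feature i is reciprocal iff i appears among k's own
    # M nearest neighbors (Eq. 4).
    recip = (nn_bank[nn_batch] == batch_idx.view(B, 1, 1)).any(dim=2)  # (B, K)
    affinity = recip.to(probs.dtype) * (1.0 - r) + r                   # Eq. 5
    dots = (score_bank[nn_batch] * probs.unsqueeze(1)).sum(dim=2)      # (B, K)
    return -(affinity * dots).sum(dim=1).mean()                        # Eq. 3
```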
To further reduce the potential impact of noisy neighbors in \(\mathcal{N}_{K}\), which belong to a different class but still are RNN, we propose a simple yet effective technique dubbed _self-regularization_, that is, to not ignore the current prediction of the ego feature:
\[\mathcal{L}_{self}=-\frac{1}{n_{t}}\sum_{i}^{n_{t}}\mathcal{S}_{i}^{\top}p_{i} \tag{6}\]
where \(\mathcal{S}_{i}\) denotes the stored prediction in the memory bank; note that this term is a _constant vector_ identical to \(p_{i}\), since we update the memory banks before training, and the loss is only back-propagated through the variable \(p_{i}\).
To avoid the degenerated solution [17, 65] where the model predicts all data as some specific classes (and does not predict other classes for any of the target data), we encourage the prediction to be balanced. We adopt the prediction diversity loss which is widely used in clustering [17, 18, 23] and also in several domain adaptation works [65, 34, 66]:
\[\mathcal{L}_{div}=\sum_{c=1}^{C}\text{KL}(\bar{p}_{c}||q_{c}), \text{with}\quad\bar{p}_{c}=\frac{1}{n_{t}}\sum_{i}p_{i}^{(c)}, \tag{7}\] \[\text{and}\quad q_{\{c=1,\ldots,C\}}=\frac{1}{C}\]
Fig. 2: Illustration of neighborhood density for outlier detection. C1 is not an outlier, since the nearest-neighbor sets of several features in the memory bank contain C1, while C2 is an outlier and should not be included during training, since no feature in the memory bank contains it as a nearest neighbor.
where \(p_{i}^{(c)}\) is the score of the \(c\)-th class, \(\bar{p}_{c}\) is the empirical label distribution representing the predicted probability of class \(c\), and \(q\) is a uniform distribution.
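A sketch of these two terms for one mini-batch is given below; approximating the empirical label distribution by the batch mean is an assumption made for brevity.

```python
# Sketch of self-regularization (Eq. 6) and the diversity loss (Eq. 7).
# probs: (B, C) current softmax predictions for a mini-batch.
import torch

def self_regularization(probs):
    # The stored score S_i equals the detached current prediction, so the
    # loss only back-propagates through p_i.
    return -(probs.detach() * probs).sum(dim=1).mean()

def diversity_loss(probs, eps=1e-8):
    mean_pred = probs.mean(dim=0)                         # batch-level \bar{p}
    uniform = torch.full_like(mean_pred, 1.0 / mean_pred.numel())
    # KL(\bar{p} || q) with q uniform, encouraging balanced predictions.
    return (mean_pred * ((mean_pred + eps).log() - uniform.log())).sum()
```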
### _Expanded Neighborhood Affinity_
As mentioned in Sec. 1, a simple way to aggregate more information is to consider more nearest neighbors. However, a drawback is that larger neighborhoods are expected to contain more datapoints from multiple classes, defying the purpose of class consistency. A better way to include more target features is to consider the \(M\)-nearest neighbors of each neighbor in \(\mathcal{N}_{K}\) of \(\mathbf{z}_{i}\) in Eq. 4, _i.e._, the expanded neighbors. These target features are expected to be closer on the target data manifold than the features included by simply considering a larger number of nearest neighbors [69]. The expanded neighbors of feature \(\mathbf{z}_{i}\) are defined as \(E_{M}(\mathbf{z}_{i})=\mathcal{N}_{M}(\mathbf{z}_{j})\ \forall j\in\mathcal{N}_{K}(\mathbf{z}_{i})\); _note that \(E_{M}(\mathbf{z}_{i})\) is still an index set and \(i\) (the ego feature) \(\notin E_{M}(\mathbf{z}_{i})\)_. We directly assign a small affinity value \(r\) to those expanded neighbors, since they are further away than the nearest neighbors and may contain noise. We utilize the prediction of those expanded neighborhoods for training:
\[\mathcal{L}_{E}=-\frac{1}{n_{t}}\sum_{i}\sum_{k\in\mathcal{N}_{K}^{k}}\sum_{m \in E_{M}^{k}}r\mathcal{S}_{m}^{\top}p_{i} \tag{8}\]
where \(E_{M}^{k}\) contains the \(M\)-nearest neighbors of neighbor \(k\) in \(\mathcal{N}_{K}\).
Although the affinity values of all expanded neighbors are the same, it does not necessarily mean that they have equal importance. Taking a closer look at the expanded neighbors \(E_{M}(\mathbf{z}_{i})\), some neighbors will show up more than once; for example, \(\mathbf{z}_{m}\) can be the nearest neighbor of both \(\mathbf{z}_{h}\) and \(\mathbf{z}_{j}\) where \(h,j\in\mathcal{N}_{K}(\mathbf{z}_{i})\), and a nearest neighbor can also serve as an expanded neighbor. This implies that those neighbors form a compact cluster, and we posit that such duplicated expanded neighbors have the potential to be semantically closer to the ego feature \(\mathbf{z}_{i}\). Thus, we do not remove duplicated features in \(E_{M}(\mathbf{z}_{i})\), as the duplicates effectively lead to a larger affinity value for those expanded neighbors. This is one advantage of utilizing expanded neighbors instead of more nearest neighbors; we will verify the importance of maintaining the duplicated features in the experimental section.
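Retrieval of the expanded neighbors and the loss \(\mathcal{L}_{E}\) of Eq. 8 can be sketched as follows; duplicates are deliberately kept, and for brevity the sketch does not filter out the ego index, which the definition of \(E_{M}\) excludes.

```python
# Sketch of the expanded-neighborhood loss L_E (Eq. 8). Duplicated expanded
# neighbors are kept on purpose, acting as an implicitly larger affinity.
import torch

def expanded_loss(probs, nn_batch, nn_bank_M, score_bank, r=0.1):
    # nn_batch: (B, K); nn_bank_M: (n_t, M); probs: (B, C)
    expanded = nn_bank_M[nn_batch].flatten(1)                      # (B, K*M)
    dots = (score_bank[expanded] * probs.unsqueeze(1)).sum(dim=2)  # (B, K*M)
    return -(r * dots).sum(dim=1).mean()
```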
### _Neighborhood Density for Outlier Detection_
In the previous sections, we directly deployed nearest-neighborhood clustering for source-free domain adaptation. However, this may deteriorate the feature representation when features in the current batch are outliers. An outlier will typically not be retrieved as a nearest neighbor of other features and, more importantly, whether the retrieved nearest neighbors of an outlier belong to the same semantic cluster is often uncertain. Thus, in this section, we propose to filter out the potential outlier features and exclude them from the objective computation.
To find those outlier features, we resort to nearest-neighbor retrieval over the features in the memory bank. For each feature \(z_{j}\) in the memory bank, we retrieve its \(U\) nearest neighbors. The density around feature \(i\) can then be estimated by counting _how many_ samples have \(i\) among their nearest neighbors. This is given by \(||\mathcal{D}(i)||\) where
\[\mathcal{D}(i):= \{j|i\in\mathcal{N}_{U}^{j}\}. \tag{9}\]
The more samples in \(\mathcal{D}(i)\), the larger the density around the sample \(x_{i}\).
Having identified the outliers, we can now proceed to exclude them from the clustering. We therefore define \(B\), similar to Eq. 5, to be:
\[B_{i,j}=\begin{cases}1&\text{if }j\in\big(\mathcal{D}(i)\bigcap\mathcal{N}_{V}^{i}\big)\\ r&\text{otherwise},\end{cases} \tag{10}\]
and the loss is given by:
\[\mathcal{L}_{\mathcal{D}}=-\frac{1}{n_{t}}\sum_{i}\sum_{j\in\mathcal{D}(i)}B_ {ij}\mathcal{S}_{j}^{\top}p_{i}. \tag{11}\]
Here \(r\) is a hyperparameter; if not specified, it is set to 0.1. We identify the method that includes this loss with _NRC++_.
As illustrated in Fig. 2, when feature \(i\) is an outlier, which means \(\mathcal{D}(i)\) is the empty set, it is excluded in Eq. 11. If the feature \(x_{i}\) is not an outlier, then Eq. 11 has a similar meaning as Eq. 3. Note that in Eq. 11 the second summation is over \(\mathcal{D}(i)\), which is different from Eq. 3. As a result, the two losses consider different neighbors; when applied jointly, they constitute a clustering algorithm that is less sensitive to outliers. Fig. 3 shows retrieved samples located in higher-density (larger \(||\mathcal{D}(i)||\)) and lower-density (smaller \(||\mathcal{D}(i)||\)) regions.
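A simplified sketch of \(\mathcal{L}_{\mathcal{D}}\) (Eqs. 9-11) follows; for brevity the affinity \(B\) is collapsed to 1 for every member of \(\mathcal{D}(i)\), i.e., the \(\mathcal{N}_{V}\) refinement is omitted.

```python
# Sketch of the density-based loss L_D (Eqs. 9-11). nn_bank_U: (n_t, U)
# neighbor indices of every bank feature; an outlier has an empty D(i)
# and is simply skipped, so it does not influence the clustering.
import torch

def density_loss(probs, batch_idx, nn_bank_U, score_bank):
    loss, used = 0.0, 0
    for b, i in enumerate(batch_idx.tolist()):
        D_i = (nn_bank_U == i).any(dim=1).nonzero(as_tuple=True)[0]  # Eq. 9
        if D_i.numel() == 0:          # feature i is an outlier
            continue
        loss = loss - (score_bank[D_i] @ probs[b]).sum()             # Eq. 11
        used += 1
    return loss / max(used, 1)
```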
**Final objective.** Our method, called Neighborhood Reciprocity Clustering (NRC and NRC++), is illustrated in Algorithm 1. The final objective for adaptation is:
\[\mathcal{L}=\mathcal{L}_{\mathcal{N}}+\mathcal{L}_{\mathcal{D}}+\mathcal{L}_{ E}+\mathcal{L}_{self}+\lambda_{div}\mathcal{L}_{div}, \tag{12}\]
Fig. 3: Examples located in high density (left) and lower density (right). The examples are from VisDA-C [50].
where the hyperparameter \(\lambda_{div}\) balances \(\mathcal{L}_{div}\). In our experiments, we gradually decay the value of \(\lambda_{div}\), since we consider that \(\mathcal{L}_{div}\) plays a more important role at the early training stage, when the target data are probably still disorderly clustered together. We reduce the influence of \(\mathcal{L}_{div}\) once the target data start to form semantic clusters.
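Putting everything together, the adaptation objective with the decaying diversity weight can be sketched as below; the schedule \((1+10\cdot\frac{iter}{max\_iter})^{-1}\) follows the implementation details given later, and the individual loss terms come from the sketches above.

```python
# Sketch of the final objective (Eq. 12) with a decaying lambda_div.
def total_loss(L_N, L_D, L_E, L_self, L_div, cur_iter, max_iter):
    lambda_div = (1.0 + 10.0 * cur_iter / max_iter) ** (-1)
    return L_N + L_D + L_E + L_self + lambda_div * L_div
```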
## 4 Experiments
**Datasets.** We use three 2D image benchmark datasets and a 3D point cloud recognition dataset. **Office-31**[57] contains 3 domains (Amazon(**A**), Webcam(**W**), DSLR(**D**)) with 31 classes and 4,652 images. **Office-Home**[73] contains 4 domains (Real(**Rw**), Clipart(**Cl**), Art(**Ar**), Product(**Pr**)) with 65 classes and a total of 15,500 images. **VisDA**[50] is a more challenging dataset with a 12-class synthetic-to-real object recognition task; its source domain contains 152k synthetic images while the target domain has 55k real object images. **DomainNet**[49] is the most challenging, with six distinct domains (345 classes and about 0.6 million images): Clipart (**C**), Real (**R**), Infograph (**I**), Painting (**P**), Sketch (**S**), and Quickdraw (**Q**). **PointDA-10**[52] is the first 3D point cloud benchmark specifically designed for domain adaptation; it has 3 domains with 10 classes, denoted as ModelNet-10, ShapeNet-10 and ScanNet-10, containing approximately 27.7k training and 5.1k testing samples in total.
**Evaluation.** We compare with existing source-present and source-free DA methods. _All results are the average of three random runs._ **SF** in the tables denotes source-free. In this paper, we do not compare with SHOT++ [35], which mainly uses extra self-supervised and semi-supervised learning procedures to improve the generalizability of the model, thus further improving the final performance.
**Model details.** For fair comparison with related methods, we adopt the backbone of ResNet-50 [20] for Office-Home, ResNet-101 for VisDA, and PointNet [51] for PointDA-10. Specifically, for the 2D image datasets, we use the same network architecture as SHOT [34], _i.e._, the final part of the network is: fully connected layer - Batch Normalization [22] - fully connected layer with weight normalization [62]. For PointDA-10 [52], we use the code released by the authors for fair comparison with PointDAN [52], and only use the backbone without any of their proposed modules. To train the source model, we also adopt label smoothing as SHOT does. We adopt SGD with momentum 0.9 and batch size 64 for all 2D datasets, and Adam for PointDA-10. The learning rate for Office-31 and Office-Home is set to 1e-3 for all layers, except for the last two newly added fc layers, where we apply 1e-2. Learning rates are set 10 times smaller for VisDA. The learning rate for PointDA-10 is set to 1e-6. We train 30 epochs for Office-31 and Office-Home, 15 epochs for VisDA, and 100 for PointDA-10. For the number of nearest neighbors and expanded neighborhoods, we use (K, U, V, M) = (3, 20, 5, 2) for Office-31, Office-Home and PointDA-10; since VisDA is much larger, we set K, M to 5, and U, V to 20, 5. As for the decay factor of \(\lambda_{div}\) in Eq. 12, it is defined as \((1+10\times\frac{current\ iter}{max\_iter})^{-1}\).
### _Vanilla Domain Adaptation_
**2D image datasets.** We first evaluate the target performance of our method compared with existing DA and SFDA methods on the three 2D image datasets. As shown in Tabs. I-III, the top part of each table shows results for the source-present methods _with access to source data during adaptation_, and the bottom shows results for the source-free DA methods. On Office-31, our method obtains results similar to the source-free method 3C-GAN and lower than the source-present method RSDA. Our method achieves state-of-the-art performance on Office-Home and VisDA; especially on VisDA, our method surpasses the source-free method SHOT and the source-present method RWOT by a wide margin (3% and 1.9%, respectively). When additionally excluding potential outliers (_i.e._, _NRC++_), we outperform all baselines as well as NRC; especially for the VisDA dataset, we improve the accuracy from 85.9% to 88.1%. The reported results clearly demonstrate the efficiency of the proposed method for source-free domain adaptation. Interestingly, as already observed in the SHOT paper, source-free methods
| Method | SF | A→D | A→W | D→W | W→D | D→A | W→A | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MCD [60] | ✗ | 92.2 | 88.6 | 98.5 | **100.0** | 69.5 | 69.7 | 86.5 |
| CDAN [40] | ✗ | 92.9 | 94.1 | 98.6 | **100.0** | 71.0 | 69.3 | 87.7 |
| CBST [100] | ✗ | 86.5 | 87.8 | 98.5 | **100.0** | 70.9 | 71.2 | 85.8 |
| MDD [96] | ✗ | 90.4 | 90.4 | 98.7 | 99.9 | 75.0 | 73.7 | 88.0 |
| MDD+IA [25] | ✗ | 92.1 | 90.3 | 98.7 | 99.8 | 75.3 | 74.9 | 88.8 |
| BNM [11] | ✗ | 90.3 | 91.5 | 98.5 | **100.0** | 70.9 | 71.6 | 87.1 |
| DMRL [78] | ✗ | 93.4 | 90.8 | 99.0 | **100.0** | 73.0 | 71.2 | 87.9 |
| BDG [84] | ✗ | 93.6 | 93.6 | 99.0 | **100.0** | 73.2 | 72.0 | 88.5 |
| MCC [26] | ✗ | 95.6 | 95.4 | 98.6 | 100.0 | 72.6 | 73.9 | 89.4 |
| SRDC [68] | ✗ | 95.8 | 95.7 | 99.2 | 100.0 | 76.7 | 77.1 | 90.8 |
| RWOT [81] | ✗ | 94.5 | 95.1 | **99.5** | 100.0 | **77.5** | 77.9 | 90.8 |
| RSDA [19] | ✗ | 95.8 | **96.1** | 99.3 | **100.0** | 77.4 | **78.9** | **91.1** |
| SHOT [34] | ✓ | 94.0 | 90.1 | 98.4 | 99.9 | 74.7 | 74.3 | 88.6 |
| 3C-GAN [31] | ✓ | 92.7 | 93.7 | 98.5 | 99.8 | 75.3 | 77.8 | 89.6 |
| HCL [21] | ✓ | 94.7 | 92.5 | 98.2 | 100.0 | 75.9 | 77.7 | 89.8 |
| **NRC** | ✓ | **96.0** | 90.8 | 99.0 | **100.0** | 75.3 | 75.0 | 89.4 |
| **NRC++** | ✓ | 95.9 | 91.2 | 99.1 | **100.0** | 75.5 | 75.0 | 89.5 |

TABLE I: Accuracies (%) on Office-31 for ResNet50-based methods.
outperform methods that have access to source data during adaptation.
**3D point cloud dataset.** We also report results on PointDA-10. As shown in Tab. IV, our method outperforms PointDAN [52], which demands source data for adaptation and is specifically tailored for point cloud data with extra attention modules, by a large margin (4%). Similarly, we can draw the same conclusion: introducing the density loss helps to reduce the negative impact of outliers, resulting in better performance for _NRC++_.
### _Partial-set domain adaptation_
We also show that our method can be extended to partial-set domain adaptation, where the target label space is a subset of the source label space. The challenge here is that the model may fail to distinguish which categories the target samples come from. Specifically, for the dataset we use, _i.e._, **Office-Home**, the target domain contains 25 classes (the first 25 in alphabetical order) out of the 65 source classes (as also used in [34]). Here we directly deploy our method to source-free partial-set DA without introducing extra processes. As reported in Tab. IV, our **NRC** obtains better results than source-present methods and slightly outperforms SHOT. **NRC++** does not lead to a large performance gain in this setting. The results indicate the generalization ability of our method.
### _Multi-Source Domain Adaptation_
We also evaluate our method on the multi-source single-target setting on Office-Home and the large-scale DomainNet benchmark. The difference between single-source (normal) domain adaptation and multi-source domain adaptation is that in multi-source DA the domain shift among the source domains may deteriorate the model training. However, here we directly deploy our method to source-free multi-source domain adaptation, where the training stages are similar to source-free domain adaptation, except that the source model is trained with data from multiple source domains. The SHOT methods in Tab. VIII are the baselines for source-free multi-source DA, where SHOT w/o domain labels means only using one source model, while SHOT-Ens (the reported results are from DECISION [1]) means using
| Method | SF | plane | bcycl | bus | car | horse | knife | mcycl | person | plant | sktbrd | train | truck | Per-class |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet-101 [20] | ✗ | 55.1 | 53.3 | 61.9 | 59.1 | 80.6 | 17.9 | 79.7 | 31.2 | 81.0 | 26.5 | 73.5 | 8.5 | 52.4 |
| DANN [16] | ✗ | 81.9 | 77.7 | 82.8 | 44.3 | 81.2 | 29.5 | 65.1 | 28.6 | 51.9 | 54.6 | 82.8 | 7.8 | 57.4 |
| DAN [39] | ✗ | 87.1 | 63.0 | 76.5 | 42.0 | 90.3 | 42.9 | 85.9 | 53.1 | 49.7 | 36.3 | 85.8 | 20.7 | 61.1 |
| ADR [59] | ✗ | 94.2 | 48.5 | 84.0 | 72.9 | 90.1 | 74.2 | 92.6 | 72.5 | 80.8 | 61.8 | 82.2 | 28.8 | 73.5 |
| CDAN [40] | ✗ | 85.2 | 66.9 | 83.0 | 50.8 | 84.2 | 74.9 | 88.1 | 74.5 | 83.4 | 76.0 | 81.9 | 38.0 | 73.9 |
| CDAN+BSP [6] | ✗ | 92.4 | 61.0 | 81.0 | 57.5 | 89.0 | 80.6 | 90.1 | 77.0 | 84.2 | 77.9 | 82.1 | 38.4 | 75.9 |
| SAFN [82] | ✗ | 93.6 | 61.3 | 84.1 | 70.6 | 94.1 | 79.0 | 91.8 | 79.6 | 89.9 | 55.6 | 89.0 | 24.4 | 76.1 |
| SWD [30] | ✗ | 90.8 | 82.5 | 81.7 | 70.5 | 91.7 | 69.5 | 86.3 | 77.5 | 87.4 | 63.6 | 85.6 | 29.2 | 76.4 |
| MDD [96] | ✗ | - | - | - | - | - | - | - | - | - | - | - | - | 74.6 |
| DMRL [78] | ✗ | - | - | - | - | - | - | - | - | - | - | - | - | 75.5 |
| DMLA [80] | ✗ | - | - | - | - | - | - | - | - | - | - | - | - | 75.5 |
| DMLA [80] | ✗ | 88.7 | 80.3 | 80.5 | 71.5 | 90.1 | 93.2 | 85.0 | 71.6 | 89.4 | 73.8 | 85.0 | 36.9 | 78.8 |
| STAR [43] | ✗ | 95.0 | 84.0 | 84.6 | 73.0 | 91.6 | 91.8 | 85.9 | 78.4 | 94.4 | 84.7 | 87.0 | 42.2 | 82.7 |
| RWOT [81] | ✗ | 95.1 | 80.3 | 83.7 | **90.0** | 92.4 | 68.0 | **92.5** | 82.2 | 87.9 | 78.4 | 90.4 | **68.2** | 84.0 |
| RSDA-MSTN [19] | ✗ | - | - | - | - | - | - | - | - | - | - | - | - | 75.8 |
| 3C-GAN [31] | ✓ | 94.8 | 73.4 | 68.8 | 74.8 | 93.1 | 95.4 | 88.6 | **84.7** | 89.1 | 84.7 | 83.5 | 48.1 | 81.6 |
| SHOT [34] | ✓ | 94.3 | 88.5 | 80.1 | 57.3 | 93.1 | 94.9 | 80.7 | 80.3 | 91.5 | 89.1 | 86.3 | 58.2 | 82.9 |
| HCL [21] | ✓ | 93.3 | 85.4 | 80.7 | 68.5 | 91.0 | 88.1 | 86.0 | 78.6 | 86.6 | 88.8 | 80.0 | 74.7 | 83.5 |
| **NRC** | ✓ | 96.8 | 91.3 | 82.4 | 62.4 | 96.2 | 95.9 | 86.1 | 80.6 | 94.8 | 94.1 | 90.4 | 59.7 | 85.9 |
| **NRC++** | ✓ | **97.4** | **91.9** | **88.2** | 83.2 | **97.3** | **96.2** | 90.2 | 81.1 | **96.3** | **94.3** | **91.4** | 49.6 | **88.1** |

TABLE III: Per-class accuracies (%) on VisDA for ResNet101-based methods.
| Method | SF | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet-50 [20] | ✗ | 34.9 | 50.0 | 58.0 | 37.4 | 41.9 | 46.2 | 38.5 | 31.2 | 60.4 | 53.9 | 41.2 | 59.9 | 46.1 |
| DAN [39] | ✗ | 43.6 | 57.0 | 67.9 | 45.8 | 56.5 | 60.4 | 44.0 | 43.6 | 67.7 | 63.1 | 51.5 | 74.3 | 56.3 |
| DANN [16] | ✗ | 45.6 | 59.3 | 70.1 | 47.0 | 58.5 | 60.9 | 46.1 | 43.7 | 68.5 | 63.2 | 51.8 | 76.8 | 57.6 |
| MCD [60] | ✗ | 48.9 | 68.3 | 74.6 | 61.3 | 67.6 | 68.8 | 57.0 | 47.1 | 75.1 | 69.1 | 52.2 | 79.6 | 64.1 |
| CDAN [40] | ✗ | 50.7 | 70.6 | 76.0 | 57.6 | 70.0 | 70.0 | 57.4 | 50.9 | 77.3 | 70.9 | 56.7 | 81.6 | 65.8 |
| SAFN [82] | ✗ | 52.0 | 71.7 | 76.3 | 64.2 | 69.9 | 71.9 | 63.7 | 51.4 | 77.1 | 70.9 | 57.1 | 81.5 | 67.3 |
| Symnets [95] | ✗ | 47.7 | 72.9 | 78.5 | 64.2 | 71.3 | 74.2 | 64.2 | 48.8 | 79.5 | 74.5 | 52.6 | 82.7 | 67.6 |
| MDD [96] | ✗ | 54.9 | 73.7 | 77.8 | 60.0 | 71.4 | 71.8 | 61.2 | 53.6 | 78.1 | 72.5 | **60.2** | | |

TABLE II: Accuracies (%) on Office-Home for ResNet50-based methods.
multiple source models, their results indicate that using multiple source models could further improve performance. As reported in Tab. VIII, without using domain labels, we achieve the best score on the challenging DomainNet benchmark compared to the source-free multi-source DA methods, and comparable results to the baselines on Office-Home. For example, compared with SHOT-Ens on the DomainNet dataset, we observe an improvement of \(1.1\%\) despite not using domain labels. NRC++ further improves performance from 47.3% to 48.2%.
### _Multi-Target domain adaptation_
We also evaluate our method for single-source multi-target domain adaptation on Office-31. In multi-target domain adaptation, the model is trained with a single labeled source domain and multiple unlabeled target domains; the final goal is to learn a good classifier for all target domains. As with multi-source domain adaptation, directly deploying a normal domain adaptation method in the multi-target setting usually leads to poor performance, due to the negative transfer [8] caused by the different target domains. In this subsection, we show that our method works well under multi-target domain adaptation, even in the source-free setting. As reported in Tab. IX, the
\begin{table}
\begin{tabular}{c c} \hline \hline Prior information & Per-class (\%) \\ \hline Uniform distribution & 57.7 \\ Real target category distribution & 56.9 \\ \hline \hline \end{tabular}
\end{table} TABLE VII: Analysis of the prior information used for the target category distribution in the diversity loss \(\mathcal{L}_{div}\), on Ar\(\rightarrow\)Cl, Office-Home.
\begin{table}
\begin{tabular}{c c c} \hline \hline VisDA & Runtime (s/epoch) & Per-class (\%) \\ \hline SHOT & 618.82 & 82.9 \\ \hline NRC & 540.89 & 85.9 \\
**NRC**(20\% for memory bank) & 507.15 & 85.3 \\
**NRC**(10\% for memory bank) & 499.49 & 85.2 \\
**NRC**(5\% for memory bank) & 499.28 & 85.1 \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Runtime (s/epoch) and per-class accuracy on VisDA with memory banks of different buffer sizes.
\begin{table}
\begin{tabular}{c|c|c c c c c c c c c c} \hline \hline \(\mathcal{L}_{div}\) & \(\mathcal{L}_{N}\) & \(\mathcal{L}_{E}\) & \(\mathcal{L}_{E}\) & A & \(\mathcal{L}_{D}\) & Avg & \multicolumn{1}{c}{\(\mathcal{L}_{div}\)} & \multicolumn{1}{c}{\(\mathcal{L}_{N}\)} & \(\mathcal{L}_{E}\) & \(\mathcal{L}_{E}\) & A & \(\mathcal{L}_{D}\) & Acc & \multicolumn{1}{c}{**Method\&Dataset**} & Acc \\ \hline ✓ & & & & & & & & & & 47.8 & \\ ✓ & ✓ & & & & & & & & 81.5 & \\ ✓ & ✓ & & & ✓ & & & ✓ & & 82.7 & VisDA w/o \(E\) (\(K\)=30) & 84.0 \\ ✓ & ✓ & ✓ & & & & & & 61.2 & \\ ✓ & ✓ & ✓ & & & & ✓ & & 85.9 & OH (\(K\)=3,\(M\)=2) & **72.2** \\ ✓ & ✓ & ✓ & ✓ & ✓ & & ✓ & & **88.1** & OH w/o \(E\) (\(K\)=9) & _69.5_ \\ \hline \hline \end{tabular}
\end{table} TABLE V: Ablation study of different modules on Office-Home (**left**) and VisDA (**middle**), comparison between using expanded neighbors and larger nearest neighbors (**right**).
Fig. 4: (**Left and middle**) Ablation study of \(\mathcal{L}_{self}\) on Office-Home and VisDA, respectively. (**Right**) Performance with different \(r\) on VisDA.
\begin{table}
\begin{tabular}{c|c|c c c c c c c c c} \hline \hline & SF & Mo\(\rightarrow\)Sh & Mo\(\rightarrow\)Sc & Sh\(\rightarrow\)Mo & Sh\(\rightarrow\)Sc & Sc\(\rightarrow\)Mo & Sc\(\rightarrow\)Sh & Avg \\ \hline MMD [41] & ✗ & 57.5 & 27.9 & 40.7 & 26.7 & 47.3 & 54.8 & 42.5 \\ DANN [15] & ✗ & 58.7 & 29.4 & 42.3 & 30.5 & 48.1 & 56.7 & 44.2 \\ ADDA [70] & ✗ & 61.0 & 30.5 & 40.4 & 29.3 & 48.9 & 51.1 & 43.5 \\ MCD [60] & ✗ & 62.0 & 31.0 & 41.4 & 31.3 & 46.8 & 59.3 & 45.3 \\ PointDAN [52] & ✗ & 64.2 & **33.0** & 47.6 & **33.9** & 49.1 & 64.1 & 48.7 \\ \hline Source-only & & & 43.1 & 17.3 & 40.0 & 15.0 & 33.9 & 47.1 & 32.7 \\
**NRC** & ✓ & 64.8 & 25.8 & 59.8 & 26.9 & 70.1 & 68.1 & 52.6 \\
**NRC++** & ✓ & **67.2** & 27.6 & **60.2** & 30.4 & **74.5** & **71.2** & **55.1** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Accuracies (%) on PointDA-10. _The results except ours are from PointDAN [52]_.
source model has the worst result (i.e., 68.4\%) without any domain adaptation technique. Using both the source data and the domain labels, D-CGCT achieves the best score (i.e., 88.8\%). Our method, without either the source data or the domain labels, still obtains 85.0\% accuracy, which indicates that it achieves comparable results even under this more challenging setting.
### _Analysis_
**Ablation study on neighbors \(\mathcal{N}\), \(E\), affinity \(A\) and density loss \(\mathcal{D}\).** In the first two tables of Tab. V, we conduct the ablation study on Office-Home and VisDA. The first row contains results from the source model and the second row results from training with only the diversity loss \(\mathcal{L}_{div}\). From the remaining rows, several conclusions can be drawn.
First, the original supervision, which considers all neighbors equally, can already lead to decent performance (69.1 on Office-Home). Second, assigning higher affinity values to reciprocal neighbors leads to a large performance gain (71.1 on Office-Home). Last but not least, the expanded neighborhoods can also be helpful, but only when combined with the affinity values \(A\) (72.2 on Office-Home). Using expanded neighborhoods without affinity gives bad performance (65.2 on Office-Home). We conjecture that those expanded neighborhoods, especially the neighbors of nRNN, may be noisy, as discussed in Sec. 3.2. Removing the affinity \(A\) means we treat all those neighbors equally, which is not reasonable. Furthermore, as reported in the penultimate rows of Tab. V (left, middle), outlier exclusion (with \(\mathcal{L}_{D}\)) further improves the model performance (e.g., from 85.9 to 88.1 on VisDA), indicating that considering the density around each sample is useful and empirically effective.
We also show that duplication in the expanded neighbors is important in the last row of Tab. V, where \(\mathcal{L}_{\hat{E}}\) means we remove the duplication in Eq. 8. The results show that performance degrades significantly when duplicates are removed, implying that the duplicated expanded neighbors are indeed more important than the others.
Next we ablate the importance of the expanded neighborhood in the right of Tab. V. We show that if we increase the number of datapoints considered for class-consistency by simply using a larger \(K\), we obtain significantly lower scores. We chose \(K\) so that the total number of points considered is equal to that of our method (i.e., \(5+5\times 5=30\) and \(3+3\times 2=9\)). Considering neighbors of neighbors is more likely to provide datapoints that are close on the data manifold [69], and that are therefore more likely to share the class label with the ego feature.
**Runtime analysis.** Instead of storing all feature vectors in the memory bank, we follow the same memory bank setting as in [13], which is used for nearest neighbor retrieval. The method stores only a fixed number of target features; we update the memory bank at the end of each iteration by taking the \(n\) (batch size) embeddings from the current training iteration, concatenating them at the end of the memory bank, and discarding the oldest \(n\) elements. We report the results with this type of memory bank for different buffer sizes in Tab. VI. The results show that this can indeed be an efficient way to reduce computation on very large datasets.
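The update is a simple first-in-first-out buffer. The following is a minimal R sketch of this update with hypothetical names, for illustration only; the actual implementation operates on network feature tensors:

```r
# Illustrative FIFO memory bank: keep a fixed number of target features,
# appending the newest batch and discarding the oldest n rows.
update_bank <- function(bank, new_feats) {
  n <- nrow(new_feats)  # batch size
  rbind(bank[-seq_len(n), , drop = FALSE], new_feats)
}

bank <- matrix(rnorm(1000 * 256), nrow = 1000)  # buffer of 1000 features
batch <- matrix(rnorm(64 * 256), nrow = 64)     # one training batch
bank <- update_bank(bank, batch)                # nrow(bank) is still 1000
```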
**Analysis on the prior information for the target category distribution in \(\mathcal{L}_{div}\).** In Tab. VII, we show different choices of the prior information for the target category distribution in \(\mathcal{L}_{div}\). Originally we use the uniform distribution; here we also try the ground-truth target class distribution. The results show that simply using the uniform distribution is enough, even surpassing the variant with the real class distribution. We posit that the reason may be the mini-batch training, as every mini-batch may have a different label distribution.
**Ablation study on self-regularization.** In the left and middle of Fig. 4, we show the results with and without self-regularization \(\mathcal{L}_{self}\). The \(\mathcal{L}_{self}\) term can improve performance when adopting only the nearest neighbors \(\mathcal{N}\) or all neighbors \(\mathcal{N}+E\). The results imply that self-regularization can effectively reduce the negative impact of potentially noisy neighbors, especially on the Office-Home dataset.
**Sensitivity to hyperparameters.** There are three hyperparameters in our method: \(K\) and \(M\), which are the numbers of nearest neighbors and expanded neighbors, and \(r\), which is the affinity value assigned to nRNN. We show the results with different \(r\) in the right of Fig. 4. _Note we keep the affinity of expanded neighbors at \(0.1\)._ \(r=1\) means no affinity. \(r=-1\) means treating the supervision of an nRNN feature as totally wrong, which is not always the case and leads to a much lower result. \(r=0\) can also achieve good
Fig. 5: (**Left**) The three curves are (on VisDA): target accuracy (_Blue_), ratio of features which have 5-nearest neighbors all sharing the same predicted label (_dashed Red_), and ratio of features which have 5-nearest neighbors all sharing the same and _correct_ predicted label (_dashed Black_). (**Right**) Ablation study on choice of K and M on VisDA.
Fig. 6: Ratio of different types of nearest neighbor features that have the correct predicted label, before and after adaptation.
performance, signifying that RNN can already work well. Results with \(r=0.1/0.15/0.2\) show that our method is not sensitive to the choice of a reasonable \(r\). Note that in DA there is no validation set for hyperparameter tuning; we show the results varying the number of neighbors in the right of Fig. 5, demonstrating the robustness to the choice of \(K\) and \(M\).
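To make the neighbor types concrete, here is a small R sketch with hypothetical names that assigns affinity 1 to reciprocal nearest neighbors and the value \(r\) to non-reciprocal ones, as described above; it is an illustration only, not the paper's implementation:

```r
# feats: one L2-normalized feature vector per row; K: neighborhood size;
# r: affinity assigned to non-reciprocal nearest neighbors (nRNN).
rnn_affinity <- function(feats, K = 5, r = 0.1) {
  sim <- feats %*% t(feats)          # cosine similarity for normalized rows
  diag(sim) <- -Inf                  # exclude self-matches
  knn <- t(apply(sim, 1, function(s) order(s, decreasing = TRUE)[1:K]))
  A <- matrix(0, nrow(feats), nrow(feats))
  for (i in seq_len(nrow(feats))) {
    for (j in knn[i, ]) {
      A[i, j] <- if (i %in% knn[j, ]) 1 else r  # 1 for RNN, r for nRNN
    }
  }
  A
}
```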
**Training curve.** We show the evolution of several statistics during adaptation on VisDA in the left of Fig. 5. The blue curve is the target accuracy. The dashed red and black curves are the ratio of features whose 5 nearest neighbors all share the same (_dashed Red_), or the same and also **correct** (_dashed Black_), predicted label. The curves show that the target features cluster during training. Another interesting finding is that the 'Per Shared' curve correlates with the accuracy curve, and might therefore be used to determine training convergence.
**Accuracy of supervision from neighbors.** We also show the accuracy of the supervision from neighbors on task Ar\(\rightarrow\)Rw of Office-Home in Fig. 6. It shows that after adaptation, all types of neighbors have a higher ratio of correctly predicted labels, demonstrating the effectiveness of the method.
**t-SNE visualization.** We show the t-SNE feature visualization on task Ar\(\rightarrow\)Rw of target features before (Fig. 1(c)) and after (Fig. 7) adaptation. After adaptation, the features are more compactly clustered.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline
**Partial-set** & **SF** & Ar\(\rightarrow\)Cl & Ar\(\rightarrow\)Pr & Ar\(\rightarrow\)Re & Cl\(\rightarrow\)Ar & Cl\(\rightarrow\)Pr & Cl\(\rightarrow\)Re & Pr\(\rightarrow\)Ar & Pr\(\rightarrow\)Cl & Pr\(\rightarrow\)Re & Re\(\rightarrow\)Ar & Re\(\rightarrow\)Cl & Re\(\rightarrow\)Pr & **Avg.** \\ \hline ResNet-50 [20] & ✗ & 46.3 & 67.5 & 75.9 & 59.1 & 59.9 & 62.7 & 58.2 & 41.8 & 74.9 & 67.4 & 48.2 & 74.2 & 61.3 \\ IWAN [94] & ✗ & 53.9 & 54.5 & 78.1 & 61.3 & 48.0 & 63.3 & 54.2 & 52.0 & 81.3 & 76.5 & 56.8 & 82.9 & 63.6 \\ SAN [3] & ✗ & 44.4 & 68.7 & 74.6 & 67.5 & 65.0 & 77.8 & 59.8 & 44.7 & 80.1 & 72.2 & 50.2 & 78.7 & 65.3 \\ DRCN [32] & ✗ & 54.0 & 76.4 & 83.0 & 62.1 & 64.5 & 71.0 & 70.8 & 49.8 & 80.5 & 77.5 & 59.1 & 79.9 & 69.0 \\ ETN [4] & ✗ & 59.2 & 77.0 & 79.5 & 62.9 & 65.7 & 75.0 & 68.3 & 55.4 & 84.4 & 75.7 & 57.7 & 84.5 & 70.5 \\ SAFN [83] & ✗ & 58.9 & 76.3 & 81.4 & 70.4 & 73.0 & 77.8 & 72.4 & 55.3 & 80.4 & 75.8 & 60.4 & 79.9 & 71.8 \\ RTNet\({}_{adv}\)[7] & ✗ & 63.2 & 80.1 & 80.7 & 66.7 & 69.3 & 77.2 & 71.6 & 53.9 & 84.6 & 77.4 & 57.9 & 85.5 & 72.3 \\ Ba\({}^{3}\)US [36] & ✗ & 60.6 & 83.2 & 88.4 & 71.8 & 72.8 & 83.4 & 75.5 & 61.6 & 86.5 & 79.3 & 62.8 & 86.1 & 76.0 \\ TSCDA [55] & ✗ & 63.6 & 82.5 & 89.6 & 73.7 & 73.9 & 81.4 & 75.4 & 61.6 & 87.9 & **83.6** & 67.2 & 88.8 & 77.4 \\ \hline SHOT-IM [34] & ✓ & 57.9 & 83.6 & 88.8 & 72.4 & 74.0 & 79.0 & 76.1 & 60.6 & 90.1 & 81.9 & **68.3** & 88.5 & 76.8 \\ SHOT [34] & ✓ & 64.8 & **85.2** & 92.7 & 76.3 & **77.6** & **88.8** & **79.7** & 64.3 & 89.5 & 80.6 & 66.4 & 85.8 & 79.3 \\
**NRC** & ✓ & 66.2 & 84.2 & **92.9** & 77.5 & 75.2 & 83.1 & 76.6 & 68.1 & 88.3 & 82.4 & 67.5 & **88.6** & 79.5 \\
**NRC++** & ✓ & **66.3** & 85.0 & 92.8 & **78.0** & 75.3 & 83.5 & 76.7 & **68.3** & **90.6** & 82.5 & 67.7 & 88.5 & **79.6** \\ \hline \hline \end{tabular}
\end{table} TABLE X: Accuracy on Office-Home using ResNet-50 as backbone for **partial-set DA**.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{SF} & \multirow{2}{*}{
\begin{tabular}{c} w/o Domain \\ Labels \\ \end{tabular} } & \multicolumn{8}{c}{**DomainNet**} & \multicolumn{8}{c}{**Office-Home**} \\ \cline{3-14} & & & \(\mapsto\)C & \(\mapsto\)I & \(\mapsto\)P & \(\mapsto\)Q & \(\mapsto\)R & \(\mapsto\)S & Avg & \(\mapsto\)Ar & \(\mapsto\)Cl & \(\mapsto\)Pr & \(\mapsto\)Rw & Avg \\ \hline SImPA150 [72] & ✗ & ✗ & 66.4 & 26.5 & 56.6 & 18.9 & 68.0 & 55.5 & 48.6 & 70.8 & 56.3 & 80.2 & 81.5 & 72.2 \\ CMSDA [64] & ✗ & ✗ & 70.9 & 26.5 & 57.5 & 21.3 & 68.1 & 59.4 & 50.4 & 71.5 & **67.7** & 84.1 & 82.9 & **76.6** \\ DRT [33] & ✗ & ✗ & 71.0 & **31.6** & 61.0 & 12.3 & 71.4 & **60.7** & 51.3 & - & - & - & - \\ STEM [45] & ✗ & ✗ & **72.0** & 28.2 & **61.5** & **25.7** & **72.6** & 60.2 & **53.4** & - & - & - & - \\ \hline DECISION [1] & ✓ & ✗ & 61.5 & 21.6 & 54.6 & 18.9 & 67.5 & 51.0 & 45.9 & 74.5 & 59.4 & 84.4 & 83.6 & 75.5 \\ CAiDA [12] & ✓ & ✗ & - & - & - & - & - & - & **75.2** & 60.5 & 84.7 & **84.2** & 76.2 \\ SHOT [34] & ✓ & ✓ & 58.3 & 22.7 & 53.0 & 18.7 & 65.9 & 48.4 & 44.5 & 72.1 & 57.2 & 83.4 & 81.3 & 73.5 \\ SHOT [34] &-Ens & ✓ & ✗ & 58.6 & 25.2 & 55.3 & 15.3 & 70.5 & 52.4 & 46.2 & 72.2 & 59.3 & 82.8 & 82.9 & 74.3 \\ \hline Source & ✗ & ✓ & 57.0 & 23.4 & 54.1 & 14.6 & 67.2 & 50.3 & 44.4 & 58.0 & 57.3 & 74.2 & 77.9 & 66.9 \\
**NRC** & ✓ & ✓ & 65.3 & 24.4 & 55.9 & 16.1 & 69.3 & 53.0 & 47.3 & 70.8 & 60.1 & 84.8 & 83.7 & 74.8 \\
**NRC++** & ✓ & ✓ & 66.1 & 24.8 & 57.2 & 17.3 & 70.1 & 54.0 & 48.2 & 71.2 & 61.1 & **84.9** & 83.8 & 75.3 \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: Accuracy on both DomainNet and Office-Home for multi-source domain adaptation.
TABLE IX: Accuracy on Office-31 for multi-target domain adaptation.
## 5 Conclusions
We introduced a source-free domain adaptation (SFDA) method by uncovering the intrinsic target data structure. We proposed to achieve the adaptation by encouraging label consistency among local target features. We further considered density to reduce the negative impact of outliers. We differentiated between nearest neighbors, reciprocal neighbors, and expanded neighborhoods. Experimental results verified the importance of considering the local structure of the target features. Finally, our experimental results on both 2D image and 3D point cloud datasets testify to the efficacy of our method.
## Acknowledgments
We acknowledge the support from Huawei Kirin Solution, and the project PID2022-143257NB-100, financed by CIN/AEI/10.13039/501100011033 and FSE+, and Grant PID2021-128178OB-I00 funded by MCIN/AEI/10.13039/501100011033 and by ERDF A way of making Europe, Ramon y Cajal fellowship Grant RYC2019-027020-I funded by MCIN/AEI/ 10.13039/501100011033 and by ERDF A way of making Europe, and the CERCA Programme of Generalitat de Catalunya. Yaxing acknowledges the support from the project funded by Youth Foundation 62202243 (China).
|
2303.12177 | EZtune: A Package for Automated Hyperparameter Tuning in R | Statistical learning models have been growing in popularity in recent years.
Many of these models have hyperparameters that must be tuned for models to
perform well. Tuning these parameters is not trivial. EZtune is an R package
with a simple user interface that can tune support vector machines, adaboost,
gradient boosting machines, and elastic net. We first provide a brief summary
of the models that EZtune can tune, including a discussion of each of their
hyperparameters. We then compare the ease of using EZtune, caret, and
tidymodels. This is followed with a comparison of the accuracy and computation
times for models tuned with EZtune and tidymodels. We conclude with a
demonstration of how EZtune can be used to help select a final model with
optimal predictive power. Our comparison shows that EZtune can tune support
vector machines and gradient boosting machines well. EZtune also provides a user
interface that is easy to use for a novice to statistical learning models or R. | Jill Lundell | 2023-03-03T03:38:31Z | http://arxiv.org/abs/2303.12177v1 | # EZtune: A Package for Automated Hyperparameter Tuning in R
###### Abstract
Statistical learning models have been growing in popularity in recent years. Many of these models have hyperparameters that must be tuned for models to perform well. Tuning these parameters is not trivial. EZtune is an R package with a simple user interface that can tune support vector machines, adaboost, gradient boosting machines, and elastic net. We first provide a brief summary of the models that EZtune can tune, including a discussion of each of their hyperparameters. We then compare the ease of using EZtune, caret, and tidymodels. This is followed with a comparison of the accuracy and computation times for models tuned with EZtune and tidymodels. We conclude with a demonstration of how EZtune can be used to help select a final model with optimal predictive power. Our comparison shows that EZtune can tune support vector machines and gradient boosting machines well. EZtune also provides a user interface that is easy to use for a novice to statistical learning models or R.
## 1 Introduction
Statistical learning models provide powerful alternatives to more traditional statistical models, such as regression. However, many of these models have hyperparameters that must be tuned in order to achieve optimal prediction accuracy. Many methods have been proposed for tuning hyperparameters for statistical learning models, but few of these methods are supported with research. The popular R [1] packages tidymodels [2] and caret [3] automatically tune hyperparameters, but they can be prohibitively difficult to implement for a less experienced R user or someone new to machine learning. We introduce a package called EZtune that automatically tunes hyperparameters for support vector machines (SVMs) [4], gradient boosting machines (GBMs) [5], adaboost [6], and elastic net [7]. EZtune has a simple user interface that is accessible to a novice R user, uses a method to tune hyperparameters that is well documented, and its ability to consistently tune an accurate model is backed by research [8]. First, we provide a short introduction to SVMs, boosted trees, and elastic net with a focus on their respective hyperparameters. This is followed by an overview of EZtune, tidymodels, and caret. Next, we compare the performance of EZtune with tidymodels and glmnet [9] for hyperparameter tuning. The No Free Lunch theorem indicates that no one model type outperforms all other models in every situation [10]. Thus, we conclude with a demonstration of how EZtune can be used to tune different support vector machines, gradient boosting machines, and elastic net models to select the model with the best performance.
## 2 Overview of tuning parameters
The following section briefly summarizes SVMs, boosted trees, and elastic net and identifies the hyperparameters for each model. The focus of each summary is the identification of hyperparameters that require tuning for each model type.
### Support Vector Machines
SVMs use separating hyperplanes to create decision boundaries for classification and regression models [4]. The separating hyperplane is called a soft margin because it allows some points to be on the wrong side of the hyperplane. The cost parameter, \(C\), dictates the tolerance for points to be on the wrong side of the margin. A large value of \(C\) allows many points to be on the wrong side while smaller values of \(C\) have a much lower tolerance for misclassified points. A kernel, \(K\), maps the classifier into a higher dimensional space. Hyperplanes are used to classify in the higher dimensional space, which results in non-linear boundaries in the original space. The SVM is modeled as:
\[f(x)=\beta_{0}+\sum_{i\in S}\alpha_{i}K(x,x_{i};\gamma)\]
where \(K\) is a kernel with tuning parameter \(\gamma\), \(S\) is the set of support vectors (points on the boundary of the margin), and \(\alpha_{i}\) is computed using \(C\) and the margin. The hyperparameters for SVM classification are \(C\) and \(\gamma\). Common kernels are polynomial, radial, and linear. tidymodels and caret will tune all three types of kernels, whereas EZtune provides automatic tuning only for radial kernels. However, radial kernels work well in most situations.
Support vector regression (SVR) has an additional tuning parameter, \(\epsilon\). SVR attempts to find a function, or hyperplane, such that the deviations between the hyperplane and the responses, \(y_{i}\), are less than \(\epsilon\) for each observation [11]. The cost represents the number of points that can be further than \(\epsilon\) away from the hyperplane. Essentially, SVMs try to maximize the number of points that are on the correct side of the margin and SVR tries to maximize the number of points that fall within \(\epsilon\) of the margin. The only mathematical restriction for the hyperparameters for SVM and SVR is that they are greater than \(0\).
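As an illustration, these hyperparameters appear directly as arguments of the e1071 package, which EZtune uses as its SVM engine (see Section 4). In this sketch, `dat` is a hypothetical data frame and the values are arbitrary starting points of the kind a tuner would search over:

```r
library(e1071)

# Classification (y a factor): tune cost C and the radial kernel parameter gamma
svm_cls <- svm(y ~ ., data = dat, kernel = "radial", cost = 10, gamma = 0.01)

# Regression (y numeric) adds the epsilon-insensitive loss parameter
svm_reg <- svm(y ~ ., data = dat, kernel = "radial",
               cost = 10, gamma = 0.01, epsilon = 0.1)
```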
### Boosted trees
Boosted trees are part of the family of ensemble methods which combine many weak learners, or classifiers, into a single, accurate, classifier. A weak learner typically does not perform well alone, but combining many weak learners can create a strong classifier [12]. With boosted trees, a weak learning model is constructed using a regression or classification tree with only a few terminal nodes. The misclassified points or residuals from this tree are examined and the information is used to fit a new tree. The model is updated by adding the new tree to the previously fitted trees. The ensemble is iteratively updated in this manner and final predictions are made by a weighted vote of the weak learners.
The primary difference between various boosted tree algorithms is the method used to learn from misclassified observations at each iteration. Adaboost fits a small tree to the training data by applying the same weight to all observations in the training data [6]. The misclassified points are then given greater weight than the correctly classified points and a new tree is computed. The new prediction is the sum of the weighted predictions of all of the previous trees. The process is repeated many times with misclassified points being given greater weight. A new tree is created using the weighted data and it is then added to the previous model. The weak learners are combined using a weighted average approach where the highest weights are given to the best performing weak learners. This results in an additive model where the final predictions are the weighted sum of the predictions made by all of the models in the ensemble [12].
GBMs are boosted trees that use gradient descent to minimize a loss function during the learning process [5]. The loss function can be tailored to the problem being solved. We use the mean squared error (MSE) for regression models and a logarithmic loss for classification problems as the loss functions for the examples in this article. A decision tree is used as the initial weak learner. GBMs recursively fit new trees to the residuals from previous trees and then combine the predictions from all of the trees to obtain a final prediction.
Adaboost and GBMs have a nearly identical set of hyperparameters. Both models require tuning the number of iterations, depth of the trees, and the shrinkage, which controls how fast the trees learn. GBMs have an additional hyperparameter which is the minimum number of observations in a terminal node.
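For reference, the four GBM hyperparameters map one-to-one onto arguments of the gbm package, which EZtune uses as its GBM engine (see Section 4). This is a sketch with a hypothetical data frame `dat`; the values shown are arbitrary:

```r
library(gbm)

fit <- gbm(y ~ ., data = dat, distribution = "bernoulli",  # y coded 0/1
           n.trees = 500,          # number of boosting iterations
           interaction.depth = 3,  # depth of each tree
           shrinkage = 0.1,        # learning rate
           n.minobsinnode = 10)    # minimum observations in a terminal node
```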
### Elastic net
Elastic net is a linear model that incorporates \(\ell_{1}\) and \(\ell_{2}\) regularization. Regularization reduces variability in the model with the sacrifice of introducing some bias. The \(\ell_{1}\) penalty introduces sparseness into the model. However, using only the \(\ell_{1}\) penalty limits the number of variables that can have non-zero coefficients to the number of observations and prevents group selection of variables. That is, if a group of variables are correlated, only one of the variables will typically be selected. Introducing \(\ell_{2}\) regularization allows for more non-zero coefficients and encourages correlated groups of variables to be retained in the model. Elastic net estimates the coefficients using the following equation:
\[\hat{\beta}=\underset{\beta}{\text{argmin}}||\mathbf{y}-\mathbf{X}\beta||^{2}+\lambda_ {2}||\beta||_{2}^{2}+\lambda_{1}||\beta||_{1}\]
The parameters \(\lambda_{1}\) and \(\lambda_{2}\) control the amount of \(\ell_{1}\) and \(\ell_{2}\) regularization in the model. Ridge regression is a special case of elastic net where \(\lambda_{1}=0\). The coefficients shrink toward \(0\), but none of them will be equal to \(0\) which results in the retention of all predictors in the model. Similarly, lasso is an elastic net model with \(\lambda_{2}=0\) which results in many coefficients being set to \(0\). Larger values of \(\lambda_{1}\) result in more shrinkage of the coefficients.
Elastic net has two hyperparameters: \(\alpha\) and \(\lambda\). The parameter \(\alpha\) is the elastic net tuning parameter and it controls the amount \(\ell_{1}\) and \(\ell_{2}\) regularization in the model. It is defined as \(\alpha=\frac{\lambda_{1}}{\lambda_{1}+\lambda_{2}}\). Note that \(\alpha\in[0,1]\), where \(\alpha=0\) is the ridge model and \(\alpha=1\) is the lasso model. The other tuning parameter, \(\lambda\), controls the amount of shrinkage that is performed. Larger values of \(\lambda\) result in more shrinkage. The only mathematical restriction on \(\lambda\) is that \(\lambda\geq 0\).
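Both hyperparameters are exposed directly by glmnet, which is also the engine EZtune uses for elastic net (see Section 4). A sketch with a hypothetical predictor matrix `x` and response `y`:

```r
library(glmnet)

ridge <- glmnet(x, y, alpha = 0)                  # alpha = 0: ridge
lasso <- glmnet(x, y, alpha = 1)                  # alpha = 1: lasso
enet  <- glmnet(x, y, alpha = 0.5, lambda = 0.1)  # mixed penalty, fixed lambda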
## 3 Discussion of available R packages
Several R packages are available that can tune statistical learning models. Packages such as e1071 [13] can tune a single model type. However, we are interested in being able to compare different model types with a simple interface. Thus, we limit this discussion to the most commonly used R packages that can tune different model types: caret, tidymodels, and EZtune.
The caret package [3] is a powerful package that has been available in R for many years. caret is able to tune nearly any model using almost any method. However, this abundant functionality makes caret time consuming to learn and can be overwhelming and inaccessible to a non-expert R user. caret is not used in comparisons in this article because although it is widely used, we feel that the programming and machine learning knowledge needed to use it makes caret a poor candidate for comparison with EZtune.
tidymodels[2] is a suite of packages that can automatically tune many supervised learning models with varying degrees of automation. tidymodels is not as powerful or versatile as caret, but it is much easier to learn and use. tidymodels can tune many different model types and allows the user to tune a model using a grid search or Iterative Bayesian optimization [14]. tidymodels includes functionality that auto-selects reasonable ranges for the grid search, which is helpful for the user who is not an expert in hyperparameters. However, it requires that the user knows what hyperparameters must be tuned and what R packages are used to construct the different models. Although tidymodels is much easier to use than caret, it is still not accessible to a novice R user and takes considerable understanding of the different models and their hyperparameters to learn.
EZtune[8] tunes fewer models than caret or tidymodels, but the user interface is simple and accessible to those who are novice R users or inexperienced with machine learning models. Tuning is done by optimizing the hyperparameter space using either a Hooke-Jeeves algorithm [15] or a genetic algorithm [16]. EZtune does not require any knowledge of the hyperparameters or their properties. The interface is designed to work well within a computational pipeline or R function.
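As a schematic of the interface (placeholder `x`, `y`, and `test_data`; the exact calls used in the experiments appear in Section 4):

```r
library(EZtune)

# x: predictors, y: response; the response type selects classification
# or regression automatically.
fit <- eztune(x, y, method = "gbm", optimizer = "hjn", fast = 0.5)
predict(fit, test_data)
```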
## 4 Comparison of EZtune with other R packages
This section includes a comparison of EZtune with tidymodels for tuning SVMs and GBMs. tidymodels does not tune elastic net models so we include a comparison of EZtune and glmnet[9] for tuning elastic net. Adaboost is not included in this section because tidymodels does not tune adaboost. The section is intended to showcase the strengths and weaknesses of tidymodels and EZtune and to provide a tutorial on how to
use both packages. Many code snippets are included to demonstrate how to use both packages for different model types and tuning methods.
Comparisons are made using both classification and regression models because different models and packages perform differently in each of these settings. Five datasets have a binary response and are used for classification and four datasets have a continuous response variable and are used to compare the regression methods. These datasets were selected because they are publicly available and have been used in previous benchmarking studies. A description of the datasets is in Table 1.
Datasets were split into training and test datasets using the rsample package [20] and models were tuned using the training dataset. Tuned models were verified using the test data from the split and the results were compared for each method and dataset. The following code shows how the data were split for all of the binary classification tests. The same methodology was used for the regression datasets except that the strata argument is not used in the initial_split function.
library(mlbench)
library(rsample)

data(Sonar)
sonar_split <- initial_split(Sonar, strata = Class)
sonar_train <- training(sonar_split)
sonar_test <- testing(sonar_split)
sonar_folds <- vfold_cv(sonar_train)

Each model was tuned, and the accuracy or root mean squared error (RMSE) and the computation time were recorded for ten trials. The mean computation time and mean accuracy are reported for each dataset and tuning method. EZtune was tested with both the genetic and the Hooke-Jeeves algorithms, and with 10-fold cross-validation and the fast method for verification while tuning. The fast method randomly splits the data in half, trains the model with half of the data, and verifies the model with the other half [8]. The GBM and SVM comparisons use tidymodels with both a grid search and Iterative Bayes optimization. The grid comprised five different values for each hyperparameter selected by tidymodels, and Iterative Bayes was done using ten iterations. Elastic net was tuned with glmnet using the two different methods specified in Section 4.3. Each section includes examples of the code used to perform the computations.
### Results for support vector machines
tidymodels uses the package kernlab[21] and EZtune uses the package e1071 [13] as the engine for the SVM calculations. EZtune only tunes models with a radial kernel, but tidymodels can tune a model with a
\begin{table}
\begin{tabular}{l r r r r} \hline Data sets & N & Variables & Categorical variables & Continuous variables \\ \hline Abalone & 4177 & 9 & 1 & 7 \\ Boston Housing 2 & 506 & 19 & 1 & 15 \\ CO2 & 84 & 5 & 3 & 1 \\ Crime & 47 & 14 & 1 & 12 \\ Breast Cancer & 699 & 10 & 0 & 9 \\ Pima & 768 & 9 & 0 & 8 \\ Sonar & 208 & 61 & 0 & 60 \\ Lichen & 840 & 40 & 2 & 31 \\ Mullein & 12094 & 32 & 0 & 31 \\ \hline \multicolumn{5}{l}{_Note:_} \\ \multicolumn{5}{l}{Abalone is from the AppliedPredictiveModeling package [17].} \\ \multicolumn{5}{l}{Boston Housing 2, Breast cancer, Pima, and Sonar are from the mlbench package [18].} \\ \multicolumn{5}{l}{CO2 is from the datasets package [1].} \\ \multicolumn{5}{l}{Crime is from the book Practicing Statistics [19].} \\ \multicolumn{5}{l}{Lichen and Mullein are internal to EZtune [8].} \\ \end{tabular}
\end{table}
Table 1: List of datasets used to explore hyperparameters.
linear, polynomial, or radial kernel. All comparisons were done with radial kernels to ensure comparability. Cost and \(\gamma\) were both tuned for the binary classification models and \(\epsilon\) was also tuned for the regression models.
The following code snippet shows how tidymodels was used to tune an SVM for the Sonar data using Iterative Bayes optimization. The model is created by using the svm_rbf function to specify that the model is an SVM and to identify the hyperparameters that will be tuned. This is used in conjunction with set_engine for specifying the underlying engine package for SVM computations and set_mode for defining the model type. The metrics that will be used to tune the model are identified with metric_set. The tuning workflow is then specified with the workflow, add_model, and add_formula functions. Once the model and the workflow are specified, the parameters from the model, the workflow, and the set of performance metrics are used by the function tune_bayes to tune the SVM using Iterative Bayes. The performance results are obtained by refitting the tuned model with final_workflow and then obtaining the metrics from the test dataset with last_fit and collect_metrics. This workflow provides a great deal of flexibility at all stages, but it can be challenging to piece together and identify the inputs to each part.
library(tidymodels)
tune_model <- svm_rbf(cost = tune(), rbf_sigma = tune()) %>%
  set_engine("kernlab") %>%
  set_mode("classification")

mets <- metric_set(accuracy, roc_auc)

model_wf <- workflow() %>%
  add_model(tune_model) %>%
  add_formula(Class ~ .)

model_set <- parameters(model_wf)

best_model <- model_wf %>%
  tune_bayes(resamples = sonar_folds, param_info = model_set,
             initial = 5, iter = 10, metrics = mets) %>%
  select_best("accuracy")

results <- model_wf %>%
  finalize_workflow(best_model) %>%
  fit(data = sonar_train) %>%
  last_fit(sonar_split, metrics = mets) %>%
  collect_metrics()
as.data.frame(results[, c(1, 3)])
The following code snippet demonstrates how EZtune was used to tune an SVM for the Sonar data using a genetic algorithm and 10-fold cross-validation. Note that EZtune can tune the SVM with a single function call to eztune while tidymodels requires calling ten functions to obtain the tuned model. The method for obtaining the performance metrics for the test dataset is also far less complicated and more intuitive for a novice R user than for tidymodels.
library(EZtune)
model <- eztune(x = subset(sonar_train, select = -Class), y = sonar_train$Class, method = "svm", optimizer = "ga", fast = FALSE, cross = 10)
predictions <- predict(model, sonar_test)
acc <- accuracy_vec(truth = sonar_test$Class, estimate = predictions[, 1])
auc <- roc_auc_vec(truth = sonar_test$Class, estimate = predictions[, 2])
data.frame(Accuracy = acc, AUC = auc)
The mean accuracies and mean computation times in seconds are shown in Table 2, which shows that the best accuracies were obtained with EZtune for all five datasets. It also shows that the shortest computation times for all datasets were achieved by EZtune with the Hooke-Jeeves optimization algorithm and the fast option. Computation times were shorter for all of the EZtune runs than for the tidymodels runs, with some EZtune runs being as much as 50 to 100 times faster. The exception is Mullein, the largest dataset, tuned with cross-validation.
Support vector regression was done on four datasets. The same methodology was used for the regression model as for the binary classification model, except that \(\epsilon\) was tuned in addition to cost and \(\gamma\). The code for regression with EZtune is identical to that for binary classification because EZtune automatically makes the appropriate adjustments for the type of response variable. tidymodels requires a slight modification to specify whether a model is classification or regression. As with the binary classification SVM trials, the training dataset was used to tune the model with each method and then the model was verified with the test dataset.
The following code snippet shows how the Boston Housing dataset was split for the regression tests.
library(mlbench)

data(BostonHousing2)
bh <- mutate(BostonHousing2, lcrim = log(crim)) %>%
  dplyr::select(-town, -medv, -crim)

bh_split <- initial_split(bh)
bh_train <- training(bh_split)
bh_test <- testing(bh_split)
bh_folds <- vfold_cv(bh_train)
The following code snippet demonstrates how an SVM was tuned for the Boston Housing data using tidymodels with Iterative Bayes optimization. The workflow is similar to the one used for the SVM for binary classification. The primary differences are that the model is specified as a regression model, \(\epsilon\) is added as a hyperparameter, and the metrics used to verify the model are RMSE and mean absolute error.
tune_model <- svm_rbf(cost = tune(), rbf_sigma = tune(), margin = tune()) %>%
  set_engine("kernlab") %>%
  set_mode("regression")

mets <- metric_set(rmse, mae)

model_wf <- workflow() %>%
  add_model(tune_model) %>%
  add_formula(cmedv ~ .)
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{4}{c}{EZtune} & \multicolumn{4}{c}{Tidymodels} \\ \cline{2-7} Data & GA CV & GA fast & HJ CV & HJ fast & Grid & IB \\ \hline _Accuracy_ & & & & & & \\ BreastCancer & 0.994 & 0.993 & **0.996** & 0.993 & 0.965 & 0.965 \\ Lichen & **0.900** & 0.892 & 0.871 & 0.886 & 0.856 & 0.842 \\ Mullein & **0.959** & 0.949 & **0.959** & 0.957 & 0.884 & 0.916 \\ Pima & 0.833 & **0.847** & 0.827 & 0.822 & 0.763 & 0.737 \\ Sonar & 0.948 & **0.959** & 0.954 & 0.957 & 0.814 & 0.882 \\ _Time (seconds)_ & & & & & & \\ BreastCancer & 9.32 & 3.47 & 1.35 & **0.591** & 126 & 88.0 \\ Lichen & 59.1 & 15.7 & 14.4 & **3.74** & 146 & 92.8 \\ Mullein & 47,800 & 3,550 & 38,300 & **1,170** & 7,380 & 3,310 \\ Pima & 38.2 & 9.26 & 5.91 & **1.38** & 122 & 84.3 \\ Sonar & 9.54 & 4.61 & 2.76 & **1.56** & 111 & 87.7 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean accuracies and computation times in seconds for ten trials of tuning classification SVMs. The best accuracies and times for each dataset are bolded.
model_set <- parameters(model_wf)

best_model <- model_wf %>%
  tune_bayes(resamples = bh_folds, param_info = model_set,
             initial = 5, iter = 10, metrics = mets) %>%
  select_best("rmse")

results <- model_wf %>%
  finalize_workflow(best_model) %>%
  fit(data = bh_train) %>%
  last_fit(bh_split, metrics = mets) %>%
  collect_metrics()

as.data.frame(results[, c(1, 3)])

The following code snippet demonstrates how an SVM was tuned for the Boston Housing data with a genetic algorithm and 10-fold cross-validation using EZtune. Note that the syntax for using eztune is the same as for the binary classification SVM. This is because eztune uses the response variable to determine whether the model is a classification model or a regression model and then adjusts the hyperparameters, tuning regions, and verification metrics accordingly.

model <- eztune(x = subset(bh_train, select = -cmedv), y = bh_train$cmedv,
                method = "svm", optimizer = "ga", fast = FALSE, cross = 10)

predictions <- predict(model, bh_test)
rmse.ez <- rmse_vec(truth = bh_test$cmedv, estimate = predictions)
mae.ez <- mae_vec(truth = bh_test$cmedv, estimate = predictions)

data.frame(RMSE = rmse.ez, MAE = mae.ez)

The RMSE was computed for ten runs of each model type, and the mean RMSE is listed for each method and dataset in Table 3 along with the mean computation time for each run. The table shows that the RMSEs for each method are similar, but all of the smallest RMSEs were obtained with EZtune. The shortest computation times were achieved with EZtune using the Hooke-Jeeves algorithm and the fast option. The longest computation time was seen with the Abalone data for EZtune with the genetic algorithm and cross-validation. This mirrors what was seen with the binary classification results in Table 2, which also showed that the genetic algorithm with cross-validation on large datasets is computationally slower than the other EZtune and tidymodels options. The accuracies and RMSEs for the cross-validated genetic algorithm are not better than those of the other options, which implies it may not be worth the long computation time for larger datasets.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{5}{c}{EZtune} & \multicolumn{2}{c}{Tidymodels} \\ \cline{2-7} Data & GA CV & GA fast & HJ CV & HJ fast & Grid & IB \\ \hline _RMSE_ & & & & & & \\ Abalone & 2.16 & 2.15 & **2.09** & 2.11 & 2.11 & 2.13 \\ BostonHousing & **2.82** & 3.12 & 3.50 & 2.94 & 3.09 & 2.89 \\ CO2 & 4.20 & **3.80** & 4.44 & 4.31 & 4.28 & 4.79 \\ Crime & 26.7 & 28.9 & **24.6** & 26.8 & 30.2 & 28.0 \\ _Time (seconds)_ & & & & & & \\ Abalone & 8,410 & 272 & 309 & **26.5** & 1,980 & 327 \\ BostonHousing & 91.5 & 4.75 & 41.0 & **1.56** & 426 & 91.1 \\ CO2 & 6.99 & 2.06 & 1.49 & **0.442** & 414 & 137 \\ Crime & 1.93 & 1.87 & 0.573 & **0.436** & 447 & 107 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mean RMSEs and computation times in seconds for ten trials of tuning regression SVMs. The best results for each dataset are bolded.
### Results for gradient boosting machines
tidymodels uses the package xgboost[22] and EZtune uses the package gbm[23] as the engine for GBM. All four tuning parameters for GBMs were tuned with tidymodels and EZtune. As with SVMs, tidymodels was run with both a grid search and an Iterative Bayes algorithm using the same criteria for grid size and iterations as with SVMs. EZtune was run using the same criteria that were used for the SVM iterations.
The following code snippet shows the code used to tune a GBM for the Sonar data with tidymodels using a grid search.
```
tune_model <- boost_tree(trees = tune(), tree_depth = tune(),
                         learn_rate = tune(), min_n = tune()) %>%
  set_engine("xgboost") %>%
  set_mode("classification")

mets <- metric_set(accuracy, roc_auc)

model_wf <- workflow() %>%
  add_model(tune_model) %>%
  add_formula(Class ~ .)

best_model <- model_wf %>%
  tune_grid(resamples = sonar_folds, grid = 5^4, metrics = mets) %>%
  select_best("accuracy")

results <- model_wf %>%
  finalize_workflow(best_model) %>%
  fit(data = sonar_train) %>%
  last_fit(sonar_split, metrics = mets) %>%
  collect_metrics()

as.data.frame(results[, c(1, 3)])
```
The following code snippet demonstrates how a GBM was tuned for the Sonar data using EZtune with Hooke-Jeeves and the fast option.
```
model <- eztune(x = subset(sonar_train, select = -Class),
                y = sonar_train$Class, method = "gbm",
                optimizer = "hjn", fast = 0.5)

predictions <- predict(model, sonar_test)
acc <- accuracy_vec(truth = sonar_test$Class, estimate = predictions[, 1])
auc <- roc_auc_vec(truth = sonar_test$Class, estimate = predictions[, 2])

data.frame(Accuracy = acc, AUC = auc)
```
Table 4 shows the mean accuracies and the mean computation times for the ten trials. The table shows that the accuracies for EZtune are notably higher than those for tidymodels with the difference being about 3 percentage points for the Breast Cancer data and as large as 14 percentage points for the Sonar data. The shortest computation times were seen for EZtune with the Hooke-Jeeves algorithm and the fast option for all of the datasets with computation times that were approximately 10 times or more faster than those for the tidymodels Iterative Bayes option. The accuracies for the Hooke-Jeeves fast option were also similar to the optimal accuracy obtained for all of the datasets. The grid search option for tidymodels was much slower than the other models. This is because five options were tested for each hyperparameter. The grid for classification with GBM had 625 tests instead of the 25 needed to tune an SVM for binary classification. With the exception of the Sonar data, Iterative Bayes worked nearly as well as the grid search for tidymodels.
The following code demonstrates how tidymodels was used to tune a GBM on the Boston Housing data using a grid search.
```
tune_model <- boost_tree(trees = tune(), tree_depth = tune(),
                         learn_rate = tune(), min_n = tune()) %>%
  set_engine("xgboost") %>%
  set_mode("regression")

mets <- metric_set(rmse, mae)

model_wf <- workflow() %>%
  add_model(tune_model) %>%
  add_formula(cmedv ~ .)

best_model <- model_wf %>%
  tune_grid(resamples = bh_folds, grid = 5^4, metrics = mets) %>%
  select_best("rmse")

results <- model_wf %>%
  finalize_workflow(best_model) %>%
  fit(data = bh_train) %>%
  last_fit(bh_split, metrics = mets) %>%
  collect_metrics()

as.data.frame(results[, c(1, 3)])
```
The following code snippet shows how to tune a GBM for the Boston Housing data using EZtune with Hooke-Jeeves and the fast option.
model <- eztune(x = subset(bh_train, select = -cmedv), y = bh_train$cmedv,
                method = "gbm", optimizer = "hjn", fast = 0.5)

predictions <- predict(model, bh_test)
rmse.ez <- rmse_vec(truth = bh_test$cmedv, estimate = predictions)
mae.ez <- mae_vec(truth = bh_test$cmedv, estimate = predictions)

data.frame(RMSE = rmse.ez, MAE = mae.ez)
Table 5 shows the mean RMSEs and the mean computation times for the regression trials. As with binary classification, the grid search with tidymodels is substantially slower than the other options without meaningful improvements in RMSE. The EZtune fast computations have much shorter computation times than the other methods, with Hooke-Jeeves having the shortest computation times. The best RMSE results for three of the four datasets were achieved by EZtune.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{4}{c}{EZtune} & \multicolumn{4}{c}{Tidymodels} \\ \cline{2-7} Data & GA CV & GA fast & HJ CV & HJ fast & Grid & IB \\ \hline _Accuracy_ & & & & & & \\ BreastCancer & 0.991 & 0.992 & 0.994 & **0.995** & 0.965 & 0.965 \\ Lichen & 0.895 & **0.898** & 0.891 & 0.893 & 0.844 & 0.852 \\ Mullein & **0.970** & **0.970** & 0.967 & 0.966 & 0.929 & 0.922 \\ Pima & **0.833** & 0.823 & 0.806 & 0.815 & 0.742 & 0.740 \\ Sonar & 0.924 & **0.935** & 0.934 & 0.904 & 0.863 & 0.794 \\ _Time (seconds)_ & & & & & & \\ BreastCancer & 1,440 & 79.0 & 199 & **13.9** & 5,540 & 196 \\ Lichen & 4,130 & 369 & 853 & **50.3** & 11,200 & 433 \\ Mullein & 149,000 & 7,770 & 24,600 & **1,160** & 194,000 & 7,970 \\ Pima & 935 & 66.5 & 210 & **13.6** & 5,050 & 208 \\ Sonar & 1,730 & 79.5 & 306 & **17.8** & 6,170 & 200 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Mean accuracies and computation times in seconds for ten trials of tuning classification GBMs. The best results for each dataset are bolded.
### Results for elastic net
glmnet[9] can be used to tune elastic net, but it will not tune both \(\lambda\) and \(\alpha\) simultaneously. Automatic tuning with EZtune is compared to a common tuning method using glmnet. The glmnet method is as follows:
1. For each \(\alpha\) in (0, 0.1, 0.2,..., 0.9, 1.0) do the following:
2. Use cv.glmnet to find the \(\lambda\) that achieves the best accuracy or RMSE (min-\(\lambda\)) and the \(\lambda\) that produces the error that is within one standard error of the minimum (1-SE).
3. Select the \(\alpha\) and \(\lambda\) combination that produces the model with the best accuracy or RMSE. Do this for each \(\lambda\) type (min-\(\lambda\) and 1-SE).
As with the previous comparisons, the EZtune and the glmnet models are tuned using a trial dataset and then verified using a test dataset. Note that EZtune uses glmnet to simultaneously tune \(\lambda\) and \(\alpha\) but it uses a Hooke-Jeeves or genetic algorithm to search through the hyperparameter space.
The following code snippet demonstrates how elastic net was tuned on the Sonar data using glmnet. Note that glmnet is particular about how the data are formatted for use in the glmnet and cv.glmnet functions. The explanatory variables must be a matrix, which means factor or character variables cannot be directly used in the functions. EZtune is liberal with the way data are passed to the function eztune. It can handle both data.frame and matrix objects and can handle both character and factor variables directly.
library(glmnet)
foldid <- sample(1:10, size = nrow(sonar_train), replace = TRUE)
alpha <- seq(0, 1, 0.1)
alpha_data <- data.frame(alpha = alpha, lambda = NA, loss = NA)
model_cv <- NULL
for (i in 1:length(alpha)) {
  model_cv[[i]] <- cv.glmnet(x = as.matrix(subset(sonar_train, select = -Class)),
                             y = sonar_train$Class, family = "binomial",
                             alpha = alpha[i], foldid = foldid,
                             type.measure = "class")
  alpha_data[i, -1] <- c(model_cv[[i]]$lambda.1se,
                         model_cv[[i]]$cvm[model_cv[[i]]$lambda ==
                                           model_cv[[i]]$lambda.1se])
}

model <- glmnet(x = as.matrix(subset(sonar_train, select = -Class)),
                y = sonar_train$Class, family = "binomial",
                lambda = alpha_data$lambda[alpha_data$loss == min(alpha_data$loss)][1],
                alpha = alpha_data$alpha[alpha_data$loss == min(alpha_data$loss)][1],
                type.measure = "class")
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{4}{c}{EZtune} & \multicolumn{2}{c}{Tidymodels} \\ \cline{2-7} Data & GA CV & GA fast & HJ CV & HJ fast & Grid & IB \\ \hline \multicolumn{7}{l}{_RMSE_} \\ Abalone & 2.18 & 2.14 & 2.17 & 2.16 & **2.13** & 2.15 \\ BostonHousing & 2.63 & 2.79 & 2.97 & **2.48** & 2.92 & 3.00 \\ CO2 & 2.60 & 2.80 & **2.48** & 2.71 & 2.59 & 2.55 \\ Crime & 25.8 & 31.2 & 27.5 & **22.6** & 24.1 & 31.2 \\ \multicolumn{7}{l}{_Time (seconds)_} \\ Abalone & 8,110 & 523 & 4,180 & **289** & 32,800 & 675 \\ BostonHousing & 3,180 & 171 & 1,480 & **64.0** & 6,840 & 288 \\ CO2 & 98.6 & 6.54 & 49.9 & **3.35** & 3,380 & 176 \\ Crime & 81.3 & 2.84 & 41.2 & **1.20** & 3,600 & 128 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Mean RMSEs and computation times in seconds for ten trials of tuning regression GBMs. The best results for each dataset are bolded.
sonar_test_truth <- as.factor(as.numeric(sonar_test$Class) - 1)
result <- predict(model, as.matrix(subset(sonar_test, select = -Class)),
                  type = "response")
result.r <- as.factor(round(result))
acc <- accuracy_vec(truth = sonar_test_truth, estimate = result.r)
auc <- roc_auc_vec(truth = sonar_test_truth, estimate = result[, 1],
                   event_level = "second")

data.frame(Accuracy = acc, AUC = auc)

The following code snippet shows how EZtune was used to tune an elastic net model using the Hooke-Jeeves algorithm and 10-fold cross-validation. Note that it is much easier to tune an elastic net model with EZtune than with glmnet.
model <- eztune(x = subset(sonar_train, select = -Class), y = sonar_train$Class,
                method = "en", optimizer = "hjn", fast = FALSE, cross = 10)

predictions <- predict(model, sonar_test)
acc <- accuracy_vec(truth = sonar_test$Class, estimate = predictions[, 1])
auc <- roc_auc_vec(truth = sonar_test$Class, estimate = predictions[, 2])

data.frame(Accuracy = acc, AUC = auc)

Table 6 shows the mean accuracies and mean computation times for all ten trials. The table shows that no one method produced the best accuracy for all or most of the datasets and that the accuracies were similar. The computation times were much faster for EZtune with the Hooke-Jeeves optimizer and the fast option than for the other options. This was also the best option in terms of accuracy for two of the datasets.
The following code snippet demonstrates how to tune an elastic net model for the Boston Housing data using glmnet.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{4}{c}{EZtune} & \multicolumn{2}{c}{Glmnet} \\ \cline{2-7} Data & GA CV & GA fast & HJ CV & HJ fast & 1-SE & Min \\ \hline _Accuracy_ & & & & & & \\ BreastCancer & 0.959 & **0.971** & 0.962 & **0.971** & 0.959 & 0.965 \\ Lichen & 0.851 & 0.848 & 0.841 & 0.832 & **0.855** & 0.848 \\ Mullein & 0.773 & 0.769 & **0.781** & 0.772 & 0.776 & 0.778 \\ Pima & **0.784** & 0.773 & 0.771 & 0.758 & 0.766 & 0.763 \\ Sonar & 0.726 & 0.736 & 0.774 & **0.792** & 0.717 & 0.745 \\ _Time (seconds)_ & & & & & & \\ BreastCancer & 6.04 & 2.53 & 2.37 & **1.45** & 3.69 & 3.69 \\ Lichen & 89.7 & 12.8 & 75.1 & **9.90** & 42.2 & 42.2 \\ Mullein & 1,080 & 143 & 1,440 & **124** & 680 & 680 \\ Pima & 7.94 & 2.91 & 2.55 & **1.45** & 2.06 & 2.06 \\ Sonar & 33.6 & 5.28 & 5.82 & **2.86** & 11.3 & 11.3 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Mean accuracy and computation times in seconds for ten trials of tuning classification elastic net models. The best results for each dataset are bolded.
bh_train$chas <- as.numeric(as.character(bh_train$chas))
bh_test$chas <- as.numeric(as.character(bh_test$chas))

foldid <- sample(1:10, size = nrow(bh_train), replace = TRUE)
alpha <- seq(0, 1, 0.1)
alpha_data <- data.frame(alpha = alpha, lambda = NA, loss = NA)
model_cv <- NULL
for (i in 1:length(alpha)) {
  model_cv[[i]] <- cv.glmnet(x = as.matrix(subset(bh_train, select = -cmedv)),
                             y = bh_train$cmedv, family = "gaussian",
                             alpha = alpha[i], foldid = foldid,
                             type.measure = "mse")
  alpha_data[i, -1] <- c(model_cv[[i]]$lambda.min,
                         model_cv[[i]]$cvm[model_cv[[i]]$lambda ==
                                           model_cv[[i]]$lambda.min])
}

model <- glmnet(x = as.matrix(subset(bh_train, select = -cmedv)),
                y = bh_train$cmedv, family = "gaussian",
                lambda = alpha_data$lambda[alpha_data$loss == min(alpha_data$loss)][1],
                alpha = alpha_data$alpha[alpha_data$loss == min(alpha_data$loss)][1],
                type.measure = "mse")

result <- predict(model, as.matrix(subset(bh_test, select = -cmedv)),
                  type = "response")
rmse.en <- rmse_vec(truth = bh_test$cmedv, estimate = result[, 1])
mae.en <- mae_vec(truth = bh_test$cmedv, estimate = result[, 1])
data.frame(RMSE = rmse.en, MAE = mae.en)
The following code snippet shows how to tune an elastic net model for the Boston Housing data using EZtune with the genetic algorithm and the fast option.
model <- eztune(x = subset(bh_train, select = -cmedv), y = bh_train$cmedv,
                method = "en", optimizer = "ga", fast = 0.5)

predictions <- predict(model, bh_test)
rmse.ez <- rmse_vec(truth = bh_test$cmedv, estimate = predictions)
mae.ez <- mae_vec(truth = bh_test$cmedv, estimate = predictions)
data.frame(RMSE = rmse.ez, MAE = mae.ez)
Table 7 shows the mean RMSEs and computation times for the regression elastic net models. As with binary classification, there is no one method that outperforms the others. The table also shows that the glmnet method was faster for the regression datasets, but all of the methods were fast.
## 5 Model selection with EZtune
As stated earlier, there is no one model type that outperforms other models in all situations [10]. Thus, different model types should be compared when developing a model. EZtune provides an easy interface for comparing different models. Figure 1 shows the mean classification errors and mean computation times for ten models tuned with EZtune, tidymodels, and glmnet for all five of the binary classification datasets. SVM performed better for some datasets and GBM for others. The best type of model also depends on the method that was used to tune the model. GBM and SVM performed similarly well for the Breast Cancer data, with the SVM performing slightly better for most of the models. In many cases, the GBM and SVM models are comparable. However, for some datasets, one of the models consistently outperforms the others. For example, the model with the lowest classification error for the Sonar data is an SVM tuned with EZtune,
while the best model for the Mullein data is a GBM tuned with EZtune. Not only is the model type (GBM or SVM) important, different tuning methods produce models with very different accuracies as is seen with the Lichen, Pima, Abalone, and Boston Housing datasets. The elastic net models have greater classification error than the SVM and GBM in nearly all cases.
Figure 2 shows the RMSE and computation time for the regression models. Figure 1 and Figure 2 show that the elastic net model had a larger error rate than SVM and GBM for all of the datasets with the exception of the Crime dataset. The Crime dataset is very small and hyperparameter tuning is difficult with small datasets [8]. The SVM models performed better for the Abalone data, but the GBM was the better model for the Boston Housing and CO2 datasets.
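For example, a minimal sketch of such a comparison on the Sonar split from Section 4 (an illustration assembled from the calls shown earlier, not code from the paper):

```r
methods <- c("svm", "gbm", "en")
fits <- lapply(methods, function(m)
  eztune(x = subset(sonar_train, select = -Class),
         y = sonar_train$Class, method = m, optimizer = "hjn", fast = 0.5))
accs <- sapply(fits, function(fit) {
  preds <- predict(fit, sonar_test)
  accuracy_vec(truth = sonar_test$Class, estimate = preds[, 1])
})
data.frame(method = methods, accuracy = accs)
```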
## 6 Conclusions

As discussed in Section 3, caret is the most powerful and versatile of these packages, but that versatility comes with the price of being difficult to learn and implement, even for experienced R users. Further, tidymodels is a good option for experienced R users who have a solid understanding of hyperparameters and wish to explore a larger set of statistical learning models. In contrast, EZtune is an excellent option for users who want fast and effective hyperparameter tuning for a smaller set of model types without the programming overhead required for other approaches. EZtune is a powerful tuning tool whose simple interface, ability to find a well-tuned model, and fast computation time make it an excellent choice for general hyperparameter tuning or incorporation into a larger computational pipeline. Not only is EZtune an approachable option for someone new to statistical learning models, it is an excellent way to become familiar with statistical learning models and their hyperparameters. EZtune can be used to prepare users to interact with tidymodels and caret in the future if they choose to expand their choice of models.
|
2309.07247 | Introduction to Continuous biframes in Hilbert spaces and their tensor
products | We introduce the notion of a continuous biframe in a Hilbert space which is a
generalization of discrete biframe in Hilbert space. Representation theorem for
this type of generalized frame is verified and some characterizations of this
biframe with the help of an invertible operator are given. Here we also introduce
the concept of continuous biframe for the tensor products of Hilbert spaces and
give an example. Further, we study dual continuous biframe and continuous
biframe Bessel multiplier in Hilbert spaces and their tensor products. | Prasenjit Ghosh, T. K. Samanta | 2023-08-28T13:18:15Z | http://arxiv.org/abs/2309.07247v1 | ###### Abstract
_We introduce the notion of a continuous biframe in a Hilbert space, which is a generalization of a discrete biframe in a Hilbert space. A representation theorem for this type of generalized frame is verified, and some characterizations of this biframe with the help of an invertible operator are given. Here we also introduce the concept of a continuous biframe for the tensor products of Hilbert spaces and give an example. Further, we study dual continuous biframes and continuous biframe Bessel multipliers in Hilbert spaces and their tensor products._
**Introduction to Continuous biframes in Hilbert spaces**
**and their tensor products**
**Prasenjit Ghosh**
Department of Mathematics,
Barwan N. S. High School (HS), Barwan,
Murshidabad, 742161, West Bengal, India
e-mail: [email protected]
**T. K. Samanta**
Department of Mathematics, Uluberia College,
Uluberia, Howrah, 711315, West Bengal, India
e-mail: mumpu_[email protected]
**Keywords:**_Frame, Dual frame, Continuous frame, biframe, tensor product._
**2010 MSC:**_Primary 42C15; Secondary 46C07, 46C50._
## 1 Introduction
The notion of a frame in Hilbert space was first introduced by Duffin and Schaeffer [5] in connection with some fundamental problems in non-harmonic analysis. Thereafter, it was further developed and popularized by Daubechies et al. [4] in 1986. A discrete frame is a countable family of elements in a separable Hilbert space which allows for a stable, not necessarily unique, decomposition of an arbitrary element into an expansion of the frame elements. A sequence \(\,\left\{\,f_{\,i}\,\right\}_{i\,=\,1}^{\infty}\,\) in a separable Hilbert space \(\,H\,\) is called a frame for \(\,H\,\) if there exist positive constants \(\,0<A\,\leq\,B\,<\,\infty\,\) such that
\[A\,\|\,f\,\|^{\,2}\,\leq\,\sum_{\,i\,=\,1}^{\infty}\,\,|\,\,\langle\,f\,,\,f_{ \,i}\,\rangle\,|^{\,2}\,\leq\,B\,\|\,f\,\|^{\,2}\,\,\,\mbox{for all}\,\,\,f\,\in\,H.\]
The constants \(\,A\,\) and \(\,B\,\) are called lower and upper bounds, respectively.
The controlled frame is one of the newest generalizations of the frame in Hilbert space. I. Bogdanova et al. [3] introduced controlled frames for spherical wavelets to obtain a numerically more efficient approximation algorithm. Thereafter, P. Balaz [2] developed weighted and controlled frames in Hilbert space. The biframe is also a generalization of the controlled frame in Hilbert space, which was studied by M. F. Parizi et al. [9]. To define a frame in a Hilbert space, only one sequence is needed, but for a biframe, two sequences are needed. A pair of sequences \(\,(\,\{\,f_{i}\,\}_{i\,=\,1}^{\infty}\,\,,\,\{\,g_{i}\,\}_{i\,=\,1}^{\infty}\,)\,\) in \(\,H\,\) is called a biframe for \(\,H\,\) if there exist positive constants \(\,A\,\) and \(\,B\,\) such that
\[A\,\|\,f\,\|^{\,2}\,\leq\,\sum_{i\,=\,1}^{\infty}\,\langle\,f\,,\,f_{i}\, \rangle\,\langle\,g_{i}\,,\,f\,\rangle\,\leq\,B\,\|\,f\,\|^{\,2}\,\,\,\forall\, f\,\in\,H.\]
The constants \(\,A\,\) and \(\,B\,\) are called lower and upper biframe bounds, respectively.
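As a quick finite-dimensional illustration (our own sketch, not taken from [9]): in \(\mathbb{R}^{2}\), the pair \(f_{i}\,=\,e_{i}\), \(g_{i}\,=\,2\,e_{i}\) satisfies the biframe inequality with \(A\,=\,B\,=\,2\). A few lines of R confirm the identity; the test vector \(f\) below is a hypothetical input.

```r
# Minimal sketch of a discrete biframe in R^2 (illustration only):
# with f_i = e_i and g_i = 2*e_i, sum_i <f, f_i><g_i, f> = 2*||f||^2.
f  <- c(0.7, -1.3)                       # arbitrary test vector
fs <- rbind(c(1, 0), c(0, 1))            # rows are f_1, f_2
gs <- 2 * fs                             # rows are g_1, g_2
pairing <- sum((fs %*% f) * (gs %*% f))  # sum_i <f, f_i><g_i, f>
c(pairing = pairing, two_norm_sq = 2 * sum(f^2))  # both equal 2*||f||^2
```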
Continuous frames extend the concept of discrete frames to the case where the indices range over a measurable space. The continuous frame in Hilbert space was studied by A. Rahimi et al. [1]. M. H. Faroughi and E. Osgooei [7] also studied continuous frames and continuous Bessel mappings. Continuous and discrete frames have been used in image processing, coding theory, wavelet analysis, signal denoising, feature extraction, robust signal processing, etc.
In this paper, we give the notion of continuous biframes in Hilbert spaces and their tensor products and then discuss some examples of this type of frame. A characterization of a continuous biframe using its frame operator is established. We will see that the image of a continuous biframe under a bounded invertible operator in Hilbert space is also a continuous biframe in Hilbert space. Continuous biframe Bessel multipliers in Hilbert spaces and their tensor products are presented.
## 2 Preliminaries
**Definition 2.1.**_[_1_]_ _Let \(\,H\,\) be a complex Hilbert space and \(\,(\,\Omega,\,\mu\,)\,\) be a measure space with positive measure \(\,\mu\). A mapping \(\,F\,:\,\Omega\,\to\,H\,\) is called a continuous frame with respect to \(\,(\,\Omega,\,\mu\,)\,\) if_
* \(\,F\,\) _is weakly-measurable, i. e., for all_ \(\,f\,\in\,H\)_,_ \(\,w\,\to\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\) _is a measurable function on_ \(\,\Omega\)_,_
* _there exist constants_ \(\,0\,<\,A\,\leq\,B\,<\,\infty\,\) _such that_ \[A\,\|\,f\,\|^{\,2}\leq\int\limits_{\Omega}|\,\langle\,f,\,F\,(\,w\,)\, \rangle\,|^{\,2}\,\,d\mu\leq B\,\|\,f\,\|^{\,2}\,\,,\]
_for all \(\,f\,\in\,H\). The constants \(\,A\,\) and \(\,B\,\) are called continuous frame bounds. If \(\,A\,=\,B\), then it is called a tight continuous frame. If the mapping \(\,F\,\) satisfies only the right inequality, then it is called continuous Bessel mapping with Bessel bound \(\,B\)._
Let \(\,L^{\,2}\,(\,\Omega,\,\mu\,)\,\) be the class of all measurable functions \(\,f\,:\,\Omega\,\to\,H\,\) such that \(\,\|\,f\,\|_{\,2}^{\,2}\,=\,\int\limits_{\Omega}\|\,f\,(\,w\,)\,\|^{\,2}\,\,d \mu\,<\,\infty\). It can be proved that \(\,L^{\,2}\,(\,\Omega,\,\mu\,)\,\) is a Hilbert space with respect to the inner product defined by
\[\langle\,f,\,g\,\rangle_{\,L^{\,2}}\,=\,\int\limits_{\Omega}\,\langle\,f\,(\, w\,),\,g\,(\,w\,)\rangle\,\,d\mu\,,\,f,\,g\,\in\,L^{\,2}\,(\,\Omega,\,\mu\,).\]
**Theorem 2.2.**_[7] Let \(F:\Omega\to H\) be a Bessel mapping. Then the operator \(T_{C}:L^{2}\,(\,\Omega,\,\mu\,)\,\to\,H\) defined by_
\[\langle\,T_{C}\,(\,\varphi\,),\,h\,\rangle\,=\,\int\limits_{\Omega}\,\varphi\, (\,w\,)\,\,\langle\,F\,(\,w\,),\,h\,\rangle\,\,d\mu\]
_where \(\varphi\in L^{2}\,(\,\Omega,\,\mu\,)\) and \(h\in H\), is well-defined, linear, and bounded, and its adjoint operator is given by_
\[T_{C}^{*}\,:\,H\,\to\,L^{2}\,(\,\Omega,\,\mu\,)\,\,,\,\,T_{C}^{*}\,f\,(\,w\,)\, =\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\,,\,\,f\,\in\,H\,,\,\,\,w\,\in\,\Omega.\]
The operator \(T_{C}\) is called a pre-frame operator or synthesis operator and its adjoint operator is called analysis operator of \(F\).
**Definition 2.3.**_[7] Let \(F:\Omega\to H\) be a continuous frame for \(H\). Then the operator \(S_{C}:H\to H\) defined by_
\[\langle\,S_{C}\,(\,f\,),\,h\,\rangle\,=\,\int\limits_{\Omega}\,\langle\,f,F \,(\,w\,)\,\rangle\,\langle\,F\,(\,w\,),\,h\,\rangle\,\,d\mu\,,\,\,\forall\,\, f,\,h\,\in\,H\]
_is called the frame operator of \(F\)._
The tensor product of Hilbert spaces can be introduced in several ways; here it is realized as a certain linear space of operators, as presented by Folland in [6].
**Definition 2.4.**_[10] The tensor product of Hilbert spaces \(H_{1}\) and \(H_{2}\) is denoted by \(H_{1}\otimes H_{2}\) and it is defined to be an inner product space associated with the inner product_
\[\big{\langle}\,f\,\otimes\,g\,,\,f\,^{\prime}\,\otimes\,g\,^{\prime}\,\big{\rangle} \,=\,\big{\langle}\,f,\,f\,^{\prime}\,\big{\rangle}_{1}\,\,\big{\langle}\,g, \,g^{\prime}\,\big{\rangle}_{2}\,, \tag{1}\]
_for all \(f,\,f\,^{\prime}\in H_{1}\) and \(g,\,g^{\prime}\in H_{2}\). The norm on \(H_{1}\otimes H_{2}\) is given by_
\[\|\,f\,\otimes\,g\,\|\,=\,\|\,f\,\|\,_{1}\,\|\,g\,\|\,_{2}\,\,\,\forall\,\,f \,\in\,H_{1}\,\,\,\mbox{and}\,\,\,g\,\in\,H_{2}. \tag{2}\]
_The space \(H_{1}\otimes H_{2}\) is complete with respect to the above inner product. Therefore the space \(H_{1}\otimes H_{2}\) is a Hilbert space._
For \(Q\in\mathcal{B}\,(\,H_{1}\,)\) and \(T\in\mathcal{B}\,(\,H_{2}\,)\), the tensor product of operators \(Q\) and \(T\) is denoted by \(Q\,\otimes\,T\) and defined as
\[(\,Q\,\otimes\,T\,)\,\,A\,=\,Q\,A\,T\,^{*}\,\,\,\forall\,\,\,A\,\in\,H_{1} \,\otimes\,H_{2}.\]
It can be easily verified that \(Q\,\otimes\,T\,\in\,\mathcal{B}\,(\,H_{1}\,\otimes\,H_{2}\,)\)[6].
**Theorem 2.5.**_[6] Suppose \(Q,\,Q^{\prime}\in\mathcal{B}\,(\,H_{1}\,)\) and \(T,\,T^{\prime}\in\mathcal{B}\,(\,H_{2}\,)\), then_
\((i)\)_\(Q\,\otimes\,T\,\in\mathcal{B}\,(\,H_{1}\,\otimes\,H_{2}\,)\) and \(\|\,Q\,\otimes\,T\,\|\,=\,\|\,Q\,\|\,\|\,T\,\|\)._
\((ii)\)_\((\,Q\,\otimes\,T\,)\)_\((\,f\,\otimes\,g\,)\,=\,Q\,(\,f\,)\otimes\,T\,(\,g\,)\) for all \(f\in H_{1},\,g\in H_{2}\)._
\((iii)\)_\((\,Q\,\otimes\,T\,)\)_\((\,Q^{\prime}\,\otimes\,T\,^{\prime}\,)\,=\,(\,Q\,Q^{\prime}\,)\,\otimes\,(\,T\,T^{ \prime}\,)\)._
\((iv)\)_\(Q\,\otimes\,T\,\) is invertible if and only if \(Q\) and \(T\) are invertible, in which case \((\,Q\,\otimes\,T\,)^{-\,1}\,=\,\big{(}\,Q\,^{-\,1}\,\otimes\,T^{-\,1}\,\big{)}\)._
\((v)\)_\((\,Q\,\otimes\,T\,)\,^{*}\,=\,(\,Q\,^{*}\,\otimes\,T\,^{*}\,)\)._
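Property \((ii)\) of Theorem 2.5 is easy to test numerically. The sketch below (ours, for illustration) identifies the tensor product of finite-dimensional real spaces with the Kronecker product, so that \((\,Q\,\otimes\,T\,)\,(\,f\,\otimes\,g\,)\,=\,Q\,f\,\otimes\,T\,g\) becomes a matrix identity; all matrices and vectors are randomly generated placeholders.

```r
# Sketch: checking (Q tensor T)(f tensor g) = Qf tensor Tg in R,
# using the Kronecker-product identification of finite-dimensional
# tensor products (an illustrative assumption, real scalars only).
set.seed(1)
Q  <- matrix(rnorm(9), 3, 3)       # operator on H1 = R^3
Tm <- matrix(rnorm(4), 2, 2)       # operator on H2 = R^2
f  <- rnorm(3); g <- rnorm(2)
lhs <- kronecker(Q, Tm) %*% kronecker(f, g)  # (Q tensor Tm)(f tensor g)
rhs <- kronecker(Q %*% f, Tm %*% g)          # Qf tensor (Tm g)
max(abs(lhs - rhs))                          # ~ 0 up to rounding
```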
## 3 Continuous biframe in Hilbert space
In this section, first we give the definition of a continuous biframe in Hilbert space and then discuss some of its properties.
**Definition 3.1.**_Let \(H\) be a Hilbert space and \((\,\Omega,\,\mu\,)\) be a measure space with positive measure \(\mu.\) A pair \((\,F,\,G\,)\,=\,\,(\,F:\Omega\,\to\,H,\,\,G:\Omega\,\to\,H\,)\) of mappings is called a continuous biframe for \(H\) with respect to \((\,\Omega,\,\mu\,)\) if_
1. \(F,\,G\) _are weakly-measurable, i. e., for all_ \(f\in H\)_,_ \(w\,\mapsto\,\langle\,f,\,F\,(\,w\,)\,\rangle\) _and_ \(w\,\mapsto\,\langle\,f,\,G\,(\,w\,)\,\rangle\) _are measurable functions on_ \(\Omega\)_,_
2. _there exist constants_ \(0<A\,\leq\,B\,<\,\infty\) _such that_ \[A\,\left\|\,f\,\right\|^{2}\leq\int\limits_{\Omega}\,\,\langle\,f,\,F\,(\,w\,) \,\rangle\,\,\langle\,G\,(\,w\,),\,f\,\rangle\,\,d\mu\leq\,B\,\left\|\,f\, \right\|^{2}\,,\] (3)
_for all \(f\in H.\) The constants \(A\) and \(B\) are called continuous biframe bounds. If \(A\,=\,B\), then it is called a tight continuous biframe and it is called Parseval continuous biframe if \(A\,=\,B\,=\,1.\) If the pair \((\,F,\,G\,)\) satisfies only the right inequality, then it is called continuous biframe Bessel mapping with Bessel bound \(B\)._
In particular, if \(\mu\) is a counting measure and \(\Omega\,=\,\mathbb{N},\) then \((\,F,\,G\,)\) is called a discrete biframe for \(H.\)
**Remark 3.2.**_Let \(F\,:\,\Omega\,\to\,H\) be a mapping. Then according to the Definition 3.1, we say that_
1. _If_ \((\,F,\,F\,)\) _is a continuous biframe for_ \(H\)_, then_ \(F\) _is a continuous frame for_ \(H\)_._
2. _If_ \(U\,\in\,\mathcal{G}\,\mathcal{B}\,(\,H\,)\) _and_ \((\,F,\,UF\,)\) _is a continuous biframe for_ \(H\)_, then_ \(F\) _is a_ \(U\)_-controlled continuous frame for_ \(H\)_, where_ \(\mathcal{G}\,\mathcal{B}\,(\,H\,)\) _denotes the set of all bounded linear operators on_ \(H\) _which have bounded inverses._
3. _If_ \(T,\,U\,\in\,\mathcal{G}\,\mathcal{B}\,(\,H\,)\) _and_ \((\,TF,\,UF\,)\) _is a continuous biframe for_ \(H\)_, then_ \(F\) _is a_ \((\,T,\,U\,)\)_-controlled continuous frame for_ \(H\)_._
Now, we illustrate the above definition with some examples.
**Example 3.3.**_Let \(H\,=\,\mathbb{R}^{\,3}\) and \(\{\,e_{\,1},\,e_{\,2},\,e_{\,3}\,\}\) be the standard orthonormal basis for \(H\). Consider_
\[\Omega\,=\,\left\{\,x\,\in\,\mathbb{R}^{\,3}\,:\,\left\|\,x\,\right\|\,\leq\,1 \,\right\}.\]
_Then it is a measure space equipped with the Lebesgue measure \(\mu.\) Suppose \(\{\,B_{\,1},\,B_{\,2},\,B_{\,3}\,\}\) is a partition of \(\Omega\) where \(\mu\,(\,B_{\,1}\,)\,\geq\,\mu\,(\,B_{\,2}\,)\,\geq\,\mu\,(\,B_{\,3}\,)\,>\,1.\) Define_
\[F\,:\,\Omega\,\to\,H\quad\text{ by }\,\,F\,(\,w\,)\,=\,\begin{cases} \dfrac{e_{\,1}}{\sqrt{\,\mu\,(B_{\,1})}}&\text{if }\,\,\,w\,\in\,B_{\,1}\\ \dfrac{e_{\,2}}{\sqrt{\,\mu\,(B_{\,2}\,)}}&\text{if }\,\,w\,\in\,B_{\,2}\\ \dfrac{2\,e_{\,3}}{\sqrt{\,\mu\,(B_{\,3}\,)}}&\text{if }\,\,w\,\in\,B_{\,3}\end{cases}\]
_and_
\[G\,:\,\Omega\,\to\,H\quad\text{ by }\,\,G\,(\,w\,)\,=\,\begin{cases}\dfrac{2\,e_{\,1}}{\sqrt{\,\mu\,(\,B_{1}\,)}}&\text{if }\,\,w\,\in\,B_{1}\\ \dfrac{e_{\,2}}{\sqrt{\,\mu\,(\,B_{2}\,)}}&\text{if }\,\,w\,\in\,B_{2}\\ \dfrac{e_{\,3}}{2\,\sqrt{\,\mu\,(\,B_{3}\,)}}&\text{if }\,\,w\,\in\,B_{3}\end{cases}\]
_It is easy to verify that for all \(\,f\,\in\,H\), \(\,w\,\mapsto\,\left\langle\,f,\,F\,(\,w\,)\,\right\rangle\,\) and \(\,w\,\mapsto\,\left\langle\,f,\,G\,(\,w\,)\,\right\rangle\,\) are measurable functions on \(\,\Omega\). Now, for \(\,f\,\in\,H\), we have_
\[\int_{\Omega}\left|\left\langle\,f,\,F\,(\,w\,)\,\right\rangle\right|^{2}\,d\mu\,=\,\int_{B_{1}}\left|\left\langle\,f,\,\dfrac{e_{\,1}}{\sqrt{\mu\,(\,B_{1}\,)}}\,\right\rangle\right|^{2}\,d\mu\,+\,\int_{B_{2}}\left|\left\langle\,f,\,\dfrac{e_{\,2}}{\sqrt{\mu\,(\,B_{2}\,)}}\,\right\rangle\right|^{2}\,d\mu\,+\,\int_{B_{3}}\left|\left\langle\,f,\,\dfrac{2\,e_{\,3}}{\sqrt{\mu\,(\,B_{3}\,)}}\,\right\rangle\right|^{2}\,d\mu\,=\,\left|\left\langle\,f,\,e_{\,1}\,\right\rangle\right|^{2}\,+\,\left|\left\langle\,f,\,e_{\,2}\,\right\rangle\right|^{2}\,+\,4\,\left|\left\langle\,f,\,e_{\,3}\,\right\rangle\right|^{2}\,=\,\left\|\,f\,\right\|^{2}\,+\,3\,\left|\left\langle\,f,\,e_{\,3}\,\right\rangle\right|^{2}.\]
_Therefore, \(\,F\,:\,\Omega\,\to\,H\,\) is a continuous frame for \(\,H\,\) with bounds \(\,1\,\) and \(\,4\). Similarly, it can be shown that \(\,G\,:\,\Omega\,\to\,H\,\) is a continuous frame for \(\,H\,\) with bounds \(\,1\,/\,4\,\) and \(\,4\). On the other hand, for \(\,f\,\in\,H\), we have_
\[\int_{\Omega}\,\left\langle\,f,\,F\,(\,w\,)\,\right\rangle\, \left\langle\,G\,(\,w\,),\,f\,\right\rangle\,d\mu\] \[=\int_{B_{1}}\left\langle\,f,\,\dfrac{e_{\,1}}{\sqrt{\mu\,(\,B_{1 }\,)}}\,\right\rangle\,\left\langle\,\dfrac{2\,e_{\,1}}{\sqrt{\mu\,(\,B_{1}\, )}},\,f\,\right\rangle\,d\mu\] \[+\,\int_{B_{2}}\left\langle\,f,\,\dfrac{e_{\,2}}{\sqrt{\mu\,(\,B_ {2}\,)}}\,\right\rangle\,\left\langle\,\dfrac{e_{\,2}}{\sqrt{\mu\,(\,B_{2}\,) }},\,f\,\right\rangle\,d\mu\] \[+\,\int_{B_{3}}\left\langle\,f,\,\dfrac{2\,e_{\,3}}{\sqrt{\mu\,( \,B_{3}\,)}}\,\right\rangle\,\left\langle\,\dfrac{e_{\,3}}{2\,\sqrt{\mu\,(\,B_ {3}\,)}},\,f\,\right\rangle\,d\mu\] \[=\,2\,\left|\left\langle\,f,\,\,e_{\,1}\,\right\rangle\,\right|^{ 2}+\,\left|\left\langle\,f,\,\,e_{\,2}\,\right\rangle\,\right|^{2}+\,\left| \left\langle\,f,\,\,e_{\,3}\,\right\rangle\,\right|^{2}\] \[=\,\left\|\,f\,\right\|^{2}\,+\,\left|\left\langle\,f,\,\,e_{\,1 }\,\right\rangle\,\right|^{2}.\]
_Therefore, \(\,(\,F,\,G\,)\,\) is a continuous biframe for \(\,H\,\) with bounds \(\,1\,\) and \(\,2\)._
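Since \(F\) and \(G\) are constant on each cell \(B_{i}\), the integral above collapses to a finite sum in which the factors \(\sqrt{\mu\,(\,B_{i}\,)}\) cancel, so the biframe identity can be checked numerically without choosing a concrete partition. A minimal R sketch of this check (ours, with a hypothetical random test vector):

```r
# Sketch: Example 3.3 reduces to a finite sum because F, G are constant
# on each B_i and the sqrt(mu(B_i)) factors cancel in <f, F><G, f>.
set.seed(1)
f  <- rnorm(3)
Fs <- rbind(c(1, 0, 0), c(0, 1, 0), c(0, 0, 2))    # e1, e2, 2*e3
Gs <- rbind(c(2, 0, 0), c(0, 1, 0), c(0, 0, 1/2))  # 2*e1, e2, e3/2
pairing <- sum((Fs %*% f) * (Gs %*% f))  # = ||f||^2 + <f, e1>^2
c(lower = sum(f^2), pairing = pairing, upper = 2 * sum(f^2))
```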
**Example 3.4.**_Let \(\,H\,\) be an infinite-dimensional separable Hilbert space and \(\,\{\,e_{\,i}\,\}_{i=\,1}^{\infty}\,\) be an orthonormal basis for \(\,H\). Suppose_
\[\left\{\,f_{\,i}\,\right\}_{i=\,1}^{\infty}\,=\,\left\{\,e_{\,1}, \,e_{\,1},\,e_{\,1},\,e_{\,2},\,e_{\,3},\,\cdots\,\cdots\,\right\}\,,\] \[\left\{\,g_{\,i}\,\right\}_{i=\,1}^{\infty}\,=\,\left\{\,0,\,e_{ \,1},\,e_{\,1},\,e_{\,2},\,e_{\,3},\,\cdots\,\cdots\,\right\}.\]
_Now, let \(\,(\,\Omega,\,\mu\,)\,\) be a measure space, where \(\,\mu\,\) is \(\,\sigma\)-finite. Then we can write \(\,\Omega\,=\,\bigcup_{i=1}^{\infty}\,\Omega_{i}\), where \(\,\{\,\Omega_{\,i}\,\}_{i=\,1}^{\infty}\,\) is a sequence of disjoint measurable subsets of \(\,\Omega\,\) with \(\,\mu\,(\,\Omega_{i}\,)\,<\,\infty\). For each \(\,w\,\in\,\Omega_{i}\), we define the mappings \(\,F\,:\,\Omega\,\to\,H\,\) by \(\,F\,(\,w\,)\,=\,\frac{1}{\sqrt{\mu\,(\,\Omega_{i}\,)}}\,f_{\,i}\,\) and \(\,G\,:\,\Omega\,\to\,H\,\) by \(\,G\,(\,w\,)\,=\,\frac{1}{\sqrt{\mu\,(\,\Omega_{i}\,)}}\,g_{\,i}\). Then for \(\,f\,\in\,H\), we have_
\[\int\limits_{\Omega}|\langle\,f,\,F\,(\,w\,)\,\rangle\,|^{2}\,\,d \mu\,=\,\sum_{i=\,1}^{\infty}\int\limits_{\Omega_{i}}|\,\langle\,f,\,f_{\,i}\, \rangle\,|^{\,2}\,\,d\mu\] \[\,=\,2\mid\langle\,f,\,e_{\,1}\,\rangle\,|^{\,2}\,+\,\sum_{i=\,1 }^{\infty}\mid\langle\,f,\,e_{\,i}\,\rangle\,|^{\,2}\] \[\,=\,2\mid\langle\,f,\,e_{\,1}\,\rangle\,|^{\,2}\,+\,\|f\,\|^{\,2 }\,.\]
_Therefore, \(\,F\,\) is a continuous frame for \(\,H\,\) with bounds \(\,1\,\) and \(\,3\). Similarly, it can be shown that \(\,G\,\) is a continuous frame for \(\,H\,\) with bounds \(\,1\,\) and \(\,2\). Now, for \(\,f\,\in\,H\), we have_
\[\int\limits_{\Omega}\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\langle \,G\,(\,w\,),\,f\,\rangle\,\,d\mu\,=\,\sum_{i=\,1}^{\infty}\int\limits_{ \Omega_{i}}\,\langle\,f,\,f_{\,i}\,\rangle\,\,\langle\,g_{\,i},\,f\,\rangle\, \,d\mu\] \[\,=\,\langle\,f,\,e_{\,1}\,\rangle\,\langle\,e_{\,1},\,f\,\rangle \,+\,\langle\,f,\,e_{\,1}\,\rangle\,\langle\,e_{\,1},\,f\,\rangle\,+\,\langle \,f,\,e_{\,2}\,\rangle\,\langle\,e_{\,2},\,f\,\rangle+\cdots\] \[\,=\,\langle\,f,\,e_{\,1}\,\rangle\,\langle\,e_{\,1},\,f\,\rangle \,+\,\langle\,f,\,\langle\,f,\,e_{\,1}\,\rangle\,e_{\,1}\,\rangle\,+\,\langle \,f,\,\langle\,f,\,e_{\,2}\,\rangle\,e_{\,2}\,\rangle\,+\cdots\] \[\,=\,|\langle\,f,\,e_{\,1}\,\rangle\,|^{\,2}\,+\,\langle\,f,\, \sum_{i=\,1}^{\infty}\,\langle\,f,\,e_{\,i}\,\rangle\,e_{\,i}\,\rangle\] \[\,=\,|\langle\,f,\,e_{\,1}\,\rangle\,|^{\,2}\,+\,\langle\,f,\,f \,\rangle\,=\,|\langle\,f,\,e_{\,1}\,\rangle\,|^{\,2}\,+\,\|\,f\,\|^{\,2}.\]
_Thus, \(\,(\,F,\,G\,)\,\) is a continuous biframe for \(\,H\,\) with bounds \(\,1\,\) and \(\,2\)._
Next, we give an example of a continuous biframe for a real inner product space.
**Example 3.5.**_Let_
\[V\,=\,\left\{\left(\begin{array}{cc}a&0\\ 0&b\end{array}\right):\,a,\,b\,\in\,\mathbb{R}\right\}.\]
_Define \(\,\langle\,\cdot,\,\cdot\,\rangle:V\,\times\,V\,\to\,\mathbb{R}\,\) by \(\,\langle\,M,\,N\,\rangle\,=\,det\,\big{(}\,MN^{\,t}\,\big{)}\), for all \(\,M,\,N\,\in\,V\). Then it is easy to verify that \(\,\langle\,\cdot,\,\cdot\,\rangle\,\) is a real inner product on \(\,V\). Now, we consider a measure space \(\,(\,\Omega\,=\,[\,0,\,1\,],\,\mu\,)\,\) where \(\,\mu\,\) is the Lebesgue measure. Define \(\,F\,:\,\Omega\,\to\,V\,\) by_
\[F\,(\,w\,)\,=\,\left(\begin{array}{cc}\sqrt{3}\,(\,1\,-\,w\,)&0\\ 0&\sqrt{3}\,(\,1\,+\,w\,)\end{array}\right),\;w\,\in\,\Omega\]
_and \(\,G\,:\,\Omega\,\to\,V\,\) by_
\[G\,(\,w\,)\,=\,\left(\begin{array}{cc}\sqrt{2\,w}&0\\ 0&\sqrt{2\,w}\end{array}\right),\;w\,\in\,\Omega\]
_It is easy to verify that for all \(\,M\,\in\,V\), \(\,w\,\mapsto\,\langle\,M,\,F\,(\,w\,)\,\rangle\,\) and \(\,w\,\mapsto\,\langle\,M,\,G\,(\,w\,)\,\rangle\,\) are measurable functions on \(\,\Omega\). Now, for each \(\,M\,=\,\left(\begin{array}{cc}a&0\\ 0&b\end{array}\right)\,\in\,V\), we get_
\[\langle\,M,\,F\,(\,w\,)\,\rangle\,=\,det\left\{\,\left(\begin{array}{cc}a&0 \\ 0&b\end{array}\right)\left(\begin{array}{cc}\sqrt{3}\,(\,1\,-\,w\,)&0\\ 0&\sqrt{3}\,(\,1\,+\,w\,)\end{array}\right)\,\right\}\,=\,3\,a\,b\,\left(\,1\,- \,w^{\,2}\,\right)\]
_and_
\[\langle\,G\,(\,w\,),\,M\,\rangle\,=\,det\left\{\,\left(\begin{array}{cc} \sqrt{2\,w}&0\\ 0&\sqrt{2\,w}\end{array}\right)\left(\begin{array}{cc}a&0\\ 0&b\end{array}\right)\,\right\}\,=\,2\,a\,b\,w.\]
_Thus, for each \(\,M\,=\,\left(\begin{array}{cc}a&0\\ 0&b\end{array}\right)\,\in\,V\), we have_
\[\int\limits_{[\,0,\,1\,]}\,\langle\,M,\,F\,(\,w\,)\,\rangle\,\, \langle\,G\,(\,w\,),\,M\,\rangle\,\,d\mu\,=\,\int\limits_{[\,0,\,1\,]}\,6\,a^{ \,2}\,b^{\,2}\,w\,\left(\,1\,-\,w^{\,2}\,\right)\,d\mu\] \[\,=\,\frac{3}{2}\,a^{\,2}\,b^{\,2}\,=\,\frac{3}{2}\,\,det\left( \begin{array}{cc}a^{\,2}&0\\ 0&b^{\,2}\end{array}\right)\,=\,\frac{3}{2}\,\parallel M\,\parallel^{\,2}.\]
_Therefore, \(\,(\,F,\,G\,)\,\) is a tight continuous biframe for \(\,V\,\) with bound \(\,3\,/\,2\)._
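The integral above is elementary, but it can also be confirmed with R's numerical integrator; a one-line sketch (ours), taking \(a\,=\,b\,=\,1\):

```r
# Sketch: numerical check of the integral in Example 3.5 for a = b = 1;
# the integrand <M, F(w)> <G(w), M> equals 6*w*(1 - w^2) on [0, 1].
integrate(function(w) 6 * w * (1 - w^2), lower = 0, upper = 1)
# value 1.5, i.e. the tight biframe bound 3/2
```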
Let \(\,(\,F,\,G\,)\,=\,(\,F:\Omega\to H,\,\,G:\Omega\to H\,)\,\) be a continuous biframe for \(\,H\,\) with respect to \(\,(\,\Omega,\,\mu\,)\). Then the mapping \(\,\Psi\,:\,H\,\times\,H\,\rightarrow\,\mathbb{C}\,\) defined by
\[\Psi\,(\,f,\,g\,)\,=\,\int\limits_{\Omega}\,\,\langle\,f,F\,(\,w\,)\,\rangle \,\langle\,G\,(\,w\,),\,g\,\rangle\,\,d\mu\]
is a well-defined, bounded sesquilinear form (i. e., linear in the first and conjugate-linear in the second variable). Indeed, by the Cauchy-Schwarz inequality, we get
\[|\,\Psi\,(\,f,\,g\,)\,| \,\leq\,\int\limits_{\Omega}\,|\,\langle\,f,F\,(\,w\,)\,\rangle \,\langle\,G\,(\,w\,),\,g\,\rangle\,|\,\,d\mu\] \[\,\leq\,\left(\,\int\limits_{\Omega}\,|\,\langle\,f,F\,(\,w\,)\, \rangle\,|^{\,2}\,\,d\mu\,\right)^{1\,/\,2}\,\left(\,\int\limits_{\Omega}\,| \,\langle\,G\,(\,w\,),\,g\,\rangle\,|^{\,2}\,\,d\mu\,\right)^{1\,/\,2}\] \[\,\leq\,B\,\|\,f\,\|\,\|\,g\,\|\]
By Theorem 2.3.6 in [8], there exists a unique operator \(\,S_{F,\,G}\,:\,H\,\rightarrow\,H\,\) such that
\[\Psi\,(\,f,\,g\,)\,=\,\langle\,S_{F,\,G}\,f,\,g\,\rangle\,\,\,\,\forall\,f, \,g\,\in\,H\]
and moreover \(\,\parallel\Psi\,\parallel\,=\,\parallel S_{F,\,G}\,\|\).
Now, we introduce the continuous biframe operator and give some properties.
**Definition 3.6.**_Let \((\,F,\,G\,)\,=\,\,(\,F:\Omega\to H,\,G:\Omega\to H\,)\,\) be a continuous biframe for \(\,H\,\) with respect to \(\,(\,\Omega,\,\mu\,)\). Then the operator \(\,S_{F,\,G}\,:H\to H\,\) defined by_
\[S_{F,\,G}\,f\,=\,\int\limits_{\Omega}\,\,\langle\,f,F\,(\,w\,)\,\rangle\,\,G\, (\,w\,)\,d\mu,\]
_for all \(\,f\,\in\,H\,\) is called the frame operator._
Now, for each \(\,f\,\in\,H\), we have
\[\langle\,S_{F,\,G}\,f,\,f\,\rangle\,=\,\int\limits_{\Omega}\,\,\langle\,f,F\,( \,w\,)\,\rangle\,\langle\,G\,(\,w\,),\,f\,\rangle\,\,d\mu. \tag{4}\]
Thus, for each \(\,f\,\in\,H\), we get
\[A\,\parallel\!f\!\parallel^{\,2}\,\leq\,\langle\,S_{F,\,G}\,f,\,f\,\rangle\, \leq\,B\,\parallel\!f\!\parallel^{\,2}.\]
This implies that \(\,A\,I\,\leq\,S_{F,\,G}\,\leq\,B\,I\), where \(\,I\,\) is the identity operator on \(\,H\). Hence, \(\,S_{F,\,G}\,\) is positive and invertible. Here, we assume that \(\,S_{F,\,G}\,\) is a self-adjoint operator.
Thus, every \(\,f\,\in\,H\,\) has the representations
\[f\,=\,S_{F,\,G}\,S_{F,\,G}^{\,-\,1}\,f\,=\,\int\limits_{\Omega}\,\,\left\langle \,f,S_{F,\,G}^{\,-\,1}\,F\,(\,w\,)\,\right\rangle\,\,G\,(\,w\,)\,d\mu\,,\]
\[f\,=\,S_{F,\,G}^{\,-\,1}\,S_{F,\,G}\,f\,=\,\int\limits_{\Omega}\,\,\left\langle \,f,F\,(\,w\,)\,\right\rangle\,\,S_{F,\,G}^{\,-\,1}\,G\,(\,w\,)\,d\mu\,.\]
**Example 3.7.**_Let \(\,H\,=\,\mathbb{R}\,^{\,3}\). Suppose_
\[\{\,f_{i}\,\}_{i=1}^{\,3}\,=\,\{\,(\,2,\,1,\,1\,),\,(\,-\,1,\,3, \,-\,1\,),\,(\,-\,1,\,1,\,4\,)\,\}\,\,,\] \[\{\,g_{i}\,\}_{i=1}^{\,3}\,=\,\{\,(\,1,\,0,\,0\,),\,(\,0,\,1,\, 0\,),\,(\,0,\,0,\,1\,)\,\}\,.\]
_Consider_
\[\Omega\,=\,\left\{\,x\,\in\,\mathbb{R}^{\,3}\,:\,\parallel\!x\!\parallel\, \leq\,1\,\right\}.\]
_Then \(\,(\,\Omega,\,\mu\,)\) is a measure space, where \(\,\mu\,\) is the Lebesgue measure. Suppose \(\,\{\,B_{\,1},\,B_{\,2},\,B_{\,3}\,\}\,\) is a partition of \(\,\Omega\,\) where \(\,\mu\,(\,B_{\,1}\,)\geq\mu\,(\,B_{\,2}\,)\geq\mu\,(\,B_{\,3}\,)>1\). Now, we define_
\[F\,:\,\Omega\,\rightarrow\,H\,\,\,\,\,\,\,\,\,by\,\,\,\,\,F\,(\,w\,)\,=\, \begin{cases}\dfrac{f_{1}}{\sqrt{\,\mu\,(\,B_{\,1}\,)}}&\mbox{if \,\,\,\,\,$w\,\in\,B_{\,1}$}\\ \dfrac{f_{2}}{\sqrt{\,\mu\,(\,B_{\,2}\,)}}&\mbox{if \,\,\,\,$w\,\in\,B_{\,2}$}\\ \dfrac{f_{3}}{\sqrt{\,\mu\,(\,B_{\,3}\,)}}&\mbox{if \,\,\,\,$w\,\in\,B_{\,3}$}\end{cases}\]
_and_
\[G\,:\,\Omega\,\to\,H\quad\mbox{ by }\,\,G\,(\,w\,)\,=\,\cases{\frac{ g_{1}}{\sqrt{\,\mu\,(\,B_{1}\,)}}&if \,\,\,w\,\in\,B_{1}\cr\frac{ g_{2}}{\sqrt{\,\mu\,(\,B_{2}\,)}}&if \,\,\,w\,\in\,B_{2}\cr\frac{ g_{3}}{\sqrt{\,\mu\,(\,B_{3}\,)}}&if \,\,\,w\,\in\,B_{3}\cr}\]
_It is easy to verify that for all \(\,f\,\in\,H\), \(\,w\,\mapsto\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\) and \(\,w\,\mapsto\,\langle\,f,\,G\,(\,w\,)\,\rangle\,\) are measurable functions on \(\,\Omega\). Now, for \(\,f\,=\,(\,x,\,y,\,z\,)\,\in\,H\), we have_
\[\int\limits_{\Omega}\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\, \langle\,G\,(\,w\,),\,f\,\rangle\,\,d\mu\] \[=\int\limits_{B_{1}}\,\left\langle\,f,\,\frac{f_{1}}{\sqrt{\mu\,( \,B_{1}\,)}}\,\right\rangle\,\left\langle\,\frac{g_{1}}{\sqrt{\mu\,(\,B_{1}\,) }},\,f\,\right\rangle\,d\mu\] \[+\,\int\limits_{B_{2}}\,\left\langle\,f,\,\frac{f_{2}}{\sqrt{\mu \,(\,B_{2}\,)}}\,\right\rangle\,\left\langle\,\frac{g_{2}}{\sqrt{\mu\,(\,B_{2} \,)}},\,f\,\right\rangle\,d\mu\] \[+\,\int\limits_{B_{3}}\,\left\langle\,f,\,\frac{f_{3}}{\sqrt{\mu \,(\,B_{3}\,)}}\,\right\rangle\,\left\langle\,\frac{g_{3}}{\sqrt{\mu\,(\,B_{3} \,)}},\,f\,\right\rangle\,d\mu\] \[=\,\langle\,f,\,f_{1}\,\rangle\,\langle\,g_{1},\,f\,\rangle\,+\, \langle\,f,\,f_{2}\,\rangle\,\langle\,g_{2},\,f\,\rangle\,+\,\langle\,f,\,f_{ 3}\,\rangle\,\langle\,g_{3},\,f\,\rangle\] \[=\,\langle\,(\,x,\,y,\,z\,)\,,\,(\,2,\,1,\,1\,)\,\rangle\,\langle \,(\,1,\,0,\,0\,),\,(x,\,y,\,z\,)\,\rangle\,+\] \[\qquad+\,\langle\,(\,x,\,y,\,z\,)\,,\,(\,-\,1,\,3,\,-\,1\,)\, \rangle\,\langle\,(\,0,\,1,\,0\,),\,(x,\,y,\,z\,)\,\rangle\] \[\qquad+\,\langle\,(\,x,\,y,\,z\,)\,,\,(\,-\,1,\,1,\,4\,)\, \rangle\,\langle\,(\,0,\,0,\,1\,),\,(x,\,y,\,z\,)\,\rangle\] \[=\,(\,2\,x\,+\,y\,+\,z\,)\,x\,+\,(\,-\,x\,+\,3\,y\,-\,z\,)\,y\,+\, (\,-\,x\,+\,y\,+\,4\,z\,)\,z\] \[=\,2\,x^{\,2}\,+\,3\,y^{\,2}\,+\,4\,z^{\,2}\,\leq\,4\,\left(\,x^{ \,2}\,+\,y^{\,2}\,+\,z^{\,2}\,\right)\,=\,4\,\parallel(\,x,\,y,\,z\,)\, \parallel^{2}\,=\,4\,\parallel f\,\parallel^{2}.\]
_Thus, for each \(\,f\,\in\,H\), we get_
\[2\,\parallel f\,\parallel^{2}\,\leq\,\int\limits_{\Omega}\,\langle\,f,\,F\,( \,w\,)\,\rangle\,\,\langle\,G\,(\,w\,),\,f\,\rangle\,\,d\mu\,\leq\,4\, \parallel f\,\parallel^{2}.\]
_Therefore, \(\,(\,F,\,G\,)\,\) is a continuous biframe for \(\,H\,\) with bounds \(\,2\,\) and \(\,4\)._
_For \(\,(\,x,\,y,\,z\,)\,\in\,\mathbb{R}^{\,3}\), the continuous biframe operator \(\,S_{F,G}\,\) is given by_
\[S_{F,\,G}\,(\,x,\,y,\,z\,) \,=\,\langle\,(\,x,\,y,\,z\,)\,,\,(\,2,\,1,\,1\,)\,\rangle\,(\,1, \,0,\,0\,)+\] \[+\,\langle\,(\,x,\,y,\,z\,)\,,\,(\,-\,1,\,3,\,-\,1\,)\,\rangle\,( \,0,\,1,\,0\,)\] \[+\,\langle\,(\,x,\,y,\,z\,)\,,\,(\,-\,1,\,1,\,4\,)\,\rangle\,(\,0, \,0,\,1\,)\] \[=\,(\,2\,x\,+\,y\,+\,z,\,-\,x\,+\,3\,y\,-\,z,\,-\,x\,+\,y\,+\,4\,z \,)\,.\]
_The matrix associated with the operator \(\,S_{F,\,G}\,\) is given by_
\[[\,S_{F,\,G}\,]\,=\,\left(\begin{array}{ccc}2&1&1\\ -\,1&3&-\,1\\ -\,1&1&4\end{array}\right).\]
_Since \(\,det\,(\,[\,S_{F,\,G}\,]\,)\,=\,33\,\neq\,0\), the matrix \(\,[\,S_{F,\,G}\,]\,\) is invertible. Thus, the operator \(\,S_{F,\,G}\,\) is a well-defined, invertible, bounded linear operator on \(\,\mathbb{R}^{\,3}\). It is easy to verify that the operator \(\,S_{F,\,G}\,\) is positive._
_The inverse of the matrix \(\,[\,S_{F,\,G}\,]\,\) is given by_
\[[\,S_{F,\,G}\,]^{\,-\,1}\,=\,\frac{1}{33}\left(\begin{array}{ccc}13&-\,3&-\, 4\\ 5&9&1\\ 2&-\,3&7\end{array}\right).\]
_Therefore, for \(\,(\,x,\,y,\,z\,)\,\in\,\mathbb{R}^{\,3}\), \(\,S_{F,\,G}^{\,-\,1}\,\) is given by_
\[S_{F,\,G}^{\,-\,1}\,(\,x,\,y,\,z\,)\,=\,\frac{1}{33}\,(\,13\,x\,-\,3\,y\,-\,4 \,z,\,5\,x\,+\,9\,y\,+\,z,\,2\,x\,-\,3\,y\,+\,7\,z\,)\,.\]
_Now, for \(\,f\,=\,(\,x,\,y,\,z\,)\,\in\,H\), we have_
\[\int\limits_{\Omega}\,\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\,S_{F, \,G}^{\,-\,1}\,G\,(\,w\,)\,d\mu\] \[\,=\,\langle\,f,\,f_{1}\,\rangle\,\,S_{F,\,G}^{\,-\,1}\,g_{1}\,+ \,\langle\,f,\,f_{2}\,\rangle\,\,S_{F,\,G}^{\,-\,1}\,g_{2}\,+\,\langle\,f,\,f_ {3}\,\rangle\,\,S_{F,\,G}^{\,-\,1}\,g_{3}\] \[\,=\,(\,2\,x\,+\,y\,+\,z\,)\,\,\frac{1}{33}\,(\,13,\,5,\,2\,)\,+\, (\,-\,x\,+\,3\,y\,-\,z\,)\,\frac{1}{33}\,(\,-\,3,\,9,\,-\,3\,)\,+\] \[\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+\, (\,-\,x\,+\,y\,+\,4\,z\,)\,\,\frac{1}{33}\,\,(\,-\,4,\,1,\,7\,)\] \[\,=\,\frac{1}{33}\,\,(\,33\,x,\,33\,y,\,33\,z\,)\,=\,(\,x,\,y,\,z \,)\,=\,f\]
_Similarly, it can be verified that_
\[\int\limits_{\Omega}\,\,\Big{\langle}\,f,\,S_{F,\,G}^{\,-\,1}\,F\,(\,w\,)\, \Big{\rangle}\,\,G\,(\,w\,)\,d\mu\,=\,f\,,\,f\,\in\,H.\]
_Thus, the representation theorem is verified in this example._
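Because the pair in this example is supported on a finite partition, the whole computation reduces to \(3\times 3\) linear algebra, which makes it convenient to verify in R. The sketch below (ours, for illustration; the test vector \(f\) is an arbitrary placeholder) rebuilds \([\,S_{F,\,G}\,]\), inverts it, and checks the reconstruction \(f\,=\,\sum_{i}\,\langle\,f,\,f_{i}\,\rangle\,S_{F,\,G}^{\,-\,1}\,g_{i}\):

```r
# Sketch: numerical verification of Example 3.7 in R.
# The biframe operator acts as S f = sum_i <f, f_i> g_i, a 3x3 matrix.
fs <- rbind(c(2, 1, 1), c(-1, 3, -1), c(-1, 1, 4))  # rows are f_1, f_2, f_3
gs <- diag(3)                                       # rows are g_1 = e_1, ...
S  <- t(gs) %*% fs                   # S = sum_i g_i f_i^T
det(S)                               # 33, so S is invertible
Sinv <- solve(S)                     # matches (1/33)*[13 -3 -4; 5 9 1; 2 -3 7]
f <- c(1, -2, 3)                     # arbitrary test vector
recon <- Sinv %*% (t(gs) %*% (fs %*% f))  # sum_i <f, f_i> S^{-1} g_i
max(abs(recon - f))                  # ~ 0: the reconstruction holds
```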
**Theorem 3.8.**_The pair \(\,(\,F,\,G\,)\,\) is a continuous biframe for \(\,H\,\) with respect to \(\,(\,\Omega,\,\mu\,)\,\) if and only if \(\,(\,G,\,F\,)\,\) is a continuous biframe for \(\,H\,\) with respect to \(\,(\,\Omega,\,\mu\,)\)._
_Proof._ Let \(\,(\,F,\,G\,)\,\) be a continuous biframe for \(\,H\,\) with bounds \(\,A\,\) and \(\,B\). Then for each \(\,f\,\in\,H\), we have
\[A\,\parallel f\,\parallel^{2}\leq\int\limits_{\Omega}\,\,\langle\,f,\,F\,(\, w\,)\,\rangle\,\,\langle\,G\,(\,w\,),\,f\,\rangle\,\,d\mu\leq\,B\,\parallel f\, \parallel^{2}.\]
Since \(S_{F,\,G}\) is self-adjoint, using (4), we can write
\[\int\limits_{\Omega}\,\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\,\langle\,G\,(\,w\,),\,f\,\rangle\,\,d\mu\,=\,\langle\,S_{F,\,G}\,f,\,f\,\rangle\,=\,\overline{\langle\,S_{F,\,G}\,f,\,f\,\rangle}\,=\,\int\limits_{\Omega}\,\,\langle\,f,\,G\,(\,w\,)\,\rangle\,\,\langle\,F\,(\,w\,),\,f\,\rangle\,\,d\mu.\]
Thus, for each \(f\in H\), we have
\[A\parallel f\parallel^{\,2}\leq\int\limits_{\Omega}\,\,\langle\,f,\,G\,(\,w\,) \,\rangle\,\,\langle\,F\,(\,w\,),\,f\,\rangle\,\,d\mu\leq\,B\parallel f \parallel^{\,2}.\]
Therefore, \((\,G,\,F\,)\) is a continuous biframe for \(H\).
Similarly, we can prove the converse part of this Theorem.
In the next Theorem, we establish a characterization of a continuous biframe using its biframe operator.
**Theorem 3.9**.: _Let \((\,F,\,G\,)\) be a continuous biframe Bessel mapping for \(H\) with respect to \((\,\Omega,\,\mu\,)\). Then \((\,F,\,G\,)\) is a continuous biframe for \(H\) if and only if there exists \(\alpha>0\) such that \(S_{F,\,G}\,\geq\,\alpha\,I\), where \(S_{F,\,G}\) is the continuous biframe operator for \((\,F,\,G\,)\)._
Proof.: Let \((\,F,\,G\,)\) be a continuous biframe for \(H\) with bounds \(A\) and \(B\). Then using (3) and (4), for each \(f\in H\), we get
\[A\parallel f\parallel^{\,2}\leq\,\langle\,S_{F,\,G}\,f,\,f\,\rangle\,\leq\,B \parallel f\parallel^{\,2}.\]
Thus
\[A\,\,\langle\,f,\,f\,\rangle\,\leq\,\langle\,S_{F,\,G}\,f,\,f\,\rangle\,\Rightarrow \,S_{F,\,G}\,\geq\,\alpha\,I,\]
where \(\alpha=A\).
Conversely, suppose that \(S_{F,\,G}\,\geq\,\alpha\,I\). Thus, for each \(f\in H\), we have
\[\alpha\,\parallel f\parallel^{\,2}\,\leq\,\langle\,S_{F,\,G}\,f,\,f\,\rangle\,=\,\int\limits_{\Omega}\,\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\,\langle\,G\,(\,w\,),\,f\,\rangle\,\,d\mu.\]
Hence, \((\,F,\,G\,)\) is a continuous biframe for \(H\) with lower biframe bound \(\alpha\). This completes the proof.
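For real finite-dimensional examples the criterion \(S_{F,\,G}\,\geq\,\alpha\,I\) involves only the symmetric part of the matrix, since \(\langle\,S\,f,\,f\,\rangle\,=\,f^{\,t}\,\tfrac{1}{2}\,(\,S\,+\,S^{\,t}\,)\,f\). A short R sketch (ours, illustration only) applied to the operator of Example 3.7:

```r
# Sketch: the optimal lower biframe bound in Example 3.7 is the smallest
# eigenvalue of the symmetric part of [S_{F,G}], since <Sf,f> = f'(S+S')/2 f.
S <- rbind(c(2, 1, 1), c(-1, 3, -1), c(-1, 1, 4))
min(eigen((S + t(S)) / 2, symmetric = TRUE)$values)  # = 2, the bound found above
```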
Next, we also give a characterization of a continuous biframe with the help of an invertible operator on \(H\).
**Theorem 3.10**.: _Let \(T\) be an invertible bounded linear operator on \(H\).Then \((\,F,\,G\,)\) is a continuous biframe for \(H\) with respect to \((\,\Omega,\,\mu\,)\) if and only if \((\,TF,\,TG\,)\) is a continuous biframe for \(H\) with respect to \((\,\Omega,\,\mu\,)\)._
Proof.: For each \(f\in H\), \(w\mapsto\,\langle\,f,\,TF\,(\,w\,)\,\rangle\,=\,\langle\,T^{\,*}\,f,\,F\,(\,w\,)\,\rangle\) and \(w\mapsto\,\langle\,f,\,TG\,(\,w\,)\,\rangle\,=\,\langle\,T^{\,*}\,f,\,G\,(\,w\,)\,\rangle\,\) are measurable functions on \(\,\Omega.\) Let \((\,F,\,G\,)\,\) be a continuous biframe for \(\,H\,\) with bounds \(\,A\,\) and \(\,B.\) Since \(T\,\) is invertible, for \(\,f\in H,\) we have
\[\|\,f\,\|^{\,2}\,=\,\Big{\|}\,\big{(}\,T^{\,-\,1}\,\big{)}^{\,*}\,T^{\,*}\,f\, \Big{\|}^{\,2}\,\leq\,\big{\|}\,T^{\,-\,1}\,\big{\|}^{\,2}\,\,\|\,T^{\,*}\,f\, \|^{\,2}\,.\]
Now, for each \(\,f\in H,\) we have
\[\int_{\Omega}\,\,\langle\,f,\,TF\,(\,w\,)\,\rangle\,\,\langle\, TG\,(\,w\,),\,f\,\rangle\,\,d\mu \,=\,\int_{\Omega}\,\,\langle\,T^{\,*}\,f,\,F\,(\,w\,)\,\rangle\, \,\langle\,G\,(\,w\,),\,T^{\,*}\,f\,\rangle\,\,d\mu\] \[\leq\,B\,\,\|\,T^{\,*}\,f\,\|^{\,2}\,\leq\,B\,\|\,T\,\|^{\,2}\,\, \|\,f\,\|^{\,2}\,.\]
On the other hand, for each \(\,f\in H,\) we have
\[\int_{\Omega}\,\,\langle\,f,\,TF\,(\,w\,)\,\rangle\,\,\langle\, TG\,(\,w\,),\,f\,\rangle\,\,d\mu \,=\,\int_{\Omega}\,\,\langle\,T^{\,*}\,f,\,F\,(\,w\,)\,\rangle\, \,\langle\,G\,(\,w\,),\,T^{\,*}\,f\,\rangle\,\,d\mu\] \[\geq\,A\,\,\|\,T^{\,*}\,f\,\|^{\,2}\,\geq\,A\,\,\big{\|}\,T^{\,- \,1}\,\big{\|}^{\,-\,2}\,\|\,f\,\|^{\,2}\,.\]
Hence, \(\,(TF,\,TG\,)\,\) is a continuous biframe for \(\,H\,\) with bounds \(\,A\,\,\big{\|}\,T^{\,-\,1}\,\big{\|}^{\,-\,2}\,\) and \(\,B\,\|\,T\,\|^{\,2}.\)
Conversely, suppose that \(\,(TF,\,TG\,)\,\) is a continuous biframe for \(\,H\,\) with bounds \(\,A\,\) and \(\,B.\) Now, for each \(\,f\in H,\) we have
\[\frac{A}{\|\,T\,\|^{\,2}}\,\,\|\,f\,\|^{\,2}\,=\,\frac{A}{\|\,T\, \|^{\,2}}\,\,\Big{\|}\,\big{(}\,T^{\,-\,1}\,T\,\big{)}^{\,*}\,f\,\Big{\|}^{\, 2}\,\leq\,A\,\,\Big{\|}\,\big{(}\,T^{\,-\,1}\,\big{)}^{\,*}\,\,f\,\Big{\|}^{\, 2}\] \[\leq\,\int_{\Omega}\,\,\Big{\langle}\,\big{(}\,T^{\,-\,1}\, \big{)}^{\,*}\,f,\,TF\,(\,w\,)\,\Big{\rangle}\,\,\Big{\langle}\,TG\,(\,w\,), \,\big{(}\,T^{\,-\,1}\,\big{)}^{\,*}\,\,f\,\Big{\rangle}\,\,d\mu\] \[=\,\int_{\Omega}\,\,\Big{\langle}\,T^{\,*}\,\big{(}\,T^{\,-\,1} \,\big{)}^{\,*}\,f,\,F\,(\,w\,)\,\Big{\rangle}\,\,\Big{\langle}\,G\,(\,w\,), \,T^{\,*}\,\big{(}\,T^{\,-\,1}\,\big{)}^{\,*}\,\,f\,\Big{\rangle}\,\,d\mu\] \[=\,\int_{\Omega}\,\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\,\langle \,G\,(\,w\,),\,f\,\rangle\,\,d\mu.\]
On the other hand, for each \(\,f\in H,\) we have
\[\int_{\Omega}\,\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\,\langle\,G \,(\,w\,),\,f\,\rangle\,\,d\mu\] \[=\,\int_{\Omega}\,\,\Big{\langle}\,T^{\,*}\,\big{(}\,T^{\,-\,1}\, \big{)}^{\,*}\,f,\,F\,(\,w\,)\,\Big{\rangle}\,\,\Big{\langle}\,G\,(\,w\,),\,T ^{\,*}\,\big{(}\,T^{\,-\,1}\,\big{)}^{\,*}\,\,f\,\Big{\rangle}\,\,d\mu\] \[=\,\int_{\Omega}\,\,\Big{\langle}\,\big{(}\,T^{\,-\,1}\,\big{)}^ {\,*}\,f,\,TF\,(\,w\,)\,\Big{\rangle}\,\,\Big{\langle}\,TG\,(\,w\,),\,\big{(} \,T^{\,-\,1}\,\big{)}^{\,*}\,\,f\,\Big{\rangle}\,\,d\mu\] \[\leq\,B\,\,\Big{\|}\,\big{(}\,T^{\,-\,1}\,\big{)}^{\,*}\,\,f\, \Big{\|}^{\,2}\,\leq\,B\,\,\|\,T^{\,-\,1}\,\big{\|}^{\,2}\,\|\,f\,\|^{\,2}\,.\]
Thus, \((\,F,\,G\,)\,\) is a continuous biframe for \(\,H\,\) with bounds \(\,\frac{A}{\|\,T\,\|\,^{2}}\,\) and \(\,B\,\left\|\,T^{\,-\,1}\,\right\|^{\,2}\). This completes the proof.
We conclude this section with a discussion of the continuous biframe Bessel multiplier on \(\,H\).
**Definition 3.11**.: _Let \((\,F,\,F\,)\,\) and \(\,(\,G,\,G\,)\,\) be continuous biframe Bessel mappings for \(\,H\,\) with respect to \(\,(\,\Omega,\,\mu\,)\,\) and let \(\,m:\,\Omega\,\to\,\mathbb{C}\,\) be a measurable function. Then the operator \(\,M_{m,\,F,\,G}\,:\,H\,\to\,H\,\) defined by_
\[\langle\,M_{m,\,F,\,G}\,f,\,g\,\rangle\,=\,\int\limits_{\Omega}\,m\,(\,w\,)\, \,\langle\,f,\,F\,(\,w\,)\,\rangle\,\,\langle\,G\,(\,w\,),\,g\,\rangle\,\,d\mu,\]
_for all \(\,f,\,g\,\in\,H\), is called continuous biframe Bessel multiplier of \(\,F\,\) and \(\,G\,\) with respect to \(\,m\)._
**Theorem 3.12**.: _The continuous biframe Bessel multiplier of \(\,F\,\) and \(\,G\,\) with respect to \(\,m\,\) is well defined and bounded._
Proof.: Let \((\,F,\,F\,)\,\) and \((\,G,\,G\,)\,\) be continuous biframe Bessel mappings for \(\,H\,\) with bounds \(\,B_{1}\,\) and \(\,B_{2}\), respectively. Then for any \(\,f,\,g\,\in\,H\), we have
\[|\,\langle\,M_{m,\,F,\,G}\,f,\,g\,\rangle\,|\,=\,\left|\int\limits_{\Omega}\,m \,(\,w\,)\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\langle\,G\,(\,w\,),\,g\, \rangle\,d\mu\,\right|\]
\[\leq\,\|\,m\,\|_{\,\infty}\left(\,\int\limits_{\Omega}\,|\,\langle\,f,\,F\,( \,w\,)\,\rangle\,|\,^{\,2}\,d\mu\,\right)^{1\,/2}\,\left(\,\int\limits_{ \Omega}\,|\,\langle\,g,\,G\,(\,w\,)\,\rangle\,|\,^{\,2}\,d\mu\,\right)^{1\,/ \,2}\]
\[\leq\,\|\,m\,\|_{\,\infty}\sqrt{\,B_{1}\,B_{2}}\,\left\|\,f\,\|\,\,\|\,g\,\|\,.\]
This shows that \(\,\|\,M_{m,\,F,\,G}\,\|\,\leq\,\|\,m\,\|_{\,\infty}\,\sqrt{\,B_{1}\,B_{2}}\,\) and so \(\,M_{m,\,F,\,G}\,\) is well-defined and bounded. This completes the proof.
Following the proof of Theorem 3.12, for each \(\,f\,\in\,H\), we have
\[\|\,M_{m,\,F,\,G}\,f\,\|\,=\,\sup\limits_{\|\,g\,\|\,=\,1}\,|\, \langle\,M_{m,\,F,\,G}\,f,\,g\,\rangle\,|\] \[\leq\,\|\,m\,\|_{\,\infty}\,\sqrt{\,B_{2}}\,\left(\,\int\limits_{ \Omega}\,|\,\langle\,f,\,F\,(\,w\,)\,\rangle\,|\,^{\,2}\,d\mu\,\right)^{1\,/ \,2} \tag{5}\]
and similarly it can be shown that
\[\left\|\,M_{m,\,F,\,G}^{\,*}\,g\,\right\|\] \[\leq\,\|\,m\,\|_{\,\infty}\,\sqrt{\,B_{1}}\,\left(\,\int\limits_{ \Omega}\,|\,\langle\,G\,(\,w\,),\,g\,\rangle\,|\,^{\,2}\,d\mu\,\right)^{1\,/ \,2}. \tag{6}\]
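In the discrete case (counting measure) the multiplier becomes the finite-rank operator \(M_{m,\,F,\,G}\,f\,=\,\sum_{i}\,m_{i}\,\langle\,f,\,f_{i}\,\rangle\,g_{i}\), and the norm bound of Theorem 3.12 can be observed directly. A minimal R sketch (ours; the families and symbol below are random placeholders, real scalars only):

```r
# Sketch: discrete biframe Bessel multiplier M f = sum_i m_i <f, f_i> g_i
# and the bound ||M|| <= ||m||_inf * sqrt(B1 * B2) from Theorem 3.12.
set.seed(1)
fs <- matrix(rnorm(12), 4, 3)   # rows f_i: a finite Bessel family in R^3
gs <- matrix(rnorm(12), 4, 3)   # rows g_i
m  <- c(0.5, -1, 0.25, 2)       # bounded symbol m
M  <- t(gs) %*% (m * fs)        # matrix of f -> sum_i m_i <f, f_i> g_i
B1 <- max(eigen(t(fs) %*% fs, symmetric = TRUE)$values)  # Bessel bound of (f_i)
B2 <- max(eigen(t(gs) %*% gs, symmetric = TRUE)$values)  # Bessel bound of (g_i)
c(opnorm = max(svd(M)$d), bound = max(abs(m)) * sqrt(B1 * B2))  # opnorm <= bound
```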
**Theorem 3.13.**_Let \(M_{m,\,F,\,G}\) be the continuous biframe Bessel multiplier of \(F\) and \(G\) with respect to \(m\). Then \(M_{m,\,F,\,G}^{*}\,=\,M_{\overline{m},\,G,\,F}\)._
Proof.: For \(f,\,g\,\in\,H,\) we have
\[\big{\langle}\,f,\,M_{m,\,F,\,G}^{*}\,g\,\big{\rangle}\,=\,\big{\langle}\,M_{m,\,F,\,G}\,f,\,g\,\big{\rangle}\,=\,\int\limits_{\Omega}\,m\,(\,w\,)\,\big{\langle}\,f,\,F\,(\,w\,)\,\big{\rangle}\,\,\big{\langle}\,G\,(\,w\,),\,g\,\big{\rangle}\,\,d\mu\,=\,\int\limits_{\Omega}\,\big{\langle}\,f,\,\overline{m}\,(\,w\,)\,\big{\langle}\,g,\,G\,(\,w\,)\,\big{\rangle}\,\,F\,(\,w\,)\,\big{\rangle}\,\,d\mu\,=\,\big{\langle}\,f,\,M_{\overline{m},\,G,\,F}\,g\,\big{\rangle}\,.\]
This completes the proof.
**Theorem 3.14.**_Let \(M_{m,\,F,\,G}\) be the continuous biframe Bessel multiplier of \(F\) and \(G\) with respect to \(m\). Then \((\,F,\,F\,)\) is a continuous biframe for \(H\) provided there exists \(D\,>\,0\) such that for each \(f\,\in\,H\),_
\[\|\,M_{m,\,F,\,G}\,f\,\|\,\geq\,D\,\,\|\,f\,\|\,.\]
Proof.: For each \(f\,\in\,H\), using (5), we get
\[D^{\,2}\,\,\|\,f\,\|^{\,2}\,\leq\,\|\,M_{m,\,F,\,G}\,f\,\|^{\,2}\] \[\Rightarrow\,D^{\,2}\,\,\|\,f\,\|^{\,2}\,\leq\,\|\,m\,\|_{\,\infty }^{\,2}\,B_{\,2}\,\int\limits_{\Omega}\,|\,\big{\langle}\,f,\,F\,(\,w\,)\, \big{\rangle}\,|^{\,2}\,d\mu\] \[\Rightarrow\,\frac{D^{\,2}}{\|\,m\,\|_{\,\infty}^{\,2}\,B_{\,2}}\, \,\|\,f\,\|^{\,2}\,\leq\,\int\limits_{\Omega}\,|\,\big{\langle}\,f,\,F\,(\,w \,)\,\big{\rangle}\,\,\big{\langle}\,F\,(\,w\,),\,f\,\big{\rangle}\,\,d\mu.\]
Thus, \((\,F,\,F\,)\) is a continuous biframe for \(H\) with bounds \(\frac{D^{\,2}}{\|\,m\,\|_{\,\infty}^{\,2}\,B_{\,2}}\) and \(B_{1}.\) This completes the proof.
**Theorem 3.15.**_Let \(M_{m,\,F,\,G}\) be the continuous biframe Bessel multiplier of \(F\) and \(G\) with respect to \(m\). Suppose \(\lambda_{\,1}\,<\,1\) and \(\lambda_{\,2}\,>\,-\,1\) are such that for each \(f\,\in\,H\), we have_
\[\|\,f\,-\,M_{m,\,F,\,G}\,f\,\|\,\leq\,\lambda_{\,1}\,\|\,f\,\|\,+\,\lambda_{ \,2}\,\,\|\,M_{m,\,F,\,G}\,f\,\|\,.\]
_Then \((\,F,\,F\,)\) is a continuous biframe for \(H\)._
Proof.: For each \(f\,\in\,H\), we have
\[\|\,f\,\|\,-\,\|\,M_{m,\,F,\,G}\,f\,\|\,\leq\,\|\,f\,-\,M_{m,\,F, \,G}\,f\,\|\] \[\leq\,\lambda_{\,1}\,\,\|\,f\,\|\,+\,\lambda_{\,2}\,\,\|\,M_{m,\,F,\,G}\,f\,\|\] \[\Rightarrow\,(\,1\,-\,\lambda_{\,1}\,)\,\|\,f\,\|\,\leq\,(\,1\,+\, \lambda_{\,2}\,)\,\,\|\,M_{m,\,F,\,G}\,f\,\|\,.\]
Now, using (5), we get
\[\left(\,\frac{1\,-\,\lambda_{1}}{1\,+\,\lambda_{2}}\,\right)\,\|\,f\,\|\] \[\leq\,\|\,m\,\|_{\,\infty}\,\sqrt{\,B_{2}}\,\left(\,\int\limits_{ \Omega}\,|\,\langle\,f,\,F\,(\,w\,)\,\rangle\,|\,^{2}\,d\mu\,\right)^{1\,/\,2}.\]
\[\Rightarrow\,\frac{(\,1\,-\,\lambda_{1}\,)\,^{2}}{\|\,m\,\|_{\, \infty}^{\,2}\,B_{2}\,\left(\,1\,+\,\lambda_{2}\,\right)^{2}}\,\,\|\,f\,\|^{2}\] \[\leq\,\int\limits_{\Omega}\,\,\langle\,f,\,F\,(\,w\,)\,\rangle\, \,\langle\,F\,(\,w\,),\,f\,\rangle\,\,d\mu. \tag{7}\]
Thus, \(\,(\,F,\,F\,)\,\) is a continuous biframe for \(\,H\,\) with bounds \(\,\frac{(\,1\,-\,\lambda_{1}\,)\,^{2}}{\|\,m\,\|_{\,\infty}^{\,2}\,B_{2}\, \left(\,1\,+\,\lambda_{2}\,\right)^{2}}\) and \(\,B_{1}\). This completes the proof.
**Theorem 3.16**.: _Let \(\,M_{m,\,F,\,G}\,\) be the continuous biframe Bessel multiplier of \(\,F\,\) and \(\,G\,\) with respect to \(\,m\). Suppose \(\,\lambda\,\in\,[\,0,\,1\,)\,\) is such that for each \(\,f\,\in\,H\), we have_
\[\|\,f\,-\,M_{m,\,F,\,G}\,f\,\|\,\leq\,\lambda\,\,\|\,f\,\|\,.\]
_Then \(\,(\,F,\,F\,)\,\) and \(\,(\,G,\,G\,)\,\) are continuous biframes for \(\,H\)._
Proof.: Putting \(\,\lambda_{1}\,=\,\lambda\,\) and \(\,\lambda_{2}\,=\,0\,\) in (7), we get
\[\frac{(\,1\,-\,\lambda\,)\,^{2}}{\|\,m\,\|_{\,\infty}^{\,2}\,B_{2}}\,\,\|\,f\, \|^{2}\,\leq\,\int\limits_{\Omega}\,\,\langle\,f,\,F\,(\,w\,)\,\rangle\,\, \langle\,F\,(\,w\,),\,f\,\rangle\,\,d\mu.\]
Thus, \(\,(\,F,\,F\,)\,\) is a continuous biframe for \(\,H\). On the other hand, for each \(\,f\,\in\,H\), we have
\[\left\|\,f\,-\,M_{m,\,F,\,G}^{*}\,f\,\right\|\,=\,\|\,\left(\,I \,-\,M_{m,\,F,\,G}\,\right)^{*}\,f\,\|\] \[\leq\,\|\,I\,-\,M_{m,\,F,\,G}\,\|\,\,\|\,f\,\|\,\leq\,\lambda\,\, \|\,f\,\|\] \[\Rightarrow\,(\,1\,-\,\lambda\,)\,\|\,f\,\|\,\leq\,\left\|\,M_{m, \,F,\,G}^{*}\,f\,\right\|.\]
Now, using (6), we get
\[\frac{(\,1\,-\,\lambda\,)\,^{2}}{\|\,m\,\|_{\,\infty}^{\,2}\,B_{1}}\,\,\|\,f\, \|^{2}\,\leq\,\int\limits_{\Omega}\,\,|\,\langle\,G\,(\,w\,),\,f\,\rangle\,|^{ 2}\,d\mu\,=\,\int\limits_{\Omega}\,\,\langle\,f,\,G\,(\,w\,)\,\rangle\,\, \langle\,G\,(\,w\,),\,f\,\rangle\,\,d\mu.\]
Thus \(\,(\,G,\,G\,)\,\) is a continuous biframe for \(\,H\,\) with bounds \(\,\frac{(\,1\,-\,\lambda\,)\,^{2}}{\|\,m\,\|_{\,\infty}^{\,2}\,B_{1}}\,\) and \(\,B_{2}\). This completes the proof.
**Definition 3.17**.: _Let \((\,F,\,G\,)\,\) be a continuous biframe for \(H\). If_
\[\langle\,f,\,g\,\rangle\,=\,\int\limits_{\Omega}\,\,\langle\,f,\,F\,(\,w\,)\, \rangle\,\,\langle\,G\,(\,w\,),\,g\,\rangle\,\,d\mu\,,\]
_holds for all \(f,\,g\,\in\,H\), then \((\,F,\,G\,)\,\) is called a dual continuous biframe for \(H\)._
**Theorem 3.18**.: _Let \((\,F,\,G\,)\,\) be a continuous biframe for \(H\) with continuous biframe operator \(S_{F,\,G}\). Then \(\left(\,S_{F,\,G}^{\,-\,1}\,F,\,G\,\right)\) and \(\left(\,F,\,S_{F,\,G}^{\,-\,1}\,G\,\right)\) are dual continuous biframes for \(H\)._
Proof.: For each \(f,\,g\,\in\,H\), the reconstruction formulas following Definition 3.6 give
\[\langle\,f,\,g\,\rangle\,=\,\int\limits_{\Omega}\,\,\Big{\langle}\,f,S_{F,\,G }^{\,-\,1}\,F\,(\,w\,)\,\Big{\rangle}\,\,\langle\,G\,(\,w\,),\,g\,\rangle\,\,d\mu\,,\]
\[\langle\,f,\,g\,\rangle\,=\,\int\limits_{\Omega}\,\,\langle\,f,F\,(\,w\,)\, \rangle\,\,\Big{\langle}\,S_{F,\,G}^{\,-\,1}\,G\,(\,w\,),\,g\,\Big{\rangle}\, \,d\mu\,.\]
This verifies that \(\left(\,S_{F,\,G}^{\,-\,1}\,F,\,G\,\right)\) and \(\left(\,F,\,S_{F,\,G}^{\,-\,1}\,G\,\right)\) are dual continuous biframes for \(H\).
In the following Theorem, we will find a dual continuous biframe for \(H\) with respect to the multiplier operator.
**Theorem 3.19**.: _Let \(M_{m,\,F,\,G}\) be invertible and \((\,F,\,G\,)\) be a continuous biframe for \(H\). Then \(\left(\,\left(\,M_{m,\,F,\,G}^{\,-\,1}\,\overline{m}\,F\,\right)^{\,*},\,G\, \right)\) is a dual continuous biframe for \(H\)._
Proof.: From the definition of \(M_{m,\,F,\,G}\), we can write
\[\langle\,M_{m,\,F,\,G}\,f,\,g\,\rangle\] \[=\,\int\limits_{\Omega}\,m\,(\,w\,)\,\,\langle\,f,\,F\,(\,w\,)\, \rangle\,\,\langle\,G\,(\,w\,),\,g\,\rangle\,\,d\mu.\]
Now, by replacing \(f\) with \(M_{m,\,F,\,G}^{\,-\,1}\,f\), we get
\[\langle\,f,\,g\,\rangle\] \[=\,\int\limits_{\Omega}\,m\,(\,w\,)\,\,\Big{\langle}\,M_{m,\,F, \,G}^{\,-\,1}\,f,\,F\,(\,w\,)\,\Big{\rangle}\,\,\langle\,G\,(\,w\,),\,g\, \rangle\,\,d\mu\] \[=\,\int\limits_{\Omega}\,\Big{\langle}\,f,\,\Big{(}\,M_{m,\,F,\, G}^{\,-\,1}\,\Big{)}^{\,*}\,\,\overline{m}\,(\,w\,)\,F\,(\,w\,)\,\Big{\rangle}\,\, \langle\,G\,(\,w\,),\,g\,\rangle\,\,d\mu.\]
Thus, \(\left(\,\left(\,M_{m,\,F,\,G}^{\,-\,1}\,\overline{m}\,F\,\right)^{\,*},\,G\, \right)\) is a dual continuous biframe for \(H\). This completes the proof.
## 4 Continuous biframe in \(H_{1}\,\otimes\,H_{2}\)
In this section, we introduce the concept of continuous biframe in tensor product of Hilbert spaces \(\,H_{1}\,\otimes\,H_{2}\,\) and give a characterization.
**Definition 4.1.**_Let \(\,(\,X,\,\mu\,)\,=\,(\,X_{1}\,\times\,X_{2},\,\mu_{\,1}\,\otimes\,\mu_{\,2}\,)\,\) be the product of measure spaces with \(\,\sigma\)-finite positive measures \(\,\mu_{\,1},\,\mu_{\,2}\,\) on \(\,X_{1},\,\,X_{2},\) respectively. A pair \(\,(\,\mathbf{F},\,\mathbf{G}\,)\,=\,(\,\mathbb{F}:X\to H_{1}\,\otimes\,H_{2}, \,\,\mathbb{G}:X\,\to\,H_{1}\,\otimes\,H_{2}\,)\,\) is called a continuous biframe for \(\,H_{1}\,\otimes\,H_{2}\,\) with respect to \(\,(\,X,\,\mu\,)\,\) if_
* \(\mathbb{F},\,\mathbb{G}\,\) _are weakly-measurable, i. e., for all_ \(\,f\,\otimes\,g\,\in\,H_{1}\,\otimes\,H_{2}\)_,_ \(\,x=(\,x_{\,1},\,x_{\,2}\,)\,\mapsto\,\langle\,f\,\otimes\,g,\,\mathbb{F}\,(\,x\,)\,\rangle\,\) _and_ \(\,(\,x_{\,1},\,x_{\,2}\,)\,\mapsto\,\langle\,f\,\otimes\,g,\,\mathbb{G}\,(\,x\,)\,\rangle\,\) _are measurable functions on_ \(\,X\)_,_
* _there exist constants_ \(\,A,\,B\,>\,0\,\) _such that_ \[A\,\parallel f\,\otimes\,g\,\parallel^{\,2}\,\leq\,\int\limits_{X}\,\langle\,f\,\otimes\,g,\,\mathbb{F}\,(\,x\,)\,\rangle\,\,\langle\,\mathbb{G}\,(\,x\,),\,f\,\otimes\,g\,\rangle\,\,d\mu\,\leq\,B\,\parallel f\,\otimes\,g\,\parallel^{\,2},\] (8) _for all_ \(\,f\,\otimes\,g\,\in\,H_{1}\,\otimes\,H_{2}\)_. The constants_ \(\,A\,\) _and_ \(\,B\,\) _are called continuous biframe bounds. If_ \(\,A\,=\,B\)_, then the pair_ \(\,(\,\mathbf{F},\,\mathbf{G}\,)\,\) _is called a tight continuous biframe for_ \(\,H_{1}\,\otimes\,H_{2}\)_. If_ \(\,(\,\mathbf{F},\,\mathbf{G}\,)\,\) _satisfies only the right inequality of (8), then it is called a continuous biframe Bessel mapping in_ \(\,H_{1}\,\otimes\,H_{2}\,\) _with Bessel bound_ \(\,B\)_._
**Theorem 4.2.**_The pair of mappings \(\,(\,\mathbf{F},\,\mathbf{G}\,)\,=\,(\,F_{1}\,\otimes\,F_{2},\,G_{1}\,\otimes\,G_{2}\,)\), with \(\,\mathbb{F},\,\mathbb{G}:X\,\to\,H_{1}\,\otimes\,H_{2}\), is a continuous biframe for \(\,H_{1}\,\otimes\,H_{2}\,\) with respect to \(\,(\,X,\,\mu\,)\,\) if and only if \(\,(\,F_{1},\,G_{1}\,)\), \(\,F_{1},\,G_{1}:\,X_{1}\,\to\,H_{1}\), is a continuous biframe for \(\,H_{1}\,\) with respect to \(\,(\,X_{1},\,\mu_{\,1}\,)\,\) and \(\,(\,F_{2},\,G_{2}\,)\), \(\,F_{2},\,G_{2}:X_{2}\,\to\,H_{2}\), is a continuous biframe for \(\,H_{2}\,\) with respect to \(\,(\,X_{2},\,\mu_{\,2}\,)\)._
_Proof._ Suppose that \(\,(\,\mathbf{F},\,\mathbf{G}\,)\,=\,(\,F_{1}\,\otimes\,F_{2},\,G_{1}\,\otimes\,G_{2}\,)\,\) is a continuous biframe for \(\,H_{1}\,\otimes\,H_{2}\,\) with respect to \(\,(\,X,\,\mu\,)\,\) having bounds \(\,A\,\) and \(\,B\). Let \(\,f\,\in\,H_{1}-\{\,\theta\,\}\,\) and fix \(\,g\,\in\,H_{2}\,-\{\,\theta\,\}\). Then \(\,f\,\otimes\,g\,\in\,H_{1}\,\otimes\,H_{2}\,-\{\,\theta\,\otimes\,\theta\,\}\,\) and by Fubini's theorem we have
\[\int\limits_{X}\,\langle\,f\,\otimes\,g,\,F_{1}\,(\,x_{\,1}\,)\, \otimes\,F_{2}\,(\,x_{\,2}\,)\,\rangle\,\,\langle\,G_{1}\,(\,x_{\,1}\,)\, \otimes\,G_{2}\,(\,x_{\,2}\,),\,f\,\otimes\,g\,\rangle\,\,d\mu\] \[=\,\int\limits_{X_{1}}\,\langle\,f,\,F_{1}\,(\,x_{\,1}\,)\, \rangle_{1}\,\,\langle\,G_{1}\,(\,x_{\,1}\,),\,f\,\rangle_{1}\,\,d\mu_{\,1}\, \int\limits_{X_{2}}\,\langle\,g,\,F_{2}\,(\,x_{\,2}\,)\,\rangle_{2}\,\, \langle\,G_{2}\,(\,x_{\,2}\,),\,g\,\rangle_{2}\,\,d\mu_{\,2}.\]
Therefore, for each \(\,f\,\otimes\,g\,\in\,H_{1}\,\otimes\,H_{2}\), the inequality (8) can be written as
\[A\,\parallel f\,\parallel_{1}^{2}\,\parallel g\,\parallel_{2}^{2}\] \[\leq\,\int\limits_{X_{1}}\,\left\langle\,f,\,F_{1}\,\left(\,x_{ \,1}\,\right)\,\right\rangle_{1}\,\left\langle\,G_{1}\left(\,x_{\,1}\,\right),\,f\,\right\rangle_{1}\,d\mu_{\,1}\,\int\limits_{X_{2}}\,\left\langle\,g,\,F_ {2}\left(\,x_{\,2}\,\right)\,\right\rangle_{2}\,\left\langle\,G_{2}\left(\,x_{ \,2}\,\right),\,g\,\right\rangle_{2}\,d\mu_{\,2}\] \[\leq\,B\,\parallel f\,\parallel_{1}^{2}\,\parallel g\,\parallel_{ 2}^{2}\,.\]
Since \(\,f\,\) and \(\,g\,\) are non-zero, the integrals
\[\int\limits_{X_{1}}\,\left\langle\,f,\,F_{1}\left(\,x_{\,1}\,\right)\,\right\rangle_{1}\,\left\langle\,G_{1}\left(\,x_{\,1}\,\right),\,f\,\right\rangle_{1}\,d\mu_{\,1}\,,\quad\int\limits_{X_{2}}\,\left\langle\,g,\,F_{2}\left(\,x_{\,2}\,\right)\,\right\rangle_{2}\,\left\langle\,G_{2}\left(\,x_{\,2}\,\right),\,g\,\right\rangle_{2}\,d\mu_{\,2}\]
are non-zero. Thus from the above inequality we can write
\[\frac{A\,\parallel g\,\parallel_{2}^{2}}{\int\limits_{X_{2}}\,\left\langle\,g,\,F_{2}\left(\,x_{\,2}\,\right)\,\right\rangle_{2}\,\left\langle\,G_{2}\left(\,x_{\,2}\,\right),\,g\,\right\rangle_{2}\,d\mu_{\,2}}\,\parallel f\,\parallel_{1}^{2}\,\leq\,\int\limits_{X_{1}}\,\left\langle\,f,\,F_{1}\left(\,x_{\,1}\,\right)\,\right\rangle_{1}\,\left\langle\,G_{1}\left(\,x_{\,1}\,\right),\,f\,\right\rangle_{1}\,d\mu_{\,1}\,\leq\,\frac{B\,\parallel g\,\parallel_{2}^{2}}{\int\limits_{X_{2}}\,\left\langle\,g,\,F_{2}\left(\,x_{\,2}\,\right)\,\right\rangle_{2}\,\left\langle\,G_{2}\left(\,x_{\,2}\,\right),\,g\,\right\rangle_{2}\,d\mu_{\,2}}\,\parallel f\,\parallel_{1}^{2}\,.\]
Thus, for each \(\,f\,\in\,H_{1}\,-\,\left\{\,\theta\,\right\}\), we have
\[A_{\,1}\,\parallel f\,\parallel_{1}^{2}\,\leq\,\int\limits_{X_{1}}\,\left\langle \,f,\,F_{1}\left(\,x_{\,1}\,\right)\,\right\rangle_{1}\,\left\langle\,G_{1} \left(\,x_{\,1}\,\right),\,f\,\right\rangle_{1}\,d\mu_{\,1}\leq\,B_{\,1}\, \parallel f\,\parallel_{1}^{2}\,,\]
where
\[A_{\,1}\,=\,\inf\limits_{g\,\in\,H_{2},\,\parallel\,g\,\parallel_{2}\,=\,1}\, \left\{\,\frac{A\,\parallel g\,\parallel_{2}^{2}}{\int\limits_{X_{2}}\,\left\langle \,g,\,F_{2}\left(\,x_{\,2}\,\right)\,\right\rangle_{2}\,\left\langle\,G_{2} \left(\,x_{\,2}\,\right),\,g\,\right\rangle_{2}\,d\mu_{\,2}}\,\right\},\]
and
\[B_{\,1}\,=\,\sup\limits_{g\,\in\,H_{2},\,\parallel\,g\,\parallel_{2}\,=\,1}\, \left\{\,\frac{B\,\parallel g\,\parallel_{2}^{2}}{\int\limits_{X_{2}}\,\left\langle \,g,\,F_{2}\left(\,x_{\,2}\,\right)\,\right\rangle_{2}\,\left\langle\,G_{2} \left(\,x_{\,2}\,\right),\,g\,\right\rangle_{2}\,d\mu_{\,2}}\,\right\}.\]
This shows that \(\,\left(\,F_{1},\,G_{1}\,\right)\,\) is a continuous biframe for \(\,H_{1}\,\) with respect to \(\,\left(\,X_{1},\,\mu_{\,1}\,\right)\). Similarly, it can be shown that \(\,\left(\,F_{2},\,G_{2}\,\right)\,\) is a continuous biframe for \(\,H_{2}\,\) with respect to \(\,\left(\,X_{2},\,\mu_{\,2}\,\right)\).
Conversely, suppose that \(\,\left(\,F_{1},\,G_{1}\,\right)\,\) is a continuous biframe for \(\,H_{1}\,\) with respect to \(\,\left(\,X_{1},\,\mu_{\,1}\,\right)\,\) having bounds \(\,A,\,B\,\) and \(\,\left(\,F_{2},\,G_{2}\,\right)\,\) is a continuous biframe for \(\,H_{2}\,\) with respect to \(\,\left(\,X_{2},\,\mu_{\,2}\,\right)\,\) having bounds \(\,C,\,D.\,\)By the assumption it is easy to
verify that \((\,{\bf F},\,{\bf G}\,)\,=\,(\,F_{1}\,\otimes\,F_{2},\,G_{1}\,\otimes\,G_{2}\,)\) is weakly measurable on \(\,H_{1}\,\otimes\,H_{2}\,\) with respect to \(\,(\,X,\,\mu\,)\,.\) Now, for each \(\,f\,\in\,H_{1}\,-\,\{\,\theta\,\}\,\) and \(\,g\,\in\,H_{2}\,-\,\{\,\theta\,\},\) we have
\[A\parallel f\parallel_{1}^{2}\,\leq\,\int\limits_{X_{1}}\,\,\langle\,f,\,F_{1}\,(\,x_{\,1}\,)\,\rangle_{1}\,\,\langle\,G_{1}\,(\,x_{\,1}\,),\,f\,\rangle_{1}\,\,d\mu_{\,1}\,\leq\,B\parallel f\parallel_{1}^{2}\,,\] \[C\parallel g\parallel_{2}^{2}\,\leq\,\int\limits_{X_{2}}\,\,\langle\,g,\,F_{2}\,(\,x_{\,2}\,)\,\rangle_{2}\,\,\langle\,G_{2}\,(\,x_{\,2}\,),\,g\,\rangle_{2}\,\,d\mu_{\,2}\,\leq\,D\parallel g\parallel_{2}^{2}.\]
Multiplying the above two inequalities and using Fubini's theorem we get
\[A\,C\parallel f\,\otimes\,g\parallel^{2}\] \[\leq\,\int\limits_{X}\,\,\langle\,f\,\otimes\,g,\,F_{1}\,(\,x_{\, 1}\,)\,\otimes\,F_{2}\,(\,x_{\,2}\,)\,\rangle\,\,\langle\,G_{1}\,(\,x_{\,1}\,) \,\otimes\,G_{2}\,(\,x_{\,2}\,),\,f\,\otimes\,g\,\rangle\,\,d\mu\] \[\leq\,B\,D\parallel f\,\otimes\,g\parallel^{2},\]
for all \(\,f\,\otimes\,g\,\in\,H_{1}\,\otimes\,H_{2}.\) Thus, for each \(\,f\,\otimes\,g\,\in\,H_{1}\,\otimes\,H_{2},\) we have
\[A\,C\parallel f\,\otimes\,g\parallel^{2}\,\leq\,\int\limits_{X}\,\,\langle\,f\,\otimes\,g,\,\mathbb{F}\,(\,x\,)\,\rangle\,\,\langle\,\mathbb{G}\,(\,x\,),\,f\,\otimes\,g\,\rangle\,\,d\mu\leq\,B\,D\parallel f\,\otimes\,g\parallel^{2}.\]
Hence, \((\,{\bf F},\,{\bf G}\,)\,=\,(\,F_{1}\,\otimes\,F_{2},\,G_{1}\,\otimes\,G_{2}\,)\) is a continuous biframe for \(\,H_{1}\,\otimes\,H_{2}\,\) with respect to \(\,(\,X,\,\mu\,)\,\) having bounds \(\,A\,C\,\) and \(\,B\,D.\) This completes the proof.
**Example 4.3.**_Let \(\,\{\,e_{\,i}\,\}_{i=\,1}^{\infty}\,\) be an orthonormal basis for \(\,H_{1}\,\) and \(\,(\,X_{1},\,\mu_{\,1}\,)\,\) be a measure space, where \(\,\mu_{\,1}\,\) is \(\,\sigma\)-finite. Then we can write \(\,X_{1}\,=\,\bigcup_{\,i=\,1}^{\infty}\,\Omega_{i},\) where \(\,\{\,\Omega_{i}\,\}_{i=\,1}^{\infty}\,\) is a sequence of disjoint measurable subsets of \(\,X_{1}\,\) with \(\,\mu_{\,1}\,(\,\Omega_{i}\,)\,<\,\infty.\) Suppose_
\[\{\,f_{\,i}\,\}_{i=\,1}^{\infty}\,=\,\{\,e_{\,1},\,e_{\,1},\,e_{\,1},\,e_{\,2 },\,e_{\,3},\,\cdots\,\cdots\,\}\,,\] \[\{\,g_{\,i}\,\}_{i=\,1}^{\infty}\,=\,\{\,0,\,e_{\,1},\,e_{\,1},\,e_ {\,2},\,e_{\,3},\,\cdots\,\cdots\,\}\,.\]
_For each \(\,x_{1}\,\in\,\Omega_{i}\), we define the mappings \(\,F_{1}:\,X_{1}\,\to\,H_{1}\,\) by \(\,F_{1}\,(\,x_{1}\,)\,=\,\frac{1}{\sqrt{\mu_{\,1}\,(\,\Omega_{i}\,)}}\,f_{\,i}\,\) and \(\,G_{1}:\,X_{1}\,\to\,H_{1}\,\) by \(\,G_{1}\,(\,x_{1}\,)\,=\,\frac{1}{\sqrt{\mu_{\,1}\,(\,\Omega_{i}\,)}}\,g_{\,i}\). Then by Example 3.4, \(\,(\,F_{1},\,G_{1}\,)\,\) is a continuous biframe for \(\,H_{1}\,\) with bounds \(\,1\,\) and \(\,2.\)_
_On the other hand, let \(\,\Big{\{}\,e^{\,\prime}_{\,j}\,\Big{\}}_{j=\,1}^{\infty}\,\) be an orthonormal basis for \(\,H_{2}\,\) and \(\,(\,X_{2},\,\mu_{\,2}\,)\,\) be a measure space, where \(\,\mu_{\,2}\,\) is \(\,\sigma\)-finite. Then \(\,X_{2}\,=\,\bigcup_{\,j=\,1}^{\infty}\,\Omega^{\,\prime}_{j},\) where \(\,\Big{\{}\,\Omega^{\,\prime}_{\,j}\,\Big{\}}_{j=\,1}^{\infty}\,\) is a sequence of disjoint measurable subsets of \(\,X_{2}\,\) with \(\,\mu_{\,2}\,\Big{(}\,\Omega^{\,\prime}_{\,j}\,\Big{)}\,<\,\infty.\) Suppose_
\[\big{\{}\,f^{\,\prime}_{\,j}\,\big{\}}_{j=\,1}^{\infty}\,=\,\{\,5\,e_{\,1},\,3\,e_{\,2},\,2\,e_{\,3},\,2\,e_{\,4},\,\cdots\cdots\,\}\,,\] \[\big{\{}\,g^{\,\prime}_{\,j}\,\big{\}}_{j=\,1}^{\infty}\,=\,\{\,0,\,0,\,3\,e_{\,1},\,0,\,2\,e_{\,2},\,2\,e_{\,3},\,\cdots\cdots\,\}\,.\]
_Now, for each \(\,x_{2}\,\in\,\Omega^{\,\prime}_{\,j}\), we define \(\,F_{2}\,:\,X_{2}\,\to\,H_{2}\,\) by \(\,F_{2}\,(\,x_{2}\,)\,=\,\dfrac{1}{\sqrt{\mu_{\,2}\left(\,\Omega_{j}^{\,\prime}\,\right)}}\,f_{j}^{\,\prime}\,\) and \(\,G_{2}:\,X_{2}\,\to\,H_{2}\,\) by \(\,G_{2}\,(\,x_{2}\,)\,=\,\dfrac{1}{\sqrt{\mu_{\,2}\left(\,\Omega_{j}^{\,\prime}\,\right)}}\,g_{j}^{\,\prime}.\) Now, for \(\,f\,\in\,H_{2}\), we have_
\[\int_{X_{2}}\,\,\langle\,f,\,F_{2}\,(\,x_{2}\,)\,\rangle\,\, \langle\,G_{2}\,(\,x_{2}\,),\,f\,\rangle\,\,d\mu_{2}\,=\,\sum_{j\,=\,1}^{ \infty}\,\int_{\Omega_{j}^{\,\prime}}\,\langle\,f,\,f_{j}^{\,\prime}\,\rangle \,\,\langle\,g_{j}^{\,\prime},\,f\,\rangle\,\,d\mu_{2}\] \[=\,\langle\,f,\,e_{1}\,\rangle\,\,\langle\,e_{1},\,f\,\rangle\,+ \,2\,\,\langle\,f,\,e_{1}\,\rangle\,\,\langle\,e_{1},\,f\,\rangle\,+\,2\,\, \langle\,f,\,e_{2}\,\rangle\,\,\langle\,e_{2},\,f\,\rangle+\cdots\] \[=\,|\,\langle\,f,\,e_{1}\,\rangle\,|^{\,2}\,+\,2\,\,|\,\langle\, f,\,e_{1}\,\rangle\,|^{\,2}\,+\,2\,\,|\,\langle\,f,\,e_{2}\,\rangle\,|^{\,2}\,+\,\cdots\] \[=\,|\,\langle\,f,\,e_{1}\,\rangle\,|^{\,2}\,+\,2\,\|\,f\,\|^{\,2}.\]
_Thus, \(\,(\,F_{2},\,G_{2}\,)\,\) is a continuous biframe for \(\,H_{2}\,\) with bounds \(\,2\,\) and \(\,3.\) Hence, by Theorem 4.2, \(\,(\,{\bf F},\,{\bf G}\,)\,=\,(\,F_{1}\,\otimes\,F_{2},\,G_{1}\,\otimes\,G_{2}\,)\,\) is a continuous biframe for \(\,H_{1}\,\otimes\,H_{2}\,\) with respect to \(\,(\,X,\,\mu\,)\,\) having bounds \(\,2\,\) and \(\,6.\)_
Let \(\,(\,X,\,\mu\,)\,=\,(\,X_{1}\,\times\,X_{2},\,\mu_{\,1}\,\otimes\,\mu_{\,2}\,)\,\) be the product of measure spaces with \(\,\sigma\)-finite positive measures \(\,\mu_{\,1},\,\mu_{\,2}.\) Let \(\,L^{2}\,(\,X,\,\mu\,)\,\) be the class of all measurable functions \(\,\Psi:X\,\to\,H_{1}\,\otimes\,H_{2}\,\) such that
\[\int_{X}\,\|\,\Psi\,(\,x\,)\,\|^{\,2}\,\,d\mu\,=\,\int_{X_{1}}\,\|\,\varphi_{1}\,(\,x_{1}\,)\,\|_{\,1}^{\,2}\,\,d\mu_{1}\,\int_{X_{2}}\,\|\,\varphi_{2}\,(\,x_{2}\,)\,\|_{\,2}^{\,2}\,\,d\mu_{2}\,<\,\infty,\]
for \(\,\varphi_{1}\,\in\,L_{1}^{2}\,(\,X_{1},\,\mu_{1}\,),\,\,\varphi_{2}\,\in\,L_ {2}^{2}\,(\,X_{2},\,\mu_{2}\,),\) with the inner product
\[\langle\,\Psi,\,\Phi\,\rangle_{L^{2}} =\,\int_{X}\,\langle\,\Psi\,(\,x\,),\,\Phi\,(\,x\,)\,\rangle\,\,d\mu\] \[=\,\int_{X_{1}}\,\langle\,\varphi_{1}\,(\,x_{1}\,),\,\psi_{1}\,( \,x_{1}\,)\,\rangle_{\,1}\,\,d\mu_{1}\,\int_{X_{2}}\,\langle\,\varphi_{2}\,( \,x_{2}\,),\,\psi_{2}\,(\,x_{2}\,)\,\rangle_{\,2}\,\,d\mu_{2}\] \[=\,\langle\,\varphi_{1},\,\psi_{1}\,\rangle_{L_{1}^{2}}\,\, \langle\,\varphi_{2},\,\psi_{2}\,\rangle_{L_{2}^{2}}\,,\]
where \(\,\Psi=\varphi_{1}\,\otimes\,\varphi_{2}\,,\,\Phi=\psi_{1}\,\otimes\,\psi_{2}\,\in\,L^{2}\,(\,X,\,\mu\,),\) for \(\,\varphi_{1},\,\psi_{1}\,\in\,L_{1}^{2}\,(\,X_{1},\,\mu_{1}\,)\,\) and \(\,\varphi_{2},\,\psi_{2}\,\in\,L_{2}^{2}\,(\,X_{2},\,\mu_{2}\,).\) The space \(\,L^{2}\,(\,X,\,\mu\,)\,\) is complete with respect to the above inner product. Therefore, it is a Hilbert space.
**Definition 4.4.**_Let \(\,(\,{\bf F},\,{\bf G}\,)\,=\,(\,{\mathbb{F}}:X\to H_{1}\,\otimes\,H_{2},\,\,{\mathbb{G}}:X\to H_{1}\,\otimes\,H_{2}\,)\,\) be a continuous biframe for \(\,H_{1}\,\otimes\,H_{2}\,\) with respect to \(\,(\,X,\,\mu\,)\). Then the operator \(\,S_{{\mathbb{F}}\,\otimes\,{\mathbb{G}}}\,:\,H_{1}\,\otimes\,H_{2}\,\to\,H_{1}\,\otimes\,H_{2}\,\) defined by_
\[S_{{\mathbb{F}}\,\otimes\,{\mathbb{G}}}\,(\,f\,\otimes\,g\,)\,=\,\int_{X}\, \,\langle\,f\,\otimes\,g,\,{\mathbb{F}}\,(\,x\,)\,\rangle\,\,{\mathbb{G}}\,(\,x \,)\,d\mu\]
_is called the continuous biframe operator._
**Theorem 4.5**.: _Let \((\,{\bf F},\,{\bf G}\,)=(\,F_{1}\,\otimes\,F_{2},\,G_{1}\,\otimes\,G_{2}\,)\), with \(\mathbb{F},\,\mathbb{G}:X\to H_{1}\,\otimes\,H_{2}\), be a continuous biframe for \(H_{1}\,\otimes\,H_{2}\) with respect to \((\,X,\,\mu\,)\,\) having continuous biframe operator \(S_{\mathbb{F}\,\otimes\,\mathbb{G}}.\) Then \(S_{\mathbb{F}\,\otimes\,\mathbb{G}}\,=\,S_{F_{1},\,G_{1}}\otimes S_{F_{2},\,G_{2}},\) where \(S_{F_{1},\,G_{1}}\) and \(S_{F_{2},\,G_{2}}\) are the continuous biframe operators of \((\,F_{1},\,G_{1}\,)\,\) and \((\,F_{2},\,G_{2}\,)\), respectively._
Proof.: For each \(f\,\otimes\,g\,\in\,H_{1}\,\otimes\,H_{2}\), we have
\[S_{\mathbb{F}\,\otimes\,\mathbb{G}}\,(\,f\,\otimes\,g\,)\] \[=\int\limits_{X}\langle\,f\otimes g,\,F_{1}(\,x_{1}\,)\otimes F_{ 2}(\,x_{2}\,)\,\rangle\,\,(\,G_{1}(\,x_{1}\,)\otimes G_{2}\,(\,x_{2}\,)\,)\,d\mu\] \[=\left(\,\int\limits_{X_{1}}\,\langle\,f,\,F_{1}\,(\,x_{1}\,) \,\rangle_{1}\,\,G_{1}\,(\,x_{1}\,)\,d\mu_{\,1}\,\right)\,\otimes\left(\,\int \limits_{X_{2}}\,\langle\,g,\,F_{2}\,(\,x_{2}\,)\,\rangle_{2}\,\,G_{2}\,(\,x_{ 2}\,)\,d\mu_{\,2}\,\right)\] \[=\,S_{F_{1},\,G_{1}}\,f\,\otimes\,S_{F_{2},\,G_{2}}\,g\,=\,(\,S_{F _{1},\,G_{1}}\,\otimes\,S_{F_{2},\,G_{2}}\,)\,\,(\,f\,\otimes\,g\,).\]
Thus, \(S_{\mathbb{F}\,\otimes\,\mathbb{G}}\,=\,S_{F_{1},\,G_{1}}\,\otimes\,S_{F_{2}, \,G_{2}}.\)
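The factorisation can be inspected in a finite-dimensional discrete analogue, replacing the integrals by finite sums, where the biframe operator of a family \(F,G\) becomes the matrix \(GF^{T}\). The following is a minimal numerical sketch (ours, not part of the original argument; all names and the random test data are our choice):

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, p, q = 3, 4, 7, 9   # dim H1, dim H2, #points in X1, #points in X2

# Discrete analogues of (F1, G1) and (F2, G2): the columns are the atoms F(x).
F1, G1 = rng.standard_normal((d1, p)), rng.standard_normal((d1, p))
F2, G2 = rng.standard_normal((d2, q)), rng.standard_normal((d2, q))

# Discrete biframe operator: S_{F,G} f = sum_x <f, F(x)> G(x), i.e. S = G F^T.
S1 = G1 @ F1.T
S2 = G2 @ F2.T

# Tensor-product family (F1 x F2)(x1, x2) = F1(x1) (x) F2(x2), likewise for G.
F = np.einsum('ip,jq->ijpq', F1, F2).reshape(d1 * d2, p * q)
G = np.einsum('ip,jq->ijpq', G1, G2).reshape(d1 * d2, p * q)
S = G @ F.T

# Theorem 4.5 in the discrete setting: the biframe operator factorises.
assert np.allclose(S, np.kron(S1, S2))
```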
**Lemma 4.6**.: _Let \((\,F_{1},\,G_{1}\,)\,\) be a continuous biframe for \(H_{1}\,\) with respect to \((\,X_{1},\,\mu_{1}\,)\,\) having bounds \(A,\,B\,\) and \((\,F_{2},\,G_{2}\,)\,\) be a continuous biframe for \(H_{2}\,\) with respect to \((\,X_{2},\,\mu_{2}\,)\,\) having bounds \(C,\,D\). Then \(A\,C\,I_{H_{1}\,\otimes\,H_{2}}\,\leq\,S_{\mathbb{F}\,\otimes\,\mathbb{G}}\, \leq\,B\,D\,I_{H_{1}\,\otimes\,H_{2}},\) where \(I_{H_{1}\,\otimes\,H_{2}}\,\) is the identity operator on \(H_{1}\otimes H_{2}\,\) and \(S_{F_{1},\,G_{1}},\ S_{F_{2},\,G_{2}}\) are continuous biframe operators of \((\,F_{1},\,G_{1}\,)\,,\,(\,F_{2},\,G_{2}\,)\), respectively._
Proof.: Since \(S_{F_{1},\,G_{1}}\,\) and \(S_{F_{2},\,G_{2}}\,\) are continuous biframe operators, we have
\[A\,I_{H_{1}}\,\leq\,S_{F_{1},\,G_{1}}\,\leq\,B\,I_{H_{1}},\,\,\,C\,I_{H_{2}}\, \leq\,S_{F_{2},\,G_{2}}\,\leq\,D\,I_{H_{2}},\]
where \(I_{H_{1}}\,\) and \(I_{H_{2}}\,\) are the identity operators on \(H_{1}\,\) and \(H_{2}\), respectively. Taking the tensor product of the above two inequalities, we get
\[A\,C\,(\,I_{H_{1}}\,\otimes\,I_{H_{2}}\,)\,\leq\,(\,S_{F_{1},\,G _{1}}\,\otimes\,S_{F_{2},\,G_{2}}\,)\,\leq\,B\,D\,\,(\,I_{H_{1}}\,\otimes\,I_{H _{2}}\,)\] \[\Rightarrow\,A\,C\,I_{H_{1}\,\otimes\,H_{2}}\,\leq\,S_{\mathbb{F} \,\otimes\,\mathbb{G}}\,\leq\,B\,D\,I_{H_{1}\,\otimes\,H_{2}}.\]
This completes the proof.
**Theorem 4.7**.: _Let \((\,F_{1},\,G_{1}\,)\,\) be a continuous biframe for \(H_{1}\,\) with respect to \((\,X_{1},\,\mu_{1}\,)\,\) and \((\,F_{2},\,G_{2}\,)\,\) be a continuous biframe for \(H_{2}\,\) with respect to \((\,X_{2},\,\mu_{2}\,)\). Then \(\Delta\,=\,(\,(\,T_{1}\,\otimes\,T_{2}\,)\,(\,F_{1}\,\otimes\,F_{2}\,)\,,\,( \,T_{1}\,\otimes\,T_{2}\,)\,(\,G_{1}\,\otimes\,G_{2}\,)\,)\) is a continuous biframe for \(H_{1}\otimes H_{2}\,\) with respect to \((\,X,\,\mu\,)\,\) if and only if \(T_{1}\otimes T_{2}\,\) is an invertible bounded linear operator on \(H_{1}\,\otimes\,H_{2}.\)_
Proof.: Let \((\,F_{1},\,G_{1}\,)\,\) be a continuous biframe for \(H_{1}\,\) with bounds \(A,\,B\,\) and \((\,F_{2},\,G_{2}\,)\,\) be a continuous biframe for \(H_{2}\,\) with bounds \(C,\,D.\) First we suppose that \(T_{1}\,\otimes\,T_{2}\,\) is an invertible bounded linear operator on \(H_{1}\,\otimes\,H_{2}.\) Then by Theorem 2.5, \(T_{1}\,\) and \(T_{2}\,\) are invertible bounded linear operators on \(H_{1}\,\) and \(H_{2},\) respectively. Now, by Theorem 3.10, \((\,T_{1}F_{1},\,T_{1}G_{1}\,)\,\) is a continuous biframe for \(H_{1}\,\) with
bounds \(A\,\left\|\,T_{1}^{\,-\,1}\,\right\|^{\,-\,2},\)\(B\,\left\|\,T_{1}\,\right\|^{\,2}\,\) and \((\,T_{2}F_{2},\,T_{2}G_{2}\,)\,\) is a continuous biframe for \(H_{2}\) with bounds \(C\,\left\|\,T_{2}^{\,-\,1}\,\right\|^{\,-\,2},\)\(D\,\left\|\,T_{2}\,\right\|^{\,2}.\) Hence, by Theorem 4.2, the pair
\[(\,T_{1}F_{1}\,\otimes\,T_{2}F_{2}\,,\,T_{1}G_{1}\,\otimes\,T_{2}G _{2}\,)\] \[=\,(\,(\,T_{1}\,\otimes\,T_{2}\,)\,(\,F_{1}\,\otimes\,F_{2}\,)\,, \,(\,T_{1}\,\otimes\,T_{2}\,)\,(\,G_{1}\,\otimes\,G_{2}\,)\,)\]
is a continuous biframe for \(H_{1}\,\otimes\,H_{2}\,\) with bounds
\[\frac{A\,C}{\left\|\,T_{1}^{\,-\,1}\,\right\|^{\,2}\,\left\|\,T_{2 }^{\,-\,1}\,\right\|^{\,2}}\,=\,\frac{A\,C}{\left\|\,T_{1}^{\,-\,1}\,\otimes \,T_{2}^{\,-\,1}\,\right\|^{\,2}}\,=\,\frac{A\,C}{\left\|\,\left(\,T_{1}\, \otimes\,T_{2}\,\right)^{\,-\,1}\,\right\|^{\,2}}\] \[=\,A\,C\,\left\|\,(\,T_{1}\,\otimes\,T_{2}\,)^{\,-\,1}\,\right\|^ {\,-\,2}\]
and \(B\,D\,\left\|\,T_{1}\,\right\|^{\,2}\,\left\|\,T_{2}\,\right\|^{\,2}\,=\,B\,D \,\left\|\,T_{1}\,\otimes\,T_{2}\,\right\|^{\,2}\).
Conversely, suppose that \(\Delta\) is a continuous biframe for \(H_{1}\otimes H_{2}\) with respect to \((\,X,\,\mu\,).\) Let \(S_{F_{1},\,G_{1}}\) and \(S_{F_{2},\,G_{2}}\) be the continuous biframe operators of \((\,F_{1},\,G_{1}\,)\) and \((\,F_{2},\,G_{2}\,),\) respectively. By Theorem 4.2, \((\,T_{1}F_{1},\,T_{1}G_{1}\,)\) and \((\,T_{2}F_{2},\,T_{2}G_{2}\,)\) are continuous biframes for \(H_{1}\) and \(H_{2},\) respectively. Now, for \(f\,\in\,H_{1},\) we have
\[\int_{X_{1}}\,\left\langle\,f,\,T_{1}\,F_{1}\,(\,x_{\,1}\,)\, \right\rangle_{1}\,\,T_{1}\,G_{1}\,(\,x_{\,1}\,)\,d\mu_{\,1}\] \[=\,T_{1}\,\left(\,\int_{X_{1}}\,\left\langle\,T_{1}^{\,*}\,f,\,F_ {1}\,(\,x_{\,1}\,)\,\right\rangle_{1}\,\,G_{1}\,(\,x_{\,1}\,)\,d\mu_{\,1}\, \right)\,=\,T_{1}\,S_{F_{1},\,G_{1}}\,T_{1}^{\,*}\,f.\]
This shows that \(T_{1}\,S_{F_{1},\,G_{1}}\,T_{1}^{\,*}\) is the corresponding continuous biframe operator of \((\,T_{1}F_{1},\,T_{1}G_{1}\,).\) Thus, \(T_{1}\,S_{F_{1},\,G_{1}}\,T_{1}^{\,*}\) is invertible on \(H_{1}\) and hence \(T_{1}\) is invertible on \(H_{1}.\) Similarly, it can be shown that \(T_{2}\,S_{F_{2},\,G_{2}}\,T_{2}^{\,*}\) is the corresponding continuous biframe operator of \((\,T_{2}F_{2},\,T_{2}G_{2}\,)\) and hence \(T_{2}\) is invertible on \(H_{2}.\) Therefore, \(T_{1}\,\otimes\,T_{2}\) is an invertible bounded linear operator on \(H_{1}\,\otimes\,H_{2}.\) This completes the proof.
We now conclude this section by introducing the idea of a continuous biframe Bessel multiplier in \(H_{1}\,\otimes\,H_{2}.\)
**Definition 4.8**.: _Let \((\,\mathbb{F},\,\mathbb{F}\,)\) and \((\,\mathbb{G},\,\mathbb{G}\,)\) be continuous biframe Bessel mappings in \(H_{1}\otimes H_{2}\) with respect to \((\,X,\,\mu\,)\) having bounds \(B_{1}\) and \(B_{2},\) respectively and \(m\,:\,X\,\rightarrow\,\mathbb{C}\) be a measurable function. The operator \(\mathcal{M}_{m,\,\mathbb{F},\,\mathbb{G}}\,:\,H_{1}\,\otimes\,H_{2}\to H _{1}\,\otimes\,H_{2}\) defined by_
\[\mathcal{M}_{m,\,\mathbb{F},\,\mathbb{G}}\,(\,f\,\otimes\,g\,)\] \[=\,\int_{X}\,m\,(\,x\,)\,\left\langle\,f\,\otimes\,g,\,\mathbb{ F}\,(\,x\,)\,\right\rangle\,\mathbb{G}\,(\,x\,)\,d\mu\,, \tag{9}\]
_for all \(f\,\otimes\,g\,\in\,H_{1}\,\otimes\,H_{2}\), is called continuous biframe Bessel multiplier of \(\mathbb{F}\) and \(\mathbb{G}\) with respect to \(m.\)_
**Note 4.9**.: _Let \(F_{1}\,,\,G_{1}\,:\,X_{1}\,\to\,H_{1}\) be continuous biframe Bessel mappings in \(H_{1}\) with respect to \((\,X_{1},\,\mu_{1}\,)\) and \(F_{2}\,,\,G_{2}\,:\,X_{2}\,\to\,H_{2}\) be continuous biframe Bessel mappings in \(H_{2}\) with respect to \((\,X_{2},\,\mu_{2}\,)\) and \(m_{1}:X_{1}\to\mathbb{C}\), \(m_{2}:X_{2}\to\mathbb{C}\) be two measurable functions. Let \(M_{m_{1},\,F_{1},\,G_{1}}\,:\,H_{1}\,\to\,H_{1}\) be the continuous biframe Bessel multiplier of \(F_{1}\) and \(G_{1}\) with respect to \(m_{1}\) and \(M_{m_{2},\,F_{2},\,G_{2}}\,:\,H_{2}\,\to\,H_{2}\) be the continuous biframe Bessel multiplier of \(F_{2}\) and \(G_{2}\) with respect to \(m_{2}\). Now, by Theorem 4.2, \(\mathcal{F}=F_{1}\otimes F_{2}\), \(\mathcal{G}=G_{1}\otimes G_{2}:X\to H_{1}\otimes H_{2}\) are continuous biframe Bessel mappings in \(H_{1}\,\otimes\,H_{2}\) with respect to \((\,X,\,\mu\,)\). Also, \(m\,(\,x\,)\,=\,m_{1}\,(\,x_{1}\,)\,m_{2}\,(\,x_{2}\,)\) is a measurable function. From (9), for each \(f\,\otimes\,g\,\in\,H_{1}\,\otimes\,H_{2}\), we can write_
\[\mathcal{M}_{m,\,\mathbb{F},\,\mathbb{G}}\,(\,f\,\otimes\,g\,)\] \[=\,\int\limits_{X}\,m\,(\,x\,)\,\left\langle\,f\,\otimes\,g, \,\mathbb{F}\,(\,x\,)\,\right\rangle\,\mathbb{G}\,(\,x\,)\,d\mu\] \[=\,\int\limits_{X_{1}}\,m_{1}(\,x_{1}\,)\,\left\langle f,\,F_{1} \,(\,x_{1}\,)\right\rangle_{1}G_{1}(\,x_{1}\,)\,d\mu_{1}\,\otimes\int\limits_{ X_{2}}\,m_{2}(\,x_{2}\,)\,\left\langle g,\,F_{2}\,(\,x_{2}\,)\right\rangle_{2}G_{2}( \,x_{2}\,)\,d\mu_{2}\] \[=\,M_{m_{1},\,F_{1},\,G_{1}}\,f\,\otimes\,M_{m_{2},\,F_{2},\,G_{ 2}}\,g\,=\,(\,M_{m_{1},\,F_{1},\,G_{1}}\,\otimes\,M_{m_{2},\,F_{2},\,G_{2}}\,) \,\,(\,f\,\otimes\,g\,).\]
_Thus, \(\mathcal{M}_{m,\,\mathbb{F},\,\mathbb{G}}=M_{m_{1},\,F_{1},\,G_{1}}\,\otimes\, M_{m_{2},\,F_{2},\,G_{2}}\)._
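The same finite-dimensional check as after Theorem 4.5 extends to the multiplier: for a product symbol \(m(x)=m_{1}(x_{1})\,m_{2}(x_{2})\) the discrete multiplier matrix factorises as a Kronecker product. A minimal sketch (ours; test data arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2, p, q = 3, 4, 7, 9
F1, G1 = rng.standard_normal((d1, p)), rng.standard_normal((d1, p))
F2, G2 = rng.standard_normal((d2, q)), rng.standard_normal((d2, q))
m1, m2 = rng.standard_normal(p), rng.standard_normal(q)   # symbols on X1, X2

# Discrete multiplier: M_{m,F,G} f = sum_x m(x) <f, F(x)> G(x) = G diag(m) F^T.
M1 = G1 @ np.diag(m1) @ F1.T
M2 = G2 @ np.diag(m2) @ F2.T

F = np.einsum('ip,jq->ijpq', F1, F2).reshape(d1 * d2, p * q)
G = np.einsum('ip,jq->ijpq', G1, G2).reshape(d1 * d2, p * q)
M = G @ np.diag(np.kron(m1, m2)) @ F.T     # m(x1, x2) = m1(x1) m2(x2)

assert np.allclose(M, np.kron(M1, M2))     # Note 4.9, discretely
```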
**Remark 4.10**.: _Following Theorem 3.12, the continuous biframe Bessel multiplier of \(\mathbb{F}\) and \(\mathbb{G}\) with respect to \(m\) is well defined and bounded._
|
2310.05533 | The Lagrange top and the fifth Painlevé equation | We show that the Lagrange top with a linearly time-dependent moment of
inertia is equivalent to the degenerate fifth Painlev\'e equation. More
generally we show that the harmonic Lagrange top (the ordinary Lagrange top
with a quadratic term added in the potential) is equivalent to the fifth
Painlev\'e equation when the potential is made time-dependent in an appropriate
way. Through this identification two of the parameters of the fifth Painlev\'e
equation acquire the interpretation of global action variables. We discuss the
relation to the confluent Heun equation, which is the Schr\"odinger equation of
the Lagrange top, and discuss the dynamics of $P_V$ from the point of view of
the Lagrange top. | Holger R Dullin | 2023-10-09T08:54:25Z | http://arxiv.org/abs/2310.05533v1 | # The Lagrange Top and the Fifth Painleve Equation
###### Abstract.
We show that the Lagrange top with a linearly time-dependent moment of inertia is equivalent to the degenerate fifth Painleve equation. More generally we show that the harmonic Lagrange top (the ordinary Lagrange top with a quadratic term added in the potential) is equivalent to the fifth Painleve equation when the potential is made time-dependent in an appropriate way. Through this identification two of the parameters of the fifth Painleve equation acquire the interpretation of global action variables. We discuss the relation to the confluent Heun equation, which is the Schrodinger equation of the Lagrange top, and discuss the dynamics of \(P_{V}\) from the point of view of the Lagrange top.
## 1. Introduction
The Painleve equations are six non-linear second order ODEs, all of whose moveable singularities are poles. They were initially studied by P. Painleve, B. Gambier, R. Fuchs and others around 1900, and today are at the centre of the theory of integrable systems. For a general introduction see [21, 18, 22, 23, 24, 25]. Painleve equations appear in the Ising model [24], plasma physics [17], Bose gas [19], random matrix theory [26], as reductions of integrable PDEs [1], and we refer to [1] for a more extensive list of applications. The fifth Painleve equation, denoted by \(P_{V}\), for \(w=w(\zeta)\) is
\[\frac{\mathrm{d}^{2}w}{\mathrm{d}\zeta^{2}}=\left(\frac{1}{2w}+\frac{1}{w-1} \right)\left(\frac{\mathrm{d}w}{\mathrm{d}\zeta}\right)^{2}-\frac{1}{\zeta} \frac{\mathrm{d}w}{\mathrm{d}\zeta}+\frac{(w-1)^{2}}{\zeta^{2}}\left(\alpha w +\frac{\beta}{w}\right)+\frac{\gamma w}{\zeta}+\frac{\delta w(w+1)}{w-1} \tag{1}\]
where \(\alpha,\beta,\gamma,\delta\) are constants, see, e.g., [19].
In this paper we would like to add the Lagrange top to the list of applications: The fifth Painleve equation \(P_{V}\) describes the symmetric rigid body with a fixed point in a quadratic potential, i.e. the harmonic Lagrange top of [19], with a time-dependent potential. Furthermore, the usual Lagrange top in the linear potential of gravity with a moment of inertia depending linearly on time is equivalent to \(P_{V}\) with \(\delta=0\), the so called degenerate case of \(P_{V}\). In a somewhat similar spirit a connection between a non-autonomous Euler top with extra gyroscopic terms and \(P_{VI}\) has been reported in [1].
In [19] we showed that the quantisation of the harmonic Lagrange top (i.e. a symmetric top in a quadratic potential) leads to a Schrodinger equation which is the confluent Heun equation. In [18, 22] it was shown that Heun equations are related to Painleve equation by a kind of de-quantisation procedure. In fact the relation between the Heun equation and the Painleve equation was classically known, for some modern references see
[16, 17, 18]. This motivated the idea that the harmonic Lagrange top when appropriately turned into a non-autonomous system is equivalent to \(P_{V}\). Here we are going to show that this is indeed the case. We directly establish the equivalence of \(P_{V}\) and the non-autonomous (harmonic) Lagrange top without the detour through the Heun equation, but will comment on the connection to the Heun equation in a later section. In the two final sections we consider regularizations of the singular points \(w=0,\infty\) motivated through the Lagrange top. After symplectic reduction by one \(S^{1}\) symmetry the dynamics of the Lagrange top lives on \(T^{*}S^{2}\) and gives a singularity free description of the dynamics of \(P_{V}\) on \(S^{2}\), and also a simple qualitative description of real solutions of \(P_{V}\). In the final section we consider the full singular symmetry reduction by \(S^{1}\times S^{1}\) which leads to dynamics on an orbifold. Both these descriptions could be considered as a kind of blow-up of \(P_{V}\).
## 2. Trigonometric form of \(P_{V}\)
Changing the independent variable to \(\tau=\log\zeta\) and redefining the constants according to \(\kappa_{\infty}^{2}=2\alpha\), \(\kappa_{0}^{2}=-2\beta\) gives the modified fifth Painleve equation [10] as
\[\frac{\mathrm{d}^{2}w}{\mathrm{d}\tau^{2}}=\left(\frac{1}{2w}+\frac{1}{w-1} \right)\left(\frac{\mathrm{d}w}{\mathrm{d}\tau}\right)^{2}+\frac{1}{2}(w-1)^ {2}\left(\kappa_{\infty}^{2}w-\frac{\kappa_{0}^{2}}{w}\right)+\gamma e^{\tau} w+\delta e^{2\tau}\frac{w(w+1)}{(w-1)}\,.\]
This equation has the property that every solution is locally meromorphic [17, 16]. The new form of the parameters is convenient for discussion of the affine Weyl symmetry group \(W(A_{3}^{(1)})\)[11], and in particular also for the description of special function solutions and rational solutions [14, 15, 16, 17].
The first polynomial Hamiltonian form of \(P_{V}\) was given by Okamoto [11]. A Hamiltonian form of \(P_{VI}\) in which the Hamiltonian has the standard form \(H=\frac{1}{2}p^{2}+V(q)\) was given by Manin [18, 19] where \(V\) is given in terms of the Weierstrass \(\wp\) function, although the corresponding form of \(P_{VI}\) was already described in slightly different form by Fuchs [19] and Painleve [20]. This and analogous transformations for other Painleve equations were given by Babich and Bordag [1], Iwasaki [21, p. 4.2.1], Takasaki [22], also see [1]. Introducing a new dependent variable by \(w=\coth^{2}y/2\) transforms the modified \(P_{V}\) equation into the hyperbolic form
\[\frac{d^{2}y}{d\tau^{2}}=-V^{\prime},\quad V(y)=-\frac{\kappa_{\infty}^{2}}{ 2\sinh^{2}(y/2)}+\frac{\kappa_{0}^{2}}{2\cosh^{2}(y/2)}+\frac{\gamma e^{\tau}} {2}\cosh y+\frac{\delta e^{2\tau}}{4}\cosh^{2}y\,.\]
In order to obtain an equation related to the Lagrange top instead we consider the slightly different transformation \(w=-\cot^{2}y/2\), which leads to
\[\frac{d^{2}y}{d\tau^{2}}=-V^{\prime},\quad V(y)=-\frac{\kappa_{\infty}^{2}}{ 2\sin^{2}(y/2)}-\frac{\kappa_{0}^{2}}{2\cos^{2}(y/2)}-\frac{\gamma e^{\tau}}{ 2}\cos y-\frac{\delta e^{2\tau}}{4}\cos^{2}y\,. \tag{2}\]
We call this equation the trigonometric form of \(P_{V}\). It is obtained from the hyperbolic form by the simple transformation \(y\to iy\). A frozen time version of this equation is obtained by setting \(\tau=0\) in the exponential terms, and in frozen time this is the equation for the harmonic Lagrange top, as we are now going to show.
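Though we do not rely on it, this equivalence can be checked numerically: integrate the trigonometric form and verify that \(w=-\cot^{2}y/2\) solves the modified \(P_{V}\) equation. A minimal sketch (ours; the parameter values are arbitrary, \(\kappa^{2}<0\) corresponding to real actions of the top below, and \(V^{\prime}\) is evaluated by a finite difference):

```python
import numpy as np
from scipy.integrate import solve_ivp

k_inf2, k_02, gam, dlt = -0.25, -0.09, 1.0, 0.5

def V(y, tau):                         # the potential of the trigonometric form
    return (-k_inf2 / (2 * np.sin(y / 2) ** 2)
            - k_02 / (2 * np.cos(y / 2) ** 2)
            - gam * np.exp(tau) * np.cos(y) / 2
            - dlt * np.exp(2 * tau) * np.cos(y) ** 2 / 4)

def rhs(tau, s, h=1e-6):               # y'' = -V'(y), V' by central difference
    y, p = s
    return [p, -(V(y + h, tau) - V(y - h, tau)) / (2 * h)]

sol = solve_ivp(rhs, [0.0, 3.0], [1.2, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
tau = np.linspace(0.3, 2.7, 400)
y = sol.sol(tau)[0]

w = -1.0 / np.tan(y / 2) ** 2          # w = -cot^2(y/2), hence w <= 0
dw = np.gradient(w, tau)
d2w = np.gradient(dw, tau)

# residual of the modified P_V equation; small up to finite-difference noise
res = (d2w - (1 / (2 * w) + 1 / (w - 1)) * dw ** 2
       - 0.5 * (w - 1) ** 2 * (k_inf2 * w - k_02 / w)
       - gam * np.exp(tau) * w
       - dlt * np.exp(2 * tau) * w * (w + 1) / (w - 1))
print(np.max(np.abs(res) / (1 + np.abs(d2w))))
```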
## 3. The Lagrange top
The Lagrange top is a symmetric heavy rigid body with a fixed point on the symmetry axis. The configuration space is \(SO(3)\). In Euler angles \(\phi,\theta,\psi\) it has a metric on \(SO(3)\) defined by the kinetic energy, see, e.g., [10], as
\[T_{\rm rot}=\tfrac{1}{2}I_{1}(\dot{\phi}^{2}\sin^{2}\theta+\dot{\theta}^{2})+ \tfrac{1}{2}I_{3}(\dot{\phi}\cos\theta+\dot{\psi})^{2}\,, \tag{3}\]
where \(\phi\) and \(\psi\) are \(2\pi\)-periodic angles and \(\theta\in[0,\pi]\), and \(I_{1}=I_{2}\) and \(I_{3}\) are the principal moments of inertia of the body with respect to the fixed point. A Legendre transformation leads to the corresponding Hamiltonian
\[H=\frac{1}{2I_{1}}\left(p_{\theta}^{2}+\frac{1}{\sin^{2}\theta}(p_{\phi}^{2}+p _{\psi}^{2}-2p_{\phi}p_{\psi}\cos\theta)\right)+\frac{1}{2}\left(\frac{1}{I_{ 3}}-\frac{1}{I_{1}}\right)p_{\psi}^{2}+U(\cos\theta) \tag{4}\]
where the potential \(U\) depends on \(z=\cos\theta\), the spatial \(z\)-coordinate of the tip of the axis of the top. The usual Lagrange top in the field of gravity has only a linear term proportional to \(z\) in the potential. The harmonic Lagrange top studied in [1] adds a quadratic term and hence we consider \(U(z)=cz+dz^{2}\). The potential is left somewhat general as a function \(U\) because we will later also allow for time-dependence in \(U\). Both momenta \(p_{\phi}\) and \(p_{\psi}\) are constants of motion, since the angles \(\phi\) for rotation about the direction of gravity and \(\psi\) for rotation about the symmetry axis of the body are both cyclic. The kinetic energy in the above Hamiltonian is split into a kinetic term that corresponds to the "round" top with all moments of inertia equal to \(I_{1}\), and an asymmetry "correction" proportional to the angular momentum for rotation about the symmetry axis of the body \(p_{\psi}^{2}\). This correction term is irrelevant for the dynamics of \(\theta\).
In the Lagrange top with time-dependent moments of inertia and/or time-dependent potential the momenta \(p_{\phi}\) and \(p_{\psi}\) are still constants of motion. Thus the essential dynamics is given by a (singularly) reduced one degree of freedom system in which the momenta \(p_{\phi}\) and \(p_{\psi}\) are parameters and all the terms but \(p_{\theta}^{2}\) are considered as the effective potential of the reduced system
\[H=\frac{1}{2I_{1}}p_{\theta}^{2}+U_{\rm eff}(\cos\theta;p_{\phi},p_{\psi})\,. \tag{5}\]
The angles \(\phi\) and \(\psi\) are driven by the dynamics of \(\theta\) through Hamilton's equation
\[\frac{d\phi}{dt}=\frac{p_{\phi}-p_{\psi}\cos\theta}{I_{1}\sin^{2}\theta}, \quad\frac{d\psi}{dt}=\frac{p_{\psi}-p_{\phi}\cos\theta}{I_{1}\sin^{2}\theta }+\left(\frac{1}{I_{3}}-\frac{1}{I_{1}}\right)p_{\psi}\,. \tag{6}\]
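As an illustration, the following minimal sketch (ours, not from the paper; the parameter values are arbitrary) integrates the reduced system (5) together with the driven angles (6) for the autonomous top; the printed interval is the nutation band \([\theta_{min},\theta_{max}]\) discussed next:

```python
import numpy as np
from scipy.integrate import solve_ivp

I1, I3, g = 1.0, 0.5, 1.0            # moments of inertia; U(z) = g z
pphi, ppsi = 0.8, 1.5                # the conserved momenta

def rhs(t, s):
    th, pth, phi, psi = s
    c, sn = np.cos(th), np.sin(th)
    N = pphi**2 + ppsi**2 - 2 * pphi * ppsi * c
    # dU_eff/dtheta for the effective potential of (5), with U(z) = g z
    dUeff = pphi * ppsi / (I1 * sn) - N * c / (I1 * sn**3) - g * sn
    dphi = (pphi - ppsi * c) / (I1 * sn**2)          # first equation of (6)
    dpsi = (ppsi - pphi * c) / (I1 * sn**2) + (1/I3 - 1/I1) * ppsi
    return [pth / I1, -dUeff, dphi, dpsi]

sol = solve_ivp(rhs, [0, 50], [1.0, 0.0, 0.0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
th = sol.sol(np.linspace(0, 50, 5000))[0]
print("theta oscillates in [%.3f, %.3f]" % (th.min(), th.max()))
```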
The Lagrange top (without time-dependent terms) is Liouville integrable with integrals \(H=E\), \(p_{\phi}\), \(p_{\psi}\). The typical motion is quasiperiodic on 3-dimensional tori in phase space. In this motion the tip of the axis of the top oscillates between \(\theta_{min}\) and \(\theta_{max}\) determined by \(p_{\phi}\), \(p_{\psi}\), and \(E\), while rotating about its axis. The constants of motion \(p_{\phi}\) and \(p_{\psi}\) are global action variables, they generate \(2\pi\)-periodic flows which are the rotation about the axis of gravity and the rotation about the axis of symmetry of the top, respectively. The third action variable is given by a complete elliptic integral of 3rd kind. Solutions on 2-dimensional tori occur for \(\theta=const\) in which the tip of the axis of the top traces out
a horizontal circle. Isolated periodic solutions are the so-called sleeping tops with \(\theta=0\) (upright) or \(\theta=\pi\) (hanging) where the axis of symmetry is parallel to the direction of gravity and the top is rotating about this axis. The sleeping tops are only possible for \(p_{\phi}\pm p_{\psi}=0\), so that the term in the Hamiltonian that is singular for \(\theta\to 0\) or \(\theta\to\pi\), respectively, disappears. These linear combinations of \(p_{\phi}\) and \(p_{\psi}\) will play an essential role in the following. Finally, for \(p_{\phi}=p_{\psi}=0\) there are two equilibrium points corresponding to minimal and maximal potential energy.
## 4. The equivalence between \(P_{V}\) and the Lagrange top
Now the stage is set to show that the two dynamical systems described in the previous two sections are actually equivalent with the appropriate choice of variables, parameters, and potentials.
**Theorem 1**.: _The trigonometric form of \(P_{V}\) is the equation of motion for the harmonic Lagrange top where \(y=\theta\), \(\kappa_{0}^{2}=-(p_{\phi}+p_{\psi})^{2}/4\), \(\kappa_{\infty}^{2}=-(p_{\phi}-p_{\psi})^{2}/4\), \(\tau=t/I_{1}\), and \(U\) is the time-dependent potential \(U(z)=-(\frac{1}{2}\gamma e^{\tau}z+\frac{1}{4}\delta e^{2\tau}z^{2})/I_{1}\)._
Proof.: Consider the metric of the round \(SO(3)\) of the rigid body with a fixed point given by
\[\frac{1}{I_{1}}ds^{2}=d\theta^{2}+d\phi^{2}+d\psi^{2}+2\cos\theta d\phi d\psi\]
obtained from the kinetic energy (3) for \(I_{3}=I_{1}\). This is a metric of constant scalar curvature \(3/(2I_{1})\) whose Ricci tensor is proportional to the metric with proportionality factor \(1/(2I_{1})\). Hence up to a covering it is equivalent to the metric of the round sphere \(S^{3}\). To make this explicit introduce new angles \(\phi_{\pm}\) through \(\phi_{\pm}=\phi\pm\psi\). In these coordinates the metric becomes diagonal
\[\frac{1}{I_{1}}ds^{2}=d\theta^{2}+\cos^{2}\tfrac{\theta}{2}d\phi_{+}^{2}+\sin ^{2}\tfrac{\theta}{2}d\phi_{-}^{2}\;,\]
and this is the metric of the Hopf coordinates on the sphere \(S^{3}\) with angles \(\phi_{\pm}\). Note that at the coordinate singularity of the Euler angles where \(\theta=0\) only \(\phi_{+}\) is defined, while at \(\theta=\pi\) only \(\phi_{-}\) is defined. Extending this to a symplectic transformation the momenta are given by \(2p_{\pm}=p_{\phi}\pm p_{\psi}\) and transforming (4) the new Hamiltonian is
\[H=\frac{1}{2I_{1}}\left(p_{\theta}^{2}+\frac{p_{+}^{2}}{\cos^{2}\theta/2}+ \frac{p_{-}^{2}}{\sin^{2}\theta/2}\right)+\frac{1}{2}\left(\frac{1}{I_{3}}- \frac{1}{I_{1}}\right)(p_{+}-p_{-})^{2}+U(\cos\theta)\,. \tag{7}\]
The overall factor \(1/I_{1}\) can be removed by introducing a new time \(\tau=t/I_{1}\). The term proportional to \((p_{+}-p_{-})^{2}\) has no influence on the dynamics of \(\theta\) and can be ignored. Thus define
\[U_{\text{eff}}(\cos\theta)=\frac{p_{+}^{2}}{2\cos^{2}\theta/2}+\frac{p_{-}^{2 }}{2\sin^{2}\theta/2}+I_{1}U(\cos\theta)\]
as the effective potential relevant for the dynamics of \(\theta(\tau)\). Now Hamilton's equations for \(\theta\) are equivalent to the trigonometric form (2) of \(P_{V}\) in \(y\) if we set \(V=U_{\text{eff}}\) and hence the
parameters in the effective potential are \(\kappa_{0}^{2}=-p_{+}^{2}\), \(\kappa_{\infty}^{2}=-p_{-}^{2}\) and the coefficients in the potential \(U(z)=cz+dz^{2}\) need to be chosen as \(c=-\frac{1}{2}\gamma e^{\tau}/I_{1}\) and \(d=-\frac{1}{4}\delta e^{2\tau}/I_{1}\).
The parameters \(p_{\pm}\) are action variables and are therefore real for the Lagrange top, and hence the parameters \(\kappa_{0}\), \(\kappa_{\infty}\) in \(P_{V}\) will be purely imaginary. In particular this means that any rational solutions that appear for integer or half-integer values of \(\kappa_{0}\), \(\kappa_{\infty}\), see, e.g., [10, 12, 13], are not relevant for the real Lagrange top, similarly for special function solutions. The transformation \(w\to 1/w\) does map \(P_{V}\) into itself with changed parameters \((\alpha,\beta,\gamma)\to(-\beta,-\alpha,-\gamma)\). However in terms of the signed parameters this becomes \((\kappa_{0}^{2},\kappa_{\infty}^{2},\gamma)\to(\kappa_{\infty}^{2},\kappa_{0}^{2},-\gamma)\) and so is not able to flip the signs of \(\kappa_{0}^{2}\), \(\kappa_{\infty}^{2}\). The only rational solution that does exist is the seed solution for Backlund transformations \(w=-1\) for \(\alpha+\beta=0\) and \(\gamma=0\). This is an equilibrium point of the potential \(\delta\cos^{2}\theta\) at \(\theta=\pi/2\). The other two equilibrium points at \(\theta=0,\pi\) correspond to the singularities \(w\to-\infty\) and \(w\to 0\) in \(P_{V}\), respectively.
The transformation of the metric to diagonal form suggests that another natural identification of \(P_{V}\) can be made with the degenerate Carl Neumann system on \(T^{*}S^{3}\), see [1], where either the size of the sphere and/or the potential is time-dependent.
A different time-dependence for the Lagrange top is achieved by changing the moments of inertia, which is used to great effect, e.g., by figure skaters, and the next theorem is about this time-dependence. Note, however, that the figure skater mainly changes the moment of inertia \(I_{3}\) about the axis of symmetry, which by way of (6) will change the dynamics of \(\psi\), the angle of rotation about that axis. Typically there will also be a small change in the moment of inertia \(I_{1}\), and it is the time-dependence of \(I_{1}\) that changes the dynamics of \(\theta\), and thus gives the correspondence with \(P_{V}\).
**Theorem 2**.: _The trigonometric form of the degenerate \(P_{V}\) equation where \(\delta=0\) is the equation of motion for the Lagrange top with time-dependent moment of inertia \(I_{1}(t)=a+bt\)._
Proof.: In this case the potential is simply \(U=g\cos\theta\). The proof proceeds as in Theorem 1 until the time is scaled. In order to remove the time-dependent moment of inertia \(I_{1}(t)\) from the kinetic energy introduce a new time by \(dt=I_{1}(t)d\tilde{\tau}\). Now let \(I_{1}(t)=a+bt\) and integration gives \(\log(a+bt)=b(\tilde{\tau}-\tau_{0})\) and hence \(I_{1}(t)=ae^{b\tilde{\tau}}\). Finally define \(\tau=b\tilde{\tau}\) and the Hamiltonian
\[H=\frac{1}{2}p_{\theta}^{2}+U_{\rm eff}(\cos\theta),\quad U_{\rm eff}(\cos \theta)=\frac{p_{+}^{2}}{2b^{2}\cos^{2}\theta/2}+\frac{p_{-}^{2}}{2b^{2}\sin^{ 2}\theta/2}+e^{\tau}\gamma\cos\theta\]
where \(\gamma=ag/b^{2}\) is that of the degenerate \(P_{V}\) equation. Transforming back to the original time \(t\) we see that the Hamiltonian of the Lagrange top in which \(I_{1}(t)=a+bt\) directly gives the degenerate \(P_{V}\) equation in the original time \(t\).
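A direct numerical illustration of the proof (ours; all numbers arbitrary): integrating the top with \(I_{1}(t)=a+bt\) in the physical time \(t\) and the degenerate trigonometric \(P_{V}\) in \(\tau=\log\left((a+bt)/a\right)\) produces the same \(\theta\):

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, g = 1.0, 0.4, 1.0
pp, pm = 0.3, 0.5                       # p_+ and p_-

def Wp(th):                             # derivative of the centrifugal terms
    return (pp**2 * np.sin(th/2) / (2 * np.cos(th/2)**3)
            - pm**2 * np.cos(th/2) / (2 * np.sin(th/2)**3))

def top(t, s):                          # physical time, I1(t) = a + b t
    th, pth = s
    return [pth / (a + b*t), -Wp(th) / (a + b*t) + g * np.sin(th)]

def pv(tau, s):                         # trig. degenerate P_V, gamma = a g/b^2
    th, v = s
    return [v, -Wp(th) / b**2 + (a*g/b**2) * np.exp(tau) * np.sin(th)]

th0, T = 1.3, 8.0
st = solve_ivp(top, [0, T], [th0, 0.0], rtol=1e-11, atol=1e-12,
               dense_output=True)
# at t = 0: dtheta/dtau = p_theta/b = 0, matching the initial slope below
sv = solve_ivp(pv, [0, np.log((a + b*T)/a)], [th0, 0.0], rtol=1e-11,
               atol=1e-12, dense_output=True)

t = np.linspace(0, T, 400)
print(np.max(np.abs(st.sol(t)[0] - sv.sol(np.log((a + b*t)/a))[0])))
```

The two solutions agree to integration tolerance, as asserted by Theorem 2.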
## 5. The connection to the confluent Heun equation
The confluent Heun equation written in the self-adjoint form (known as the generalised spheroidal wave equation) is given by the linear 2nd order differential operator
\[L_{CH}=-\frac{1}{\sin\theta}\partial_{\theta}(\sin\theta\partial_{\theta})+ \frac{p_{+}^{2}}{\cos^{2}\theta/2}+\frac{p_{-}^{2}}{\sin^{2}\theta/2}+2I_{1}c \cos\theta+2I_{1}d\,\cos^{2}\theta \tag{8}\]
as \(L_{CH}\psi=\lambda\psi\) where the eigenvalue \(\lambda\) is also called the accessory parameter in the context of the Heun equation. The operator \(L_{CH}\) is obtained from the Hamiltonian of the harmonic Lagrange top (7) by canonical quantisation, i.e. by replacing the kinetic energy with the negative Laplace-Beltrami operator. The trivial separated equations for \(\partial_{\phi_{\pm}}^{2}\) with periodic boundary conditions are solved and integer values \(p_{\pm}\) are inserted into the remaining operator. The algebraic form of the equation is obtained by introducing \(z=\cos\theta\) which is the \(z\)-coordinate of the axis of the top. The resulting confluent Heun differential operator in algebraic form is
\[L_{CH}=-\partial_{z}((1-z^{2})\partial_{z})+\frac{2p_{+}^{2}}{1+z}+\frac{2p_{- }^{2}}{1-z}+2I_{1}cz+2I_{1}d\,z^{2}\,.\]
The indices at the regular singular points \(z=\mp 1\) are \(p_{+}\) and \(p_{-}\), respectively. Extending \(z=\cos\theta\) to a canonical transformation turns the Hamiltonian (4) into
\[H=\frac{1}{2I_{1}(t)}\left((1-z^{2})p_{z}^{2}+\frac{2p_{+}^{2}}{1+z}+\frac{2p_ {-}^{2}}{1-z}\right)+U(z)\,. \tag{9}\]
Compared to \(L_{CH}\) only the first term changes sign, since \(p_{+}\) and \(p_{-}\) in \(L_{CH}\) are already quantum numbers (or classical actions) and not differential operators any more. In terms of the original variable \(w\) of \(P_{V}\) introducing \(z\) amounts to the Mobius transformation \(w=-(1+z)/(1-z)\) that maps the interval \([-1,1]\) in \(z\) to \([0,-\infty]\) in \(w\). Absorbing \(I_{1}\) into \(U\) as before by scaling time we find
\[\frac{dz}{d\tau}=(1-z^{2})p_{z},\quad\frac{dp_{z}}{d\tau}=-\frac{\partial H}{ \partial z}\]
and eliminating \(p_{z}\) we obtain a version of \(P_{V}\) that is the de-quantisation of the algebraic form of the generalised spheroidal wave equation (aka the quantised harmonic Lagrange top), which is
\[\frac{1}{1-z^{2}}\frac{d^{2}z}{d\tau^{2}}=\frac{-z}{(1-z^{2})^{2}}\left(\frac {dz}{d\tau}\right)^{2}+\frac{p_{+}^{2}}{(1+z)^{2}}-\frac{p_{-}^{2}}{(1-z)^{2} }-\gamma e^{\tau}-\frac{1}{2}\delta e^{2\tau}z\,. \tag{10}\]
This equation has singularities at \(z=\pm 1\). Interestingly, it is also this form that for \(\delta=0\) is most easily mapped to \(P_{III}\)[13].
A natural question that arises is what the actual quantisation of \(P_{V}\) gives. Since it is a Hamiltonian system with explicit time-dependence this leads to a time-dependent Schrodinger equation
\[i\hbar\frac{\partial}{\partial t}\Psi(\theta,t)=L_{CH}\Psi(\theta,t)\]
where now the potential in \(L_{CH}\) in (8) has the time-dependence that comes from \(P_{V}\). This is a \(1+1\)-dimensional PDE for \(\Psi\). Some steps in this direction have been taken in [22]. Interesting connections between quantisation and the Painleve equation are discussed in [2]. In [10] we have shown that the quantised Lagrange top, i.e. the confluent Heun equation, has quantum monodromy, which means there is a defect in the joint spectrum of the corresponding commuting operators. It would be very interesting to try to understand how this quantum monodromy is connected to the iso-monodromy problem associated to \(P_{V}\).
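As a concrete companion to (8), the following sketch (ours; the conservative finite-difference discretisation, the Dirichlet boundary treatment and all parameter values are our choices, not from the above references) computes the lowest few accessory parameters \(\lambda\) of the frozen-time operator:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

pp, pm, c, d, I1 = 1.0, 1.0, 0.5, 0.25, 1.0   # p_± integer quantum numbers

N = 2000
h = np.pi / (N + 1)
th = h * np.arange(1, N + 1)                  # interior grid on (0, pi)
s, sp, sm = np.sin(th), np.sin(th + h/2), np.sin(th - h/2)

V = (pp**2 / np.cos(th/2)**2 + pm**2 / np.sin(th/2)**2
     + 2*I1*c*np.cos(th) + 2*I1*d*np.cos(th)**2)

# conservative scheme for -(1/sin)d/dth(sin d/dth), symmetrised by sqrt(sin)
diag = (sp + sm) / (h**2 * s) + V
off = -sp[:-1] / (h**2 * np.sqrt(s[:-1] * s[1:]))
lam = eigh_tridiagonal(diag, off, select='i', select_range=(0, 4))[0]
print(lam)                                    # lowest accessory parameters
```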
## 6. Dynamics on \(S^{2}\)
The motion of the Lagrange top is smooth on \(T^{*}SO(3)\). Using Euler angles introduces a coordinate singularity at \(\theta=0,\pi\). This coordinate singularity corresponds to a pole in \(P_{V}\). In this section we are going to use the reduction of the Lagrange top to \(T^{*}S^{2}\) to obtain a global singularity free description of the dynamics on \(S^{2}\). This can be considered as a physically motivated blow-up of \(P_{V}\). The full symmetry group of the Lagrange top is \(S^{1}\times S^{1}\), however, there is isotropy of the group action when the rotation axes are parallel, and hence the fully symmetry reduced system is singular at \(\theta=0,\pi\). Only reducing by one of the two \(S^{1}\) symmetries leads to a smooth system with two degrees of freedom.
After reduction by the body symmetry the Lagrange top is a Hamiltonian dynamical system on \(T^{*}S^{2}\). For more details on the derivation of these equations and the associated Poisson structure see, e.g., [10]. Denote the axis of the top by \(\boldsymbol{a}\in S^{2}\subset\mathbb{R}^{3}\), \(|\boldsymbol{a}|=1\), and by \(l\) the momentum vector in the tangent space such that \(l\cdot\boldsymbol{a}=L_{3}=const\). Denote the components of these vectors by \((a_{x},a_{y},a_{z})\) and \((l_{x},l_{y},l_{z})\). Note that in (10) the single dependent variable is \(a_{z}\equiv z\). The Hamiltonian of the system written in \((\boldsymbol{a},l)\) is
\[H=\frac{1}{2}|l|^{2}+U(a_{z})\]
with equations of motion
\[\boldsymbol{a}^{\prime}=-\boldsymbol{a}\times l,\quad l^{\prime}=-\boldsymbol {a}\times\frac{\partial U}{\partial\boldsymbol{a}}=-\boldsymbol{a}\times \boldsymbol{e}_{z}U^{\prime}(a_{z})\,.\]
Here we assume that time has been changed so that \(I_{1}\) is absorbed into \(U\), possibly creating time-dependence, and the dash denotes derivatives with respect to the time \(\tau\). In the usual Lagrange top \(U\) is linear in \(z\equiv a_{z}\) and hence \(U^{\prime}=ce^{\tau}\), or in the harmonic Lagrange top it is \(U^{\prime}=ce^{\tau}+2da_{z}e^{2\tau}\). The case of constant moment of inertia is recovered by setting \(\tau=0\). The system is invariant under simultaneous rotation of \(\boldsymbol{a}\) and \(l\) about the \(z\)-axis, and the corresponding conserved quantity is \(l_{z}\). Thus after full symmetry reduction the system has one degree of freedom. The description presented earlier using Euler angles directly provides this one degree of freedom system. In that notation we have \(a_{z}=\cos\theta\), \(l\cdot\boldsymbol{a}=L_{3}=p_{\psi}\) and \(l_{z}=p_{\phi}\). The problem with Euler angles is that they are singular for \(\theta=0,\pi\) which corresponds to a coordinate singularity in the Euler angles because for these \(\theta\) the angles \(\phi\) and \(\psi\) are not uniquely defined, but only their sum or difference is. In \(P_{V}\) the corresponding singularities are \(z=\pm 1\) in (10) or at \(w=0\) and \(w=-\infty\) in (1). The present description of the Lagrange top as a system on \(T^{*}S^{2}\) has the advantage that
it provides a natural smooth coordinate system near these singularities. Note that for real motions \(w\leq 0\) and in particular the singularity of \(P_{V}\) at \(w=1\) does not correspond to a real motion of the real Lagrange top in real time.
Since \(l_{z}\) is constant and \(a_{z}\) is determined through \(a_{x}^{2}+a_{y}^{2}+a_{z}^{2}=1\) we can project the equations onto the \(xy\)-components and write it in complex form with \(a=a_{x}+ia_{y}\) and \(l=l_{x}+il_{y}\) as (a deceptively linear looking) non-linear system on \(\mathbb{C}^{2}\)
\[\begin{pmatrix}a^{\prime}\\ l^{\prime}\end{pmatrix}=i\begin{pmatrix}-l_{z}&a_{z}\\ -U^{\prime}(a_{z})&0\end{pmatrix}\begin{pmatrix}a\\ l\end{pmatrix}\,. \tag{11}\]
This system of ODEs has an equilibrium point at the origin, which corresponds to the north- or south-pole of the sphere. Linearisation about this equilibrium amounts to setting \(a_{z}=\pm 1\). We keep \(a_{z}\) in the equation to treat both signs simultaneously. The resulting 2nd order linear equation is
\[a^{\prime\prime}+il_{z}a^{\prime}-a_{z}U^{\prime}(a_{z})a=0,\quad l_{z}=const,\,U^{\prime}(a_{z})=ce^{\tau}+2de^{2\tau}a_{z},\,a_{z}=\pm 1\,.\]
Returning to the original time \(t=e^{\tau}\) we find \(a^{\prime}=t\dot{a}\) and \(a^{\prime\prime}=t^{2}\ddot{a}+t\dot{a}\) and after cancelling an overall factor of \(t\)
\[t\ddot{a}+(1+il_{z})\dot{a}-a_{z}(c+2dta_{z})a=0\,. \tag{12}\]
For \(\delta=0\) this is the Bessel equation, while in general it is the confluent hypergeometric equation. If we remove the time-dependence in the equation by setting \(t=1\) the linear equation describes the Hopf bifurcation by which the sleeping top is de-stabilised when the spin rate \(l_{z}\) becomes too slow, see, e.g., [1]. With time-dependent moment of inertia passing the stability threshold results in the onset of oscillations.
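The frozen-time threshold can be made explicit (our sketch; numbers arbitrary): setting \(t=1\) and \(a_{z}=1\) in (12), the characteristic roots of \(\lambda^{2}+il_{z}\lambda-(c+2d)=0\) stop having a positive real part once \(l_{z}^{2}>4(c+2d)\), i.e. once the top spins fast enough:

```python
import numpy as np

c, d = 0.5, 0.25                  # frozen-time potential, U'(a_z) = c + 2 d a_z
k = c + 2 * d                     # a_z U'(a_z) at the upright position a_z = 1

for lz in (0.5, 1.0, 2.0, 2.5):   # threshold at l_z = sqrt(4 k) = 2 here
    lam = np.roots([1.0, 1j * lz, -k])
    print("l_z = %.2f   max Re(lambda) = %+.3f" % (lz, max(lam.real)))
```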
Solutions that are interesting from a physical point of view are those that approach \(a_{z}\equiv z=\pm 1\) for \(\tau\to\pm\infty\). The blow up of \(P_{V}\) near singularities has been studied in [17]. Adding the non-linear term \(ia_{z}^{\prime}l\) to (12), where now \(a_{z}=\pm\sqrt{1-a\bar{a}}\) and \(l\) is expressed in terms of \(a\) and its derivative using (11), gives
\[t\ddot{a}+(1+il_{z})\dot{a}-a_{z}(c+2dta_{z})a=i\frac{a\dot{\bar{a}}+\bar{a}\dot{a}} {2a_{z}^{2}}(t\dot{a}+il_{z}a)\]
It would be interesting to study how this equation compares to the blown up \(P_{V}\). The main advantage of the equation when written in \(a=a_{x}+ia_{y}\) instead of \(a_{z}\) is that it is regular near \(a_{z}=\pm 1\). There is, however, a square root in the equation because \(a_{z}=\pm\sqrt{1-a\bar{a}}\).
We conclude with a qualitative discussion of solutions of \(P_{V}\) corresponding to the real Lagrange top with time-dependent moment of inertia. It appears that the parameters relevant for this are \(\alpha\leq 0\), \(\beta\geq 0\), \(\gamma>0\), \(\delta=0\). For \(\delta>0\) (i.e. with the extra harmonic terms in the top) this is the class of solutions studied in [14]. In section 3 we gave a quick review of the properties of solutions of the time-independent Lagrange top. What changes with the time dependence? The simplest case of the pendulum with time-dependent length occurs for \(p_{+}=p_{-}=0\). For \(p_{\theta}=0\) there are two equilibrium solutions at \(z=\pm 1\), the minimum and the maximum of the potential. Now consider non-zero \(p_{\theta}\). Starting at \(\tau=-\infty\) in this case \(\theta\) increases linearly with time with slope given by \(p_{\theta}\). When \(\tau\) crosses towards positive times the potential becomes important, and for \(\tau\to+\infty\) the solution
spirals to a potential minimum with \(\theta=(2n-1)\pi\) for some integer \(n\). While spiralling towards the minimum the energy goes to \(-\infty\), since \(\cos\theta\to-1\) and it is multiplied by an exponentially growing term. Increasing the initial \(p_{\theta}\) the solution will eventually change from "basin" \(n\) to basin \(n+1\). By continuity between these lies a unique solution with a particular \(p_{\theta}\) that will asymptote to the potential maximum with \(\theta=2n\pi\). On a qualitative level the behaviour is like a pendulum with friction, but the physical process (and the details of the solution) are of course very different. Nevertheless, in both systems the exceptional solutions that approach the unstable maximum for \(\tau\to+\infty\) exist. Now we are going to discuss solutions where at least one \(p_{\pm}\) is non-zero. We are going to discuss the limits \(\tau\to-\infty\) and \(\tau\to+\infty\) in turn.
For \(\tau\to-\infty\) the potential terms vanish, and the dynamics is free motion on \(SO(3)\). Considering the double cover \(S^{3}\) this implies that the solutions are great circles on \(S^{3}\) (recall that the term proportional to \(p_{\psi}^{2}\) in the Hamiltonian has no counterpart in \(P_{V}\)). Hence \(z\) will oscillate between a minimum and a maximum which depend on the values of \(p_{\pm}\). The only solutions that do not oscillate in this limit correspond to the great circle that has \(z=0\). This solution is possible only when \(p_{+}p_{-}=0\).
When \(\tau\) reaches the vicinity of \(0\) the system starts to behave like the Lagrange top. This regime is short-lived unless all parameters are large. Eventually for \(\tau\to+\infty\) the potential dominates the Hamiltonian. As for the pendulum most solutions approach the potential minimum \(z=-1\) in this limit. In the time-independent Lagrange top \(z=-1\) is only accessible when the conserved momentum satisfies \(p_{+}=0\), because otherwise the energy diverges, which is a contradiction to energy conservation. However, in the time-dependent case the energy is not constant, and in fact \(\dot{E}=\partial H/\partial\tau=\gamma e^{\tau}z\) which is negative for negative \(z\). Thus the system will lose energy and the solutions approach \(z=-1\) in an oscillatory manner.
A different class of interesting solutions are those that approach the upright sleeping top with \(z=1\) for \(\tau\to\infty\). Solutions for which \(z\equiv 1\) certainly exist but cannot be seen in \(P_{V}\) because of the singularity of the equation at \(z=1\). However, for dynamics on \(S^{2}\) the vectors \(\boldsymbol{a}=(0,0,1)\) and \(l=(0,0,l_{z})\) clearly correspond to that equilibrium solution. Can this solution be approached from \(z<1\)? In the time-independent case the answer is yes if \(p_{-}=0\) and the sleeping top is unstable (i.e. \(l_{z}\) is not too large), in which case the equilibrium has a stable manifold along which it can be approached. With time-dependence for \(\tau\to\infty\) this will be harder, but by a continuity argument similar to that applied to the pendulum this is possible at least when \(p_{-}=0\). Thus the most special solutions of \(P_{V}\) related to real motions of the time-dependent Lagrange top are those that connect \(z=0\) at \(\tau=-\infty\) to \(z=1\) at \(\tau=+\infty\) without any oscillations.
## 7. \(P_{V}\) on an orbifold
The full symmetry reduction of the Lagrange top by both its \(S^{1}\) symmetries leads to a Poisson structure in \(\mathbb{R}^{3}\) whose Casimir defines a smooth non-compact surface for most values of \(p_{\pm}\), which becomes an orbifold when either \(p_{+}\) or \(p_{-}\) vanishes. The singularity appears because the \(S^{1}\times S^{1}\) action is not free but has isotropy exactly for the sleeping tops, which occur when
\(p_{+}\) or \(p_{-}\) vanishes. In the following we are going to describe this orbifold and its regularisation / blow-up. This will allow for a smooth description of motion at and near \(w=0\) and \(w=\infty\) for arbitrary time.
The dynamics on \(S^{2}\) with rotational symmetry around the \(z\)-axis is best described using complex variables \(a=a_{x}+ia_{y}\), \(l=l_{x}+il_{y}\). The \(S^{1}\) action in these variables is simply multiplication \((a,l)\mapsto(ae^{i\phi},le^{i\phi})\) and the invariants of the \(S^{1}\) action are \(a\bar{a}\geq 0\), \(T=l\bar{l}\geq 0\), and the complex \(a\bar{l}=u+iv\). These invariants satisfy the relation \(u^{2}+v^{2}=|a\bar{l}|^{2}=a\bar{a}\,T\). The trivial invariants \(z\) and \(l_{z}\) are related to these invariants through \(z^{2}+a\bar{a}=1\) and \(zl_{z}+u=\boldsymbol{a}\cdot l=L_{3}\). Using these to eliminate \(u\) and \(a\bar{a}\) in the relation gives the cubic Casimir
\[C(T,z,v)=(L_{3}-zl_{z})^{2}+v^{2}-(1-z^{2})T=0\]
and the Hamiltonian
\[H(T,z)=\tfrac{1}{2}T+U(z)\,.\]
The Poisson structure is given by taking the cross product with the gradient of \(C\). The zero-level of the Casimir defines a surface which is the reduced phase space. It is a non-compact surface. It is smooth unless \(L_{3}\pm l_{z}=0\). When \(L_{3}\pm l_{z}=0\) then the reduced phase space is an orbifold with singular point \(z=\mp 1\), \(v=0\). We are now going to show that these singular points are indeed conical singularities.
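Writing \(x=(T,z,v)\), the reduced equations of motion are \(\dot{x}=\nabla C\times\nabla H\). A minimal sketch (ours; the sign convention is fixed by matching \(\boldsymbol{a}^{\prime}=-\boldsymbol{a}\times l\), the potential is frozen to \(U(z)=\gamma z\), and all numbers are arbitrary) integrates this flow and confirms that it stays on the Casimir surface:

```python
import numpy as np
from scipy.integrate import solve_ivp

L3, lz, gam = 0.7, 0.4, 1.0                # L3 = l.a, l_z, and U(z) = gam z

def reduced(t, x):                         # xdot = grad C x grad H
    T, z, v = x
    Up = gam                               # U'(z) for the linear potential
    Cz = -2*lz*(L3 - z*lz) + 2*z*T         # dC/dz
    return [-2*v*Up,                       # dT/dt
            v,                             # dz/dt: recovers z' = v
            -(1 - z**2)*Up - 0.5*Cz]       # dv/dt

z0, v0 = 0.2, 0.3                          # start on C = 0: solve for T
T0 = ((L3 - z0*lz)**2 + v0**2) / (1 - z0**2)
sol = solve_ivp(reduced, [0, 20], [T0, z0, v0], rtol=1e-10, atol=1e-12)

T, z, v = sol.y
print(np.max(np.abs((L3 - z*lz)**2 + v**2 - (1 - z**2)*T)))   # ~ 0
```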
From now on \(L_{3}=\mp l_{z}\). Firstly, translate the singular point to the origin, \(z=\mp 1\pm\Delta z\), such that the Casimir becomes \(l_{z}^{2}\Delta z^{2}-2T\Delta z+v^{2}+T\Delta z^{2}\). Both singular points at \(z=-1+\Delta z\) and \(z=1-\Delta z\) lead to the same Casimir. Secondly, rotate the \((T,\Delta z)\) plane so that the Hessian at the origin (which is the singular point) is diagonal. Thirdly, scale the new coordinates so that the eigenvalues of the Hessian at the origin are equal in magnitude. Together this gives an affine area-preserving transformation of \((T,z)\) to new coordinates \((X,Y)\) such that the Casimir is
\[\tilde{C}(X,Y,v)=-X^{2}+Y^{2}+v^{2}+(X+Y)^{2}(X\lambda_{+}+Y\lambda_{-})(4+l_ {z}^{4})^{-3/4}\]
where \(2\lambda_{\pm}=l_{z}^{2}\pm\sqrt{4+l_{z}^{4}}\) so that \(\lambda_{+}\lambda_{-}=-1\). The quadratic terms describe the conical singularity at the origin. The cone can be "unrolled" onto the plane by introducing polar coordinates for \((Y,v)\) where \(X\) is the radius and then doubling the angle. At quadratic order this amounts to introducing new cartesian coordinates \(Y+iv=(\tilde{Y}+i\tilde{v})^{2}/r=(\tilde{Y}^{2}-\tilde{v}^{2}+2i\tilde{v} \tilde{Y})/r\) and \(X=r\) where \(r^{2}=\tilde{Y}^{2}+\tilde{v}^{2}\).
This process gives an equation that is equivalent to a double cover of the real \(P_{V}\) near the singular points \(w=0\), \(w=\infty\). The main difference to the equation in the previous section is that there we had a complex 2nd order equation corresponding to real solutions of the only partially symmetry reduced Lagrange top. By contrast, the conical singularity of the Poisson structure leads to a single real 2nd order equation that corresponds to real solutions of the fully symmetry reduced Lagrange top. The additional dimensions in the previous section were a consequence of the fact that there we did not consider the fully symmetry reduced Lagrange top. |
2305.17395 | Data acquisition system for muon tracking in a muon scattering
tomography setup | We report here the development of a multi-channel DAQ system for muon
tracking in a muon scattering tomography setup. The salient features of the
proposed DAQ system are direct acquisition and processing of LVDS signals, 500
MHz sampling frequency and scalability. It consists of front-end electronics
stage built around NINO ASIC. The back-end electronics is configured with
Intel/Altera MAX-10 FPGA development board which transmits data to the storage
following UART protocol. The proposed DAQ system has been tested for its
performance using a position sensitive glass RPC detector with two-dimensional
8X8 readout strip configuration. | Subhendu Das, Sridhar Tripathy, Jaydeep Datta, Sandip Sarkar, Nayana Majumdar, Supratik Mukhopadhyay | 2023-05-27T07:27:01Z | http://arxiv.org/abs/2305.17395v1 | # Data acquisition system for muon tracking in a muon scattering tomography setup
###### Abstract
We report here the development of a multi-channel DAQ system for muon tracking in a muon scattering tomography setup. The salient features of the proposed DAQ system are direct acquisition and processing of LVDS signals, 500_MHz_ sampling frequency and scalability. It consists of a front-end electronics stage built around the NINO ASIC. The back-end electronics is configured with the Intel®/Altera® MAX®-10 FPGA development board which transmits data to the storage following the UART protocol. The proposed DAQ system has been tested for its performance using a position sensitive glass RPC detector with two-dimensional 8 \(\times\) 8 readout strip configuration.
keywords: Muon Scattering Tomography, NINO ASIC, Field Programmable Gate Array, FPGA-based DAQ
## 1 Introduction
Muon Scattering Tomography (MST) is a non-destructive evaluation technique used for investigating the internal structure and constituent materials of a large and static object by utilizing the principle of multiple Coulomb scattering of cosmic ray muons. While passing through a material medium, the muons can suffer scattering owing to their electromagnetic interaction with
the atomic nuclei present in the medium [1; 2; 3; 4; 5; 6]. The net deflection of a muon from its original trajectory can be represented as a Gaussian distribution with standard deviation dependent on the momentum of muon and thickness of the object in terms of radiation length vis-a-vis its atomic number and density [7; 8; 9]. Therefore, determination of scattering angle by tracking the muon trajectory can be utilized to identify the material if the muon momentum is known. The tracking of muon may be accomplished with a series of position sensitive detectors placed along the direction of muon propagation. The two-dimensional position information of the muon event obtained from each of them can be used to reconstruct the trajectory of the muon. The scattering angle eventually can be determined from the incident and scattered trajectories reconstructed using positions of the muon events at the tracking detectors placed respectively before and after their passage through the object.
Gaseous detectors are frequently used as tracking devices for their excellent position and timing resolutions. A few more advantages, like low cost and relatively easy production of large area coverage, add to their wide acceptance in this area of application [10; 11]. To ensure precise measurement of the scattering angle, which can be as small as a few _mrad_ for low density materials, like Aluminium, the position resolution of the detectors should be of the order of a few hundreds of \(\mu m\)[6; 12]. Obviously, fairly high readout granularity with a finer strip width of the order of _mm_ or less is an essential requisite to achieve this. Eventually, the higher granularity combined with sufficiently large coverage of the tracking detectors calls for a large number of readout channels. Therefore, a cost effective solution for the readout electronics turns out to be extremely important in planning and designing of such an application. For data acquisition of a setup with a large number of input channels, FPGA based systems offer the optimal solution for the backend signal processing and control because of the availability of a large number of I/Os, parallel operation, software controlled reconfigurability and cost effectiveness.
We have aimed to build a prototype setup for material discrimination utilizing the technique of MST with an objective of its application in inspection of civil structures [13; 14]. In the initial phase, we plan to implement single-gap Resistive Plate Chamber (RPC) as position sensitive tracking detector in the setup for detection of muon. The RPC in particular has been chosen for its simple and robust design, easy construction using inexpensive materials, and yet very efficient performance along with excellent position and timing resolution. The design and choice of materials for fabricating the RPC have been
optimized with numerical simulation of the electrical properties of the detector [15]. In the setup, two sets of RPC, each containing three of them, will be commissioned above and below the inspection volume. The two-dimensional position information of the muon events recorded by the RPCs in each set will be used to reconstruct the trajectories and determine the scattering angle subsequently. A schematic layout of the prototype MST setup has been shown in figure 1. The design has been optimized with detailed numerical modelling and simulation of its performance [6]. In future, the RPCs may be replaced with new generation Micro-Pattern Gaseous Detectors (MPGDs) to achieve more precise position information and improved performance of the MST setup.
We have configured a multi-parameter Data AcQuisition (DAQ) system for acquiring two-dimensional position information (X,Y) of muon events from the RPCs and their storage for further processing. It comprises a front-end stage receiving the analog signals from the RPCs followed by a back-end stage for acquisition of valid information and their transmission to a permanent storage for subsequent data analysis. The scheme and a few preliminary test results of the DAQ system have been reported by us in an earlier publication [16]. In the current paper, we present a detailed report on its configuration along with a performance test done using a single-gap glass RPC prototype.
In the following section 2, a comprehensive description of the DAQ configuration with front-end and back-end electronics has been furnished along with their functionality. The performance of the DAQ system has been validated with several measurements which can be found in section 3 along with
Figure 1: Schematic layout of the prototype MST setup.
the results. Finally, section 4 has presented the summary and conclusion of the work.
## 2 DAQ Configuration
The direct acquisition from the readout channels of the RPC has been done by Front-End Electronics (FEE), built around a low-power, ultra-fast, amplifier discriminator ASIC, namely NINO, fabricated with 0.25 \(\mu\)m CMOS technology. It was initially developed for the Multi-gap Resistive Plate Chambers (MRPCs) in the Time-of-Flight (TOF) array of the ALICE experiment [17; 18]. The Low Voltage Differential Signal (LVDS) output from the FEE has been transferred to the Back-End Electronics (BEE), configured with the Altera®/Intel® MAX®-10 FPGA. It is a low-cost, single-chip programmable logic device with a small form factor. As and when prompted by a trigger, the BEE has acquired and saved the valid LVDS signals in parallel and transmitted the data in a serial manner to a personal computer (PC) following the Universal Asynchronous Receiver / Transmitter (UART) protocol for permanent storage. The schematic diagram of the proposed DAQ system has been illustrated in figure 2. The design and configuration of FEE and BEE stages have been described in the following sections 2.1 and 2.2, respectively.
### Front-End Electronics (FEE)
The NINO ASIC has been built as a discriminator to produce an output by measuring Time-Over-Threshold (TOT) of the input signal for slewing correction. The TOT is actually a measure of the time elapsed between the leading
Figure 2: Schematic diagram of the DAQ system.
and trailing edges of the input pulse when they surpass a specific threshold level of charge. The ASIC has been designed on the basis of a current to voltage converter with a common gate circuit configuration followed by four cascaded amplifiers with low gain and high bandwidth. There is a slow feedback circuit to supply current for keeping the input stages correctly biased. An offset is added here to adjust the threshold level for the measurement of TOT. Finally, a stretcher is used before the LVDS output driver in order to match the width requirement foreseen for any readout system. The NINO takes differential input and its circuit is differential throughout to achieve an improved immunity to cross talk. The architecture of one of its readout channels has been schematically presented in figure 3(a). The characteristic features of the NINO have been furnished below.
* 8 input channels of either polarity
* Adjustable threshold level of 10-100_fC_
* Fast amplification with peaking time \(<\) 1_ns_ and rms resolution 20_ps_
* 8 LVDS output channels with level difference 300_mV_ and time jitter \(<\) 25_ps_
* Operates with input capacitance 30_pF_
* Power consumption 40_mW_ per channel
A readout board of dimension 200_mm_\(\times\) 23_mm_, with a common threshold control for all the channels, has been designed with a single NINO by the TIFR and INO Collaboration [19], as shown in figure 3(b). The board accepts signals of either positive or negative polarity, which are converted into the differential type and fed to the inputs of the NINO. The voltage requirement for the operation of the said board is \(\pm\) 4\(V\). We have utilised the same NINO-board as the FEE of the present DAQ system.
### Back-End Electronics (BEE)
The BEE stage has been configured using the Altera®/Intel® MAX®-10 FPGA-based development board. One of the salient features of the said board is that it has dedicated I/O for direct acquisition of LVDS signals produced by the FEE. This eliminates the necessity of conversion of LVDS signals to TTL type before the BEE stage and thereby, LVDS signal purity
is maintained. To map the LVDS connections between the FEE and BEE boards, a custom-designed connector board has been fabricated, as shown in figure 4, with required \(100\Omega\) termination. The important features of the said FPGA-board have been mentioned below.
Figure 3: (a) NINO-channel architecture, (b) NINO-board designed by the INO Collaboration [19].
* 2000 Logic Elements (LEs)
* 108 embedded memory blocks (Kbits)
* 12 user flash memory (KBytes)
* Single internal configuration memory
* 16 embedded 18 x 18 multipliers
* Phase Lock Loop (PLL) with maximum clock frequency 500_MHz_
* 101 I/O pins
* On-board clock frequency 50_MHz_
* CH340G chip-based USB-UART converter
The code for signal acquisition and data transfer by the MAX®-10 FPGA has been developed on the VHSIC Hardware Description Language (VHDL) platform. A custom IP (Intellectual Property) core consisting of four components, namely, digital delay module, controller, FIFO memory and UART module with TX pin, has been generated for this purpose. The flowchart of the IP core has been depicted in figure 5(a). The Intel® Quartus® Prime software has been used to compile and upload the configuration code. It has been implemented in each of the input channels. The on-board Phase Lock Loop (PLL) facilitates generation of variable clock frequencies up to a
Figure 4: Altera®/Intel® MAX®-10 FPGA-board connected to connector boards.
maximum of 500_MHz_ for sampling. A clock frequency 500_MHz_, as generated by the PLL, has been used for sampling data in the delay module and FIFO memory to achieve 2_ns_ resolution while the frequency 50_MHz_ generated by the on-board clock has been used for the controller and UART TX module. When prompted by a trigger, the controller has produced a 260_ns_ wide window. A digital delay of 128_ns_ has been added to the LVDS signal received from NINO to ensure its position inside the trigger window, as shown in figure 5(b). The FIFO memory has been used to store the entire data lying inside the trigger window. When prompted by the controller module, the data have been transmitted to the connected PC through the TX pin of the UART. The information acquired for each channel for each trigger has been saved in the PC for further offline analysis. A code based on the Python programming language has been developed and used to acquire data on the PC through the COM port and to analyse it subsequently.
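The acquisition script itself is not reproduced here; as an indication, a minimal reader along these lines could look as follows (a sketch, not the authors' code: the port name, the baud rate and the assumption that each trigger is shipped as 17 bytes, i.e. 130 bits padded to whole bytes, per channel are ours, and the real framing may differ):

```python
import serial   # pyserial

FRAME_BYTES = 17        # one 260 ns window: 130 bits at 2 ns, padded
N_CHANNELS = 16         # 8 X-strips + 8 Y-strips

with serial.Serial(port='COM3', baudrate=115200, timeout=1.0) as ser, \
        open('rpc_events.dat', 'ab') as out:
    while True:
        frame = ser.read(FRAME_BYTES * N_CHANNELS)   # one trigger worth
        if len(frame) == FRAME_BYTES * N_CHANNELS:
            out.write(frame)                         # store for offline analysis
```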
## 3 DAQ Performance
The proposed DAQ system has been tested for its performance in acquiring the signals produced by cosmic muons in a single-gap glass RPC prototype. The experimental setup has been described in section 3.1. Section 3.2 discusses the functioning of the DAQ system for the acquisition of RPC signals, along with the measurement of the muon detection efficiency of the detector. In section 3.3,
Figure 5: (a) Flowchart of the IP core in MAX®-10 FPGA, (b) Trigger window and the NINO signals.
the process of trigger validation using different physical setups and logic conditions has been presented along with the results. The scalability of the DAQ system has been tested by studying the muon event distribution detected by the whole RPC prototype using different configurations of the DAQ system. This has been discussed in section 3.4.
### Experimental Setup
A prototype RPC made with 2_mm_\(\times\) 30_cm_\(\times\) 30_cm_ float glass plates as resistive electrodes and a 2_mm_ gas gap has been used for detecting cosmic muons. A gas mixture of 95% Freon and 5% Isobutane has been circulated through the detector. Two readout panels, each consisting of eight copper strips of width 3_cm_ and separation 2_mm_, have been used for recording two-dimensional (X,Y) position information. The panels have been placed orthogonal to each other outside the glass electrodes and insulated by a layer of mylar. The readout scheme of the RPC prototype has been illustrated in figure 6(a). It shows that two NINO-boards of the FEE stage have been connected to the two readout panels for receiving the detector analog signals produced by the passage of muons. The LVDS signals generated by the pair of NINO-boards corresponding to the RPC signal have been transmitted to an Altera®/Intel® MAX®-10 FPGA-board of the BEE stage via the custom-designed connector board. The signals have been stored in the BEE when prompted by the muon trigger produced from the coincidence of the scintillators, and the data have subsequently been transmitted to the PC. Several plastic scintillators of different dimensions have been used in the setup for testing the DAQ system with different trigger conditions. The schematic of the experimental setup for testing the RPC prototype using three scintillators (SCN1, SCN2, SCN3) as trigger detectors has been illustrated in figure 6(b). SCN1 and SCN2 are two finger-shaped scintillators of length 35_cm_, with widths 3_cm_ and 5_cm_, respectively. The third scintillator has an area of 25_cm_\(\times\) 35_cm_, which overlaps with the entire active area of the RPC.
### Detector Signal Acquisition
Upon passage of a cosmic muon through the active gaseous medium of the RPC, an avalanche of electron-ion pairs is produced from the ionization of gas molecules, followed by multiplication of the charged pairs due to the presence of a high electric field across the volume. The movement of the charged ions towards the respective electrodes induces a current signal on the readout strips
in the vicinity of the event. In the present setup, the NINO-board has produced TOT pulses corresponding to the signals induced on the readout strips. Subsequently, the MAX®-10 FPGA-board has acquired the LVDS signals transmitted from the NINO-channels following the method described in section 2.2. A schematic of the TOT measurement procedure by the FPGA-board has been illustrated in figure 7. Using the 500_MHz_ maximum clock frequency available from the on-board PLL, it is capable of achieving 2_ns_ resolution for acquisition. A spectrum of 260_ns_ (total 130_bit_) of the trigger window has been stored, keeping the TOT signals from the NINO near the middle region by adding a 128_ns_ digital delay to avoid data loss. Each bit of memory represents the state (0 or 1) of the signal for a time interval of 2_ns_. Counting the consecutive high states of the signal, the TOT has been calculated by the FPGA-board for the respective channels of the NINO.
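The counting rule itself is simple enough to mirror in software for offline cross-checks of the FPGA output. The sketch below assumes the window is available as a list of 130 binary samples; it is an illustration, not the on-board VHDL.

```python
# Offline re-implementation of the TOT extraction: count the longest
# run of consecutive high (1) samples in the 130-bit trigger window
# and convert it to time at 2 ns per sample.
SAMPLE_NS = 2  # 500 MHz sampling -> 2 ns per bit

def time_over_threshold(window_bits):
    """Return the TOT in ns for one channel's trigger window (0/1 list)."""
    best = run = 0
    for bit in window_bits:
        run = run + 1 if bit else 0
        best = max(best, run)
    return best * SAMPLE_NS

# Example: a 24 ns wide NINO pulse delayed towards the window middle.
window = [0] * 64 + [1] * 12 + [0] * 54   # 130 samples in total
assert time_over_threshold(window) == 24
```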
To validate the performance of the DAQ system, several measurements have
Figure 6: (a) Readout configuration, (b) Experimental setup.
Figure 7: Schematic of TOT measurement.
been carried out to study the response of a single strip of the glass RPC prototype. The finger-shaped SCN1 has been aligned along a particular readout strip, while the paddle-shaped SCN3 has been placed below the RPC covering the whole active area. The trigger has been generated from the two-fold coincidence of SCN1 and SCN3 to ensure the detection of muon events by the single readout strip. A comparative study has been made between a typical analog signal from the readout strip captured on an oscilloscope and the same acquired through the present DAQ system comprising NINO at the FEE and MAX®-10 FPGA at the BEE stages respectively. The spectra have been depicted in figures 8(a) and 8(b). Figure 8(c) shows a typical distribution of the TOT outputs of the readout strip for muon events as acquired by the present DAQ system. The efficiency of the strip for muon detection with respect to the scintillators has been determined by dividing the muon counts obtained from the strip by the number of triggers generated. In figure 8(d), the efficiency as measured using standard electronics and using the present DAQ system has been depicted for different working voltages. It shows that the efficiency determined using the present DAQ system is less than the other measurement by 7-8% at higher voltages. The possible reason may be the 100_fC_ threshold used in the NINO-board for producing the TOT pulse, which has curtailed valid pulses with smaller charge content. This is corroborated by the larger difference in efficiency at lower operating voltages, which reaches about 35% at the operating voltage of 9.6_kV_.
### Trigger Validation
The DAQ system has been tested and validated by acquiring RPC pulses with different trigger conditions produced by different physical setups and logic combinations of the three plastic scintillators (SCN1, SCN2 and SCN3) described earlier in section 3.1. Two different physical setups of the scintillators have been shown in figures 9(a) and 9(b). The signals from the RPC for the muon events have been acquired for the logic condition SCN1 & SCN2 & SCN3 in both cases, as shown in figures 10(a) and 10(b). For the second setup shown in figure 9(b), the result of another trigger condition, (SCN1 + SCN2) & SCN3, has been illustrated in figure 10(c). The trigger condition in all the cases has been generated when all three scintillators have produced signals within a coincidence window of 50_ns_. For each event, a weight factor proportional to the induced charge, as indicated by the measured TOT pulse width, has been assigned to each strip. In case \(n\) strips have fired, each producing pulse width \(w_{i}\), the weight factor assigned to each strip has been calculated as follows.
\[\frac{w_{i}}{\sum_{i=1}^{n}w_{i}}\quad\text{for }n=1,2,3. \tag{1}\]
Thus, when a single strip has been hit, the weight factor assigned to the strip was 1. The position of the event in terms of the strip has been calculated by a weighted sum of the strips for both the readout planes (X,Y). The two-dimensional histograms of the muon events for the given trigger conditions have been illustrated in figure 10 in terms of readout strips.
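A compact sketch of this weighted reconstruction for one readout plane is given below; the dictionary-based interface and the streamer cut at more than three strips (used later in section 3.4) are illustrative choices.

```python
# Weighted hit-position reconstruction of eq. (1): each fired strip
# gets weight w_i / sum(w_j); the position is the weighted sum of
# strip indices; events with more than three strips are rejected.
def hit_position(strip_tots):
    """strip_tots: {strip_index: TOT width in ns} for one readout plane."""
    n = len(strip_tots)
    if n == 0 or n > 3:
        return None                      # no hit, or streamer-like event
    total = sum(strip_tots.values())
    return sum(i * w / total for i, w in strip_tots.items())

# Example: strips 4 and 5 fire with TOT widths 30 ns and 10 ns.
assert hit_position({4: 30.0, 5: 10.0}) == 4.25
```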
### DAQ Scalability
The DAQ system has been tested for its scalability, which is an important requirement for building up a tomography setup consisting of multiple
Figure 8: (a) A typical signal from a readout strip on oscilloscope, (b) Corresponding output from the present DAQ system, (c) TOT distribution histogram for a readout strip, (d) Efficiency of a readout strip as measured by oscilloscope and the present DAQ system.
muon tracking detectors with larger readout granularity. The present 8 \(\times\) 8 readout configuration of the RPC has been operated with different BEE configurations, where each of the two NINO-boards of the two readout planes (X,Y) has been connected to a MAX®-10 FPGA-board. The FPGA-boards have been configured in _master-master_ and _master-slave_ configurations; the schematic diagrams of the corresponding experimental setups have been illustrated in figures 11(a) and 11(b) respectively. The muon trigger has been produced from the coincidence of the paddle-shaped SCN3 and a similar scintillator placed above, covering the entire active area of the RPC. The trigger has been passed to both of the FPGA-boards simultaneously. A typical muon event histogram with \(\sim\) 10000 events, obtained with the _master-slave_ configuration, has been depicted in figure 12. Only the events with
one, two and three strips hit have been considered for the hit map reconstruction, and the events with greater than three strips hit, treated as streamers, have been excluded. This highlights the scalability feature of the DAQ, where an 8 \(\times\) 8 system can be easily scaled to a 16 \(\times\) 16 system or more by adding additional units with very small changes in the software code.
## 4 Summary & Conclusion
In the present work, we have presented the development of a multi-channel DAQ system to be used for muon tracking with RPCs in an MST setup. The FEE stage of the proposed DAQ system has been built around the NINO ASIC
Figure 11: (a) Scalable data acquisition system with Master-master configuration, (b) Master-slave configuration.
Figure 12: 2D muon event distribution with master-slave configuration of BEE
and the BEE stage has been configured using a MAX®-10 FPGA development board. The valid data have been transmitted to an external PC for offline processing through the UART. The DAQ system has been tested on a glass RPC for its performance. It has been found capable of direct acquisition of the LVDS signals from the FEE stage. The availability of a 500_MHz_ sampling frequency on the FPGA-board has offered a timing resolution of \(\pm\) 2_ns_ in measuring the TOT pulse provided by the NINO. This has been found fairly acceptable for our application, where the main focus is to produce a map of the muon event positions. It will also matter little for the readout strips of 1_cm_ width to be used in future. We have deliberately used a MAX®-10 FPGA development board in the back-end, instead of a custom-made FPGA board. A custom-made FPGA board usually has a very long development cycle and, moreover, is comparatively costly. In comparison, the readily available development boards are much cheaper and there is no need for a long development cycle. Many advantages of custom-made boards can be achieved by using a modular structure with an easy scalability feature. We have demonstrated in section 3.4 that the proposed DAQ has a modular structure that can be easily scaled to accommodate a higher number of channels with small modifications in the software code.
## Acknowledgements
We are thankful to Mr. Shaibal Saha and other members of our laboratory for their assistance and advice during this work. We acknowledge the help extended by the TIFR and INO Collaboration in making the NINO-boards and in the procurement of the FPGA-boards. The author S. Das thanks the UGC, Govt. of India, for financial support.
|
2302.04428 | A complete characterization of sharp thresholds to spherically symmetric
multidimensional pressureless Euler-Poisson systems | The Euler-Poisson system describes the dynamic behavior of many important
physical flows including charge transport, plasma with collision and
cosmological waves. We prove sharp threshold conditions for the global
existence/finite-time-breakdown of solutions to the multidimensional
pressureless Euler-Poisson (EP) system with or without background and general
initial data. In particular, the initial data could include points where
velocity is negative, that is, the flow is directed towards the origin.
Obtaining threshold conditions for such systems is extremely hard due to the
coupling of various local/nonlocal forces. Remarkably, we are able to achieve a
sharp threshold for the zero background case and most importantly, the positive
background case, which is quite delicate due to the oscillations present in the
solutions. We discover a completely novel nonlinear quantity that helps to
analyze the system. In the case of positive background, if the initial data
results in a global-in-time solution, then we show that the density is periodic
along any single characteristic path. We use the Floquet Theorem to prove
periodicity. | Manas Bhatnagar, Hailiang Liu | 2023-02-09T03:54:41Z | http://arxiv.org/abs/2302.04428v1 | A complete characterization of sharp thresholds to spherically symmetric multidimensional pressureless Euler-Poisson systems
###### Abstract.
The Euler-Poisson system describes the dynamic behavior of many important physical flows including charge transport, plasma with collision and cosmological waves. We prove sharp threshold conditions for the global existence/finite-time-breakdown of solutions to the multidimensional pressureless Euler-Poisson (EP) system with or without background and general initial data. In particular, the initial data could include points where velocity is negative, that is, the flow is directed towards the origin. Obtaining threshold conditions for such systems is extremely hard due to the coupling of various local/nonlocal forces. Remarkably, we are able to achieve a sharp threshold for the zero background case and most importantly, the positive background case, which is quite delicate due to the oscillations present in the solutions. We discover a completely novel nonlinear quantity that helps to analyze the system. In the case of positive background, if the initial data results in a global-in-time solution, then we show that the density is periodic along any single characteristic path. We use the Floquet Theorem to prove periodicity.
Key words and phrases:Critical thresholds, global regularity, shock formation, Euler-Poisson system 2020 Mathematics Subject Classification: 35A01; 35B30; 35B44; 35L45
## 1. Introduction
A general system of pressureless Euler-Poisson (EP) equations has the following form,
\[\rho_{t}+\nabla\cdot(\rho\mathbf{u})=0,\quad t>0,\mathbf{x}\in\mathbb{R}^{N}, \tag{1.1a}\] \[\mathbf{u}_{t}+(\mathbf{u}\cdot\nabla)\mathbf{u}=-k\nabla\phi, \tag{1.1b}\] \[-\Delta\phi=\rho-c, \tag{1.1c}\]
with smooth initial data (\(\rho_{0}\geq 0,\mathbf{u_{0}}\)). The constant parameters \(k\) and \(c\geq 0\) are the forcing coefficient and background state respectively. The sign of the forcing coefficient \(k\) signifies the type of particles being modeled, and its magnitude gives a measure of the strength of the interaction between them. When the force between particles is repulsive, for example in the case of charge flow, then \(k>0\). Within the pressureless setup (1.1), \(k<0\) is relevant in the case of interstellar clouds where the pressure gradient becomes negligible compared to the gravitational forces, see [11]. In the pressureless setup with same-charge particles (\(k>0\)), the background state is, in practice, a profile, that is, a function of the spatial variable, \(c=c(x)\), see [13]. The background models the doping profile for charge flow in semiconductors. However, the sheer complexity of the system has restricted researchers to considering the background as a constant. To our knowledge, [3] is the only work that considers the background as a profile in obtaining critical thresholds.
A locally well-posed PDE system exhibiting critical threshold phenomena is the one wherein existence of global-in-time solutions is dependent on whether the initial data crosses a certain threshold manifold. This critical threshold manifold divides the phase space of initial data into two mutually exclusive regions or sets. If the initial data lies completely in one of the sets (also called **subcritical region**), there is global solution. However, if some part of initial data lies outside this set, or in other words, in the **supercritical region**, a breakdown occurs and solution loses its smoothness in finite time.
As one can expect, the question of proving the existence and subsequently finding the critical threshold manifold is simpler in one dimension (\(N=1\)). In this case, the threshold is a curve on the \((u_{0x},\rho_{0})\) plane and the subcritical region is given by,
\[|u_{0x}|<\sqrt{k(2\rho_{0}-c)}.\]
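As a quick illustration, this 1D condition can be checked pointwise on sampled initial data; the following Python sketch uses hypothetical sample profiles.

```python
# Pointwise check of the 1D subcritical condition
# |u_0x| < sqrt(k(2*rho_0 - c)) on sampled initial data (a sketch).
import numpy as np

def is_subcritical_1d(u0x, rho0, k, c):
    """Boolean mask: True where the sampled data satisfies the threshold."""
    arg = k * (2.0 * np.asarray(rho0) - c)
    return (arg > 0) & (np.abs(u0x) < np.sqrt(np.maximum(arg, 0.0)))

x = np.linspace(-5, 5, 201)
u0x = -0.4 * np.exp(-x**2)           # sampled derivative of the velocity
rho0 = 1.0 + 0.5 * np.exp(-x**2)     # sampled initial density
print(is_subcritical_1d(u0x, rho0, k=1.0, c=1.0).all())   # True
```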
A vast amount of literature exists on critical thresholds for (1.1) and other similar systems. The existence of such a curve was first identified and analyzed in [9] for EP systems. The authors analyzed the one-dimensional and multidimensional (with spherical symmetry) cases. A series of works then followed for EP as well as other systems, [2, 3, 4, 5, 7, 12, 16, 17, 18, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29] among many others. It is well known for general hyperbolic conservation laws that, for generic initial data, singularities develop in finite time, [15]. This is due to the convective forces present. The addition of source terms, however, can result in a set of initial data that leads to global solutions. Extracting a set of initial data (the subcritical region) for which there is global well-posedness is an interesting territory to be explored. The 'goodness' of the external forces can balance or even outweigh the 'bad' convective forces and result in a 'large' subcritical region. An example where the external forces completely obliterate the convective forces is the strongly singular Euler-Poisson-alignment (EPA) system. In [14], the authors show that the system (1.1) with \(N=1\) and an additional nonlocal alignment force in the momentum equation results in global-in-time solutions for any initial data. This result is remarkable and in stark opposition to the conventional results for EP systems.
For threshold results in one dimensional EP and EPA systems, one can refer to [2, 3, 4, 5, 8, 14, 26, 28]. In [26], the authors include pressure and exclude background. It is quite difficult to obtain thresholds for the full EP system (with pressure) due to the strict hyperbolicity resulting in two characteristic flow paths. Critical threshold for EP systems with pressure and \(c>0\) is largely an open problem.
For pressureless EP systems, the key is to obtain bounds on the gradient of velocity. Owing to the local existence result, global-in-time solutions to (1.1) can be 'extended' upon a priori smooth local solutions. The local existence for such systems is well-known, see [23] for a proof. We state the result here.
**Theorem 1.1** (Local wellposedness).: _Consider (1.1) with smooth initial data, \(\rho_{0}-c\in H^{s}(\mathbb{R}^{N})\) and \(\mathbf{u_{0}}\in\left(H^{s+1}(\mathbb{R}^{N})\right)^{N}\) with \(s>N/2\). Then there exists a time \(T>0\) and functions \(\rho,\mathbf{u}\) such that,_
\[\rho-c\in C([0,T];H^{s}(\mathbb{R}^{N})),\quad\mathbf{u}\in\left(C([0,T];H^{s+ 1}(\mathbb{R}^{N}))\right)^{N},\]
_are unique smooth solutions to (1.1). In addition, the time \(T\) can be extended as long as,_
\[\int_{0}^{T}||\nabla\mathbf{u}(t,\cdot)||_{\infty}dt<\infty.\]
As a result, in 1D, one needs to bound \(|u_{x}|\) to guarantee global well-posedness. However, things are different in higher dimensions where the gradient of velocity is in fact an \(N\times N\) matrix. One finds out that an ODE along the characteristic path can be obtained for the divergence of the gradient matrix. However, analyzing the divergence is not enough. One needs to control the spectral gap (sum of all the absolute differences of eigenvalues), see for example [12], in order to guarantee smooth solutions for all time. This is the tricky part. Given the inherent property of hyperbolic balance laws, it is in general easier to obtain sufficient conditions for breakdown of solutions as compared to conditions for global existence. In [6, 7], the authors obtain bounds on the supercritical region for (1.1) for \(k<0\).
To obtain bounds on subcritical region is more involved. Several workarounds have been used by various researchers by simplifying the EP system. In [16, 21], the authors study and obtain thresholds for the restricted EP equations in two and three dimensions respectively.
Another derivative of the EP system that has been extensively studied is (1.1) with spherical symmetry. It can be shown that if the initial data to (1.1) is such that \(\rho(0,\mathbf{x})=f(|\mathbf{x}|),\mathbf{u}(0,\mathbf{x})=g(|\mathbf{x}|) \frac{\mathbf{x}}{|\mathbf{x}|}\) for smooth functions \(g,f\), then for any fixed time and as long as the solution is smooth, the symmetry extends, that is, the solutions are \(\rho(t,|\mathbf{x}|),u(t,|\mathbf{x}|)\frac{\mathbf{x}}{|\mathbf{x}|}\). Applying this simplification to (1.1), we obtain the following system of equations,
\[\rho_{t}+\frac{(\rho ur^{N-1})_{r}}{r^{N-1}}=0,\qquad t>0,\quad r>0, \tag{1.2a}\] \[u_{t}+uu_{r}=-k\phi_{r}, \tag{1.2b}\] \[-(r^{N-1}\phi_{r})_{r}=r^{N-1}(\rho-c), \tag{1.2c}\]
with \(r=|\mathbf{x}|>0,N\geq 2\) and subject to smooth initial density and velocity
\[(\rho(t,\cdot),u(t,\cdot))\big{|}_{t=0}=(\rho_{0}\geq 0,u_{0}).\]
Even though the system is now simplified as compared to the general EP system, it turns out that it is still tricky to analyze for thresholds, especially when \(k>0\). The system (1.2) was first studied in [9] with \(c=0\) and expanding flows (\(u_{0}>0\)). Sufficient conditions for global existence and finite-time-breakdown were obtained for \(N=2,3\), and a sharp condition was obtained for \(N=4\). Later, a sharp condition was derived in [29], however, once again only for zero background and expanding flows. A sufficient blow-up condition was derived in [30] for \(c=0\); however, no conclusion was made with regard to global existence.
The dynamics of (1.2) is quite different for \(c>0\) compared with \(c=0\). One major difference is that in the former, the density is a perturbation around the constant background. The mass is infinite and the following holds,
\[\int_{0}^{\infty}(\rho(t,r)-c)dr=0,\]
which is different from the zero background state where the mass is finite and conserved. A major step forward to mitigate the restrictiveness of the past results was taken in [28]. The author there reduced (1.2) to a \(4\times 4\) ODE system along a characteristic path, and subsequently proved the existence of critical thresholds. This \(4\times 4\) ODE system derived by the author is crucial in our analysis as well. An intriguing discovery by the author was that the Poisson forcing in (1.2b) is enough to avoid concentration at the origin, even if
there are points where the initial velocity points towards the origin. To our knowledge, it is the first work with regards to thresholds of spherically symmetric EP system wherein the assumption of expanding flows was dropped. It was also noted that \(N=2\) is the critical case and the analysis needs to be done fairly differently as compared to the case when \(N\geq 3\). The author obtained partial results for thresholds for \(c=0\), that is, bounds on the subcritical and supercritical regions were presented. However, no results were presented for \(c>0\) citing the highly chaotic dynamics in such a scenario.
In this paper, we present sharp thresholds for both \(c>0\) as well as \(c=0\). Several steps are needed to arrive at the precise thresholds. Special care has to be taken with regards to finding the subcritical region for \(c>0\). We essentially start out by characterizing the supercritical region and narrowing down the possible initial configurations that might allow for global solutions, eventually extracting out the subcritical region.
Our main results can be stated non-technically as follows:
* For the nonzero background case (\(c\not\equiv 0\)), we show that the EP system admits a global-in-time smooth solution if and only if the initial data is smooth, lying in a subcritical region, \(\Theta_{N}\). Theorem 2.1 contains such critical threshold result. The explicit definition of \(\Theta_{N}\) is stated after the result.
* For the zero background case (\(c=0\)), we show that the EP system admits a global-in-time smooth solution if and only if the initial data is smooth, and lying in certain subcritical region, \(\Sigma_{N}\). Theorem 2.6 contains the precise thresholds for dimensions greater than or equal to three. Theorem 2.8 contains the threshold results in dimension two. The explicit definition of \(\Sigma_{N}\) is stated after each of the results.
We discover a completely novel nonlinear quantity that helps to analyze the system. At \(t=0\), this is given by
\[A_{0}(r):=\frac{u_{0}(r)u_{0r}(r)+k\phi_{0r}(r)}{r\rho_{0}(r)},\quad\rho_{0}(r )>0. \tag{1.3}\]
In fact, such a quantity is crucial in analyzing and simplifying the representation of the subcritical/supercritical regions. The full motivation and usage of this quantity will be thoroughly discussed in the sections to come. It is found that a simpler breakdown condition can be obtained using this expression; the corresponding result for dimensions greater than or equal to three is provided in Theorem 2.4.
### A roadmap of the \(c>0\) case
Before stating our main results, we give a short roadmap of how threshold regions are identified. Following is a list of key points:
* The full dynamics of (1.2) can be reduced to a weakly coupled system of four equations (along characteristic path \(\{(t,X):dX/dt=u(t,X),\;X(0)=\beta,\beta>0\}\)) as, \[\rho^{\prime} =-(N-1)\rho q-p\rho,\] \[p^{\prime} =-p^{2}-k(N-1)s+k(\rho-c),\] \[q^{\prime} =ks-q^{2},\] \[s^{\prime} =-q(c+Ns)\]
with \[p:=u_{r},\quad q:=\frac{u}{r},\quad s:=-\frac{\phi_{r}}{r}.\] The initial data are \(\rho_{0},p_{0},q_{0},s_{0}\) respectively; they depend on \(\beta\), but we do not indicate this explicitly, as we will be analyzing one characteristic path at a time. A thorough analysis of this ODE system becomes the main task. We restrict to the case \(k>0\), \(N\geq 2\) and \(c\geq 0\).
* The \(q-s\) system is decoupled and admits a closed form of trajectory curves, expressed as, \[R_{N}(q(t),\tilde{s}(t))=R_{N}(q_{0},s_{0}+c/N),\quad t\geq 0,\quad\tilde{s}=s+c/N\] with, (1.4) \[R_{N}(q,\tilde{s})=\left\{\begin{array}{cc}\tilde{s}^{-1}\left(q^{2}+\frac{ kc}{2}+k\tilde{s}\ln\left(\tilde{s}\right)\right),&N=2,\\ \tilde{s}^{-\frac{2}{N}}\left(q^{2}+\frac{kc}{N}+\frac{2k\tilde{s}}{N-2} \right),&N\geq 3.\end{array}\right.\] This ensures that the \(q-s\) system admits a global bounded solution if and only if \(s_{0}>-c/N\). Moreover, the solutions are periodic and the trajectories rotate clockwise on the \((q,s)\) plane as time progresses. We also have \[s_{\text{min}}<0<s_{\text{max}},\] where \(s_{min},s_{max}\) are the minimum/maximum values of \(s\) attained. In addition, \(\int_{0}^{t}q(\tau)\,d\tau\) is shown to be bounded along with, \[\Gamma(t):=e^{-\int_{0}^{t}q(\tau)d\tau}=\left(\frac{s(t)+c/N}{s_{0}+c/N} \right)^{\frac{1}{N}}.\]
* Another key transformation of form, \[\eta:=\frac{1}{\rho}\Gamma^{N-1},\qquad w=\frac{p}{\rho}\Gamma^{N-1}\] leads to a new system, \[\eta^{\prime}=w,\] \[w^{\prime}=-k\eta(c+s(N-1))+k\Gamma^{(N-1)},\] This is a linear system with time-dependent yet bounded coefficients. This way, the key to existence of global solution to \((\rho,p)\) is equivalent to ensuring \(\eta(t)>0\) for all \(t>0\).
* A nonlinear quantity of the form, \[A=qw-k\eta s,\] is shown to be bounded. \(A_{0}:=q_{0}w_{0}-k\eta_{0}s_{0}\) and \(A_{0}\) in (1.3) are essentially the same and we will see more about this in Section 5. More precisely, for \(N\geq 3\), we obtain \[A(\Gamma)=\left(A_{0}+\frac{k}{N-2}\right)\Gamma-\frac{k}{N-2}\Gamma^{N-1}.\]
We are able to conclude that \(\eta(t)\) will surely be zero at some positive time if \(A\) does not change sign. This is in fact the case if either \(1+\frac{A_{0}(N-2)}{k}\leq 0\), or \(1+\frac{A_{0}(N-2)}{k}>0\) with \[\Gamma_{\max}\leq\kappa\quad\text{or }\Gamma_{\min}\geq\kappa,\] where \(\kappa\) is the unique positive root of \(A\), \[\kappa:=\left(1+\frac{A_{0}(N-2)}{k}\right)^{\frac{1}{N-2}}.\]
* Therefore, to precisely identify the possible initial configurations for global existence, it is necessary to require \[\kappa\in(\Gamma_{\min},\Gamma_{\max}).\] The key idea is to construct two nonnegative functions to precisely demarcate the subcritical region. We will show that if a solution, \(\eta\) starts in between these functions at \(t=0\), then it remains so for all except discrete times (where it is positive), thereby providing a condition to ensure positivity of \(\eta\). The converse also holds. The two constructed functions form beads and the strictly positive solutions are contained within those beads, see Figure 5. More precisely, we show that if \(\kappa\) satisfies the above inclusion, then there exists two nonnegative functions \(\eta_{i}=\eta_{i}(t;q_{0},s_{0},A_{0}),i=1,2\) so that the following holds. For the case \(q_{0}\neq 0\), we show that if \[\min\{\eta_{1}(0),\eta_{2}(0)\}<\eta_{0}<\max\{\eta_{1}(0),\eta_{2}(0)\},\] then, \[\min\{\eta_{1}(t),\eta_{2}(t)\}\leq\eta(t)\leq\max\{\eta_{1}(t),\eta_{2}(t)\},\quad t>0.\] The converse also holds. Quite similarly, for \(q_{0}=0\), we have that if \[\frac{d\eta_{1}}{dt}(0)<w_{0}<\frac{d\eta_{2}}{dt}(0),\] then, \[\min\{\eta_{1}(t),\eta_{2}(t)\}\leq\eta(t)\leq\max\{\eta_{1}(t),\eta_{2}(t)\},\quad t>0.\] The functions \(\eta_{1},\eta_{2}\) are the key components through which the threshold curves will be defined.
## 2. Main Results
This section is devoted to stating precise versions of our main results. We start by specifying that for (1.2), the finite-time-breakdown is manifested in the following form,
\[\lim_{t\to t_{c}^{-}}|u_{r}(t,r_{c})|=\infty,\quad\lim_{t\to t_{c}^{-}}\rho(t,r_{c})=\infty\ (or\,0),\]
for some \(r_{c}>0\) and finite \(t_{c}\). The first blowup (shock formation) can be concluded from Theorem 1.1. The density blowing up at the same time/position is a consequence of the weak hyperbolicity of pressureless Euler-Poisson systems and is a well-known phenomenon for such systems. In the next section, our analysis will allow us to conclude the same. Now we state our results and follow them up with an interpretation. As mentioned in the previous section, the \(N=2\) case is critical and needs to be handled separately. Consequently,
the representation of subcritical regions is also different for \(N=2\) than those for \(N\geq 3\). In this section, we also provide the subcritical region sets.
**Theorem 2.1** (Sharp threshold condition).: _Suppose \(c>0\) and \(N\geq 2\) in (1.2). If for all \(r>0\), the set of points_
\[(r,u_{0}(r),\phi_{0r}(r),u_{0r}(r),\rho_{0}(r))\in\Theta_{N},\]
_then there is global solution. Moreover, if there exists an \(r_{c}>0\) such that,_
\[(r_{c},u_{0}(r_{c}),\phi_{0r}(r_{c}),u_{0r}(r_{c}),\rho_{0}(r_{c}))\notin\Theta _{N},\]
_then there is finite-time-breakdown. Here,_
\[\Theta_{N}\subseteq\{(\alpha,x,y,z,\omega):y/\alpha<c/N,\omega>0\},\]
_is as in Definition 2.3 (for \(N=2\)) and in Definition 2.2 (for \(N\geq 3\))._
We will describe the subcritical region set with the help of the quantity (1.3). Motivated by this, we set, for a point \((\alpha,x,y,z,\omega)\),
\[a:=\frac{xz+ky}{\alpha\omega}. \tag{2.1}\]
**Definition 2.2**.: A point \((\alpha,x,y,z,\omega)\in\Theta_{N}\) (\(N\geq 3\)) if and only if the following holds,
\[\begin{split}&\bullet\text{ For }a\in\frac{k}{N-2}\left(\left(\frac{y_{m,N}}{\frac{c}{N}-\frac{y}{\alpha}}\right)^{\frac{N-2}{N}}-1,\left(\frac{y_{M,N}}{\frac{c}{N}-\frac{y}{\alpha}}\right)^{\frac{N-2}{N}}-1\right),\\ &\frac{1}{\max\{\eta_{1}(0),\eta_{2}(0)\}}<\omega<\frac{1}{\min\{\eta_{1}(0),\eta_{2}(0)\}},\quad\text{for }x\neq 0,\\ & z\in\frac{ky}{\alpha a}\left(\frac{d\eta_{1}}{dt}(0),\frac{d\eta_{2}}{dt}(0)\right),\quad\text{for }x=0.\end{split} \tag{2.2}\]
**Definition 2.3**.: A point \((\alpha,x,y,z,\omega)\in\Theta_{2}\) if and only if the following holds,
* For \(a\in\frac{k}{2}\left(\ln\left(\frac{y_{m,2}}{\frac{c}{2}-\frac{y}{\alpha}}\right),\ln\left(\frac{y_{M,2}}{\frac{c}{2}-\frac{y}{\alpha}}\right)\right),\) (2.3) \[\begin{split}&\frac{1}{\max\{\eta_{1}(0),\eta_{2}(0)\}}<\omega<\frac{1}{\min\{\eta_{1}(0),\eta_{2}(0)\}},\quad\text{for }x\neq 0,\\ & z\in\frac{ky}{\alpha a}\left(\frac{d\eta_{1}}{dt}(0),\frac{d\eta_{2}}{dt}(0)\right),\quad\text{for }x=0.\end{split}\]
Here, \(0<y_{m,N}<\frac{c}{N}-\frac{y}{\alpha}<y_{M,N}\) are the roots of the equation,
\[k\ln(y)+\frac{kc}{2y}=R_{2},\]
and
\[\frac{2k}{N-2}y^{1-\frac{2}{N}}+\frac{kc}{N}y^{-\frac{2}{N}}=R_{N},\quad N\geq 3,\]
with
\[R_{N}=R_{N}\left(\frac{u_{0}(\alpha)}{\alpha},\frac{c}{N}-\frac{\phi_{0r}( \alpha)}{\alpha}\right).\]
The explicit expression for \(R_{N}(\cdot,\cdot)\) is in (1.4), also in (4.5). The functions
\[\eta_{i}=\eta_{i}\left(t;\frac{u_{0}(\alpha)}{\alpha},\frac{\phi_{0r}(\alpha)}{ \alpha},A_{0}(\alpha)\right),\quad i=1,2,\]
are defined via a second order linear IVP and are positive for all except at infinitely many discrete times.
We now give an interpretation of Theorem 2.1. Similar to existing threshold results, Theorem 2.1 enables us to check pointwise whether or not the given initial data lies in the subcritical region. Not only this, we can in fact construct the subcritical region piece by piece. Our result enables us to split the \(\mathbb{R}^{5}\) space of \((r,u_{0},\phi_{0r},u_{0r},\rho_{0})\) into a \(3D\) space and a plane, that is, \((r,u_{0},\phi_{0r})\) (say \(P_{1}\)) and \((u_{0r},\rho_{0})\) (say \(P_{2}\)). For each point in \(P_{1}\), we can construct the subcritical region on \(P_{2}\) through \(A_{0}\). Let \((\alpha,x,y):=(r,u_{0},\phi_{0r})\) be any point in \(P_{1}\). Note that \(y_{m,N},y_{M,N}\) can now be calculated using \(\alpha,x,y\), and hence one can explicitly find the interval given in Definition 2.2. For any \(a\) in that interval, (2.1) defines a line in \(P_{2}(z,\omega)\). In other words, by fixing an \(a\), we have a linear relation between \((u_{0r},\rho_{0})\). For this value of \(a\), we can find the two constants \(\eta_{1}(0),\eta_{2}(0)\); indeed, for fixed \(\alpha,x,y,a\), the functions \(\eta_{i}\) are known. Finally, (2.2) gives the desired portion of the line on \(P_{2}\) that forms part of the subcritical region, according to whether \(x\) is zero or not. We can do this for all \(a\) in the above interval and obtain the complete part of the subcritical region for the fixed point \((\alpha,x,y)\) in \(P_{1}\). Carrying out this procedure for each point in \(P_{1}\) gives the entire subcritical region.
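As a sketch of this procedure for \(N\geq 3\), the fragment below computes \(R_{N}\), the roots \(y_{m,N}<y_{M,N}\), and the admissible interval of \(a\) for one point of \(P_{1}\). The parameter values are placeholders, and the root brackets assume the generic case in which \(R_{N}\) exceeds the minimum of the level-set function.

```python
# For a point (alpha, x, y) in P1: compute R_N, the two roots of the
# level-set equation, and the interval of a from Definition 2.2.
import numpy as np
from scipy.optimize import brentq

def admissible_a_interval(alpha, x, y, k=1.0, c=1.0, N=4):
    q0 = x / alpha
    s0t = c / N - y / alpha               # \tilde{s}_0, required > 0
    R = s0t**(-2 / N) * (q0**2 + k * c / N + 2 * k * s0t / (N - 2))
    g = lambda Y: (2 * k / (N - 2)) * Y**(1 - 2 / N) \
                  + (k * c / N) * Y**(-2 / N) - R
    # g has a unique minimum at Y = c/N (cf. Lemma 4.4): bracket each root.
    y_m = brentq(g, 1e-9, c / N)
    y_M = brentq(g, c / N, 1e9)
    lo = k / (N - 2) * ((y_m / s0t)**((N - 2) / N) - 1)
    hi = k / (N - 2) * ((y_M / s0t)**((N - 2) / N) - 1)
    return lo, hi

print(admissible_a_interval(alpha=1.0, x=0.3, y=-0.2))
```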
The next two results give a complete picture of the zero background case with dimension greater than or equal to three.
**Theorem 2.4** (Sufficient condition for blow-up for \(N\geq 3\)).: _Suppose \(c=0\) and \(N\geq 3\) in (1.2). If there exists an \(r_{c}>0\) such that,_
\[A_{0}(r_{c})\leq-\frac{k}{N-2},\]
_then there is finite-time-breakdown._
_Remark 2.5_.: This result gives an easy criterion for checking breakdown. Moreover, it shows the criticality of \(N=2\). For \(N\geq 3\), there is a flat cutoff that bounds the subcritical region from one side. However, this flat cutoff is absent for \(N=2\), where \(A_{0}(r)\) can take any negative value and still possibly lie in the subcritical region.
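For illustration, the criterion of Theorem 2.4 can be evaluated numerically from sampled radial data, recovering \(\phi_{0r}\) from \(\rho_{0}\) through (3.5) (with \(c=0\) and zero limit at the origin). The profiles below are hypothetical, and the small contribution of the integral below the first grid point is neglected.

```python
# Numerical check of the blow-up criterion A_0(r) <= -k/(N-2) on
# sampled radial data (c = 0): phi_0r is recovered from rho_0 via
# -r^{N-1} phi_0r = int_0^r rho_0(y) y^{N-1} dy, then A_0 is formed.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def breakdown_detected(r, u0, u0r, rho0, k=1.0, N=3):
    integrand = rho0 * r**(N - 1)
    phi0r = -cumulative_trapezoid(integrand, r, initial=0.0) / r**(N - 1)
    A0 = (u0 * u0r + k * phi0r) / (r * rho0)
    return np.any(A0 <= -k / (N - 2))

r = np.linspace(0.01, 10.0, 1000)
u0 = -0.5 * r * np.exp(-r)              # inward-pointing initial velocity
u0r = -0.5 * (1.0 - r) * np.exp(-r)
rho0 = np.exp(-r**2)
print(breakdown_detected(r, u0, u0r, rho0))
```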
In the next theorem, we give a full picture of what happens for initial data which satisfies \(A_{0}(r)>-\frac{k}{N-2}\) for all \(r>0\).
**Theorem 2.6** (Global solution for \(N\geq 3\) with zero background).: _Suppose \(c=0\) and \(N\geq 3\) in (1.2). If for all \(r>0\), the set of points_
\[(r,u_{0}(r),\phi_{0r}(r),u_{0r}(r),\rho_{0}(r))\in\Sigma_{N}\cup\left\{(\alpha,x,y,z,0):x>0,z\geq-\frac{ky}{x}\right\},\]
_where \(\Sigma_{N}\subseteq\left\{(\alpha,x,y,z,\omega):y<0,\omega>0,\ \frac{xz+ky}{\alpha\omega}>-\frac{k}{N-2}\right\}\) is as in Definition 2.7, then there is global solution._
_Moreover, if there exists an \(r_{c}>0\) such that_
\[(r_{c},u_{0}(r_{c}),\phi_{0r}(r_{c}),u_{0r}(r_{c}),\rho_{0}(r_{c}))\notin \Sigma_{N}\cup\left\{(\alpha,x,y,z,0):x>0,z\geq-\frac{ky}{x}\right\},\]
_then there is finite-time-breakdown._
The quantity \(a\) in the Definitions 2.7, 2.9 below is the same as in (2.1).
**Definition 2.7**.: A point \((\alpha,x,y,z,\omega)\in\Sigma_{N}\) (for \(N\geq 3\)) if and only if one of the following holds,
* For \(a\in\left(-\frac{k}{N-2},0\right)\), (2.4) \[\begin{split} z&>-\frac{ky}{x}+\frac{\alpha a}{x \eta_{1}(0)},\quad\text{for }x\neq 0,\\ z&>\frac{ky}{\alpha a}\frac{d\eta_{1}}{dt}(0), \quad\text{for }x=0.\end{split}\]
* For \(a=0\), (2.5) \[\begin{split} z&=-\frac{ky}{x},\;\omega>0, \quad\text{for }x>0.\end{split}\]
* For \(a\in\left(0,\frac{k}{N-2}\left(\left(\frac{-\alpha y_{N}}{y}\right)^{1-\frac {2}{N}}-1\right)\right)\), (2.6) \[\begin{split}-\frac{ky}{x}+\frac{\alpha a}{x\eta_{1}(0)}<z<-\frac{ ky}{x}+\frac{\alpha a}{x\eta_{2}(0)},\quad\text{for }x<0,\\ \omega>0,\quad\text{for }x>0.\end{split}\]
* For \(a\in\left[\frac{k}{N-2}\left(\left(\frac{-\alpha y_{N}}{y}\right)^{1-\frac {2}{N}}-1\right),\infty\right)\), (2.7) \[\omega>0,\quad\text{for }x>0.\]
Here, \(0<-\frac{y}{\alpha}<y_{N}=y_{N}\left(\frac{u_{0}(\alpha)}{\alpha},\frac{\phi_{0r}(\alpha)}{\alpha}\right)=\left(\frac{(N-2)}{2k}\right)^{\frac{N}{N-2}}\left(R_{N}\left(\frac{u_{0}(\alpha)}{\alpha},\frac{-\phi_{0r}(\alpha)}{\alpha}\right)\right)^{\frac{N}{N-2}}\). The explicit expression for \(R_{N}(\cdot,\cdot)\) is in (4.5) (with \(c=0\)). The functions
\[\eta_{i}=\eta_{i}\left(t;\frac{u_{0}(\alpha)}{\alpha},\frac{\phi_{0r}(\alpha) }{\alpha},A_{0}(\alpha)\right),\quad i=1,2,\]
are defined later via linear IVPs.
**Theorem 2.8** (Global solution for \(N=2\) with zero background).: _Suppose \(c=0\) and \(N=2\) in (1.2). If for all \(r>0\), the set of points_
\[(r,u_{0}(r),\phi_{0r}(r),u_{0r}(r),\rho_{0}(r))\in\Sigma_{2}\cup\left\{( \alpha,x,y,z,0):x>0,z\geq-\frac{ky}{x}\right\},\]
_where \(\Sigma_{2}\subseteq\left\{\left(\alpha,x,y,z,\omega\right):y<0,\omega>0\right\}\) is as in Definition 2.9, then there is global solution._
_Moreover, if there exists an \(r_{c}>0\) such that_
\[(r_{c},u_{0}(r_{c}),\phi_{0r}(r_{c}),u_{0r}(r_{c}),\rho_{0}(r_{c}))\notin \Sigma_{2}\cup\left\{(\alpha,x,y,z,0):x>0,z\geq-\frac{ky}{x}\right\},\]
_then there is finite time breakdown._
**Definition 2.9**.: A point \((\alpha,x,y,z,\omega)\in\Sigma_{2}\) if and only if one of the following holds,
* For \(a\in(-\infty,0)\), (2.8) \[\begin{split} z&>-\frac{ky}{x}+\frac{\alpha a}{x\eta_ {1}(0)},\quad\text{for }x\neq 0,\\ z&>\frac{ky}{\alpha a}\frac{d\eta_{1}(0)}{dt},\quad \text{for }x=0.\end{split}\]
* For \(a=0\), (2.9) \[\begin{split} z&=-\frac{ky}{x},\ \omega>0,\quad \text{for }x>0.\end{split}\]
* For \(a\in\left(0,\frac{k}{2}\ln\left(\frac{-\alpha y_{2}}{y}\right)\right)\), (2.10) \[\begin{split}&-\frac{ky}{x}+\frac{\alpha a}{x\eta_{1}(0)}<z<- \frac{ky}{x}+\frac{\alpha a}{x\eta_{2}(0)},\quad\text{for }x<0,\\ &\omega>0,\quad\text{for }x>0.\end{split}\]
* For \(a\in\left[\frac{k}{2}\ln\left(\frac{-\alpha y_{2}}{y}\right),\infty\right)\), (2.11) \[\omega>0,\quad\text{for }x>0.\]
Here, \(0<-\frac{y}{\alpha}<y_{2}=y_{2}\left(\frac{u_{0}(\alpha)}{\alpha},\frac{\phi_{0r}(\alpha)}{\alpha}\right)=e^{\frac{R_{2}\left(\frac{u_{0}(\alpha)}{\alpha},\frac{-\phi_{0r}(\alpha)}{\alpha}\right)}{k}}\). The explicit expression for \(R_{2}(\cdot,\cdot)\) is in (4.5) (with \(c=0\)).
This paper is arranged as follows. Section 3 contains some preliminary calculations showing how to reduce the full dynamics to a weakly coupled system of four ODEs along characteristics. Section 4 is devoted to the analysis of the two of the four ODEs that are decoupled from the other two, obtaining a closed form of the trajectory curves, with related properties established for later use. Section 5 details conditions under which the velocity gradient/density blow up. Section 6 is devoted to the construction of the precise subcritical region leading to global solutions, proving Theorem 2.1. The zero-background case is analysed in Section 7. Finally, concluding remarks are given in Section 8.
## 3. Preliminary calculations
From Theorem 1.1, we note that to extend the local solutions, we need to obtain bounds on the gradient of the velocity. The following result, derived by the authors in [25], identifies the quantities that need to be bounded for (1.2).
**Lemma 3.1**.: _Suppose \(f(\mathbf{x})=g(r)\) be a radially symmetric function. Then for \(\mathbf{x}\neq 0\), \(\nabla^{2}f\) has exactly two eigenvalues: \(g_{rr}\) and \(g_{r}/r\). Also, \(x\) is an eigenvector corresponding to \(g_{rr}\) and \(x^{\perp}\) is the eigenspace for the other eigenvalue._
Though the proof has already been given by the authors in [25], we include it here for the sake of completeness.
Proof.: Let \(r=|x|\). Taking gradient of \(f\), we have
\[\nabla f=g_{r}\frac{x}{r}.\]
Taking gradient again,
\[\nabla^{2}f=\left(g_{rr}-\frac{g_{r}}{r}\right)\frac{x\otimes x}{r^{2}}+\frac{g_ {r}}{r}\mathbb{I}.\]
Note that \((\nabla^{2}f)x=g_{rr}x\) and \((\nabla^{2}f)v=\frac{g_{r}}{r}v\) for any \(v\in x^{\perp}\). This completes the proof of the Lemma.
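The eigenvalue structure in Lemma 3.1 is easy to sanity-check numerically; the sketch below uses a central-difference Hessian of the hypothetical radial function \(f(\mathbf{x})=e^{-|\mathbf{x}|^{2}}\).

```python
# Numerical check of Lemma 3.1: for f(x) = exp(-|x|^2) the Hessian
# eigenvalues at x != 0 should be g_rr (once) and g_r/r (twice).
import numpy as np

def hessian(f, x, h=1e-5):
    n = len(x)
    H = np.zeros((n, n))
    I = np.eye(n) * h
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(x + I[i] + I[j]) - f(x + I[i] - I[j])
                       - f(x - I[i] + I[j]) + f(x - I[i] - I[j])) / (4 * h**2)
    return H

f = lambda x: np.exp(-np.dot(x, x))
x = np.array([0.3, -0.7, 0.5])
r = np.linalg.norm(x)
g_r = -2 * r * np.exp(-r**2)                  # g'(r)
g_rr = (4 * r**2 - 2) * np.exp(-r**2)         # g''(r)
print(np.sort(np.linalg.eigvalsh(hessian(f, x))))  # numerical spectrum
print(np.sort([g_rr, g_r / r, g_r / r]))           # Lemma 3.1 prediction
```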
Through this Lemma, we conclude that the Hessians of radial functions are diagonalizable using the same matrix of eigenvectors. From Theorem 1.1, we know that for solutions to persist for all time, the gradient matrix of velocity should be bounded. Given the velocity is radial vector field, we have the following relation for \(\mathbf{u}\), \(u\) in (1.1b), (1.2b) respectively,
\[\mathbf{u}=u\frac{\mathbf{x}}{r}.\]
Hence, the two eigenvalues of \(\nabla\mathbf{u}\) will be \(u_{r}\) and \(u/r\). Therefore, to ensure that the gradient of velocity is bounded, we need to control the two quantities: \(u_{r},u/r\).
Inspired by this, we will obtain ODE along the characteristic path,
\[\left\{(t,X(t)):\frac{dX}{dt}=u(t,X),\ X(0)=\beta\right\}, \tag{3.1}\]
for the desired quantities. Rearranging (1.2a), we obtain,
\[\rho_{t}+(\rho u)_{r}=-(N-1)\rho\frac{u}{r}. \tag{3.2}\]
Note that by (1.2c),
\[-\phi_{rr}=(N-1)\frac{\phi_{r}}{r}+\rho-c.\]
Taking spatial derivative of (1.2b) and using the above equation we obtain,
\[u_{rt}+uu_{rr}+u_{r}^{2} =-k\phi_{rr} \tag{3.3}\] \[=k(N-1)\frac{\phi_{r}}{r}+k(\rho-c).\]
Next using (1.2b),
\[\left(\frac{u}{r}\right)_{t}+u\left(\frac{u}{r}\right)_{r} =\frac{u_{t}+uu_{r}}{r}-\frac{u^{2}}{r^{2}} \tag{3.4}\] \[=-k\frac{\phi_{r}}{r}-\frac{u^{2}}{r^{2}}.\]
Upon integrating (1.2c) from \(0\) to \(r\),
\[-r^{N-1}\phi_{r}+\lim_{r\to 0^{+}}r^{N-1}\phi_{r}=\int_{0}^{r}(\rho-c)y^{N-1} \,dy. \tag{3.5}\]
Local well-posedness requires a boundary condition at the origin. The convention is to assume zero boundary conditions, that is, the above limit is zero. Similarly the density
flux (\(\rho ur^{N-1}\)) approaches zero, which signifies that there is no loss of material at the origin. Taking time derivative of (3.5),
\[-r^{N-1}\phi_{rt} =\int_{0}^{r}\rho_{t}y^{N-1}\,dy\] \[=-\int_{0}^{r}(\rho uy^{N-1})_{y}\,dy\qquad\text{from (3.2)}\] \[=-\rho ur^{N-1}+\lim_{r\to 0^{+}}\rho ur^{N-1}\] \[=-\rho ur^{N-1}.\]
As a consequence, we get \(\phi_{rt}=\rho u\). We use this and the previously obtained expression for \(\phi_{rr}\) in the below calculation.
\[\left(\frac{\phi_{r}}{r}\right)_{t}+u\left(\frac{\phi_{r}}{r} \right)_{r} =\frac{\phi_{rt}}{r}+u\frac{\phi_{rr}}{r}-\frac{u}{r}\frac{\phi_{r }}{r}\] \[=\frac{\rho u}{r}-(N-1)\frac{u}{r}\frac{\phi_{r}}{r}-(\rho-c) \frac{u}{r}-\frac{u}{r}\frac{\phi_{r}}{r} \tag{3.6}\] \[=-N\frac{u}{r}\frac{\phi_{r}}{r}+c\frac{u}{r}.\]
Set
\[p:=u_{r},\quad q:=\frac{u}{r},\quad s:=-\frac{\phi_{r}}{r}. \tag{3.7}\]
Equations (3.2), (3.3), (3.4) and (3.6) can be used to obtain an ODE system,
\[\rho^{\prime} =-(N-1)\rho q-p\rho, \tag{3.8a}\] \[p^{\prime} =-p^{2}-k(N-1)s+k(\rho-c), \tag{3.8b}\] \[q^{\prime} =ks-q^{2}, \tag{3.8c}\] \[s^{\prime} =-q(c+Ns), \tag{3.8d}\]
with initial data \(\rho_{0}:=\rho(0,\beta),p_{0}:=p(0,\beta),q_{0}:=q(0,\beta),s_{0}:=s(0,\beta)\) respectively. Here, \({}^{\prime}\) indicates differentiation along the path (3.1). Note that we use the notation \(\rho\) for the density as in (1.2a), which is a function of time and the spatial variable, as well as for the solution to the ODE (3.8a), which is another function of time only (for a fixed parameter \(\beta\)). Similarly for the notation \(\rho_{0}\). However, it will be clear from context whether we are referring to \(\rho\) as the solution of the main PDE system (1.2) or as the unknown in (3.8a).
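For concreteness, the ODE system (3.8) along a single characteristic can be integrated numerically as in the sketch below; the parameters and initial data are arbitrary illustrative choices, and density escaping a large window is used as a crude breakdown indicator.

```python
# Numerical integration of the characteristic system (3.8): solve for
# (rho, p, q, s) along one path and stop if rho exceeds a large cap,
# signalling an imminent loss of smoothness.
import numpy as np
from scipy.integrate import solve_ivp

k, c, N = 1.0, 1.0, 4

def rhs(t, Y):
    rho, p, q, s = Y
    return [-(N - 1) * rho * q - p * rho,
            -p**2 - k * (N - 1) * s + k * (rho - c),
            k * s - q**2,
            -q * (c + N * s)]

blowup = lambda t, Y: Y[0] - 1e6     # density blow-up indicator
blowup.terminal = True

Y0 = [1.2, -0.3, 0.1, 0.05]          # (rho0, p0, q0, s0), with s0 > -c/N
sol = solve_ivp(rhs, [0.0, 50.0], Y0, rtol=1e-9, atol=1e-12, events=blowup)
print("blow-up detected" if sol.t_events[0].size else "smooth on [0, 50]")
```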
## 4. The \(q-s\) system
Note that (3.8c), (3.8d) are decoupled from (3.8a), (3.8b). In fact, a closed form for the trajectory curve for equations (3.8c), (3.8d) can be obtained. We have the following Lemma which describes the behavior of the system (3.8c),(3.8d).
**Lemma 4.1**.: _Global solution to (3.8c), (3.8d):_
1. \(s,q\) _exist for all time and are uniformly bounded if and only if_ \(s_{0}>-\frac{c}{N}\)_. In particular, if_ \(s_{0}\leq-\frac{c}{N}\)_, then_ \(q\to-\infty\) _in finite time._
2. _When_ \(s_{0}>-\frac{c}{N}\)_, the solutions lie on bounded trajectory curves satisfying,_ (4.1) \[\frac{1}{\left(s+\frac{c}{2}\right)}\left(q^{2}+\frac{kc}{2}+k \left(s+\frac{c}{2}\right)\ln\left(s+\frac{c}{2}\right)\right)=\text{constant}, \qquad N=2,\] (4.2) \[\left(s+\frac{c}{N}\right)^{-\frac{2}{N}}\left(q^{2}+\frac{kc}{N }+\frac{2k\left(s+\frac{c}{N}\right)}{N-2}\right)=\text{constant},\qquad N>2.\]
3. _When_ \(s_{0}>-\frac{c}{N}\)_, the solutions are periodic and the trajectories rotate clockwise on the_ \((q,s)\) _plane as time progresses._
We will time and again use the variable,
\[\tilde{s}:=s+\frac{c}{N},\quad\tilde{s}_{0}:=s_{0}+\frac{c}{N}, \tag{4.3}\]
and (4.4b) instead of \(s\) and (3.8d) as the situation demands since this will make calculations simpler and easy to understand. The equivalent \(q,\tilde{s}\) system is,
\[q^{\prime} =k\tilde{s}-\frac{kc}{N}-q^{2}, \tag{4.4a}\] \[\tilde{s}^{\prime} =-qN\tilde{s}. \tag{4.4b}\]
Proof of Lemma 4.1:.: It seems intuitive to use \(\tilde{s}\) instead of \(s\). We first prove the only if part of the first assertion through a contradiction argument. From (4.4b), one can see that \(\tilde{s}\) maintains sign as long as \(q\) exists. Now suppose \(\tilde{s}_{0}\leq 0\). This implies \(\tilde{s}\leq 0\) as long as \(q\) exists. For the sake of contradiction, suppose \(q\) remains bounded for all time. From (4.4a) we have,
\[q^{\prime}\leq-\frac{kc}{N}-q^{2}\leq\min\left\{-\frac{kc}{N},-q^{2}\right\}.\]
Since \(q^{\prime}\) is bounded above by a strictly negative constant, there exists some finite time, \(t_{-}\geq 0\), such that \(q(t_{-})<0\). Once \(q\) is negative, the dynamics of \(q\) is that of a Riccati equation,
\[q^{\prime}<-q^{2},\qquad q(t_{-})<0,\]
and therefore, \(\lim_{t\to t_{c}^{-}}q=-\infty\) for some \(t_{c}<t_{-}-1/q(t_{-})\). This is a contradiction.
Now we prove the if part of the first assertion along with the second assertion. We derive the equation of trajectory assuming \(\tilde{s}_{0}>0\), which in turn implies \(\tilde{s}>0\) for all \(t>0\) as long as the solutions to (4.4) exist. Dividing (4.4a) by (4.4b) we have the following,
\[q\frac{dq}{d\tilde{s}}=-\frac{k}{N}+\frac{kc}{N^{2}\tilde{s}}+ \frac{q^{2}}{N\tilde{s}},\] \[\frac{d(q^{2})}{d\tilde{s}}-\frac{2q^{2}}{N\tilde{s}}=\frac{2k}{N ^{2}}\left(\frac{c}{\tilde{s}}-N\right).\]
Using the integrating factor \(\tilde{s}^{-\frac{2}{N}}\),
\[\frac{d}{d\tilde{s}}\left(q^{2}\tilde{s}^{-\frac{2}{N}}\right)=\frac{2k}{N^{2} }\left(c\tilde{s}^{-1-\frac{2}{N}}-N\tilde{s}^{-\frac{2}{N}}\right).\]
Owing to the last term, we see that \(N=2\) case has to be handled separately before integrating the above equation. First, we move on with \(N>2\) case. Integrating the
equation above, we obtain
\[R_{N}(q,\tilde{s}):=\tilde{s}^{-2/N}\left(q^{2}+\frac{2k\tilde{s}}{N-2}+\frac{kc }{N}\right)=\text{constant}.\]
For \(N=2\), upon integration,
\[R_{2}(q,\tilde{s}):=\frac{q^{2}}{\tilde{s}}+k\ln(\tilde{s})+\frac{kc}{2\tilde{ s}}=\text{constant}.\]
From the trajectory equation for \(N>2\), we have for \(\tilde{s}_{0}>0\),
\[\tilde{s}<\left(\frac{R_{N}(N-2)}{2k}\right)^{\frac{N}{N-2}},\] \[|q|<\tilde{s}^{1/N}\sqrt{R_{N}}<R_{N}^{\frac{N}{2N-4}}\left(\frac{N-2}{2k}\right)^{\frac{1}{N-2}}.\]
Hence, solutions are uniformly bounded. Very similarly, bounds on \(q,\tilde{s}\) can be derived for \(N=2\) as well. This completes the proof to the first and second assertions.
Note that the trajectory equation is invariant under the transformation \(q\to-q\). Putting this together with the fact that the linearized system around the only critical point, \((0,c/N)\), has imaginary eigenvalues, we obtain that the solution trajectories starting at any \(\tilde{s}_{0}>0\) are closed curves around the critical point. Hence, the solutions are periodic. The direction of motion of the trajectories as time progresses is clear from (4.4b), since \(\tilde{s}^{\prime}<0\) for \(q>0\) and \(\tilde{s}^{\prime}>0\) for \(q<0\).
Figure 1. \(q-s\) phase diagram for \(k=c=1,N=4\). Trajectories move clockwise with increasing time.
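The invariance of \(R_{N}\) along trajectories is simple to verify numerically with the parameters of Figure 1; the following is a minimal sketch.

```python
# Check that R_N from (4.5) is conserved along solutions of
# (3.8c)-(3.8d), with the parameters of Figure 1: k = c = 1, N = 4.
import numpy as np
from scipy.integrate import solve_ivp

k, c, N = 1.0, 1.0, 4
rhs = lambda t, Y: [k * Y[1] - Y[0]**2, -Y[0] * (c + N * Y[1])]

def R(q, s):                          # R_N(q, s + c/N), case N >= 3
    st = s + c / N
    return st**(-2 / N) * (q**2 + k * c / N + 2 * k * st / (N - 2))

sol = solve_ivp(rhs, [0.0, 20.0], [0.5, 0.3], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0.0, 20.0, 400)
q, s = sol.sol(t)
print(np.ptp(R(q, s)))                # tiny spread: R_N is constant
```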
**Corollary 4.2**.: \(q,s\) _in system (3.8c), (3.8d) exist for all time. In particular, assertions \(2\) and \(3\) of Lemma 4.1 apply._
Proof.: Using the expression of \(s\) from (3.7) in (3.5) at \(t=0\), we have,
\[s_{0} =-\frac{\phi_{0r}(\beta)}{\beta}=\frac{1}{\beta^{N}}\int_{0}^{ \beta}(\rho_{0}(\xi)-c)\xi^{N-1}\,d\xi\] \[>-\frac{c}{N}.\]
Therefore, by Lemma 4.1, we conclude the result.
_Remark 4.3_.: This result simply asserts that in multidimension, Poisson forcing is enough to avoid concentrations at the origin, irrespective of how negative the initial velocity is. This is not the case in one dimension. A more detailed discussion comparing 1D and multi-D cases is included in Section 8 (conclusion).
At this point, we set up the notation for the trajectory curve as obtained in Lemma 4.1. It will be used in this as well as in later sections.
\[R_{N}(q,\tilde{s})=\left\{\begin{array}{cc}\tilde{s}^{-2/N}\left(q^{2}+ \frac{2k\tilde{s}}{N-2}+\frac{kc}{N}\right),&N>2,\\ \frac{q^{2}}{\tilde{s}}+k\ln(\tilde{s})+\frac{kc}{2\tilde{s}},&N=2.\end{array}\right. \tag{4.5}\]
From Lemma 4.1,
\[R_{N}(q,\tilde{s})=R_{N}(q_{0},\tilde{s}_{0}).\]
Since \(R_{N}\) is a constant, we will use the notation \(R_{N}\) to denote it as a single constant or \(R_{N}(q,\tilde{s})\) for a function of \((q,\tilde{s})\). The next Lemma pertaining to (4.4) will be useful in proving the blowup/global existence results in Section 6.
**Lemma 4.4**.: _Consider the system (3.8c) and (3.8d). Denote the two coordinates at which the trajectory curve intersects the \(s\)-axis by \((0,s_{min})\) and \((0,s_{max})\). Then_
\[0<-s_{min}<s_{max}.\]
_Moreover, the algebraic equations,_
\[R_{N}(0,y)=R_{N}(q_{0},\tilde{s}_{0}),\ y>0,\]
_have exactly two roots, which are \(s_{min}+\frac{c}{N}\) and \(s_{max}+\frac{c}{N}\)._
Proof.: We give a proof for \(N>2\) only. Very similar arguments apply for \(N=2\). Note that for \(\tilde{s}\), the statement is equivalent to saying,
\[0<\frac{c}{N}-\tilde{s}_{min}<\tilde{s}_{max}-\frac{c}{N}, \tag{4.6}\]
where \(\tilde{s}_{max}:=s_{max}+c/N\), \(\tilde{s}_{min}:=s_{min}+c/N\).
Since the solution trajectories, \((q,\tilde{s})\), are bounded periodic orbits around \((0,c/N)\), \((0,\tilde{s}_{min})\) and \((0,\tilde{s}_{max})\) lie on either side of \((0,c/N)\) implying that \(c/N-\tilde{s}_{min}>0\). We now analyze the following function,
\[g(\tilde{s}):=\frac{2k}{N-2}\tilde{s}^{1-\frac{2}{N}}+\frac{kc}{N}\tilde{s}^{ -\frac{2}{N}}.\]
The function \(g\) is essentially the left-hand-side of (4.2) with \(q=0\). The aim is to show that \(g(\tilde{s})=R_{N}\) has exactly two roots.
\(g\) goes to infinity as \(\tilde{s}\to 0\) or \(\tilde{s}\to\infty\). We also have,
\[\frac{dg}{d\tilde{s}}=\frac{2k}{N}\tilde{s}^{-\frac{2}{N}}-\frac{2kc}{N^{2}} \tilde{s}^{-1-\frac{2}{N}}.\]
Setting the derivative to zero, we find that the minimum is unique and is attained at \(\tilde{s}=\frac{c}{N}\). Hence, \(g:(0,\infty)\longrightarrow[g(c/N),\infty)\).
Given the structure of \(g\), we have that the algebraic equation,
\[g(\tilde{s})=R_{N},\]
has exactly two roots if the constant, \(R_{N}>g\left(\frac{c}{N}\right)\). These roots correspond to \(\tilde{s}_{min}\) and \(\tilde{s}_{max}\). Clearly, if \(\tilde{s}_{max}\geq\frac{2c}{N}\), the second inequality in (4.6) stands true since nonpositive reals are not in the domain of \(g\). In other words, we only need to prove the inequality for \(R_{N}<g\left(\frac{2c}{N}\right)\) or equivalently, when \(\tilde{s}_{max}<2c/N\). We achieve this by making use of the sign of the third derivative of \(g\),
\[\frac{d^{3}g}{d\tilde{s}^{3}}=\frac{4k(N+2)}{N^{3}}\left(\tilde{s}-\frac{c}{N}\left(N+1\right)\right)\tilde{s}^{-3-\frac{2}{N}}.\]
Since,
\[\frac{c}{N}(N+1)>\frac{2c}{N},\]
we have that for all \(\tilde{s}\in(0,2c/N)\), \(\frac{d^{3}g}{d\tilde{s}^{3}}<0\). Hence,
\[\frac{d^{2}g}{d\tilde{s}^{2}}(\tilde{s}_{1})>\frac{d^{2}g}{d\tilde{s}^{2}}(c/N )>\frac{d^{2}g}{d\tilde{s}^{2}}(\tilde{s}_{2}),\quad\tilde{s}_{1}\in\left(0, \frac{c}{N}\right),\ \tilde{s}_{2}\in\left(\frac{c}{N},\frac{2c}{N}\right).\]
Consequently, for any \(\delta<c/N\),
\[\int_{c/N-\delta}^{c/N}\frac{d^{2}g}{d\tilde{s}^{2}}(\tilde{s}_{1 })d\tilde{s}_{1}>\delta\frac{d^{2}g}{d\tilde{s}^{2}}(c/N)>\int_{c/N}^{c/N+ \delta}\frac{d^{2}g}{d\tilde{s}^{2}}(\tilde{s}_{2})d\tilde{s}_{2}\] \[-\frac{dg}{d\tilde{s}}(c/N-\delta)>\frac{dg}{d\tilde{s}}(c/N+ \delta).\]
The second inequality is a result of the fact that \(\frac{dg}{d\tilde{s}}(c/N)=0\). Integrating the obtained inequality with respect to \(\delta\) with zero as the lower limit,
\[g(c/N-\delta)>g(c/N+\delta).\]
Consequently, for the same shift from \(\tilde{s}=c/N\) the function attains a higher value on the left than on the right. Therefore, the two points, \(\tilde{s}_{min},\tilde{s}_{max}\), in the level set \(\{\tilde{s}:g(\tilde{s})=R_{N}\}\) will be such that
\[\frac{c}{N}-\tilde{s}_{min}<\tilde{s}_{max}-\frac{c}{N}.\]
This completes the proof.
We also include a short lemma to derive a different relationship between \(\tilde{s}\) and \(q\) which will be helpful in the later sections. To this end, we first define the following useful quantity,
\[\Gamma(t)=e^{-\int_{0}^{t}q(\tau)\,d\tau}. \tag{4.7}\]
**Lemma 4.5**.: _We have the following,_
\[\Gamma(t)=\left(\frac{\tilde{s}(t)}{\tilde{s}_{0}}\right)^{\frac{1}{N}}, \tag{4.8}\]
_for all time. In particular, \(\int_{0}^{T}q(t)dt=0\) and \(\Gamma(t)\) is uniformly bounded and periodic. Here, \(T\) is the period of the system (4.4) (equivalently (3.8c),(3.8d))._
Proof.: From Corollary 4.2, we have the all time existence of \(\tilde{s},q\) and strict positivity of \(\tilde{s}\). Note that \(\Gamma^{\prime}=-q\Gamma\). We can divide this by (4.4b) to obtain,
\[\frac{d\Gamma}{d\tilde{s}}=\frac{\Gamma}{N\tilde{s}}.\]
Upon integrating, we conclude (4.8). Uniform boundedness of \(e^{-\int_{0}^{t}q}\) follows immediately from the uniform boundedness of \(\tilde{s}\). Moreover, since \(\tilde{s}\) is periodic with period \(T\), we have that for any \(t\in\mathbb{R}\),
\[e^{-\int_{0}^{t}q}=e^{-\int_{0}^{t+T}q}.\]
Therefore,
\[0 =\int_{t}^{t+T}q\] \[=\int_{t}^{0}q+\int_{0}^{T}q+\int_{T}^{t+T}q\] \[=\int_{0}^{T}q.\]
To obtain the last equality, we used the fact that \(q\) has period \(T\).
## 5. Blow up of solutions
Now we move on to the analysis of (3.8a) and (3.8b). In this section, our aim is to derive conditions under which \(p,\rho\) blow up. From Corollary 4.2, we already know that \(q,s\) are uniformly bounded and periodic. Hence, the only quantities in (3.8) that can blow up are \(p\) or \(\rho\).
If \(\rho_{0}=0\), then from (3.8a), \(\rho\equiv 0\) as long as \(\rho,p\) exist. From (3.8b),
\[p^{\prime}\leq-p^{2}-kc,\]
leading to Riccati-type blow-up of \(p\). Consequently, zero density always leads to blowup. Hence, further onwards we will assume \(\rho_{0}>0\). This in turn implies \(\rho(t)>0\) for \(t>0\) as long as \(\rho,p\) exist. With this, we move on to the simplification of the system (3.8a), (3.8b). This simplification is along the lines of [28]. From (3.8a), (3.8b), we obtain that,
\[\left(\frac{1}{\rho}\right)^{\prime} =q(N-1)\frac{1}{\rho}+\frac{p}{\rho},\] \[\left(\frac{p}{\rho}\right)^{\prime} =-k(c+s(N-1))\frac{1}{\rho}+q(N-1)\frac{p}{\rho}+k.\]
Noticing that the coefficients of \(1/\rho,p/\rho\) in their respective ODEs are the same, we can multiply the above ODEs by integrating factor \(e^{-(N-1)\int_{0}^{t}q}\) to obtain,
\[\left(\frac{1}{\rho}e^{-(N-1)\int_{0}^{t}q}\right)^{\prime} =\frac{p}{\rho}e^{-(N-1)\int_{0}^{t}q},\] \[\left(\frac{p}{\rho}e^{-(N-1)\int_{0}^{t}q}\right)^{\prime} =-k(c+s(N-1))\frac{1}{\rho}e^{-(N-1)\int_{0}^{t}q}+ke^{-(N-1)\int_ {0}^{t}q}.\]
Setting
\[\eta:=\frac{1}{\rho}\Gamma^{N-1},\qquad w=\frac{p}{\rho}\Gamma^{N-1}, \tag{5.1}\]
we obtain a new system,
\[\eta^{\prime}=w, \tag{5.2a}\] \[w^{\prime}=-k\eta(c+s(N-1))+k\Gamma^{N-1}. \tag{5.2b}\]
We label the initial data as \(\eta_{0},w_{0}\).
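As a numerical sanity check on the substitution (5.1) — purely illustrative, with assumed parameter values, and with the form of (3.8a)-(3.8b) reconstructed from the identities displayed above (an assumption of this sketch) — one can integrate (3.8) and (5.2) side by side and confirm that \(\eta=\Gamma^{N-1}/\rho\) up to integration error. The integration is stopped early if \(\rho\) grows large, since blow-up is possible.

```python
# Illustrative consistency check of (5.1): eta from (5.2) should coincide
# with Gamma^(N-1)/rho from (3.8).
import numpy as np
from scipy.integrate import solve_ivp

k, c, N = 1.0, 1.0, 4
q0, s0, p0, rho0 = 0.1, -0.1, 0.2, 1.5   # assumed data with rho0 > 0

def full(t, y):                          # (3.8) augmented with int_0^t q
    rho, p, q, s, intq = y
    return [-(N - 1) * rho * q - p * rho,
            -p**2 - k * (c + (N - 1) * s) + k * rho,
            k * s - q**2,
            -q * (c + N * s),
            q]

def reduced(t, y):                       # (5.2); s and Gamma come from 'full'
    rho, p, q, s, intq = sol_full.sol(t)
    Gamma = np.exp(-intq)
    return [y[1], -k * y[0] * (c + s * (N - 1)) + k * Gamma ** (N - 1)]

stop = lambda t, y: y[0] - 1e6           # stop early in case rho blows up
stop.terminal = True
sol_full = solve_ivp(full, (0.0, 10.0), [rho0, p0, q0, s0, 0.0], events=stop,
                     dense_output=True, rtol=1e-10, atol=1e-12)
T_end = sol_full.t[-1]
sol_red = solve_ivp(reduced, (0.0, T_end), [1 / rho0, p0 / rho0],
                    dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, T_end, 200)
rho, p, q, s, intq = sol_full.sol(t)
print("max |eta - Gamma^(N-1)/rho| =",
      np.max(np.abs(sol_red.sol(t)[0] - np.exp(-(N - 1) * intq) / rho)))
```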
_Remark 5.1_.: Owing to Corollary 4.2 and Lemma 4.5, the coefficients in the linear ODE system (5.2) are uniformly bounded. Hence, \(\eta,w\) remain bounded and well-defined for all \(t\in(-\infty,\infty)\). Consequently, the key to existence of global solution is to ensure \(\eta(t)>0\) for all \(t>0\). From (5.1), this means that \(p,\rho\) are both bounded for all \(t>0\). Conversely, if there is a finite time, \(t_{c}\), at which \(\eta\) becomes zero, then
\[\lim_{t\to t_{c}^{-}}\rho(t)=\lim_{t\to t_{c}^{-}}\frac{1}{\eta(t)}=\infty.\]
Moreover, since \(w\) is bounded for all times,
\[\lim_{t\to t_{c}^{-}}|p(t)|=\lim_{t\to t_{c}^{-}}\frac{|w(t)|}{\eta(t)}=\infty.\]
Therefore, at the time of breakdown, \(\rho,|p|\) blow up together.
The above remark results in the following key proposition.
**Proposition 5.2**.: _Suppose \(\rho_{0}\neq 0\). \(\rho,p,q,s\) in (3.8) are well-defined for all \(t>0\) if and only if \(\eta(t)>0\) for all \(t>0\) in (5.2). In particular, if there is a \(t_{c}>0\) such that \(\eta(t_{c})=0\), then \(\lim_{t\to t_{c}^{-}}|p(t)|=\lim_{t\to t_{c}^{-}}\rho(t)=\infty\)._
Next, we have one of the key contributions of this paper in the form of a nonlinear quantity. This quantity will be instrumental in analyzing the system (3.8a),(3.8b). Set,
\[A:=qw-k\eta s. \tag{5.3}\]
Using (3.8c),(3.8d) and (5.2),
\[A^{\prime} =q^{\prime}w+qw^{\prime}-k\eta^{\prime}s-k\eta s^{\prime}\] \[=(ks-q^{2})w+q\left(-k\eta(c+s(N-1))+k\Gamma^{N-1}\right)-kws+ kq\eta(c+Ns)\] \[=-q^{2}w+kq\eta s+kq\Gamma^{N-1}\] \[=-q(qw-k\eta s)+kq\Gamma^{N-1} \tag{5.4}\] \[=-qA+kq\Gamma^{N-1}.\]
We have,
\[A^{\prime}+qA=-\left(\frac{k}{N-1}\Gamma^{N-1}\right)^{\prime}.\]
We assume \(N>2\). As evident in the calculations below, the \(N=2\) case has to be handled separately and we will tackle it at the end of the section. Upon integration and setting \(A_{0}:=q_{0}w_{0}-k\eta_{0}s_{0}\),
\[A\Gamma^{-1}-A_{0} =-\frac{k}{N-1}\int_{0}^{t}(\Gamma(\tau))^{-1}\left(\Gamma^{N-1}(\tau)\right)^{\prime}d\tau\] \[=-\frac{k}{N-1}\left[\Gamma^{N-2}-1-\int_{0}^{t}q(\tau)(\Gamma(\tau))^{N-2}d\tau\right]\] \[=-\frac{k}{N-1}\left[\Gamma^{N-2}-1+\frac{1}{N-2}\left(\Gamma^{N-2}-1\right)\right]\] \[=\frac{k}{N-2}\left[1-\Gamma^{N-2}\right].\]
Finally,
\[A(\Gamma) =\left(A_{0}+\frac{k}{N-2}\right)\Gamma-\frac{k}{N-2}\Gamma^{N-1},\quad\Gamma(t):=e^{-\int_{0}^{t}q}>0,\] \[A(t) :=A(\Gamma(t)). \tag{5.5}\]
Here, we have abused notation by using each of the symbols \(A\) and \(\Gamma\) for two different objects: \(A(t)\) is a function of time, while \(A(\Gamma)\) is a function of \(\Gamma\) as in (5.5); similarly, \(\Gamma\) denotes both the argument of the function \(A(\Gamma)\) and the positive-valued function of time in (4.7). It will, however, always be clear from context which object we are referring to.
_Remark 5.3_.: It is important to note that \(A(t)\) is connected to \(\eta,w\) only through the initial data, \(\eta_{0},w_{0}\). In other words, (3.8c), (3.8d), (5.4) with initial data \(q_{0},s_{0},A_{0}\) constitute an IVP independent from the dynamics of (5.2). Moreover, \(A(t)\) can be explicitly solved for. Also, from Lemma 4.5, \(A(t)\) has the same period as \(q,s\).
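Since, by Remark 5.3, \((q,s,A)\) solve a closed system, the closed form (5.5) is easy to test numerically. The following sketch is illustrative (the data are those of Figure 5 below) and compares direct integration of (5.4) against (5.5).

```python
# Illustrative check that the closed form (5.5) for A(t) matches direct
# integration of A' = -q*A + k*q*Gamma^(N-1) from (5.4); here N > 2.
import numpy as np
from scipy.integrate import solve_ivp

k, c, N = 1.0, 1.0, 4
q0, s0, A0 = 0.1, -0.1, 0.15             # the data of Figure 5

def rhs(t, y):
    q, s, A, intq = y
    Gamma = np.exp(-intq)
    return [k * s - q**2, -q * (c + N * s),
            -q * A + k * q * Gamma ** (N - 1), q]

sol = solve_ivp(rhs, (0.0, 30.0), [q0, s0, A0, 0.0], rtol=1e-10, atol=1e-12)
q, s, A, intq = sol.y
Gamma = np.exp(-intq)
A_closed = (A0 + k / (N - 2)) * Gamma - (k / (N - 2)) * Gamma ** (N - 1)
print("max |A - A_closed| =", np.max(np.abs(A - A_closed)))
```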
Next, we state a sufficient condition for \(\eta\) taking zero value in finite time.
**Proposition 5.4**.: _If \(A(t)\) is nonnegative (or nonpositive) for all time, then there exists a time \(t_{c}>0\) such that \(\eta(t_{c})=0\)._
Proof.: Suppose \(A(t)\geq 0\) for all time. At the time \(t_{0}\) where \(s\) is maximum, from (3.8d) we have \(q(t_{0})=0\) and,
\[0\leq A(t_{0})=q(t_{0})w(t_{0})-k\eta(t_{0})s(t_{0})=-k\eta(t_{0})s(t_{0}).\]
From Lemma 4.4, \(s(t_{0})>0\). Hence, \(\eta(t_{0})\leq 0\). This gives us the existence of a time \(t_{c}>0\) with \(t_{c}\leq t_{0}\) such that \(\eta(t_{c})=0\).
The proof is similar for the \(A(t)\leq 0\) case.
_Remark 5.5_.: At the times where \(s\) achieves its extrema, \(\eta\) is a priori known and essentially depends only on \(A,q,s\). Remarkably, a simple computation gives \(\eta\) at these times. Indeed when \(s\) achieves max/min (say at \(t=t_{*}\)), \(q(t_{*})=0\). Hence, from (5.3), \(\eta(t_{*})=-\frac{A(t_{*})}{ks(t_{*})}\). Moreover, since \(A,s\) are periodic with same period \(T\), \(\eta(t_{*})=\eta(t_{*}+lT),l=0,1,2,\ldots\).
**Lemma 5.6**.: _Suppose \(N\geq 3\)._
* _If_ \(1+\frac{A_{0}(N-2)}{k}\leq 0\)_, then_ \(A(t)<0\) _for all_ \(t>0\)_._
* _If_ \(\int_{0}^{t}q\leq-\frac{1}{N-2}\ln\left(1+\frac{A_{0}(N-2)}{k}\right)\) _for all_ \(t>0\)_, then_ \(A(t)\leq 0\) _for all_ \(t>0\)_._
* _If_ \(\int_{0}^{t}q\geq-\frac{1}{N-2}\ln\left(1+\frac{A_{0}(N-2)}{k}\right)\) _for all_ \(t>0\)_, then_ \(A(t)\geq 0\) _for all_ \(t>0\)_._
Proof.: The first assertion follows from the fact that if \(1+\frac{A_{0}(N-2)}{k}\leq 0\), then from (5.5), \(A(t)<0\) for all time.
Note that if \(1+\frac{A_{0}(N-2)}{k}>0\), then it can be readily seen from (5.5) that \(A(\Gamma)\) has exactly two real roots, \(0\) and
\[\kappa:=\left(1+\frac{A_{0}(N-2)}{k}\right)^{\frac{1}{N-2}}>0. \tag{5.6}\]
Moreover, for nonnegative arguments \(\Gamma\) in (5.5), \(A(\Gamma)>0\) for \(\Gamma\in(0,\kappa)\) and \(A(\Gamma)<0\) for \(\Gamma>\kappa\).
Now suppose the hypothesis of the second assertion holds. Straightforward calculations then imply \(\Gamma(t)\geq\kappa\) for all \(t\). Therefore, \(A(\Gamma)\leq 0\) for all attainable values of \(\Gamma\). This proves the second assertion.
The third assertion is similar, only that here the hypothesis implies \(\Gamma(t)\leq\kappa\). Therefore, \(A(t)\geq 0\).
**Corollary 5.7**.: _Suppose \(N\geq 3\). If one of the following is true,_
* \(1+\frac{A_{0}(N-2)}{k}\leq 0\)_,_
* \(1+\frac{A_{0}(N-2)}{k}>0\) _and the two roots,_ \(y_{2}>y_{1}>0\)_, of the equation,_ \[\frac{2ky}{N-2}+\frac{kc}{N}-R_{N}y^{2/N}=0,\qquad(\text{equivalently }R_{N}(0,y)=R_{N}), \tag{5.7}\] _are such that,_ \[\kappa\notin\left(\left(\frac{y_{1}}{\tilde{s}_{0}}\right)^{1/N},\left(\frac{y_{2}}{\tilde{s}_{0}}\right)^{1/N}\right),\]
_then there is a time \(t_{c}>0\) such that \(\eta(t_{c})=0\). Here, \(R_{N}\) is the constant in (4.5) and \(\kappa\) is as in (5.6)._
_Remark 5.8_.: Note that all the conditions in the hypothesis depend only on the initial data, and will therefore lead to a characterization of the supercritical region for (1.2).
Proof.: If the first condition holds, then by the first assertion of Lemma 5.6, \(A(t)<0\) for all \(t>0\). Proposition 5.4 leads to the conclusion.
As mentioned, (5.7) is equivalent to \(R_{N}(0,y)=R_{N}(q_{0},\tilde{s}_{0})\). Hence, from Lemma 4.4, the two roots of (5.7) are the maximum and minimum attainable values of \(\tilde{s}\). Now suppose the hypothesis of the second assertion holds. There could be two situations:
1. \(\kappa\leq(y_{1}/\tilde{s}_{0})^{1/N}\leq(\tilde{s}(t)/\tilde{s}_{0})^{1/N}\) or,
2. \(\kappa\geq(y_{2}/\tilde{s}_{0})^{1/N}\geq(\tilde{s}(t)/\tilde{s}_{0})^{1/N}\),
for all \(t>0\). Suppose (1) holds. Using Lemma 4.5,
\[e^{-\int_{0}^{t}q}=\left(\frac{\tilde{s}(t)}{\tilde{s}_{0}}\right)^{\frac{1}{N}} \geq\kappa.\]
Hence, \(\int_{0}^{t}q\leq-\ln(\kappa)\) for all \(t>0\). Using the second assertion of Lemma 5.6, we obtain that \(A(t)\leq 0\) for all time. Then by Proposition 5.4, we conclude the result. Very similar arguments hold for (2) as well.
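The blow-up criterion of Corollary 5.7 can be illustrated numerically. In the sketch below — all values assumed for illustration, and with \(R_{N}\) reconstructed from (5.7), i.e. \(R_{N}=R_{N}(q_{0},\tilde{s}_{0})\), as an assumption of the sketch — we locate the roots \(y_{1},y_{2}\) of (5.7), choose \(A_{0}\) so that \(\kappa\) lies outside the stated interval, and observe \(\eta\) from (5.2) vanishing in finite time, in line with Propositions 5.2 and 5.4.

```python
# Illustrative test of Corollary 5.7: with kappa outside the interval built
# from the roots of (5.7), eta from (5.2) vanishes in finite time.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

k, c, N = 1.0, 1.0, 4
q0, s0 = 0.1, -0.1                                  # assumed data
st0 = s0 + c / N                                    # s~(0)
RN = (q0**2 + k * c / N) * st0 ** (-2 / N) \
     + (2 * k / (N - 2)) * st0 ** (1 - 2 / N)       # R_N(q0, s~0), from (5.7)

f = lambda y: 2 * k * y / (N - 2) + k * c / N - RN * y ** (2 / N)   # (5.7)
y1, y2 = brentq(f, 1e-8, c / N), brentq(f, c / N, 10.0)

A0 = 0.48                        # chosen so that kappa > (y2/s~0)^(1/N)
kappa = (1 + A0 * (N - 2) / k) ** (1 / (N - 2))     # (5.6)
assert not (y1 / st0) ** (1 / N) < kappa < (y2 / st0) ** (1 / N)

eta0 = 1.0
w0 = (A0 + k * eta0 * s0) / q0                      # consistent with (5.3)

def rhs(t, y):
    q, s, intq, eta, w = y
    Gamma = np.exp(-intq)
    return [k * s - q**2, -q * (c + N * s), q,
            w, -k * eta * (c + s * (N - 1)) + k * Gamma ** (N - 1)]

hit = lambda t, y: y[3]                             # event: eta = 0
hit.terminal = True
sol = solve_ivp(rhs, (0.0, 100.0), [q0, s0, 0.0, eta0, w0], events=hit,
                rtol=1e-10, atol=1e-12)
print("eta vanishes at t_c =", sol.t_events[0][0])
```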
We complete this section by presenting the \(N=2\) case. Upon integrating (5.4) with \(N=2\), we have,
\[\begin{split}& B(\Gamma):=(A_{0}-k\ln\Gamma)\Gamma,\\ & B(t):=B(\Gamma(t)),\quad\Gamma(t)=e^{-\int_{0}^{t}q}.\end{split} \tag{5.8}\]
For the initial data to \(B\), we use the same notation, \(A_{0}\). Proposition 5.4 holds as it is for \(B\) in place of \(A\). Analogous to Lemma 5.6, we have the following result.
**Lemma 5.9**.: _Suppose \(N=2\)._
* _If_ \(\int_{0}^{t}q\leq-\frac{A_{0}}{k}\) _for all_ \(t>0\)_, then_ \(B(t)\leq 0\) _for all_ \(t>0\)_._
* _If_ \(\int_{0}^{t}q\geq-\frac{A_{0}}{k}\) _for all_ \(t>0\)_, then_ \(B(t)\geq 0\) _for all_ \(t>0\)_._
The proof is very similar to that of Lemma 5.6 and fairly straightforward given the simplicity of (5.8).
_Remark 5.10_.: Interestingly, the result of Lemma 5.9 can be related to that of Lemma 5.6 by replacing,
\[-\frac{1}{N-2}\ln\left(1+\frac{A_{0}(N-2)}{k}\right),\]
with
\[-\lim_{\beta\to 2^{+}}\frac{1}{\beta-2}\ln\left(1+\frac{A_{0}(\beta-2)}{k}\right)=-\frac{A_{0}}{k},\]
which is precisely the threshold appearing in Lemma 5.9. This shows that \(N=2\) is indeed a critical case.
Finally, we have the result analogous to Corollary 5.7.
**Corollary 5.11**.: _Suppose \(N=2\). If the two roots, \(y_{2}>y_{1}>0\), of the equation,_
\[ky\ln(y)+\frac{kc}{2}-R_{2}y=0, \tag{5.9}\]
_are such that,_
\[e^{\frac{A_{0}}{k}}\notin\left(\sqrt{\frac{y_{1}}{\tilde{s}_{0}}},\sqrt{ \frac{y_{2}}{\tilde{s}_{0}}}\right),\]
_then there is a finite time \(t_{c}>0\) such that \(\eta(t_{c})=0\). Here, \(R_{2}\) is the constant right-hand-side in (4.5)._
Once again, the proof is very similar.
Before we begin the next section, we introduce some notation. \(t_{q},t_{s},t_{A}\) will denote times at which \(q,s,A\) vanish, respectively. Owing to their periodicity, there are infinitely many such times. In view of this, we will also use the notations \(t_{q}^{i},t_{s}^{i},t_{A}^{i}\), \(i=1,2,\ldots\), to refer to more than one such time when required.
## 6. Global Solution
Up to this point, we have narrowed down the class of possible initial data for (3.8) that could ensure \(\eta(t)>0\), or equivalently, \(\rho(t)<\infty\) for all \(t>0\). This section is devoted to precisely identifying the possible initial configurations, from the set remaining from Section 5, that lead to the all-time positivity of \(\eta\). Therefore, from this point onwards, we will assume the negation of the hypothesis in Corollary 5.7. Hence, for this section, we assume the following for \(N>2\),
\[1+\frac{A_{0}(N-2)}{k}>0,\qquad\kappa\in\left(\left(\frac{\tilde{s}_{min}}{ \tilde{s}_{0}}\right)^{1/N},\left(\frac{\tilde{s}_{max}}{\tilde{s}_{0}}\right) ^{1/N}\right), \tag{6.1}\]
where \(\kappa\) is as in (5.6) and \(\tilde{s}_{min},\tilde{s}_{max}\) are the minimum and maximum attainable values of \(\tilde{s}(t)\), which are also the roots of (5.7). For \(N=2\), we set \(\kappa=e^{\frac{A_{0}}{k}}\), with \(\tilde{s}_{min},\tilde{s}_{max}\) being the minimum and maximum attainable values of \(\tilde{s}(t)\), which are the roots of (5.9). The first assumption in (6.1) is vacuous for \(N=2\). Under these notational conventions, the analysis and results of this section for the \(N>2\) case are the same as those for \(N=2\) with \(A\) replaced by \(B\). Hence, in this section, we do not differentiate between the \(N>2\) and \(N=2\) cases, as there is no critical behaviour with respect to the system (5.2).
Owing to the first assumption in (6.1), \(A(\Gamma)\) has a unique positive root equal to \(\kappa\). From (4.8), the second assumption in (6.1) implies,
\[\kappa\in\{\Gamma(t):t>0\}^{\circ},\]
where \({}^{\circ}\) denotes the interior of the set. Since \(\kappa\) is the root of \(A(\Gamma)\), from (5.5) we conclude that \(A(t)\) changes sign. Recall that this is essentially the negation of the hypothesis of Proposition 5.4 and, therefore, \(A\) must necessarily change sign to hope for a situation wherein \(\eta(t)>0\) for all \(t>0\). See Figure 2 for a visualization of this situation.
From the expression for \(\Gamma(t)\) in (5.5), we see that the leftmost point on the green line in Figure 2 corresponds to the time when \(\tilde{s}\) attains its minimum and the rightmost point is when it attains the maximum. Using this fact in (4.8) and (5.5), we make an important conclusion that will be quite helpful throughout this section. For a \(t>0\),
\[\begin{split}\tilde{s}(t)=\tilde{s}_{max}\implies A(t)<0,\\ \tilde{s}(t)=\tilde{s}_{min}\implies A(t)>0.\end{split} \tag{6.2}\]
In most parts of this section, we will be analyzing the system (5.2). Our aim will be to obtain a necessary and sufficient condition to ensure \(\eta(t)>0\) for all \(t>0\). Recall from Remark 5.5 that at the times when \(s\) (or equivalently \(\tilde{s}\)) attains extrema, the values of \(\eta\) are a priori known. That is to say, if \(t_{q}\) is such that \(q(t_{q})=0\), then from (5.3),
\[\eta(t_{q})=-\frac{A(t_{q})}{ks(t_{q})}. \tag{6.3}\]
In particular, when \(s(t_{q})=s_{min}\), we have \(\eta(t_{q})=-\frac{A(t_{q})}{ks_{min}}\). From Lemma 4.4, we have \(s_{min}<0\). Using this in (6.2), we have \(A(t_{q})>0\). From (6.3), this implies \(\eta(t_{q})>0\). Likewise, at the time when \(s\) achieves its maximum, both \(s\) and \(A\) change sign (now \(s(t_{q})>0\) and, by (6.2), \(A(t_{q})<0\)), ensuring that \(\eta(t_{q})\) is again positive.
Keeping this in mind, we need to ensure that \(\eta\) remains strictly positive at other times apart from the ones where \(s\) attains its extreme values. To this end, we will construct two functions that form a cloud around \(\eta\). This will enable us to prove the positivity of \(\eta\).
We ensure this as follows. Suppose \(q_{0},s_{0},A_{0}\) are given. This fixes the functions \(q,s,A\), which are unknowns of a closed ODE system, whose properties have been analyzed. Fixing \(A_{0}\) also establishes a linear relation between \(\eta_{0},w_{0}\). Given these three functions, we will derive a condition for exactly one of either \(\eta_{0}\) or \(w_{0}\). Each of these conditions will ensure the positivity of \(\eta\). Once a condition on \(\eta_{0}\) (or \(w_{0}\)) is imposed, it automatically implies a condition on \(w_{0}\) (or \(\eta_{0}\)) through \(A_{0}\) as in (5.3). For example, if \(\delta_{1}<\eta_{0}<\delta_{2}\), then for given \(q_{0},s_{0},A_{0}\), and using (5.3) we have,
\[\delta_{1}<\frac{q_{0}w_{0}-A_{0}}{ks_{0}}<\delta_{2},\]
which gives the subsequent bounds on \(w_{0}\) as well. Therefore, it remains to find the appropriate conditions.
From Sections 4 and 5, we have that if the initial data \(q_{0},s_{0},A_{0}\) is given, then functions \(s,q,A\) are all known, periodic and uniformly bounded. Then (5.2) is an inhomogeneous, linear \(2\times 2\) system with uniformly bounded coefficients and, therefore, solutions \(\eta,w\) exist for all \(t\in(-\infty,\infty)\). We now state and prove a few lemmas.
**Lemma 6.1**.: _Consider the first order linear ODE,_
\[\tilde{\eta}^{\prime}=\frac{A+k\tilde{\eta}s}{q}. \tag{6.4}\]
_Let \(\mathbb{D}=\{t\in(-\infty,\infty):q(t)=0\}\). For all intervals \(I\) with \(I\subset\mathbb{D}^{c}\), suppose \(\tilde{\eta}_{1}\) and \(\tilde{\eta}_{2}\) satisfy (6.4) and \(\tilde{\eta}_{1}(t_{*})\neq\tilde{\eta}_{2}(t_{*})\) for some \(t_{*}\in\mathbb{D}^{c}\). Then the following statements hold,_
1. \(\tilde{\eta}_{1},\tilde{\eta}_{2}\) _satisfy_ (6.5) \[\tilde{\eta}^{\prime\prime}+k\tilde{\eta}(c+(N-1)s)=k\Gamma^{N-1},\]
_for all \(t\in(-\infty,\infty)\),_
2. \(\tilde{\eta}_{1}(t)=\tilde{\eta}_{2}(t)=-\frac{A(t)}{ks(t)}>0\) _for all_ \(t\in\mathbb{D}\)_,_
3. \(\tilde{\eta}_{1}(t)\neq\tilde{\eta}_{2}(t)\) _for all_ \(t\in\mathbb{D}^{c}\)_,_
4. \(\tilde{\eta}^{\prime}_{1}(t)\neq\tilde{\eta}^{\prime}_{2}(t)\) _for all_ \(t\in\mathbb{D}\)_. In particular,_ \(\tilde{\eta}_{1}-\tilde{\eta}_{2}\) _changes sign precisely at the points of_ \(\mathbb{D}\)_._
Proof.: Taking the derivative of (6.4) and using (3.8c), (3.8d), (5.4) results in (6.5), which is linear with bounded coefficients. Note that since \(\mathbb{D}\) is discrete, \(\tilde{\eta}_{1},\tilde{\eta}_{2}\) satisfy (6.5) for all \(t\) by continuity. Hence, the first statement holds.
Consequently, \(\tilde{\eta}_{1},\tilde{\eta}_{2}\) are well-defined for all times. Therefore, the limit in (6.4),
\[\lim_{t\to t_{q}}\tilde{\eta}^{\prime}_{i}(t),\quad t_{q}\in\mathbb{D},\ i=1,2,\]
must exist. Hence, \(A(t_{q})+k\tilde{\eta}_{i}(t_{q})s(t_{q})=0\) for any \(t_{q}\in\mathbb{D}\). The argument that \(\tilde{\eta}_{i}(t_{q})>0\) is the same as the one used in the paragraph following (6.3). This completes the proof of the second assertion.
We will prove the third assertion by contradiction. To that end, assume \(\tilde{\eta}_{1}(\tau_{*})=\tilde{\eta}_{2}(\tau_{*})\) for some \(\tau_{*}\in\mathbb{D}^{c}\). Note that \(\tilde{\eta}_{1},\tilde{\eta}_{2}\) satisfy (6.4) as well as (6.5). Consider the IVP (6.5) along with initial data \(\tilde{\eta}(\tau_{*})=\tilde{\eta}_{1}(\tau_{*}),\tilde{\eta}^{\prime}(\tau_ {*})=\frac{A(\tau_{*})+k\tilde{\eta}_{1}(\tau_{*})s(\tau_{*})}{q(\tau_{*})}\). This IVP has a unique solution. Consequently, \(\tilde{\eta}_{1}\equiv\tilde{\eta}_{2}\). However, this is a contradiction since \(\tilde{\eta}_{1}(t_{*})\neq\tilde{\eta}_{2}(t_{*})\). Hence, the assertion stands.
For \(t_{q}\in\mathbb{D}\), \(\tilde{\eta}^{\prime}_{1}(t_{q})\neq\tilde{\eta}^{\prime}_{2}(t_{q})\) follows by the second assertion and a uniqueness of ODE argument as above. Consequently, if \(\tilde{\eta}_{1}(t)-\tilde{\eta}_{2}(t)<0\) for \(t\in(t_{q}-\epsilon,t_{q})\) for \(\epsilon\) sufficiently small, then from the second and third assertions, we have \(\tilde{\eta}^{\prime}_{1}(t_{q})-\tilde{\eta}^{\prime}_{2}(t_{q})>0\). To see this,
\[-\tilde{\eta}_{1}(t)>-\tilde{\eta}_{2}(t),\quad t\in(t_{q}-\epsilon,t_{q}),\] \[\frac{\tilde{\eta}_{1}(t_{q})-\tilde{\eta}_{1}(t)}{t_{q}-t}>\frac{\tilde{\eta}_{2}(t_{q})-\tilde{\eta}_{2}(t)}{t_{q}-t},\quad\text{by Assertion 2},\] \[\tilde{\eta}^{\prime}_{1}(t_{q})\geq\tilde{\eta}^{\prime}_{2}(t_{q}),\] \[\tilde{\eta}^{\prime}_{1}(t_{q})>\tilde{\eta}^{\prime}_{2}(t_{q}),\quad\text{since }\tilde{\eta}^{\prime}_{1}(t_{q})\neq\tilde{\eta}^{\prime}_{2}(t_{q}).\]
Hence, \(\tilde{\eta}_{1}-\tilde{\eta}_{2}\) changes sign at \(t_{q}\). This completes the proof.
Figure 3 below gives an illustration of the Lemma.
**Lemma 6.2**.: _Suppose \(\tilde{\eta}\) satisfies (6.4) on the intervals as specified in Lemma 6.1 and \(\tilde{\eta}(0)>0\). Let \(t_{c}>0\) be the first time (if it exists) when \(\tilde{\eta}(t_{c})=0\). Let \(J:=(t_{q}^{1},t_{q}^{2})\) be the smallest interval such that \(t_{c}\in J\) and \(q(t_{q}^{i})=0\), \(i=1,2\); \(t_{q}^{1}\) could possibly be negative. Let \(t_{A}\in J\) be the unique time when \(A(t_{A})=0\). Then it must be that \(t_{c}\in(t_{q}^{1},t_{A}]\) and \(\tilde{\eta}(t)<0\) for \(t\in(t_{c},t_{A}]\)._
_In particular, if \(\tilde{\eta}(t_{A})>0\), then there is no such \(t_{c}\) and \(\tilde{\eta}(t)>0\) for all \(t\in[t_{q}^{1},t_{q}^{2}]\). Or if \(\tilde{\eta}(t_{A})=0\), then \(\tilde{\eta}(t)>0\) for all \(t\in[t_{q}^{1},t_{q}^{2}]\backslash\{t_{A}\}\)._
Note that from (4.4b) and (6.2), \(A\) and \(q\) cannot be zero at the same time. Moreover, from (4.8) and (5.5) there is a unique time \(t_{A}\in(t_{q}^{1},t_{q}^{2})\) with \(A(t_{A})=0\). Therefore, \(t_{A}\in J\) exists and is unique. Also, \(t_{A}\) could be nonpositive. In that case, \(\tilde{\eta}(t)>0\) for \(t\in[0,t_{q}^{2}]\subset[t_{A},t_{q}^{2}]\).
Proof.: From Lemma 6.1, \(\tilde{\eta}(t_{q}^{1})>0\). Hence, for some \(\epsilon>0\) small enough such that \(t_{q}^{1}+\epsilon<t_{A}\), we have \(\tilde{\eta}(t)>0\) for \(t\in[t_{q}^{1},t_{q}^{1}+\epsilon]\). \(\tilde{\eta}\) satisfies (6.4) in the interval \([t_{q}^{1}+\epsilon,t_{q}^{2})\). Multiplying (6.4) with the appropriate integrating factor, we have,
\[\left(\tilde{\eta}e^{-k\int^{t}\frac{s}{q}}\right)^{\prime}=\frac{A}{q}e^{-k \int^{t}\frac{s}{q}}.\]
Note that \(A(t)\) and \(q(t)\) have opposite signs in the interval \([t_{q}^{1}+\epsilon,t_{A})\) and the same sign in \((t_{A},t_{q}^{2})\). Indeed, if \(t_{q}^{1}\) is such that \(s(t_{q}^{1})=s_{max}\), then from (4.8) and (5.5), \(A(t)<0\) for \(t\in[t_{q}^{1}+\epsilon,t_{A})\) and strictly positive in \((t_{A},t_{q}^{2})\). From assertion three of Lemma 4.1, \(q(t)>0\) in \(J\), and hence the claimed signs follow. The same holds if \(s(t_{q}^{1})=s_{min}\).
Therefore, from the ODE above, the quantity \(\tilde{\eta}e^{-k\int^{t}\frac{s}{q}}\) is decreasing in the interval \([t_{q}^{1}+\epsilon,t_{A})\) and increasing in the interval \((t_{A},t_{q}^{2})\). Since \(\left.\tilde{\eta}e^{-k\int^{t}\frac{s}{q}}\right|_{t_{q}^{1}+\epsilon}>0\), we conclude that \(t_{c}\in(t_{q}^{1}+\epsilon,t_{A}]\subset(t_{q}^{1},t_{A}]\). \(\tilde{\eta}(t)<0\) for \(t\in(t_{c},t_{A}]\) follows directly from the above ODE.
Lastly, if \(\tilde{\eta}(t_{A})>0\), then there is no such \(t_{c}\): if there were, then \(\tilde{\eta}(t_{A})\leq 0\), which is a contradiction. Hence, \(\tilde{\eta}(t)>0\) for all \(t\in(t_{q}^{1},t_{q}^{2})\), and \(\tilde{\eta}\) is positive at \(t_{q}^{1},t_{q}^{2}\) by Lemma 6.1. Also, if \(t_{c}=t_{A}\), then from (6.4) we have \(\tilde{\eta}^{\prime}(t_{A})=0\), and (6.5) then implies that \(\tilde{\eta}^{\prime\prime}(t_{A})>0\). Hence, \(t_{c}=t_{A}\) is a local minimum and \(\tilde{\eta}\) is positive in a punctured neighbourhood of \(t_{c}\). Moreover, to the right of \(t_{A}\), \(A,q\) have the same sign, and the ODE above implies \(\tilde{\eta}>0\) after \(t_{A}\).
We will now move on to define two functions through an IVP using the linear differential equation (6.5) along with appropriate initial conditions. These functions will form a cloud around the solution, \(\eta\).
Let \(t_{A}^{i},i=1,2\) with \(0\leq t_{A}^{1}<t_{A}^{2}\) be the first two times when \(A(t_{A}^{i})=0\). Define the two functions \(\eta_{i},i=1,2\) as follows,
\[\begin{split}&\eta_{i}^{\prime\prime}+k\eta_{i}(c+(N-1)s)=k \Gamma^{N-1},\\ &\eta_{i}(t_{A}^{i})=0,\qquad\eta_{i}^{\prime}(t_{A}^{i})=0. \end{split} \tag{6.6}\]
Now we state an important result for these functions as well as \(\eta\).
**Proposition 6.3**.: _The functions \(\eta,\eta_{i},i=1,2\) are periodic with the same period as that of the system (3.8c),(3.8d)._
Proof.: _Part 1: The homogeneous system._
To prove this, we will use the Floquet Theorem [10]. To use the result, we first study the homogeneous form of (5.2) as follows,
\[\Theta^{\prime}_{H}:=\left[\begin{array}{c}\eta_{H}\\ w_{H}\end{array}\right]^{\prime}=\left[\begin{array}{cc}0&1\\ -k(c+(N-1)s)&0\end{array}\right]\left[\begin{array}{c}\eta_{H}\\ w_{H}\end{array}\right]=:E\Theta_{H}. \tag{6.7}\]
Let \(T\) be the period of \(s\) and, therefore, of the matrix \(E\). By the Floquet Theorem, a fundamental matrix of (6.7) is of the form \(F(t)e^{tG}\), where \(F\) is a \(T\)-periodic \(2\times 2\) matrix which is nonsingular for all times, and \(G\) is a constant \(2\times 2\) matrix. \(F,G\) could possibly be complex, but if one of them is real then so is the other, since \(E\) is real.
The substitution \(\Phi_{H}:=F^{-1}\Theta_{H}\) reduces (6.7) to
\[\Phi^{\prime}_{H}=G\Phi_{H}. \tag{6.8}\]
To see this, note that
\[EF\Phi_{H}=E\Theta_{H}=\Theta^{\prime}_{H}=F^{\prime}\Phi_{H}+F\Phi^{\prime}_{ H}.\]
Also, since \(F(t)e^{tG}\) is a fundamental matrix of (6.7), we have,
\[(Fe^{tG})^{\prime}=EFe^{tG},\] \[F^{\prime}+FG=EF.\]
Substituting this for \(F^{\prime}\) above and multiplying by \(F^{-1}\), we obtain (6.8).
Next, we define a quantity for (6.7) analogous to (5.3). Define \(A_{H}:=qw_{H}-k\eta_{H}s\). Using (6.7), (3.8c), (3.8d),
\[A^{\prime}_{H} =(ks-q^{2})w_{H}-kq(c+(N-1)s)\eta_{H}-kw_{H}s+k\eta_{H}q(c+Ns)\] \[=-q(qw_{H}-k\eta_{H}s)\] \[=-qA_{H}.\]
Therefore, \(A_{H}(t)=A_{H}(0)e^{-\int_{0}^{t}q}=A_{H}(0)(\tilde{s}(t)/\tilde{s}_{0})^{1/N}\). We used (4.8) for the last equality. If \(t_{q}\) is a time where \(\tilde{s}\) (or equivalently \(s\)) achieves maximum, then for \(l=0,1,2,\ldots\),
\[\beta:=\eta_{H}(t_{q}+lT)=-\frac{A_{H}(0)}{ks_{max}}\left(\frac{\tilde{s}_{max }}{\tilde{s}_{0}}\right)^{\frac{1}{N}}=\text{constant}. \tag{6.9}\]
Since \(G\) is a constant matrix, the general solution form of (6.8) is known. Consequently, from (6.8) and the fact that \(\Theta_{H}=F\Phi_{H}\), we obtain a general solution to (6.7), depending on whether the eigenvalues of \(G\) are distinct or repeated:
\[\Theta_{H}=c_{1}e^{\lambda_{1}t}\mathbf{f_{1}}(t)+c_{2}e^{\lambda_{2}t} \mathbf{f_{2}}(t),\quad\lambda_{1}\neq\lambda_{2}.\]
or,
\[\Theta_{H}=c_{1}e^{\lambda_{1}t}\mathbf{f_{1}}(t)+c_{2}te^{\lambda_{1}t} \mathbf{f_{2}}(t),\]
where \(\lambda_{1},\lambda_{2}\) are the eigenvalues (possibly complex) of \(G\), \(\mathbf{f_{1}},\mathbf{f_{2}}\) are linearly independent periodic vectors, and \(c_{1},c_{2}\in\mathbb{R}\) are arbitrary constants. We will show that \(\eta_{H}\) is periodic.
Suppose the eigenvalues are repeated and the latter formula is the general solution. Then we have,
\[\eta_{H}(t)=e^{\lambda_{1}t}\left(c_{1}f_{1}(t)+c_{2}tf_{2}(t)\right),\]
where \(f_{1},f_{2}\) are the first elements of \(\mathbf{f_{1}},\mathbf{f_{2}}\) respectively. This general solution form must be compatible with (6.9) for the constant \(\beta\). Using (6.9),
\[\beta e^{-\lambda_{1}(t_{q}+lT)}=c_{1}f_{1}(t_{q})+c_{2}(t_{q}+lT)f_{2}(t_{q}), \quad l=0,1,2,\ldots\]
Assume \(\beta\neq 0\). If \(\operatorname{Re}(\lambda_{1})>0\), then the left-hand side tends to zero as \(l\to\infty\) but the right-hand side does not. If \(\operatorname{Re}(\lambda_{1})<0\), then the left-hand side is an exponential function of \(l\) while the right-hand side is linear, and therefore the equation cannot hold for all \(l\). Lastly, if \(\lambda_{1}\) is purely imaginary, then the left-hand side changes periodically with \(l\) but the right-hand side does not. This leaves us with the only possibility that the repeated eigenvalue must be zero. In this case,
\[\Theta_{H}=c_{1}\mathbf{f_{1}}(t)+c_{2}t\mathbf{f_{2}}(t).\]
However, note that since \(\eta_{H}^{\prime}=w_{H}\) and the vectors \(\mathbf{f_{1}},\mathbf{f_{2}}\) are periodic, it must be that,
\[(tf_{2})^{\prime}=tf_{2}^{\prime}+f_{2}=tg_{2},\]
for some periodic function \(g_{2}\). This could only hold if \(f_{2}\equiv f_{2}^{\prime}\equiv 0\), which is a contradiction to \(F\) being nonsingular. Consequently,
\[\eta_{H}(t)=c_{1}e^{\lambda_{1}t}f_{1}(t)+c_{2}e^{\lambda_{2}t}f_{2}(t),\]
for \(\lambda_{1}\neq\lambda_{2}\). Again, using (6.9),
\[\beta=c_{1}e^{\lambda_{1}(t_{q}+lT)}f_{1}(t_{q})+c_{2}e^{\lambda_{2}(t_{q}+lT) }f_{2}(t_{q}),\quad l=0,1,\ldots.\]
It is clear that the real part of both eigenvalues is zero, because if not, a contradiction is obtained as \(l\to\infty\). Consequently, \(\lambda_{1},\lambda_{2}\) are purely imaginary. Taking the difference of the above equation with its \(l=0\) case,
\[0=c_{1}f_{1}(t_{q})e^{\lambda_{1}t_{q}}\left(e^{\lambda_{1}lT}-1\right)+c_{2} f_{2}(t_{q})e^{\lambda_{2}t_{q}}\left(e^{\lambda_{2}lT}-1\right).\]
Since \(c_{1},c_{2}\) are arbitrary,
\[f_{1}(t_{q})\left(e^{\lambda_{1}lT}-1\right)=f_{2}(t_{q})\left(e^{\lambda_{2} lT}-1\right)=0,\quad l=1,2,\ldots\]
As argued before, \(f_{1}(t_{q})\) and \(f_{2}(t_{q})\) cannot both be zero. So at least one of the terms \(e^{\lambda_{i}T}-1\) is zero. In fact, both of these terms are zero, because if \(e^{\lambda_{1}T}=\omega\neq 1\) (WLOG), then \(f_{1}(t_{q})=0\), in which case \(c_{2}\) is no longer arbitrary, hence, a contradiction. Therefore,
\[\eta_{H}(t)=c_{1}f_{1}(t)+c_{2}f_{2}(t),\]
where \(f_{1},f_{2}\) are new \(T\)-periodic functions obtained after absorbing the exponential factors, which have the same period.
_Part 2: Periodicity._
Owing to the fact that \(\eta_{H}^{\prime}=w_{H}\), we have in fact proved that a fundamental matrix of (6.7) is of the form,
\[\bar{F}(t)=\left[\begin{array}{cc}f_{1}(t)&f_{2}(t)\\ f_{1}^{\prime}(t)&f_{2}^{\prime}(t)\end{array}\right].\]
We will now show that \(\eta_{1}\) is periodic. Very similar arguments apply to \(\eta_{2}\), as well as to any solution \(\eta\) of (5.2a). The important fact is that all three of these functions satisfy (6.4)
in the intervals as mentioned in the hypothesis of Lemma 6.1 and hence, the assertions of the Lemma hold.
\([\eta_{1}\ \eta_{1}^{\prime}]^{T}\) has the following closed form expression,
\[\left[\begin{array}{c}\eta_{1}(t)\\ \eta_{1}^{\prime}(t)\end{array}\right]=\bar{F}(t)\int_{t_{A}^{1}}^{t}\bar{F}^{- 1}(\tau)\left[\begin{array}{c}0\\ k(\Gamma(\tau))^{N-1}\end{array}\right]d\tau.\]
Through routine calculations, one can check that \(\eta_{1}\) in the above expression indeed satisfies (6.6) for \(i=1\). We can further evaluate this expression as follows,
\[\left[\begin{array}{c}\eta_{1}(t)\\ \eta_{1}^{\prime}(t)\end{array}\right] = \bar{F}(t)\int_{t_{A}^{1}}^{t}\bar{F}^{-1}(\tau)\left[\begin{array}{c}0\\ k(\Gamma(\tau))^{N-1}\end{array}\right]d\tau\] \[= \bar{F}(t)\int_{t_{A}^{1}}^{t}\frac{1}{\det(\bar{F}(\tau))}\left[\begin{array}{cc}f_{2}^{\prime}(\tau)&-f_{2}(\tau)\\ -f_{1}^{\prime}(\tau)&f_{1}(\tau)\end{array}\right]\left[\begin{array}{c}0\\ k(\Gamma(\tau))^{N-1}\end{array}\right]d\tau\] \[= \left[\begin{array}{cc}f_{1}(t)&f_{2}(t)\\ f_{1}^{\prime}(t)&f_{2}^{\prime}(t)\end{array}\right]\int_{t_{A}^{1}}^{t}\frac{k(\Gamma(\tau))^{N-1}}{\det(\bar{F}(\tau))}\left[\begin{array}{c}-f_{2}(\tau)\\ f_{1}(\tau)\end{array}\right]d\tau\] \[= \left[\begin{array}{c}f_{1}(t)\int_{t_{A}^{1}}^{t}g_{1}(\tau)d\tau+f_{2}(t)\int_{t_{A}^{1}}^{t}g_{2}(\tau)d\tau\\ f_{1}^{\prime}(t)\int_{t_{A}^{1}}^{t}g_{1}(\tau)d\tau+f_{2}^{\prime}(t)\int_{t_{A}^{1}}^{t}g_{2}(\tau)d\tau\end{array}\right], \tag{6.10}\]
where \(g_{1},g_{2}\) are T-periodic functions defined by,
\[g_{1}(t):=-f_{2}(t)\frac{k\Gamma^{N-1}}{\det(\bar{F}(t))},\quad g_{2}(t):=f_{ 1}(t)\frac{k\Gamma^{N-1}}{\det(\bar{F}(t))}.\]
Periodicity of \(g_{1},g_{2}\) follows from (4.8) as \(\int_{0}^{t}q\) also has period \(T\). Since \([\eta_{1}\quad\eta_{1}^{\prime}]^{T}\) is a solution to a \(2\times 2\) ODE system, proving periodicity is equivalent to proving equality at a single point. In particular, if we prove the following for some \(t^{*}\), then we obtain periodicity.
\[\left[\begin{array}{c}\eta_{1}(t^{*}+T)\\ \eta_{1}^{\prime}(t^{*}+T)\end{array}\right]=\left[\begin{array}{c}\eta_{1} (t^{*})\\ \eta_{1}^{\prime}(t^{*})\end{array}\right].\]
From (6.10) and periodicity of \(g_{1},g_{2}\),
\[\left[\begin{array}{c}\eta_{1}(t^{*}+T)\\ \eta_{1}^{\prime}(t^{*}+T)\end{array}\right] = \left[\begin{array}{c}f_{1}(t^{*})\int_{t_{A}^{1}}^{t^{*}+T}g_{1}(\tau)d\tau+f_{2}(t^{*})\int_{t_{A}^{1}}^{t^{*}+T}g_{2}(\tau)d\tau\\ f_{1}^{\prime}(t^{*})\int_{t_{A}^{1}}^{t^{*}+T}g_{1}(\tau)d\tau+f_{2}^{\prime}(t^{*})\int_{t_{A}^{1}}^{t^{*}+T}g_{2}(\tau)d\tau\end{array}\right]\] \[= \left[\begin{array}{c}\eta_{1}(t^{*})\\ \eta_{1}^{\prime}(t^{*})\end{array}\right]+\left[\begin{array}{c}f_{1}(t^{*})\int_{0}^{T}g_{1}(\tau)d\tau+f_{2}(t^{*})\int_{0}^{T}g_{2}(\tau)d\tau\\ f_{1}^{\prime}(t^{*})\int_{0}^{T}g_{1}(\tau)d\tau+f_{2}^{\prime}(t^{*})\int_{0}^{T}g_{2}(\tau)d\tau\end{array}\right].\]
Set
\[h(t):=f_{1}(t)\int_{0}^{T}g_{1}(\tau)d\tau+f_{2}(t)\int_{0}^{T}g_{2}(\tau)d\tau.\]
If \(h(t^{*})=0=h^{\prime}(t^{*})\) for some \(t^{*}\), then \(h\) is identically zero. Indeed, if \(h,h^{\prime}\) are both zero at the same time, then from the nonsingularity of \(\bar{F}\), we have that \(\int_{0}^{T}g_{i}(\tau)d\tau=0\), \(i=1,2\), and hence \(h(t)\equiv h^{\prime}(t)\equiv 0\), thereby proving that \(\eta_{1}\) is periodic. Our aim is to find such a \(t^{*}\).
We begin with some calculations. For \(l=1,2,\ldots\) and \(t\in[0,T)\), owing to the periodicity of \(g_{i}\), \(i=1,2\),
\[\int_{t_{A}^{1}}^{t+lT}g_{i}(\tau)d\tau =\int_{t_{A}^{1}}^{t}g_{i}(\tau)d\tau+\int_{t}^{0}g_{i}(\tau)d\tau+ \int_{0}^{lT}g_{i}(\tau)d\tau+\int_{lT}^{t+lT}g_{i}(\tau)d\tau\] \[=\int_{t_{A}^{1}}^{t}g_{i}(\tau)d\tau+l\int_{0}^{T}g_{i}(\tau)d\tau.\]
Using this, we can obtain,
\[\eta_{1}(t+lT) =f_{1}(t)\int_{t_{A}^{1}}^{t+lT}g_{1}(\tau)d\tau+f_{2}(t)\int_{t_{ A}^{1}}^{t+lT}g_{2}(\tau)d\tau\] \[=\eta_{1}(t)+l\left(f_{1}(t)\int_{0}^{T}g_{1}(\tau)d\tau+f_{2}(t) \int_{0}^{T}g_{2}(\tau)d\tau\right) \tag{6.11}\] \[=\eta_{1}(t)+lh(t).\]
Similar calculation for \(\eta_{1}^{\prime}\) leads to
\[\eta_{1}^{\prime}(t+lT) =\eta_{1}^{\prime}(t)+l\left(f_{1}^{\prime}(t)\int_{0}^{T}g_{1}( \tau)d\tau+f_{2}^{\prime}(t)\int_{0}^{T}g_{2}(\tau)d\tau\right) \tag{6.12}\] \[=\eta_{1}^{\prime}(t)+lh^{\prime}(t).\]
Note that from the second assertion of Lemma 6.1, we have \(\eta_{1}(t_{q})=\eta_{1}(t_{q}+T)\), where \(t_{q}\) is such that \(q(t_{q})=0\) (both values equal \(-A(t_{q})/ks(t_{q})\), by the periodicity of \(A,s\)). From (6.11), \(h(t_{q})=0\). However, \(h^{\prime}(t_{q})\) may not be equal to zero. In a similar way, observing (6.4) and applying it to (6.12), we also have that \(h^{\prime}(t_{s})=0\), where \(t_{s}\) is such that \(s(t_{s})=0\), whereas \(h(t_{s})\) may not be zero. The key is to find a \(t^{*}\) at which both \(h\) and \(h^{\prime}\) vanish.
We will consider two cases: the first is when \(s(t)\) and \(A(t)\) are zero at the same times, and the other when they are never zero together. The two situations are exhaustive, in the sense that there can never be a scenario where \(A,s\) are zero together at some times while at other times one of them is zero and the other is not. Whether \(A,s\) share zeros depends only on the initial data \(s_{0},A_{0}\). Indeed, from (4.8), \(s\) is zero at the times when \(\Gamma(t)=(c/N\tilde{s}_{0})^{1/N}\) and, from (5.5), \(A\) is zero at the times when \(\Gamma(t)=\kappa\). Hence, \(s,A\) are zero at the same times if and only if \(\kappa^{N}=\frac{c}{N\tilde{s}_{0}}\), which depends only on \(s_{0},A_{0}\). Conversely, if \(\kappa^{N}\neq\frac{c}{N\tilde{s}_{0}}\), then there is no time when \(s,A\) are both zero.
Taking note of this discussion, we first prove the periodicity of \(\eta_{1}\) in the situation where \(A(t_{s})\neq 0\) for every \(t_{s}\in\{t:s(t)=0\}\). Since \(h\) is periodic, it is enough to consider the restricted function \(h:[t_{q}^{1},t_{q}^{1}+T]\to\mathbb{R}\), where \(t_{q}^{1}\) is such that \(q(t_{q}^{1})=0\). From the second assertion of Lemma 6.1 and (6.11), we have \(h(t_{q}^{1})=0=h(t_{q}^{2})\), where \(t_{q}^{2}\in(t_{q}^{1},t_{q}^{1}+T)\) is such that \(q(t_{q}^{2})=0\). In a similar way, observing (6.4) and applying it to (6.12), we also have that \(h^{\prime}(t_{s}^{i})=0\), where \(t_{s}^{i},i=1,2\), are the two unique times in the interval \((t_{q}^{1},t_{q}^{1}+T)\) where \(s(t_{s}^{i})=0\). Now suppose there is a point \(t^{*}\in[t_{q}^{1},t_{s}^{1})\cup(t_{s}^{1},t_{s}^{2})\cup(t_{s}^{2},t_{q}^{1}+T]\) such that \(h^{\prime}(t^{*})=0\). Then firstly, from (6.12), \(\eta_{1}^{\prime}(t^{*}+lT)=\eta_{1}^{\prime}(t^{*})\). Secondly, from (6.4) and (6.11),
\[-\frac{A(t^{*})-\eta_{1}^{\prime}(t^{*})q(t^{*})}{ks(t^{*})}=\eta_{1}(t^{*}+lT )=\eta_{1}(t^{*})+lh(t^{*}),\quad l=1,2,\ldots.\]
Therefore, \(h(t^{*})=0\), and consequently \(h(t^{*})=h^{\prime}(t^{*})=0\). Hence, if \(h\) is not identically zero, then, without loss of generality, it is strictly increasing on the intervals \([t_{q}^{1},t_{s}^{1}]\cup[t_{s}^{2},t_{q}^{1}+T]\) and strictly decreasing on \([t_{s}^{1},t_{s}^{2}]\). This implies that \(h\) has exactly one maximum, and it occurs at \(t_{s}^{1}\). Therefore, from (6.11), there exists a natural number \(L\) large enough so that,
\[\max_{[t_{0}+(l-1)T,t_{0}+lT]}\eta_{1}(t)=\eta_{1}(t_{s}+lT),\quad l\geq L,\]
implying that \(\eta_{1}^{\prime}(t_{s}+lT)=0\) for all \(l\geq L\). However, this is a contradiction since from (6.4),
\[\eta_{1}^{\prime}(t_{s}+lT)=\frac{A(t_{s})}{q(t_{s})}\neq 0,\quad l=1,2,\ldots.\]
Therefore, \(h\) is identically zero.
Now we prove periodicity for the scenario when \(A,s\) are zero at the same times. Firstly, we argue that if \(\eta_{1}\) is uniformly bounded, then it is periodic. This can be seen by plugging a special sequence of times into (6.11) and (6.12). To this end, let \(\{t_{l}^{*}\}_{l=1}^{\infty}\) be a sequence of times such that \(\eta_{1}^{\prime}(t_{l}^{*})=0\). We obtain this sequence by applying Rolle's Theorem to \(\eta_{1}\) in the intervals \([t_{0}+(l-1)T,t_{0}+lT]\). Rewrite \(t_{l}^{*}\) as,
\[t_{l}^{*}=t_{1}^{*}+(l-1)T+\alpha_{l}T,\quad\alpha_{l}\in(-1,1).\]
By compactness, there exists a convergent subsequence \(\alpha_{l_{m}}\). Let \(t_{1}^{*}+\alpha_{l_{m}}T\to t^{*}\) as \(m\to\infty\). Plugging \(t_{l_{m}}^{*}\) into (6.12), we obtain,
\[0=\eta_{1}^{\prime}(t_{1}^{*}+\alpha_{l_{m}}T)+(l_{m}-1)h^{\prime}(t_{1}^{*}+ \alpha_{l_{m}}T).\]
Dividing by \(l_{m}-1\) and letting \(m\to\infty\), we have that
\[h^{\prime}(t^{*})=0.\]
Similarly using (6.11),
\[\eta_{1}(t_{l}^{*})=\eta_{1}(t_{1}^{*}+\alpha_{l_{m}}T)+(l_{m}-1)h(t_{1}^{*}+ \alpha_{l_{m}}T).\]
If \(\eta_{1}\) is uniformly bounded in time, then dividing by \(l_{m}-1\) and letting \(m\to\infty\) results in,
\[h(t^{*})=0.\]
As a result of this, it is enough to show that if \(A,s\) are zero at the same times, then \(\eta_{1}\) is uniformly bounded. Without loss of generality, assume it is not bounded above and attains a positive maximum value in the interval \([0,T]\). Let,
\[M:=\max_{[0,T]}\eta_{1}(t).\]
Let \(t_{M}>T\) be a time when \(\eta_{1}(t_{M})=3M\). By continuous dependence of ODE solutions on initial data, we can choose a solution to (6.5), \(\tilde{\eta}\), with initial data such that,
\[q_{0}\tilde{\eta}^{\prime}(0)-k\tilde{\eta}(0)s_{0}\neq A_{0},\]
and \(|\eta_{1}(0)-\tilde{\eta}(0)|,|\eta_{1}^{\prime}(0)-\tilde{\eta}^{\prime}(0)|\) small enough so that,
\[\max_{[0,t_{M}]}|\tilde{\eta}(t)-\eta_{1}(t)|<M.\]
However, by previous discussion, \(\tilde{\eta}\) is periodic. Therefore, it must be that for a \(t_{m}\in[0,T]\),
\[M<\tilde{\eta}(t_{M})-M=\tilde{\eta}(t_{m})-M<\eta_{1}(t_{m})\leq M,\]
hence, a contradiction. This completes the full proof.
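The periodicity asserted in Proposition 6.3 can also be observed numerically. The sketch below is illustrative only; it uses the data of Figure 5, locates the period \(T\) from the zeros of \(q\), restarts (6.6) at the first zero \(t_{A}^{1}\) of \(A\), and checks that \(\eta_{1}\) returns to zero value and zero slope after one period — which, as noted in the proof, is equivalent to periodicity.

```python
# Numerical illustration of Proposition 6.3 with the data of Figure 5:
# eta_1 from (6.6) returns to zero value and zero slope one period later.
import numpy as np
from scipy.integrate import solve_ivp

k, c, N = 1.0, 1.0, 4
q0, s0, A0 = 0.1, -0.1, 0.15

def rhs(t, y):
    q, s, A, intq, eta, deta = y
    Gamma = np.exp(-intq)
    return [k * s - q**2, -q * (c + N * s),
            -q * A + k * q * Gamma ** (N - 1), q,
            deta, -k * eta * (c + (N - 1) * s) + k * Gamma ** (N - 1)]

qzero = lambda t, y: y[0]                 # zeros of q (twice per period)
Azero = lambda t, y: y[2]                 # zeros of A
base = solve_ivp(rhs, (0.0, 60.0), [q0, s0, A0, 0.0, 0.0, 0.0],
                 events=[qzero, Azero], dense_output=True,
                 rtol=1e-11, atol=1e-13)
T = base.t_events[0][2] - base.t_events[0][0]   # period of (q, s)
tA1 = base.t_events[1][0]                       # first zero of A

y0 = base.sol(tA1)
y0[4] = y0[5] = 0.0                       # initial data of (6.6) at t_A^1
eta1 = solve_ivp(rhs, (tA1, tA1 + T), y0, rtol=1e-11, atol=1e-13)
print("eta_1, eta_1' after one period:", eta1.y[4, -1], eta1.y[5, -1])
```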
**Proposition 6.4**.: _For each \(i=1,2\),_
\[\eta_{i}(t)>0,\quad t\in[0,T]\backslash\{t_{A}^{i}\}.\]
_In particular, \(\eta_{i}(0)>0\) and \(\eta_{i}\)'s are distinct._
Proof.: We will show that for any \(A_{0}\) satisfying the condition (6.1), there exists a strictly positive solution to (6.4). The proposition statement can then be proved through the following argument. From Lemma 6.2, \(\eta_{1}(t)>0\) for \(t\in[t_{q}^{1},t_{q}^{2}]\backslash\{t_{A}^{1}\}\). Hence, it can only be zero in \((t_{q}^{2},t_{q}^{3})\). If \(\eta_{1}\) just touches zero in \((t_{q}^{2},t_{q}^{3})\), then by (6.4), that time must be \(t_{A}^{2}\), and by (6.5) and uniqueness of ODEs, \(\eta_{1}\equiv\eta_{2}\). If \(\eta_{1}\) crosses zero, then by Lemma 6.2, \(t_{A}^{2}\) must lie between the two roots. A similar statement holds for \(\eta_{2}\). Both these cases are illustrated in Figure 4. In any case, Lemma 6.1 implies that there could then never be a strictly positive solution. Hence, it must be that for each \(i=1,2\), \(\eta_{i}\) is positive everywhere on \([0,T]\) except at \(t_{A}^{i}\).
In view of the above discussion, we prove the statement mentioned at the beginning of the proof. Let,
\[\mathcal{C}:=\{a:A_{0}=a\text{ and (6.4) admits a strictly positive solution}\}.\]
Since the set of values \(a\) for which (6.1) holds is an interval, and hence connected, it suffices to show that \(\mathcal{C}\) is nonempty, open, and closed in this set.

_Claim 1:_\(\mathcal{C}\) is nonempty.

Consider the strictly positive function,
\[g(t):=\frac{(\tilde{s}(t))^{-\frac{1}{N}}}{N\tilde{s}_{0}^{1-\frac{1}{N}}}.\]
Using (4.4b) and (3.8c), a direct computation gives \(g^{\prime}=qg\) and \(g^{\prime\prime}=ksg\). Therefore,
\[g^{\prime\prime}+kg(c+(N-1)s)=kg(c+Ns)=kN\tilde{s}g=k\left(\frac{\tilde{s}}{\tilde{s}_{0}}\right)^{1-\frac{1}{N}}\]
\[=k\Gamma^{N-1}.\]
The last equality follows from (4.8). Consequently \(a=q_{0}g^{\prime}(0)-kg(0)s_{0}=\frac{q_{0}^{2}-ks_{0}}{N\tilde{s}_{0}}\) belongs to \(\mathcal{C}\).
_Claim 2:_\(\mathcal{C}\) is open.
If \(a\in\mathcal{C}\), then for \(A_{0}=a\), (6.4) has a strictly positive solution. By continuous dependence of solutions on the initial data, there is a neighbourhood of \(a\) for which there is a strictly positive solution.
_Claim 3:_\(\mathcal{C}\) is closed in the set where (6.1) holds.
Let \(\{a_{i}\}_{i=1}^{\infty}\subset\mathcal{C}\) be a sequence with \(a_{i}\to a\), where \(a\) belongs to the set on which (6.1) holds. By well-posedness of ODEs, there must be a solution \(\tilde{\eta}\) to (6.4) for \(A_{0}=a\) satisfying \(\tilde{\eta}(t)\geq 0\). If \(\tilde{\eta}\) is strictly positive, then \(a\in\mathcal{C}\). If not, then from Lemma 6.2 and (6.4), it is positive everywhere except that \(\tilde{\eta}(t_{A}^{1})=\tilde{\eta}^{\prime}(t_{A}^{1})=0\), or \(\tilde{\eta}(t_{A}^{2})=\tilde{\eta}^{\prime}(t_{A}^{2})=0\), or both. If \(\tilde{\eta}(t_{A}^{i})=\tilde{\eta}^{\prime}(t_{A}^{i})=0\) for both \(i=1,2\), then by uniqueness, \(\tilde{\eta}\) is \((t_{A}^{2}-t_{A}^{1})\)-periodic. Note that \(t_{A}^{2}-t_{A}^{1}<T\), the period of \(q,s,A\). If this is true, then by (6.5),
\[(N-1)\tilde{\eta}(t)\left[\tilde{s}(t+t_{A}^{2}-t_{A}^{1})-\tilde{s}(t)\right] =\frac{1}{\tilde{s}_{0}^{1-\frac{1}{N}}}\left[\left(\tilde{s}(t+t_{A}^{2}-t_{ A}^{1})\right)^{1-\frac{1}{N}}-(\tilde{s}(t))^{1-\frac{1}{N}}\right],\]
for all \(t\). Since \(t_{A}^{2}-t_{A}^{1}<T\), we can obtain an interval on which a closed form of \(\tilde{\eta}\) is obtained given by above formula. This is a contradiction as it can be readily checked that it does not satisfy (6.5).
Therefore, \(\tilde{\eta}(t_{A}^{1})=\tilde{\eta}^{\prime}(t_{A}^{1})=0\) (WLOG) and \(\tilde{\eta}(t)>0\), \(t\in[0,T]\backslash\{t_{A}^{1}\}\). But by uniqueness, \(\tilde{\eta}=\eta_{1}\). By definition of \(\eta_{2}\) and Lemma 6.1, it must be such that \(\eta_{2}(t)>0\), \(t\in[0,T]\backslash\{t_{A}^{2}\}\). However, if this is the case, then from Lemma 6.1 there has to be a strictly positive solution to (6.4) squeezed between \(\eta_{1}\) and \(\eta_{2}\). Hence, \(a\in\mathcal{C}\) and consequently, \(\mathcal{C}\) is closed.
This finishes the proof.
The key results are as follows.
**Proposition 6.5**.: _Suppose \(q_{0}\neq 0\). If_
\[\min\{\eta_{1}(0),\eta_{2}(0)\}<\eta_{0}<\max\{\eta_{1}(0),\eta_{2}(0)\},\]
_then,_
\[\min\{\eta_{1}(t),\eta_{2}(t)\}<\eta(t)<\max\{\eta_{1}(t),\eta_{2}(t)\},\quad t \in\mathbb{D}^{c}.\]
_Conversely, if_
\[\eta_{0}\notin\left(\min\{\eta_{1}(0),\eta_{2}(0)\},\max\{\eta_{1}(0),\eta_{2 }(0)\}\right),\]
_then_
\[\eta(t)\notin\left(\min\{\eta_{1}(t),\eta_{2}(t)\},\max\{\eta_{1}(t),\eta_{2}( t)\}\right),\quad t\in\mathbb{D}^{c}.\]
_Here, \(\mathbb{D}\) is the same as in the statement of Lemma 6.1._
Proof.: Firstly, note that the \(\eta_{i}\)'s satisfy (6.4) along with \(\eta\). Indeed, if a function satisfies (6.4) with \(\tilde{\eta}(t_{A}^{1})=0\), then
\[\tilde{\eta}^{\prime}(t_{A}^{1})=\frac{A(t_{A}^{1})+k\tilde{\eta}(t_{A}^{1})s (t_{A}^{1})}{q(t_{A}^{1})}=0,\]
and hence, by the first assertion of Lemma 6.1 and uniqueness of ODEs, \(\tilde{\eta}\equiv\eta_{1}\). Similarly for \(\eta_{2}\). Also, the solution \(\eta\) of (5.2a) satisfies (6.4); this follows directly from the definition of \(A\) as in (5.3) and from (5.2a). Consequently, the three functions \(\eta,\eta_{1},\eta_{2}\) pairwise satisfy the hypothesis of Lemma 6.1 with \(t_{*}=0\). The result follows from Lemma 6.1. Indeed, at each \(t\in\mathbb{D}\), the functions \(\eta_{1},\eta_{2},\eta\) all cross one another; hence \(\eta\) remains between \(\eta_{1}\) and \(\eta_{2}\) if and only if it was so initially.
An illustration of the situation is provided in Figure 5.
**Proposition 6.6**.: _Suppose \(q_{0}=0\). If_
\[\eta_{1}^{\prime}(0)<w_{0}<\eta_{2}^{\prime}(0),\]
_then for any \(t>0\),_
\[\min\{\eta_{1}(t),\eta_{2}(t)\}<\eta(t)<\max\{\eta_{1}(t),\eta_{2}(t)\},\quad t \in\mathbb{D}^{c}.\]
_Conversely, if_
\[w_{0}\notin(\eta_{1}^{\prime}(0),\eta_{2}^{\prime}(0))\,,\]
_then_
\[\eta(t)\notin(\min\{\eta_{1}(t),\eta_{2}(t)\},\max\{\eta_{1}(t),\eta_{2}(t)\} )\,,\quad t\in\mathbb{D}^{c}.\]
Proof.: The proof is very similar to that of Proposition 6.5 and follows from Lemma 6.1, only that here the starting time \(t=0\) is when the functions \(\eta_{1},\eta_{2},\eta\) cross each other. In other words, \(0\in\mathbb{D}\).
**Corollary 6.7**.: \(\eta(t)>0\) _for all \(t>0\) if and only if one of the following holds,_
* _If_ \(q_{0}\neq 0\) _then,_ \[\min\{\eta_{1}(0),\eta_{2}(0)\}<\eta_{0}<\max\{\eta_{1}(0),\eta_{2}(0)\}.\]
* _If_ \(q_{0}=0\) _then,_ \[\eta_{1}^{\prime}(0)<w_{0}<\eta_{2}^{\prime}(0).\]
Figure 5. A representation of a plausible solution squeezed between \(\eta_{i}\)’s. \(k=c=1,N=4,q_{0}=0.1,s_{0}=-0.1,A_{0}=0.15\)
Proof.: The result follows from Propositions 6.5 or 6.6, followed by an application of Proposition 6.4.
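Corollary 6.7 lends itself to a direct numerical experiment (illustrative only; the data are again those of Figure 5, for which \(q_{0}\neq 0\)). We compute \(\eta_{1}(0),\eta_{2}(0)\) by integrating (6.6) backwards from the first two zeros of \(A\), and then compare a solution launched inside the window with one launched outside; in each case \(w_{0}\) is chosen through (5.3) so that \(A_{0}\) is unchanged.

```python
# Numerical experiment for Corollary 6.7 (data of Figure 5, q0 != 0):
# eta launched inside (eta_1(0), eta_2(0)) stays positive; outside, it
# crosses zero in finite time.
import numpy as np
from scipy.integrate import solve_ivp

k, c, N = 1.0, 1.0, 4
q0, s0, A0 = 0.1, -0.1, 0.15

def rhs(t, y):
    q, s, A, intq, eta, deta = y
    Gamma = np.exp(-intq)
    return [k * s - q**2, -q * (c + N * s),
            -q * A + k * q * Gamma ** (N - 1), q,
            deta, -k * eta * (c + (N - 1) * s) + k * Gamma ** (N - 1)]

Azero = lambda t, y: y[2]
base = solve_ivp(rhs, (0.0, 60.0), [q0, s0, A0, 0.0, 0.0, 0.0], events=Azero,
                 dense_output=True, rtol=1e-11, atol=1e-13)

eta_at_0 = []
for tA in base.t_events[0][:2]:           # t_A^1 and t_A^2
    y0 = base.sol(tA)
    y0[4] = y0[5] = 0.0                   # initial data of (6.6) at t_A^i
    back = solve_ivp(rhs, (tA, 0.0), y0, rtol=1e-11, atol=1e-13)
    eta_at_0.append(back.y[4, -1])        # eta_i(0), integrated backwards
lo, hi = sorted(eta_at_0)
print("admissible window for eta(0): (%.4f, %.4f)" % (lo, hi))

for eta0 in [(lo + hi) / 2, hi + 0.1]:    # inside vs. outside the window
    w0 = (A0 + k * eta0 * s0) / q0        # keeps A0 fixed, via (5.3)
    run = solve_ivp(rhs, (0.0, 60.0), [q0, s0, A0, 0.0, eta0, w0],
                    rtol=1e-11, atol=1e-13)
    print("eta(0) = %.4f  ->  min eta on [0,60] = %.4f"
          % (eta0, run.y[4].min()))
```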
Using the results developed above, we move on to proving Theorem 2.1.
_Proof of Theorem 2.1:_ We prove the case \(N\geq 3\); the \(N=2\) case is very similar. Suppose the initial data satisfies the hypothesis of the Theorem. Along the characteristic path (3.1), this translates to the condition that for all \(\beta>0\),
\[(\beta,u_{0}(\beta),\phi_{0r}(\beta),u_{0r}(\beta),\rho_{0}(\beta))\in\Theta_{ N}.\]
We will now analyze a single characteristic path and replace the initial data notations with \((\beta,u_{0},\phi_{0r},u_{0r},\rho_{0})\). Under the transformation (3.7), we now turn to the unknowns of the ODE system (3.8), \((q,s,p,\rho)\), with initial data \((q(0),s(0),p(0),\rho(0))=\left(\frac{u_{0}}{\beta},-\frac{\phi_{0r}}{\beta},u_{0r},\rho_{0}\right)\). Global-in-time existence of these variables is equivalent to the global-in-time existence of the original variables. Note that if \(\rho(0)=0\), then, as argued at the beginning of Section 5, the density blows up. Hence, we can safely assume \(\rho(0)>0\). Using the transformations (5.1), we work with the unknowns \((q,s,\eta,w)\) with initial data \((q(0),s(0),\eta(0),w(0))=(q(0),s(0),1/\rho(0),p(0)/\rho(0))\). Through (3.7) and (5.1), we see that indeed \(a=q(0)w(0)-k\eta(0)s(0)\). Set \(A(0)=a\) to obtain \(A(t)\) as in (5.3) satisfying (5.4). Turning to Definition 2.2, we use (4.3) and \(y_{m,N}=y_{1},y_{M,N}=y_{2}\) as in (5.7) to get,
\[A(0)\in\frac{k}{N-2}\left(\left(\frac{y_{1}}{\tilde{s}_{0}}\right)^{\frac{N-2 }{N}}-1,\left(\frac{y_{2}}{\tilde{s}_{0}}\right)^{\frac{N-2}{N}}-1\right).\]
Rearranging, this results in condition (6.1). Also, \(u_{0}=0\) if and only if \(q(0)=0\). In (2.2), this is equivalent to whether \(x\) is zero or not. If \(x\neq 0\) (equivalently \(q(0)\neq 0\)), then through the transformation (5.1), (2.2) is equivalent to,
\[\min\{\eta_{1}(0),\eta_{2}(0)\}<\eta(0)<\max\{\eta_{1}(0),\eta_{2}(0)\},\]
which is exactly the hypothesis of Proposition 6.5. On the other hand, if \(x=0\) (equivalently \(q(0)=0\)), then using (3.7) and (5.1), (2.2) reduces to
\[w(0)\in-\frac{ks(0)\eta(0)}{A(0)}\ (\eta_{1}^{\prime}(0),\eta_{2}^{\prime}(0)).\]
From \(q(0)=0\) and (5.3), the above inclusion becomes,
\[w(0)\in(\eta_{1}^{\prime}(0),\eta_{2}^{\prime}(0)),\]
which is exactly the hypothesis of Proposition 6.6. Note that on a single characteristic path, \(\eta_{1},\eta_{2}\) are known functions because \(A(0),q(0),s(0)\) are fixed. In particular, \(\eta_{1},\eta_{2}\) can first be evaluated, and it can then be checked whether the appropriate hypothesis above is satisfied, depending on whether \(q(0)=0\) or not. By Corollary 6.7, we obtain the all-time positivity of \(\eta\). By Proposition 5.2, all unknowns \((q,s,p,\rho)\) in (3.8) exist for all time. Since the above analysis holds for all characteristic paths, by Lemma 3.1 and Theorem 1.1 we have a global-in-time solution to (1.2).
Conversely, suppose there is a characteristic path corresponding to some parameter \(\beta^{*}>0\) such that,
\[(\beta^{*},u_{0}^{*},\phi_{0r}^{*},u_{0r}^{*},\rho_{0}^{*}):=(\beta^{*},u_{0}( \beta^{*}),\phi_{0r}(\beta^{*}),u_{0r}(\beta^{*}),\rho_{0}(\beta^{*}))\notin \Theta_{N}.\]
Without loss of generality, we assume \(\rho_{0}^{*}>0\). Then there could be two situations: either the inclusion in Definition 2.2 does not hold, or the inclusion holds but \((\beta^{*},u_{0}^{*},\phi_{0r}^{*},u_{0r}^{*},\rho_{0}^{*})\) violates (2.2). Suppose the first case is true. Arguing as above, we have,
\[A_{0}^{*}\notin\frac{k}{N-2}\left(\left(\frac{y_{1}}{\tilde{s}_{0}}\right)^{ \frac{N-2}{N}}-1,\left(\frac{y_{2}}{\tilde{s}_{0}}\right)^{\frac{N-2}{N}}-1 \right).\]
Using (5.6) in the above expression, and subsequently with the help of Corollary 5.7, we obtain that there is blow-up of density. On the other hand, suppose the above inclusion holds and (2.2) does not. Then, arguing very much as for the global existence result, we have for \(q(0)\neq 0\),
\[\eta(0)\notin\left(\min\{\eta_{1}(0),\eta_{2}(0)\},\max\{\eta_{1}(0),\eta_{2}( 0)\}\right),\]
and for \(q(0)=0\),
\[w(0)\notin(\eta_{1}^{\prime}(0),\eta_{2}^{\prime}(0)).\]
From Corollary 6.7, we obtain that there is a finite time \(t_{c}\) with \(\eta(t_{c})=0\). From Proposition 5.2, there is breakdown at \(t=t_{c}\) and the solution ceases to be smooth. This completes the proof of the Theorem.
## 7. The zero background case
The zero background case has been analyzed by several researchers. Most notably, the authors in [29] give a sharp threshold condition, albeit under the assumption that the flow is expanding (\(u_{0}>0\)). A more refined analysis was carried out by the author in [28]; however, the threshold condition obtained there was not sharp. In this section, we present a sharp characterization of the subcritical and supercritical regions for general velocity. To this end, we consider (3.8) with \(c=0\),
\[\rho^{\prime} =-(N-1)\rho q-p\rho, \tag{7.1a}\] \[p^{\prime} =-p^{2}-k(N-1)s+k\rho, \tag{7.1b}\] \[q^{\prime} =ks-q^{2}, \tag{7.1c}\] \[s^{\prime} =-Nqs, \tag{7.1d}\]
with the same notation for initial data. Also recall the system (5.2). With zero background, it reduces to,
\[\eta^{\prime} =w, \tag{7.2a}\] \[w^{\prime} =-ks(N-1)\eta+k\Gamma^{N-1}. \tag{7.2b}\]
We will make use of the same quantity \(A\) as in (5.3). Using similar computations, one finds that the expression for \(A\) is exactly the same as in (5.5) for \(N>2\). Also for \(N=2\), the expression for the corresponding quantity \(B\) is the same as in (5.8).
A robust analysis of the \(q,s\) system in this case has been carried out by the author in [28]. We only state the important results which will be used directly.
**Proposition 7.1** ([28, Theorem 3.5, Lemmas 3.4 and 3.7]).: _Consider the \(2\times 2\) ODE system (7.1c), (7.1d). Suppose \(s_{0}>0\). Then_
* \((q(t),s(t))\to(0,0)\) _as_ \(t\to\infty\)_. Moreover, if_ \(q_{0}<0\) _there exists a unique time_ \(t_{q}\) _such that_ \(q(t_{q})=0\)_,_ \(s(t_{q})=\max_{[0,\infty)}s(t)\) _and_ \(q(t)>0\) _for all_ \(t>t_{q}\)_._
* _There is a_ \(T\) _large enough so that,_ (7.3a) \[C_{m}^{q}(t+1)^{-1}\leq q(t)\leq C_{M}^{q}(t+1)^{-1},\] (7.3b) \[C_{m}^{s}(t+1)^{-N}\leq s(t)\leq C_{M}^{s}(t+1)^{-N},\qquad N\geq 3,\] (7.3c) \[C_{m}^{s}(t+1)^{-2}(1+\ln(t+1))^{-1}\leq s(t)\leq C_{M}^{s}(t+1)^{-2}(1+\ln(t+1))^{-1},N=2,\] _for all_ \(t\geq T\)_._
_\(C_{m}^{q},C_{M}^{q},C_{M}^{s},C_{m}^{s}\) are all positive constants depending on \(N,q_{0},s_{0}\)._
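The decay rates (7.3) are easy to observe numerically. The following sketch is illustrative only (the data are assumed): it integrates (7.1c)-(7.1d) and prints the scaled quantities \((t+1)q(t)\) and \((t+1)^{N}s(t)\), which should settle between positive constants.

```python
# Illustrative look at the decay rates (7.3) for (7.1c)-(7.1d):
# (t+1)*q(t) and (t+1)^N*s(t) should settle between positive constants.
import numpy as np
from scipy.integrate import solve_ivp

k, N = 1.0, 4
q0, s0 = -0.2, 0.3                       # assumed data; s0 > 0

rhs = lambda t, y: [k * y[1] - y[0]**2, -N * y[0] * y[1]]
sol = solve_ivp(rhs, (0.0, 100.0), [q0, s0], rtol=1e-11, atol=1e-15)
t_end, (q_end, s_end) = sol.t[-1], sol.y[:, -1]
print("(t+1)*q   at t = 100:", (t_end + 1) * q_end)
print("(t+1)^N*s at t = 100:", (t_end + 1) ** N * s_end)
```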
Once again, the Poisson forcing is enough to avoid concentrations at the origin. In particular, Corollary 4.2 holds and \(s_{0}>0\). We state it as a Lemma below.
**Lemma 7.2**.: \(q,s\) _in system (7.1c), (7.1d) exist for all time._
We now move on to some key results. We first give a sufficient condition for \(\eta\) to reach zero in finite time.
**Proposition 7.3**.: _Suppose \(N\geq 3\). If_
\[A_{0}+\frac{k}{N-2}<0,\]
_then there is a time \(t_{c}>0\) such that \(\eta(t_{c})=0\)._
Proof.: Since \(\eta\) satisfies (6.4), we have,
\[\left(\eta e^{-k\int^{t}\frac{s}{q}}\right)^{\prime}=\frac{A}{q}e^{-k\int^{t} \frac{s}{q}}. \tag{7.4}\]
We will use the convergence estimates from Proposition 7.1. Suppose \(t_{1}>0\) is a time so that the convergence rates are valid for all \(t\geq t_{1}\). Throughout our calculations \(0<C_{m}<C_{M}\) are constants that may change from step to step but only depend on \(k,N,q_{0},s_{0},\eta_{0},w_{0},t_{1}\). From (7.3), we have for \(t_{1}\leq\tau<t\),
\[k\int_{\tau}^{t}\frac{s}{q} \leq C_{M}\int_{\tau}^{t}(\xi+1)^{-(N-1)}d\xi,\] \[k\int_{\tau}^{t}\frac{s}{q} \leq C_{M}. \tag{7.5}\]
Integrating (7.4), we have,
\[\eta(t) =\eta(t_{1})e^{k\int_{t_{1}}^{t}\frac{s}{q}}+\int_{t_{1}}^{t} \frac{A(\tau)}{q(\tau)}e^{k\int_{\tau}^{t}\frac{s}{q}}d\tau\] \[=:\mathrm{I}+\mathrm{II}.\]
From (7.5), \(\mathrm{I}\) is uniformly bounded. Owing to (5.5) and (4.8), we can find convergence rates for \(A\) as well. Using (7.3b) in (5.5) along with (4.8), we have for \(t\geq t_{1}\),
\[A(t)=\left(A_{0}+\frac{k}{N-2}\right)\Gamma-\frac{k}{N-2}\Gamma^{N-1}\leq\left(A_{0}+\frac{k}{N-2}\right)\Gamma\leq-C_{m}(t+1)^{-1},\]
where we used \(A_{0}+\frac{k}{N-2}<0\) together with the lower bound \(\Gamma=(s/s_{0})^{1/N}\geq C_{m}(t+1)^{-1}\) implied by (7.3b) and (4.8).
Consequently, from (7.3a),
\[\frac{A(t)}{q(t)}\leq-C_{m},\quad t\geq t_{1}.\]
Since \(q,s\) are positive and \(A\) is negative, we have for \(t>t_{1}\),
\[\mathrm{II} =\int_{t_{1}}^{t}\frac{A(\tau)}{q(\tau)}e^{k\int_{\tau}^{t}\frac{ s}{q}}d\tau\] \[\leq\int_{t_{1}}^{t}\frac{A(\tau)}{q(\tau)}d\tau\] \[\leq-C_{m}\int_{t_{1}}^{t}d\tau.\]
Finally, for \(t>t_{1}\), we have,
\[\eta(t)=\mathrm{I}+\mathrm{II}\leq C_{M}-C_{m}\int_{t_{1}}^{t}d\tau=C_{M}-C_{m}(t-t_{1}).\]
Therefore, there exists a \(t_{c}\) such that \(\eta(t_{c})=0\). This completes the proof.
We now move on to characterizing the subcritical region. We first define a function \(\eta_{1}\) through an IVP using (6.4),
\[\eta_{1}^{\prime}=\frac{A}{q}+\frac{ks}{q}\eta_{1},\quad\eta_{1}(t_{A})=0, \tag{7.6}\]
where \(t_{A}>0\) is a time (if it exists) such that \(A(t_{A})=0\).
**Proposition 7.4**.: _Consider the \(N\geq 3\) case. Suppose \(s_{0}>0,q_{0}>0\). Given \(\eta_{0}>0\), we have that \(\eta(t)>0\) for all \(t>0\) if one of the following conditions is satisfied,_
* \(A_{0}\geq 0\)_,_
* \(-\frac{k}{N-2}<A_{0}<0\) _and_ \(\eta_{0}>\eta_{1}(0)\)_._
_Additionally, if \(-\frac{k}{N-2}<A_{0}<0\) and \(\eta_{0}\leq\eta_{1}(0)\), then there is a time \(t_{c}>0\) such that \(\eta(t_{c})=0\)._
Proof.: Firstly, note that the hypothesis and Proposition 7.1 imply \(s,q\) are positive for all times. Hence, from (7.1d), \(s\) is monotonically decreasing to zero. Owing to (4.8), \(\Gamma(t)=e^{-\int_{0}^{t}q}\) is strictly decreasing and tends to zero as \(t\to\infty\). Moreover, if \(A_{0}\geq 0\), then,
\[\kappa\geq 1,\]
where \(\kappa\) is the root of \(A(\Gamma)\) as in (5.6). From (5.5), \(A(t)>0\) for all \(t>0\). Consequently, from (7.4), \(\eta\) can never be zero in finite time if it was initially positive.
Now suppose the second hypothesis holds. Here, we have that \(\kappa<1\). Owing to the properties of \(s,q,\Gamma,A\) as listed above, we obtain that there is a unique time \(t_{A}>0\) such that \(A(t_{A})=0\). Moreover, \(A(t)<0\) for \(t<t_{A}\) and \(A(t)>0\) for \(t>t_{A}\). Owing to (7.4) once again, \(\eta\) remains greater than zero if it is so at \(t=t_{A}\). In fact, \(\eta_{1}(t)\) serves as a lower
bound for \(\eta(t)\) at each time, see Figure 6. Since \(\eta,\eta_{1}\) satisfy the same first order ODE we have a comparison principle. Indeed on taking difference, we obtain,
\[(\eta-\eta_{1})^{\prime}=\frac{ks}{q}(\eta-\eta_{1}).\]
From (7.6), we have \(\eta_{1}^{\prime}(t_{A})=0\). Also, upon taking the derivative of (7.6), one can check that \(\eta_{1}^{\prime\prime}(t_{A})>0\); therefore, \(t_{A}\) is indeed the unique time where the minimum of \(\eta_{1}\) is attained. Hence, \(\eta(t)>0\) for all \(t>0\). Conversely, if \(\eta_{0}\leq\eta_{1}(0)\), then since \(\eta\) remains below \(\eta_{1}\), there is a time \(t_{c}\leq t_{A}\) such that \(\eta(t_{c})=0\).
**Proposition 7.5**.: _Consider the \(N\geq 3\) case. Suppose \(s_{0}>0,q_{0}=0\) and \(A_{0}>-\frac{k}{N-2}\). Given \(\eta_{0}>0\), we have that \(\eta(t)>0\) for all \(t>0\) if and only if \(w_{0}>\eta_{1}^{\prime}(0)\)._
Note that \(\eta_{1}^{\prime}(0)\) is understood in the limit sense in (7.6) since \(q(0)=0\). We know that this limit exists because \(\eta_{1}\) satisfies (6.5) (with \(c=0\)), which is an inhomogeneous second-order linear ODE with bounded coefficients. Also, the assumption \(q_{0}=0\) implies \(A_{0}<0\).
Proof.: The proof of Proposition 7.5 is very similar to that of the second assertion of Proposition 7.4. The comparison principle holds because of the uniqueness of solutions to the ODE. Since \(\eta,\eta_{1}\) both satisfy (6.5) and \(\eta_{0}=\eta_{1}(0)=-\frac{A_{0}}{ks_{0}}\), we have that if \(\eta^{\prime}(0)=w_{0}>\eta_{1}^{\prime}(0)\), then \(\eta(t)>\eta_{1}(t)\) for all time, thereby maintaining positivity, since \(\eta_{1}(t)\geq 0\). The converse also holds since \(\eta_{1}(t_{A})=0\).
Now we present the result for \(q_{0}<0\). We take derivative of (7.6) and state two second order IVPs for \(i=1,2\),
\[\begin{split}&\eta_{i}^{\prime\prime}+k(N-1)s\eta_{i}=k\Gamma^{N-1}, \\ &\eta_{i}(t_{A}^{i})=0,\qquad\eta_{i}^{\prime}(t_{A}^{i})=0,\end{split} \tag{7.7}\]
where \(t_{A}^{i}\geq 0\) is such that \(A(t_{A}^{i})=0\). There need not be two such times. In that case, we only consider \(\eta_{1}\). Note that by uniqueness, \(\eta_{1}\) in (7.6) is the same function as \(\eta_{1}\) above. Also, similar to (4.2), we obtain the trajectory for this case as,
\[q^{2}s^{-\frac{2}{N}}+\frac{2k}{N-2}s^{1-\frac{2}{N}}=R_{N}, \tag{7.8}\]
where \(R_{N}=q_{0}^{2}s_{0}^{-\frac{2}{N}}+\frac{2k}{N-2}s_{0}^{1-\frac{2}{N}}\). From this, one can directly note that the maximum attained value of \(s\), attained when \(q=0\), is
\[s_{max}:=\left(\frac{R_{N}(N-2)}{2k}\right)^{\frac{N}{N-2}}. \tag{7.9}\]
**Proposition 7.6**.: _Consider the \(N\geq 3\) case. Suppose \(s_{0}>0,q_{0}<0\). Given \(\eta_{0}>0\), we have the following._
* _Suppose_ \(A_{0}\geq 0\)_. Then_ \(\eta(t)>0\) _for all_ \(t>0\) _if and only if_ \(\kappa<(s_{max}/s_{0})^{1/N}\) _and_ \(\eta_{1}(0)<\eta_{0}<\eta_{2}(0)\)_._
* _Suppose_ \(-\frac{k}{N-2}<A_{0}<0\)_. Then_ \(\eta(t)>0\) _for all_ \(t>0\) _if and only if_ \(\eta_{0}<\eta_{1}(0)\)_._
Proof.: Suppose \(A_{0}\geq 0\). From the first assertion of Proposition 7.1 and (5.3), we conclude that at \(t=t_{q}\),
\[\eta(t_{q})=-\frac{A(t_{q})}{ks(t_{q})}=-\frac{A(t_{q})}{ks_{max}}.\]
Therefore, a necessary condition for \(\eta\) to be positive is that \(A(t_{q})<0\). From (4.8) and (5.5), the condition is equivalent to,
\[\kappa<\left(\frac{s_{max}}{s_{0}}\right)^{\frac{1}{N}}.\]
Here, we must keep in mind that the dynamics of \(s\) is such that it increases until \(t=t_{q}\) and then decreases monotonically, approaching zero as \(t\to\infty\). If the above inequality holds, then there are two positive times, \(t_{A}^{i},i=1,2\), when \(A(t_{A}^{i})=0\). Also, \(t_{q}\in(t_{A}^{1},t_{A}^{2})\). The very same arguments as in Proposition 6.4 allow us to conclude that \(\eta_{i}(t)>0\) for \(t\in[0,\infty)\backslash\{t_{A}^{i}\}\). Since the \(\eta_{i}\)'s also satisfy (7.6), the all-time-positivity of \(\eta\) is guaranteed once again by arguments as in Lemma 6.1 if it was in between \(\eta_{1}\) and \(\eta_{2}\) at \(t=0\). In particular, assertions 1-4 of Lemma 6.1 are valid with the slight modification that the set \(\mathbb{D}=\{t_{q}\}\) has only one element, see Figure 7 for a visualization. Conversely, if \(\eta_{0}\notin(\eta_{1}(0),\eta_{2}(0))\), then from Lemma 6.1, \(\eta\) becomes zero in finite time.
If \(-\frac{k}{N-2}<A_{0}<0\), then \(\kappa<1\). From Proposition 7.1, \(s\) increases to a maximum and then decreases to zero. Once again, making use of the relation (4.8) and (5.5), we have that \(A\) is zero only once, at \(t=t_{A}\). From (7.6), we have,
\[\left(\eta_{1}e^{-k\int\nolimits^{t}\frac{s}{q}}\right)^{\prime}=\frac{A}{q}e ^{-k\int\nolimits^{t}\frac{s}{q}}.\]
We conclude that for \(t<t_{q}\), \(\eta(t)>0\) since \(A,q\) are both negative in this interval. Since from Lemma 6.1, \(\eta(t)>\eta_{1}(t)\) for \(t>t_{q}\), we conclude that \(\eta(t)>0\) for all \(t>t_{q}\) since \(\eta_{1}\) is nonnegative and serves as a lower bound for \(\eta\) in this domain. Conversely, if \(\eta_{0}>\eta_{1}(0)\), then \(\eta(t)<\eta_{1}(t)\) for all \(t>t_{q}\) and hence, it must be zero in a finite time, \(t<t_{A}\), since \(\eta_{1}(t_{A})=0\).
**Proposition 7.7**.: _Consider the \(N\geq 3\) case. Let \(s_{0}>0\) and \(q_{0}\) be given. If_
\[A_{0}=-\frac{k}{N-2},\]
_then there is a time \(t_{c}>0\) such that \(\eta(t_{c})=0\)._
We state this proposition and its proof separately because the technique used in the proof of Proposition 7.3 does not apply. In fact, it turns out that it is much easier to work in the \(p,\rho\) variables instead of the \(\eta,w\) variables.
_Proof of Proposition 7.7:_ Using (5.3) and \(A_{0}=-k/(N-2)\) in the expression of \(A\) as in (5.5), we obtain,
\[q(t)w(t)-k\eta(t)s(t)=A(t)=-\frac{k}{N-2}\Gamma^{N-1}.\]
Substituting for \(\eta,w\) using (5.1), we obtain,
\[\frac{qp-ks}{\rho}=-\frac{k}{N-2}.\]
As a result, we can find \(p\) in terms of the other variables as,
\[p=\frac{ks}{q}-\frac{k\rho}{(N-2)q}.\]
We can divide by \(q\) because from (7.3a), \(q\) is eventually positive and decays accordingly. We will consider sufficiently large times so that this holds. Plugging this in the ODE of \(\rho\), (7.1a), we get,
\[\rho^{\prime}=\frac{k}{q(N-2)}\rho^{2}-\rho\left(q(N-1)+\frac{ks}{q}\right).\]
For all sufficiently large times, the rates in Proposition 7.1 hold. We can assume \(\rho\) has not already blown up, because if it has then the proof is done. Using these rates, we conclude that \(ks/q\), \(q\) are bounded. Therefore, there occurs a Riccati-type blow up of density, and the blow up is aggravated by the \(q\) in the denominator.
Propositions 7.3, 7.4, 7.5, 7.6 and 7.7 enable us to put together the picture for \(N\geq 3\). Now, we move on to the critical case \(N=2\). We will omit the repetitive parts in the proofs of these results. Note the quantity \(B\) in (5.8), which is analogous to \(A\). Also note that there always exists a positive root of \(B(\Gamma)\), \(\kappa=e^{\frac{A_{0}}{k}}\), no matter the sign of \(A_{0}\), which is unlike the case for \(A\). \(\kappa\) could lie on, or on either side of, \(1\) depending on the sign of \(A_{0}\). For the \(N=2\) case, we will consider the function \(\eta_{1}\) as in (7.6) and the functions \(\eta_{i}\)'s as in (7.7) with \(A\) replaced by \(B\) in the definition.
**Proposition 7.8**.: _Suppose \(N=2\) and consider \(\eta_{1}\) as in (7.6). Suppose \(s_{0}>0,q_{0}>0\). Given \(\eta_{0}>0\), we have that \(\eta(t)>0\) for all \(t>0\) if one of the following conditions is satisfied,_
* \(A_{0}\geq 0\)_,_
* \(A_{0}<0\) _and_ \(\eta_{0}>\eta_{1}(0)\)_._
_Additionally, if \(A_{0}<0\) and \(\eta_{0}\leq\eta_{1}(0)\), then there is a time \(t_{c}>0\) such that \(\eta(t_{c})=0\)._
The proof is very similar to that of Proposition 7.4.
**Proposition 7.9**.: _Consider the \(N=2\) case. Suppose \(s_{0}>0,q_{0}=0\). Given \(\eta_{0}>0\), we have that \(\eta(t)>0\) for all \(t>0\) if and only if \(w_{0}>\eta_{1}^{\prime}(0)\)._
The proof is very similar to that of Proposition 7.5.
**Proposition 7.10**.: _Consider the \(N=2\) case. Suppose \(s_{0}>0,q_{0}<0\). Given \(\eta_{0}>0\), we have the following._
* _Suppose_ \(A_{0}\geq 0\)_. Then_ \(\eta(t)>0\) _for all_ \(t>0\) _if and only if_ \(e^{\frac{A_{0}}{k}}<\sqrt{(s_{2,max}/s_{0})}\) _and_ \(\eta_{1}(0)<\eta_{0}<\eta_{2}(0)\)_._
* _Suppose_ \(A_{0}<0\)_. Then_ \(\eta(t)>0\) _for all_ \(t>0\) _if and only if_ \(\eta_{0}<\eta_{1}(0)\)_._
Once again, the proof is very similar to that of Proposition 7.6. The expression for \(s_{2,max}\) is different from \(s_{max}\) since the trajectory equation in the critical case is different. Similar to the way we obtained (7.8), we can obtain the trajectory for the critical case \(N=2\),
\[\frac{q^{2}}{s}+k\ln(s)=R_{2}, \tag{7.10}\]
where \(R_{2}=\frac{q_{0}^{2}}{s_{0}}+k\ln(s_{0})\). From this, one concludes,
\[s_{2,max}=e^{\frac{R_{2}}{k}}.\]
Finally, we analyze the case when the initial density is zero, that is, \(\rho_{0}=0\). Firstly, from (7.1a), \(\rho_{0}=0\) is equivalent to \(\rho\equiv 0\), as long as \(p\) in (7.1b) exists. Therefore, as long as \(p\) exists, (7.1b) reduces to,
\[p^{\prime}=-p^{2}-k(N-1)s. \tag{7.11}\]
**Proposition 7.11**.: _Consider (7.1) and suppose \(\rho_{0}=0\) in (7.1a). Then \(p\) is bounded for all times if and only if \(q_{0}>0\) and \(p_{0}\geq\frac{ks_{0}}{q_{0}}\). Moreover, if \(q_{0}>0\) and \(p_{0}\geq\frac{ks_{0}}{q_{0}}\) then_
\[p(t)\geq\frac{ks(t)}{q(t)},\quad t>0,\]
_and if \(q_{0}\leq 0\) or \(p_{0}<\frac{ks_{0}}{q_{0}}\), then there exists a time, \(t_{c}>0\), such that,_
\[\lim_{t\to t_{c}^{-}}p(t)=-\infty.\]
Proof.: Consider the quantity \(qp-ks\). From (7.11), (7.1c) and (7.1d), we have,
\[(qp-ks)^{\prime} =p(ks-q^{2})-q(p^{2}+k(N-1)s)+kNsq\] \[=kps-pq^{2}-qp^{2}+kqs\]
\[=-(p+q)(qp-ks).\]
Consequently, as long as \(p\) exists, \(qp-ks\) maintains sign and we have,
\[qp-ks=(q_{0}p_{0}-ks_{0})e^{-\int_{0}^{t}q}e^{-\int_{0}^{t}p}. \tag{7.12}\]
Suppose \(q_{0}>0\) and \(p_{0}\geq\frac{ks_{0}}{q_{0}}\). From the first assertion of Proposition 7.1, we have that \(q(t)>0\) for all time. For the sake of contradiction, suppose that \(p\) breaks down in finite time. Since \(s\) is uniformly bounded, it is clear from (7.11) that at the time of breakdown, \(p\to-\infty\). Since \(q,s,e^{-\int_{0}^{t}q}\) are uniformly bounded quantities, it must be that at a certain time before breakdown, the left-hand-side in (7.12) is negative but the right-hand-side is non-negative. This is a contradiction and hence, \(p(t)\) is finite for all times. In particular, it decays with a lower bound, \(p(t)\geq ks(t)/q(t)\), the decay rate for which can be obtained directly from Proposition 7.1.
Now suppose \(q_{0}>0\) but \(p_{0}<\frac{ks_{0}}{q_{0}}\). We first assume \(N\geq 3\). The critical case when \(N=2\) needs separate treatment in this regard and will be analyzed at the end. Note that if \(p\) has not broken down, the following holds,
\[k(N-1)\int_{t}^{\infty}s(\tau)d\tau\leq p(t)\leq\frac{ks(t)}{q(t)}.\]
From Proposition 7.1, the integral on the left-hand-side is well-defined. The second inequality is a direct result of (7.12). The first inequality can be shown as follows. Suppose for the sake of contradiction, there is a \(t_{1}\) such that \(k(N-1)\int_{t_{1}}^{\infty}s(\tau)d\tau>p(t_{1})\). Then using (7.11), we obtain that for some \(t_{2}>t_{1}\),
\[p(t_{2})<p(t_{1})-k(N-1)\int_{t_{1}}^{t_{2}}s(\tau)d\tau<0.\]
Consequently, from (7.11), we conclude that a Riccati-type blowup occurs and \(p\to-\infty\) at some time greater than \(t_{2}\).
Therefore, we can assume that \(k(N-1)\int_{t}^{\infty}s(\tau)d\tau\leq p(t)\) for all \(t>0\). Choose \(t_{*}\) large enough so that the convergence estimates of Proposition 7.1 hold. Using these estimates, we can rewrite the above bounds on \(p\) as follows,
\[0<C_{1}(1+t)^{-(N-1)}\leq p(t)\leq C_{2}(1+t)^{-(N-1)},\quad t\geq t_{*} \tag{7.13}\]
The \(C_{i}\)'s are appropriate positive constants whose values may change through the proof, but they depend only on \(s_{0},q_{0},t_{*},k\). Once again from Proposition 7.1, and (7.13), we obtain
\[0>qp-ks\geq C_{1}(1+t)^{-N}-C_{2}(1+t)^{-N}\geq-C_{1}(1+t)^{-N}.\]
Now we analyze the right-hand-side of (7.12).
\[(q_{0}p_{0}-ks_{0})e^{-\int_{0}^{t}q}e^{-\int_{0}^{t}p} =-C_{2}s^{\frac{1}{N}}e^{-\int_{0}^{t}p}\] \[\leq-C_{2}(1+t)^{-1}.\]
We used Lemma 4.5 to obtain the equality. We used (7.13) to conclude that \(p\) is integrable and hence, the inequality holds. Combining the above two inequalities for the two sides of the equation (7.12), we have for sufficiently large times,
\[qp-ks\geq-C_{1}(1+t)^{-N}>-C_{2}(1+t)^{-1}\geq(q_{0}p_{0}-ks_{0})e^{-\int_{0}^ {t}q}e^{-\int_{0}^{t}p},\]
which is a contradiction. Therefore, \(p\) must blow up in finite time, that is, \(\lim_{t\to t_{c}^{-}}p(t)=-\infty\) for some \(t_{c}>0\).
Now suppose \(q_{0}\leq 0\). Firstly, note that \(p_{0}>0\) because if not, then a Riccati-type blowup occurs. This implies
\[q_{0}p_{0}-ks_{0}<0.\]
Hence, \(qp-ks<0\) for all time. Now from Proposition 7.1, we know that after a sufficiently large time \(q>0\), and \(q,s\) follow the convergence rates. Hence, the same arguments as above apply which lead to a finite-time-breakdown of \(p\).
We now analyze the \(N=2\) case. Just as for the \(N\geq 3\) case, we need to derive a contradiction when \(q_{0}>0\) and \(p_{0}<\frac{ks_{0}}{q_{0}}\). All the other arguments are the same. To this end, we assume \(q_{0}>0\) and \(p_{0}<\frac{ks_{0}}{q_{0}}\). Assuming \(p\) exists for all times, we have the following,
\[k\int_{t}^{\infty}s(\tau)d\tau\leq p(t)\leq\frac{ks(t)}{q(t)}.\]
For \(t\geq t_{*}\) (\(t_{*}\) such that the rates of Proposition 7.1 hold), we have
\[C_{1}\int_{t}^{\infty}(\tau+1)^{-2}(1+\ln(1+\tau))^{-1}d\tau\leq p(t)\leq C_{2 }(t+1)^{-1}(1+\ln(1+t))^{-1}, \tag{7.14}\]
for all \(t\geq t_{*}\). We focus on the integral above. A substitution changes it into,
\[C_{1}\int_{1+\ln(1+t)}^{\infty}\frac{e^{-\tau}}{\tau}d\tau.\]
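Indeed, substituting \(u=1+\ln(1+\tau)\), so that \(d\tau=(1+\tau)\,du\) and hence \((\tau+1)^{-2}\,d\tau=e^{-(u-1)}\,du\), we get
\[\int_{t}^{\infty}(\tau+1)^{-2}(1+\ln(1+\tau))^{-1}\,d\tau=e\int_{1+\ln(1+t)}^{\infty}\frac{e^{-u}}{u}\,du,\]
and the harmless factor \(e\) is absorbed into the constant \(C_{1}\).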
This expression can be represented using a well-known exponential integral function given by
\[E_{1}(t)=\int_{t}^{\infty}\frac{e^{-\tau}}{\tau}d\tau.\]
Moreover, it has the following bounds, see [1, Page 229],
\[\frac{e^{-t}}{2}\ln\left(1+\frac{2}{t}\right)<E_{1}(t)<e^{-t}\ln\left(1+\frac {1}{t}\right).\]
Using the above lower bound in (7.14), we obtain the following bounds,
\[\frac{C_{1}}{1+t}\ln\left(1+\frac{2}{1+\ln(1+t)}\right)\leq p(t)\leq\frac{C_ {2}}{1+t}(1+\ln(1+t))^{-1},\quad t\geq t_{*}. \tag{7.15}\]
Therefore, we have,
\[0>qp-ks \geq\frac{C_{1}}{(1+t)^{2}}\ln\left(1+\frac{2}{1+\ln(1+t)}\right) -\frac{C_{2}}{(1+t)^{2}}\frac{1}{(1+\ln(1+t))}\] \[\geq-\frac{C_{1}}{(1+t)^{2}}\frac{1}{(1+\ln(1+t))},\]
for all times sufficiently large. On the other hand, if we analyze the right-hand-side of (7.12), we see that for \(t\geq t_{*}\),
\[\frac{(q_{0}p_{0}-ks_{0})}{\sqrt{s_{0}}}\sqrt{s}e^{-\int_{0}^{t}p}\leq-\frac{ C_{1}}{(1+t)\sqrt{1+\ln(1+t)}}e^{-\int_{0}^{t}p}\]
\[=-\frac{C_{1}}{(1+t)\sqrt{1+\ln(1+t)}}e^{-\int_{t_{\star}}^{t}p}\] \[\leq-\frac{C_{1}}{(1+t)\sqrt{1+\ln(1+t)}}e^{-\int_{t_{\star}}^{t} \frac{C_{2}}{1+\tau}(1+\ln(1+\tau))^{-1}d\tau}\] \[=-\frac{C_{1}}{(1+t)\sqrt{1+\ln(1+t)}}e^{-C_{2}\ln(1+\ln(1+t))}\] \[=-\frac{C_{1}}{(1+t)[1+\ln(1+t)]^{C_{2}}}.\]
Once again, this contradicts (7.12) since for sufficiently large times,
\[qp-ks\geq-\frac{C_{1}}{(1+t)^{2}(1+\ln(1+t))}>-\frac{C_{1}}{(1+t)[1+\ln(1+t)]^{ C_{2}}}\geq(q_{0}p_{0}-ks_{0})e^{-\int_{0}^{t}q}e^{-\int_{0}^{t}p}.\]
This completes the proof.
_Proof of Theorem 2.4:_ Suppose the hypothesis holds. Then by using (5.1) in Proposition 7.3, one immediately obtains the blow up of density in finite time if \(A_{0}<-k/(N-2)\).
If \(A_{0}=-k/(N-2)\), then the result follows from Proposition 7.7.
_Proof of Theorem 2.6:_ Suppose the initial data satisfies the hypothesis of the Theorem. Along the characteristic path (3.1), this translates to the condition that for all \(\beta>0\),
\[(\beta,u_{0}(\beta),\phi_{0r}(\beta),u_{0r}(\beta),\rho_{0}(\beta))\in\Sigma_{ N}\cup\{(\beta,x,y,z,0):x>0,z\geq-ky/x\}.\]
We will now analyze a single characteristic path and replace the initial data notations with \((\beta,u_{0},\phi_{0r},u_{0r},\rho_{0})\). Under the transformation (3.7), we now turn to the unknowns of system (7.1), \((q,s,p,\rho)\). Global-in-time existence of these variables is equivalent to the global-in-time existence of the original variables. If \(\rho(0)=0\), then Proposition 7.11 gives the all-time existence of \(p\) and hence, the solution is global.
Next, we prove global existence for the case when \(\rho(0)>0\). Turning to Definition 2.7, we will use the equivalence of \(a\) and (5.3), and analyze the conditions (2.4), (2.5), (2.6), (2.7) one by one. To this end, suppose first that,
\[A_{0}\in\left(-\frac{k}{N-2},0\right).\]
Then (2.4) is fulfilled. Rearranging (2.4), and noting the transformations (3.7), (5.1), we obtain for \(x\neq 0\),
\[\frac{\eta_{0}}{\beta q_{0}}<\frac{\eta_{1}(0)}{\beta q_{0}},\]
and for \(x=0\),
\[w_{0}>\frac{d\eta_{1}(0)}{dt}.\]
Also, \(x=0\) if and only if \(q_{0}=0\). The first inequality then is the hypothesis to the second assertion of Proposition 7.4 and the second assertion of Proposition 7.6 (depending on whether \(q_{0}>0\) or \(q_{0}<0\)). The second inequality is the hypothesis to Proposition 7.5. As a result, through (5.1), we obtain the all time existence of \(\rho,p\) in (7.1a), (7.1b).
Next, we suppose that \(A_{0}=0\). Then (2.5) reduces to \(\eta_{0}<\eta_{2}(0)\) for \(q_{0}<0\) and there are no extra conditions if \(q_{0}>0\). Note that \(q_{0}\) cannot be equal to zero because that would be a violation of \(A_{0}=0\). These two scenarios form the hypothesis of the first assertion of Proposition 7.4 and the first assertion of Proposition 7.6. Therefore, we have the all time existence of \(\rho,p\).
Now, suppose
\[A_{0}\in\left(0,\frac{k}{N-2}\left(\left(\frac{-\beta y_{\mathfrak{M},N}}{y} \right)^{1-\frac{2}{N}}-1\right)\right).\]
Then using (7.9) with \(y_{\mathfrak{M},N}=s_{max}\) and rearranging this gives,
\[\kappa\in\left(1,\left(\frac{s_{max}}{s_{0}}\right)^{\frac{1}{N}}\right),\]
with \(\kappa\) as in (5.6). Note that \(y^{M}\), which was not given explicitly in the initial definition of \(\Sigma_{N}\), is now given explicitly by (7.9). Clearly, it only depends on \(q_{0},s_{0}\). Also note that since \(A_{0}>0\), we have \(\kappa>1\) directly from its formula (5.6). From (2.6), we obtain,
\[\eta_{1}(0)<\eta_{0}<\eta_{2}(0),\]
if \(q_{0}<0\), and no extra conditions if \(q_{0}>0\). Note that \(q_{0}\) cannot be equal to zero because that would be a violation of \(A_{0}>0\). Therefore, all time existence of \(\rho,p\) follows from the first assertion of Proposition 7.6 (if \(q_{0}<0\)) and the first assertion of Proposition 7.4 (if \(q_{0}>0\)).
Lastly, we assume
\[A_{0}\geq\frac{k}{N-2}\left(\left(\frac{-\beta y_{\mathfrak{M},N}}{y}\right)^ {1-\frac{2}{N}}-1\right).\]
Similar to how we argued in the previous case, here the above inequality is equivalent to saying that \(\kappa\geq(s_{max}/s_{0})^{\frac{1}{N}}\). The conditions (2.7) imply that \(q_{0}>0\). We can then apply the first assertion of Proposition 7.4 to obtain the all time existence of \(\rho,p\).
Putting all the cases together and applying Proposition 5.2, we have obtained that solutions to (7.1) exist for all time along any single characteristic that lies in one of the above situations. In particular, if all the characteristics satisfy the above conditions, then an application of Lemma 3.1 and Theorem 1.1 gives the existence of global-in-time solutions to (1.2) with \(c=0\).
Conversely, suppose there is a characteristic path corresponding to some parameter \(\beta^{*}>0\) such that,
\[(\beta^{*},u_{0}^{*},\phi_{0r}^{*},u_{0r}^{*},\rho_{0}^{*}) :=(\beta^{*},u_{0}(\beta^{*}),\phi_{0r}(\beta^{*}),u_{0r}(\beta^{ *}),\rho_{0}(\beta^{*}))\] \[\notin\Sigma_{N}\cup\{(\beta,x,y,z,0):x>0,z\geq-ky/x\}.\]
If \(\rho_{0}^{*}=0\), then a direct application of Proposition 7.11 gives the finite-time-breakdown of \(p\).
Now, we suppose \(\rho_{0}^{*}>0\). Then it could be that \(A_{0}^{*}\leq-k/(N-2)\). Finite time breakdown is then a direct result of Propositions 7.3 or 7.7.
If \(A_{0}^{*}>-k/(N-2)\), then the negation of one of the conditions among (2.4), (2.5), (2.6), (2.7) has to be true, according to the value of \(A_{0}^{*}\). Once again, we can check each condition one by one. All the analysis is a repetition of the above, except that, instead of the all-time existence results, we use the finite-time-breakdown results of Propositions 7.4, 7.5 and 7.6. Consequently, solutions to (1.2) cease to be smooth.
This completes the proof of the Theorem.
Theorem 2.8 can be proved in a very similar way to Theorem 2.6, except that instead of Propositions 7.4, 7.5 and 7.6, we use Propositions 7.8, 7.9 and 7.10.
## 8. Conclusion
The techniques developed in this paper work for the one-dimensional case as well. However, there is one main difference between the 1D and multi-dimensional scenarios. As pointed out by the author in [28], in multi-D the Poisson forcing is enough to avoid flow concentration at the origin. In particular, by Corollary 4.2, we know that no matter how large (in absolute value) the initial velocity is, there are no concentrations at the origin. Let us do similar calculations for the 1D case. In 1D, the system (3.8c), (3.8d) for \(N=1\) reduces to,
\[q^{\prime}=k\tilde{s}-kc-q^{2},\qquad\tilde{s}^{\prime}=-q\tilde{s},\]
with \(\tilde{s}:=s+c\). Using a transformation, \(a=q/\tilde{s},b=1/\tilde{s}\), we obtain a simple linear ODE system,
\[a^{\prime}=k-kcb,\qquad b^{\prime}=a.\]
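Indeed, using \(q^{\prime}=k\tilde{s}-kc-q^{2}\) and \(\tilde{s}^{\prime}=-q\tilde{s}\), one checks directly that
\[a^{\prime}=\frac{q^{\prime}}{\tilde{s}}-\frac{q\tilde{s}^{\prime}}{\tilde{s}^{2}}=\frac{k\tilde{s}-kc-q^{2}}{\tilde{s}}+\frac{q^{2}}{\tilde{s}}=k-kcb,\qquad b^{\prime}=-\frac{\tilde{s}^{\prime}}{\tilde{s}^{2}}=\frac{q}{\tilde{s}}=a.\]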
We can analytically solve this to obtain,
\[b(t)=\frac{1}{c}+\left(b(0)-\frac{1}{c}\right)\cos(\sqrt{kc}\,t)+\frac{a(0)}{\sqrt{kc}}\sin(\sqrt{kc}\,t).\]
From this, one can conclude that \(b(t_{*})=0\) (or equivalently, \(\lim_{t\to t_{*}^{-}}\tilde{s}(t)=\infty\)) for some \(t_{*}>0\) if
\[a(0)^{2}\geq 2kb(0)-kc(b(0))^{2},\]
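To see this, note that the oscillatory part of \(b\) has amplitude \(\sqrt{(b(0)-1/c)^{2}+a(0)^{2}/(kc)}\), so that
\[\min_{t}b(t)=\frac{1}{c}-\sqrt{\left(b(0)-\frac{1}{c}\right)^{2}+\frac{a(0)^{2}}{kc}}\leq 0\]
holds precisely when \((b(0)-1/c)^{2}+a(0)^{2}/(kc)\geq 1/c^{2}\), which rearranges to the condition above.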
which in the original variables is equivalent to,
\[|q(0)|\geq\sqrt{k(2\tilde{s}(0)-c)}.\]
Hence, for a large initial velocity (in absolute value), there are concentrations at the origin. For \(c=0\), the result is one-sided, that is, a sufficiently large negative initial velocity (flow pointing towards the origin) would lead to concentrations at the origin. This concentration is completely avoided for \(N\geq 2\), wherein the Poisson forcing turns out to be sufficient.
In this work, we make a subsequent important discovery that, in multi-D, the subcritical region can contain arbitrarily large initial velocities. In other words, no matter how large the initial velocity is (positive or negative), one can have a region in the phase plane of initial density and gradient of velocity, that corresponds to all-time-existence of the solution.
Moreover, we are hopeful that our techniques can be applied to other systems such as Euler-Poisson-alignment and Euler-Poisson with swirl. However, we leave that for future studies.
## Acknowledgments
This research was partially supported by the National Science Foundation under Grant DMS1812666. |
2304.01806 | Connected and Autonomous Vehicle Scheduling Problems: Some Models and
Algorithms | In this paper, we consider scheduling problems that arise in connected and
autonomous vehicle systems. For four variants of such problems, mathematical
models and solution algorithms are presented. In particular, three polynomial
algorithms and a branch and bound algorithms are developed. | Evgeny R. Gafarov, Frank Werner | 2023-04-04T14:03:18Z | http://arxiv.org/abs/2304.01806v1 | # Connected and Autonomous Vehicle Scheduling Problems: Some Models and Algorithms.
###### Abstract
In this paper, we consider scheduling problems that arise in connected and autonomous vehicle systems. For four variants of such problems, mathematical models and solution algorithms are presented. In particular, three polynomial algorithms and a branch and bound algorithms are developed.
**Keywords:** Scheduling, Optimization, Dynamic programming, Connected and autonomous vehicle, Precedence relations
**MSC classification:** 90 B 35, 90 C 27, 68 Q 25, 68 W 40
Introduction
Car-to-car communication has been presented as a possible solution to many challenges encountered in this field. Many solutions have been presented involving the modeling of the entirety of an autonomous driving system as a multi-agent system, where vehicles interact to enable autonomous functionality such as emergency braking and traffic jam avoidance. Vehicle systems are developing towards fully connected and fully autonomous systems. Vehicular communication technologies have been considered e.g. in [1, 2].
These vehicle systems can coordinate the vehicles in order to speed up the traffic and avoid traffic jams. Vehicles can be coordinated by a centralized scheduler residing in the network (e.g., a base station in case of cellular systems) or a distributed scheduler, where the resources are autonomously selected by the vehicles.
In [3], the authors proposed to optimize the departure times, travel routes, and longitudinal trajectories of connected and autonomous vehicles (CAVs) as well as signal timings at intersections to achieve a stable traffic state, where no vehicles need to stop before entering any intersection and no queue spillover occurs at any intersection. The departure times, travel routes and signal timings are optimized in a central controller, while the vehicle trajectories can be optimized by distributed roadside processors, which together form a hierarchical traffic management scheme.
In [4], the authors considered the coordination of lane changes of autonomous vehicles on a two-lane road segment before reaching a given critical position. An algorithm is presented that performs a lane change of a single vehicle in the shortest possible time. This algorithm is then iteratively applied in order to handle all lane changes required on the considered road segment while guaranteeing traffic safety.
In [5], the scheduling problem of a CAV crossing the intersection was considered to optimize the intersection efficiency. In addition, a solution algorithm was presented.
In [6], the time phases of the traffic light scheduling problem were considered with the goal of increasing the traffic fluency by decreasing the waiting time of the traveling vehicles at the signalized road intersections.
In this paper, we consider four scheduling problems that arise in connection with CAVs. The remainder of this paper is organized as follows. In each of the Sections 2 - 5, we consider one of these problems. The problem with a road of two lanes and a barrier on a lane is considered in Section 2. Section 3 deals with the case of a turn to a main road. Section 4 considers the case of a road with three lanes and a barrier on the middle lane. Section 5 deals with a crossroad having dividing lanes. For each of these cases, an appropriate scheduling problem is formulated and a solution algorithm is given. Finally, Section 6 gives a few concluding remarks.
## 2 A road with two lanes and a barrier on a lane
In this section, we consider a road with two lanes, where two sets \(N_{1}\) and \(N_{2}\) of CAVs are given. The CAVs from the set \(N_{1}\) go on lane 1, and the CAVs from the set \(N_{2}\) go on lane 2. Both lanes have the same direction. On lane 2, there is a barrier and the CAVs from the set \(N_{2}\) have to move to lane 1, see Fig. 1.
We have to find a sequence of passing the barrier by the CAVs from the sets \(N_{1}\) and \(N_{2}\) in order to minimize a given objective function, e.g. the total passing time.
We assume that
* a maximal feasible speed of the CAVs is given. The CAVs either go with the maximal feasible speed or brake in order to let another CAV change the lane;
* an acceleration is not taken into account;
* the time needed to change the lane is not taken into account, i.e., it is equal to zero;
* the CAVs have the same length;
The same problem arises, e.g., on railway sections and in automated warehouses of logistics companies with autonomous robot-transporters. This simplified problem can be formulated as a single machine scheduling problem as follows.
Given a set \(N=N_{1}\bigcup N_{2}\) of \(n\) jobs that have to be processed on a single machine from time \(0\) on. For each job \(j\), a processing time \(p_{j}=p\), a release date \(r_{j}>0\), a due date \(d_{j}\) and a weight \(w_{j}\) are given. The processing time \(p\) can be computed from the maximal feasible speed and the length of a CAV. The value \(r_{j}\) corresponds to the position of the CAV \(j\) on the road.
A schedule is uniquely determined by a permutation \(\pi\) of the CAVs of the set \(N\). Let \(C_{j}(\pi)\) be the completion time of job \(j\) in the schedule \(\pi\). A precedence relation can be defined, i.e., for the jobs from the set \(N_{1}\), we have \(j_{1}^{1}\to j_{2}^{1}\rightarrow\ldots\to j_{n_{1}}^{1}\), where \(n_{1}=|N_{1}|\) and \(j\to i\) means that the processing of job \(j\) precedes the processing of job \(i\). Thus, there is a chain of jobs on lane \(1\). Analogously, a chain of jobs can be defined for the set \(N_{2}\).
For the single machine scheduling problem of minimizing total completion time, the goal is to find an optimal schedule \(\pi^{*}\) that minimizes the total completion time, i.e.,
\[\sum C_{j}=\sum_{j\in N}C_{j}. \tag{1}\]
Here the completion time of a job is equal to the time when the car passes the barrier. We denote this problem by \(1|2\ chains,p_{j}=p,r_{j}|\sum C_{j}\) according to the traditional three-field notation \(\alpha|\beta|\gamma\) for scheduling problems proposed by Graham et al. [7], where \(\alpha\) describes the machine environment, \(\beta\) gives the job characteristics and further constraints, and \(\gamma\) describes the objective function.
Let
\[T_{j}(\pi)=\max\{0,C_{j}(\pi)-d_{j}\}\]
be the tardiness of job \(j\) in the schedule \(\pi\). In addition, one can consider also the following objective functions:
\[\sum w_{j}C_{j} = \sum w_{j}C_{j}(\pi)\mbox{ -- total weighted completion time},\] \[\sum T_{j} = \sum T_{j}(\pi)\mbox{ -- total tardiness},\] \[\sum w_{j}T_{j} = \sum w_{j}T_{j}(\pi)\mbox{ -- total weighted tardiness}.\]
It is known that the problems \(1|chains,p_{j}=p,r_{j}|\sum w_{j}C_{j}\) and \(1|chains,p_{j}=p,r_{j}|\sum w_{j}T_{j}\) with an arbitrary number of chains are NP-hard [8]. This has been proven by a reduction from the 3-Partition Problem.
In [9], Baptiste presented polynomial time dynamic programming algorithms to solve the problems \(1|p_{j}=p,r_{j}|\sum T_{j}\) and \(1|p_{j}=p,r_{j}|\sum w_{j}U_{j}\).
In an optimal schedule for the problem \(1|2\ chains,p_{j}=p,r_{j}|\sum C_{j}\), the jobs are processed in non-decreasing order of the values \(r_{j}\). This can be easily proven by contradiction. For an illustration of the concepts introduced above, we consider the following small example.
Figure 1: A road with two lanes and a barrier on a lane
**Example.** Let \(N_{1}=\{1,2\},\ N_{2}=\{3,4\}\). Moreover, the values \(p=2,\ r_{1}=0,\ r_{2}=3,\ r_{3}=1,\ r_{4}=4\) and \(d_{1}=d_{2}=0,\ d_{3}=3,\ d_{4}=6\) are given. For the chosen job sequence \(\pi=(1,3,2,4)\), we obtain the starting times \(S_{1}(\pi)=0,\ S_{3}(\pi)=2,\ S_{2}(\pi)=4,\ S_{4}(\pi)=6\) and the completion times \(C_{1}(\pi)=2,\ C_{3}(\pi)=4,\ C_{2}(\pi)=6,\ C_{4}(\pi)=8\). Thus, \(\sum_{j=1}^{4}C_{j}(\pi)=20\) and, counting the tardiness of the due-date restricted jobs \(3\) and \(4\) only, \(\sum T_{j}(\pi)=1+2=3\). For the job sequence \(\pi^{\prime}=(3,4,1,2)\), we get \(\sum T_{j}(\pi^{\prime})=0\).
We note that there exists a set \(\Theta\) of possible completion times of all jobs, where \(|\Theta|\leq n^{2}\), since:
* without loss of generality, we consider only active schedules, where no job can be processed earlier without loss of feasibility;
* there are no more than \(n\) different values \(r_{j}\);
* all processing times are equal to \(p\) and thus, for any job \(j\in N\), its completion time is equal to \(r_{i}+lp\) for some \(i\in N\) and \(l\leq n\).
**Theorem 1.** The problems \(1|2\ chains,p_{j}=p,r_{j}|f,\ f\in\{\sum w_{j}C_{j},\sum w_{j}T_{j}\}\) can be solved in \(O(n^{5})\) time by a dynamic program.
A sketch of the proof is as follows. In the dynamic program (DP), we consider the jobs \(i_{1},i_{2},\ldots,i_{n_{2}}\in N_{2}\) one by one, where \(i_{1}\to i_{2}\rightarrow\ldots\to i_{n_{2}}\). Thus, at each stage \(k\) of the dynamic program, we consider a single job \(i_{k},\ k=1,2,\ldots,n_{2}\). Moreover, at each stage we consider all states \((f,C_{max},pos)\) stored at the previous stage. In addition, for each state, we store the best partial solution (sequence of jobs). The meaning of the above triplet is as follows. Here \(pos\in\{0,1,2,\ldots,n_{1}\}\) describes the position of a job, and it means that job \(i_{k-1}\) is processed between the jobs \(j_{pos}\in N_{1}\) and \(j_{pos+1}\in N_{1},\ 0<pos<n_{1}\), and \(C_{max}=C_{i_{k-1}}\in\Theta\) denotes the completion time of job \(i_{k-1}\) in the corresponding partial solution. Finally, \(f\) is the value of the considered objective function that corresponds to the partial solution. For each job \(i_{k}\) and a state \((f,C_{max},pos)\), we compute new states \((f^{\prime},C^{\prime}_{max},pos^{\prime})\), where \(pos^{\prime}\geq pos\), and \(C^{\prime}_{max}\) is the completion time of job \(i_{k}\) in a new partial solution, where job \(i_{k}\) is scheduled after job \(j_{pos^{\prime}}\in N_{1}\). If at any stage, there are two states \((f^{\prime},C^{\prime}_{max},pos^{\prime})\) and \((f^{\prime\prime},C^{\prime\prime}_{max},pos^{\prime})\) with \(f^{\prime}\leq f^{\prime\prime}\) and \(C^{\prime}_{max}\leq C^{\prime\prime}_{max}\), we only keep the state \((f^{\prime},C^{\prime}_{max},pos^{\prime})\). After the last stage, we have to select the best found complete solution among all states generated.
A pseudo-code of Algorithm DP is presented below.
**Algorithm DP**
1. \(StatesSet=\{(0,0,0)\}\);
2. FOR EACH \(i_{k}\in N_{2}\) DO
   2.1 \(NewStatesSet=\{\}\);
   2.2 FOR EACH \((f,C_{max},pos)\in StatesSet\) DO
       2.2.1 Let \(PositionsList=\{pos,pos+1,\ldots,n_{1}\}\);
       2.2.2 FOR EACH \(pos^{\prime}\in PositionsList\) DO
           2.2.2.1 Calculate \(f^{\prime}\) for the resulting partial solution, if job \(i_{k}\) is processed after \(j_{pos^{\prime}}\), according to the partial solution corresponding to state \((f,C_{max},pos)\);
           2.2.2.2 Add \((f^{\prime},C^{\prime}_{max},pos^{\prime})\) to \(NewStatesSet\). If in \(NewStatesSet\), there is a state \((f^{\prime\prime},C^{\prime\prime}_{max},pos^{\prime})\) with \(f^{\prime}\leq f^{\prime\prime}\) and \(C^{\prime}_{max}\leq C^{\prime\prime}_{max}\), then exclude the state \((f^{\prime\prime},C^{\prime\prime}_{max},pos^{\prime})\) from \(NewStatesSet\);
   2.3 \(StatesSet:=NewStatesSet\);
3. Select the best found complete solution among all states generated.
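For illustration, we give a minimal Python sketch of Algorithm DP below. For clarity, it stores complete partial sequences and recomputes objective values from scratch, so it does not attain the \(O(n^{5})\) bound of Theorem 1; all identifiers are ours and chosen for this sketch only.

```python
def completion_times(seq, p, r):
    """Active schedule: process `seq` in the given order, starting each job
    as early as its release date and the machine availability allow."""
    t, C = 0, {}
    for j in seq:
        t = max(t, r[j]) + p
        C[j] = t
    return C

def objective(seq, p, r, w, d=None):
    """sum w_j C_j if d is None, otherwise sum w_j T_j with T_j = max(0, C_j - d_j)."""
    C = completion_times(seq, p, r)
    return sum(w[j] * (C[j] if d is None else max(0, C[j] - d[j])) for j in seq)

def dp_two_chains(chain1, chain2, p, r, w, d=None):
    """DP for 1 | 2 chains, p_j = p, r_j | f.  A state (pos, C_max) means that
    the last scheduled chain2 job follows job j_pos of chain1 and completes
    at time C_max; we keep the best partial sequence for every state."""
    n1 = len(chain1)
    states = {(0, 0): (0, [])}                 # (pos, C_max) -> (f, sequence)
    for i_k in chain2:
        new_states = {}
        for (pos, _), (_, seq) in states.items():
            for pos2 in range(pos, n1 + 1):    # schedule i_k right after j_{pos2}
                cand = seq + chain1[pos:pos2] + [i_k]
                f = objective(cand, p, r, w, d)
                c = completion_times(cand, p, r)[i_k]
                if (pos2, c) not in new_states or f < new_states[(pos2, c)][0]:
                    new_states[(pos2, c)] = (f, cand)
        # dominance rule of step 2.2.2.2: at equal pos, drop states that are
        # beaten in both objective value and completion time
        states = {k: v for k, v in new_states.items()
                  if not any(k2[0] == k[0] and k2 != k
                             and v2[0] <= v[0] and k2[1] <= k[1]
                             for k2, v2 in new_states.items())}
    # step 3: complete each surviving state with the remaining chain1 jobs
    return min((objective(seq + chain1[pos:], p, r, w, d), seq + chain1[pos:])
               for (pos, _), (_, seq) in states.items())
```

On the example above, `dp_two_chains([1, 2], [3, 4], 2, {1: 0, 2: 3, 3: 1, 4: 4}, {1: 1, 2: 1, 3: 1, 4: 1})` returns \((20,[1,3,2,4])\), i.e., the optimal total completion time \(\sum C_{j}=20\).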
## 3 Turn to a main road
There is a set \(N_{1}\) of CAVs going along a main road and a set \(N_{2}\) of CAVs turning into the main road from a side road, see Fig. 2. In contrast to the problems \(1|2\ chains,p_{j}=p|\gamma\), we have now \(p_{j}=p^{1},\ j\in N_{1}\) and \(p_{j}=p^{2},\ j\in N_{2}\). We denote this problem by \(1|2\ chains,p_{j}\in\{p^{1},p^{2}\},r_{j}|\gamma\). This problem can be solved by the same DP.
## 4 A road with three lanes and a barrier on the middle lane
In addition to the problems \(1|2\ chains,p_{j}=p|\gamma\), there are an additional lane \(3\) and a subset \(N_{3}\) of jobs, see Fig. 3. The jobs of the set \(N_{1}\) should be processed on machine \(1\), and the jobs of the set \(N_{3}\) should be processed on machine \(3\). The jobs of the set \(N_{2}\) can be processed on any of these two machines. Precedence relations among the jobs of the set \(N_{3}\) can be defined as a chain of jobs.
We denote this problem by \(P2|dedicated,3\ chains,p_{j}=p,r_{j}|\gamma\). This problem can be solved by a modified DP, where we consider the positions \(pos\) between the jobs of the set \(N_{1}\) and between the jobs of the set \(N_{3}\).
## 5 A crossroad with dividing lines
In this section, we consider a crossroad with dividing lines and four sets \(N_{1},N_{2},N_{3},N_{4}\) of CAVs. They share four sectors of the crossroad denoted by \(M_{1},M_{2},M_{3},M_{4}\), see Fig. 4. We have to find an optimal sequence of passing these sectors.
Figure 3: A road with three lanes and a barrier on the middle lane
Figure 2: Turn to a main road
We can formulate the following job shop scheduling problem with four machines. There are four sets \(N_{1},N_{2},N_{3},N_{4}\) of jobs and four machines corresponding to the sectors \(M_{1},M_{2},M_{3},M_{4}\). Each job \(j\) consists of two operations. For each job \(j\in N_{1}\), its first operation has to be processed on machine \(M_{1}\) and its second one has to be processed on machine \(M_{2}\). For each job \(j\in N_{2}\), its first operation has to be processed on machine \(M_{2}\) and its second one has to be processed on machine \(M_{4}\). For each job \(j\in N_{3}\), its first operation has to be processed on machine \(M_{3}\) and its second one has to be processed on machine \(M_{1}\). For each job \(j\in N_{4}\), its first operation has to be processed on machine \(M_{4}\) and its second one has to be processed on machine \(M_{3}\). The processing times of the operations are equal to \(p\). Precedence relations can be given as chains of jobs.
If the lengths of the dividing lines are equal to \(0\), then the second operation of a job \(j\) should be processed immediately after the first one. Otherwise, for each of the sets \(N_{1},N_{2},N_{3},N_{4}\), there is a buffer of limited capacity, namely \(b_{1},b_{2},b_{3},b_{4}\) jobs, respectively. At any moment, for the set \(N_{1}\), there can be up to \(b_{1}\) jobs for which the first operation is completed and the second one is not yet started. We denote these problems by \(J4|4\ chains,p_{j}=p,r_{j}|f\),
\(f\in\{C_{max},\sum w_{j}C_{j},\sum w_{j}T_{j}\}\), where \(C_{max}\) is the makespan.
The problems \(J4|4\ chains,p_{j}=p,r_{j}|f,\ f\in\{C_{max},\sum w_{j}C_{j},\sum w_{j}T_{j}\}\) can be solved by a branch-and-bound (B&B) algorithm. The search (rooted) tree is constructed by the following branching rule. For any node of the tree, we consider the following 8 possible branches:
* Schedule the first unscheduled possible operation for a job \(j\in N_{1}\) on machine \(M_{1}\) at the earliest possible starting time. If there is no such an operation, skip this branch.
* Schedule the first unscheduled possible operation for a job \(j\in N_{1}\) on machine \(M_{1}\) at the earliest possible starting time. If there is no such operation, skip this branch (this applies analogously to all branches below).
* Schedule the first unscheduled possible operation for a job \(j\in N_{1}\) on machine \(M_{2}\) at the earliest possible starting time.
* Schedule the first unscheduled possible operation for a job \(j\in N_{2}\) on machine \(M_{2}\) at the earliest possible starting time.
* Schedule the first unscheduled possible operation for a job \(j\in N_{3}\) on machine \(M_{3}\) at the earliest possible starting time.
Figure 4: A crossroad with dividing lines
* Schedule the first unscheduled possible operation for a job \(j\in N_{4}\) on machine \(M_{3}\) at the earliest possible starting time.
* Schedule the first unscheduled possible operation for a job \(j\in N_{2}\) on machine \(M_{4}\) at the earliest possible starting time.
* Schedule the first unscheduled possible operation for a job \(j\in N_{4}\) on machine \(M_{4}\) at the earliest possible starting time.
Thus, there are up to \(2^{3}=8\) branches for each node to be considered. Since there are \(2n\) operations, where \(n=|N_{1}\bigcup N_{2}\bigcup N_{3}\bigcup N_{4}|\), there are no more than \(2n\) levels in the search tree. Thus, we have no more than \((2^{3})^{2n}=2^{6n}\) nodes to be considered. If some of the values \(b_{1},b_{2},b_{3},b_{4}\) are equal to \(0\), we have fewer nodes, e.g., if each of them is equal to \(0\), then we have only \(2^{3n}\) nodes.
Moreover, we can use the following trivial lower and upper bounds for the problem \(J4|4\ chains,p_{j}=p,r_{j}|C_{max}\).
**Upper bound.** To construct a feasible solution, we use a list scheduling algorithm. In this algorithm, we consider the unscheduled operations one-by-one according to a non-decreasing order of the release dates of the corresponding jobs. We schedule the next unscheduled operation at the earliest possible starting time according to the current partial schedule. To order the set of jobs, we need \(O(n\log n)\) operations. In addition, we need \(O(n)\) operations to construct a feasible solution.
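This greedy procedure can be sketched in Python as follows. This is a simplified, job-by-job variant of the operation-by-operation rule described above: buffer capacities are ignored, each job is given as its ordered pair of dedicated machines, and we assume the release dates are consistent with the chain orders, as is natural for vehicles queued on a lane.

```python
def list_schedule_upper_bound(jobs, p, r):
    """Feasible schedule by list scheduling: consider jobs in non-decreasing
    release-date order and start each of the two operations as early as the
    job's release date, its first operation, and machine availability allow.
    Returns an upper bound on C_max for the relaxation without buffer limits."""
    machine_free = {}              # machine -> time at which it becomes available
    makespan = 0
    for j in sorted(jobs, key=lambda j: r[j]):
        m1, m2 = jobs[j]           # dedicated machines of the two operations
        s1 = max(r[j], machine_free.get(m1, 0))
        machine_free[m1] = s1 + p
        s2 = max(s1 + p, machine_free.get(m2, 0))
        machine_free[m2] = s2 + p
        makespan = max(makespan, s2 + p)
    return makespan
```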
**Lower bound.** Consider a set of unscheduled operations \(N^{\prime}\). For each of them, we calculate the earliest possible starting time according to the current partial schedule without taking into account the other unscheduled operations. In such a way, we get a schedule \(\pi\) that can be infeasible. Let \(C_{M_{1}}(\pi)\) be the makespan (i.e., the maximal completion time of an operation assigned to the machine) for machine \(M_{1}\), \(IT_{M_{1}}(\pi)\) be the idle time of machine \(M_{1}\) between the operations of the set \(N^{\prime}\), and \(OT_{M_{1}}(\pi)\) be the total overlap time, where more than one operation is processed at the same time. Moreover, let
\[C^{\prime}_{M_{1}}(\pi)=C_{M_{1}}(\pi)+\max\{0,OT_{M_{1}}(\pi)-IT_{M_{1}}(\pi)\}.\]
Then
\[LB1=\max\{C^{\prime}_{M_{1}}(\pi),C^{\prime}_{M_{2}}(\pi),C^{\prime}_{M_{3}}( \pi),C^{\prime}_{M_{4}}(\pi)\}\]
is a lower bound. It is easy to check that we need \(O(n)\) operations to calculate this bound.
If we use these upper and lower bounds, then the B&B algorithm requires \(O(n2^{6n})\) operations.
## 6 Concluding Remarks
In this note, four models of scheduling problems for CAVs have been given. Three of them can be solved by a dynamic programming algorithm in polynomial time. For the fourth problem, a B&B algorithm has been presented.
The following questions can be considered in the future:
* Are the problems \(J4|4\ chains,p_{j}=p,r_{j}|f\), \(f\in\{C_{max},\sum w_{j}C_{j},\sum w_{j}T_{j}\}\) NP-hard or can be solved in polynomial time?
* Are there problems with CAVs having equal processing times and a fixed number of chains of jobs that are NP-hard? |
2302.03347 | An Informative Path Planning Framework for Active Learning in UAV-based
Semantic Mapping | Unmanned aerial vehicles (UAVs) are frequently used for aerial mapping and
general monitoring tasks. Recent progress in deep learning enabled automated
semantic segmentation of imagery to facilitate the interpretation of
large-scale complex environments. Commonly used supervised deep learning for
segmentation relies on large amounts of pixel-wise labelled data, which is
tedious and costly to annotate. The domain-specific visual appearance of aerial
environments often prevents the usage of models pre-trained on publicly
available datasets. To address this, we propose a novel general planning
framework for UAVs to autonomously acquire informative training images for
model re-training. We leverage multiple acquisition functions and fuse them
into probabilistic terrain maps. Our framework combines the mapped acquisition
function information into the UAV's planning objectives. In this way, the UAV
adaptively acquires informative aerial images to be manually labelled for model
re-training. Experimental results on real-world data and in a photorealistic
simulation show that our framework maximises model performance and drastically
reduces labelling efforts. Our map-based planners outperform state-of-the-art
local planning. | Julius Rückin, Federico Magistri, Cyrill Stachniss, Marija Popović | 2023-02-07T09:41:21Z | http://arxiv.org/abs/2302.03347v3 | # An Informative Path Planning Framework for Active Learning in UAV-based Semantic Mapping
###### Abstract
Unmanned aerial vehicles (UAVs) are crucial for aerial mapping and general monitoring tasks. Recent progress in deep learning enabled automated semantic segmentation of imagery to facilitate the interpretation of large-scale complex environments. Commonly used supervised deep learning for segmentation relies on large amounts of pixel-wise labelled data, which is tedious and costly to annotate. The domain-specific visual appearance of aerial environments often prevents the usage of models pre-trained on a static dataset. To address this, we propose a novel general planning framework for UAVs to autonomously acquire informative training images for model retraining. We leverage multiple acquisition functions and fuse them into probabilistic terrain maps. Our framework combines the mapped acquisition function information into the UAV's planning objectives. In this way, the UAV adaptively acquires informative aerial images to be manually labelled for model re-training. Experimental results on real-world data and in a photorealistic simulation show that our framework maximises model performance and drastically reduces labelling efforts. Our map-based planners outperform state-of-the-art local planning.
Informative Path Planning, Active Learning, Bayesian Deep Learning, Semantic Segmentation and Mapping
## I Introduction
Unmanned aerial vehicles (UAVs) enable highly agile, low-cost operations in various aerial imaging applications [1, 2], such as precision agriculture [3, 4], wildlife conservation [2], and urban planning [5, 6, 7, 8]. Combined with recent advances in deep learning for semantic segmentation through fully convolutional neural networks (FCNs) [9, 10], deploying UAVs accelerates automated scene understanding in large-scale and complex aerial environments [11]. Classical deep learning-based semantic segmentation models often used in this context are usually trained on a static curated dataset in a supervised fashion only once before deployment. This leads to two major drawbacks: First, training a semantic segmentation model requires enormous amounts of pixel-wise labelled images, which is a repetitive and time-consuming process often executed by costly domain experts. Second, visual appearance can differ significantly between environments or change over time. Thus, a critical requirement for robot autonomy is the ability to learn about an environment by continuously improving the robot's semantic perception with minimal expert guidance.
In this work, we examine the problem of active learning (AL) in UAV-based semantic mapping. Our goal is to improve the robot's vision capabilities in initially unknown environments while minimising the total amount of human-labelled data. To this end, our approach exploits ideas from AL research and incorporates them into a new informative path planning (IPP) framework. The framework replans the UAV's path online as new observations are collected to actively target regions of informative training data. The newly gathered images are labelled by a human annotator and used to re-train an FCN, maximising its semantic segmentation performance.
Various AL methods for machine learning effectively reduce the requirements for human-labelled training data [13, 14, 15, 16, 17, 18, 19]. Recently, AL approaches for deep learning models have been gaining attention [20, 21, 22, 23, 24, 25]. These works develop acquisition functions for selecting to-be-labelled training data to maximise model performance. However, they cannot be directly applied to robotic missions as they assume access to large pre-recorded unlabelled in-domain data pools. An open problem is how to leverage AL to improve robot perception with minimal expert guidance when operating in initially unknown environments. More recent AL works for aerial imagery consider the UAV to be a passive data collection device to record static data pools [2, 6]. In contrast, we aim to utilise the UAV's decision-making capabilities to improve its perception and, thus, its
Fig. 1: Our general planning framework for active learning in UAV-based semantic mapping deployed in a photo-realistic simulator [12] (top). We compute an acquisition function, e.g. model uncertainty, and predict semantic segmentation online (centre-right) and fuse both in terrain maps (bottom-right). Our map-based planners replan a UAV’s path (orange, bottom-left) to collect the most informative, e.g. most uncertain (yellow), images for network re-training. Our approach reduces the number of images that must be manually labelled to maximise semantic segmentation performance. |
2308.16077 | Shared Mobility in Berlin: An Analysis of Ride-Pooling with Car Mobility
Data | In face of the threat of a climate catastrophe and the resulting urgent need
for decarbonization together with the widespread emergence of the sharing
economy, shared pooled mobility has been suggested as an alternative to private
vehicle use. However, until now all of its real-life implementations have
served a niche market, adjacent to taxi services. To better understand this
discrepancy, as well as the potential of pooled mobility, we have here
simulated and analyzed pooled mobility on the street network of Berlin with car
trip data as input for ride requests. We measure the rate of sharable trips,
the relative travel time of passengers, the average occupancy of the vehicles,
the relatively driven distance compared to driving with a private vehicle. We
observe that for requests in the city center of Berlin it is possible to serve
all mobility requests currently done by car, with around 4700 vehicles. The
travel time is around 1.34 higher than with a private vehicle, the vehicle's
occupancy increases to 2.6. The driven distance is reduced by 65%. In the whole
area of Berlin we observe that a ride-pooling system with 10000 vehicles can
serve 60% of the trips. The travel time is 1.4 times higher than with a private
vehicle, the occupancy gets three and the driven distance is reduced by 40%. | Alexander Schmaus, Felix Creutzig, Nicolas Koch, Nora Molkenthin | 2023-08-30T14:59:22Z | http://arxiv.org/abs/2308.16077v1 | # Shared Mobility in Berlin: An Analysis of Ride-Pooling with Car Mobility Data
###### Abstract
In face of the threat of a climate catastrophe and the resulting urgent need for decarbonization, together with the widespread emergence of the sharing economy, shared pooled mobility has been suggested as an alternative to private vehicle use. However, until now all of its real-life implementations have served a niche market, adjacent to taxi services. To better understand this discrepancy, as well as the potential of pooled mobility, we have here simulated and analyzed pooled mobility on the street network of Berlin with car trip data as input for ride requests. We measure the rate of sharable trips, the relative travel time of passengers, the average occupancy of the vehicles, and the driven distance relative to driving with a private vehicle. We observe that for requests in the city center of Berlin it is possible to serve all mobility requests currently done by car with around 4700 vehicles. The travel time is around 1.34 times that of a private vehicle, and the vehicles' occupancy increases to 2.6. The driven distance is reduced by 65%. In the whole area of Berlin we observe that a ride-pooling system with 10,000 vehicles can serve 60% of the trips. The travel time is 1.4 times higher than with a private vehicle, the occupancy reaches three, and the driven distance is reduced by 40%.
## 1 Introduction
The implementation of sustainable traffic is one of the key challenges of decarbonisation. The transportation sector emits 15% of global greenhouse gas emissions, of which private vehicles are the largest source [1]. In Germany, the transport sector is responsible for 18% of all emissions, with private vehicles alone being responsible for 11% of all emissions [2]. Thus, decarbonisation is not possible without lowering the emissions from private vehicles. While most of the focus here lies on the electrification of private vehicles, a reduction of private motorized mobility offers several additional benefits, such as the reduction of pollution, noise, and traffic congestion [3, 4].
In this analysis we thus focus on pooling similar rides as a means for overall traffic reduction. Ride-pooling offers a flexible and convenient alternative to line-based public transport. Several studies show that ride-pooling or shared pooled mobility could make a large contribution to lowering energy demand and increasing traffic sustainability [5, 6].
Furthermore, shared pooled mobility would increase the accessibility of public transport itself [7].
The implementation of ride-pooling services could be realized significantly faster than expanding public transportation systems, especially rail systems. From an infrastructural side, it only requires streets and vehicles, which are both usable without any further development. From the software side, apps and routing/pooling algorithms are required. Both are already developed by several ride-pooling operators, like MOIA in the city of Hamburg [8]. In contrast to other public transportation systems which use the street networks, like bus systems, ride-pooling is capable of using the flexibility of vehicles. Thus, ride-pooling maintains one of the biggest advantages of private mobility. But, despite this, there are no signs of a wider usage yet. Instead, ride-pooling is currently mostly operating in the pooled taxi niche.
In Berlin specifically, a ride-pooling service operated by the Berlin Verkehrsgesellschaft was active from 2018 to 2022. It was available in the eastern parts of the area inside of the Berlin Ringbahn and used 4,423 stations. During the four years of operation around 1.85 million passengers were transported, the proportion of shared trips was around 67%, and the client satisfaction reached around 97% [9]. These numbers were negatively influenced by the Corona epidemic starting in the year 2020. After the expiration of the exceptional permission (the so-called _Experimentierklausel_), the service was discontinued, although continuing operation of the service, even with an expansion to the whole area of Berlin, would have been possible. Instead, another on-demand ride-pooling service now operates only in some of the eastern parts of Berlin [10].
Despite several promising tests and pilots, shared pooled mobility has not yet emerged as a widely utilized sustainable transport option. We can only speculate about the reasons behind this. Shared pooled mobility can only achieve acceptable delays in two scenarios. Either it operates close to a taxi service, in which small numbers of rides are occasionally pooled for a small reduction in fares. This is the niche in which UberPool and MOIA typically operate; with fares slightly below taxi fares and travel times slightly above direct travel times, it tends to attract customers who do not want to drive but find public transport too cumbersome. The value of this scenario in the context of sustainability is questionable, with emission gains from sharing quickly being eaten up by losses due to deadheading. The other scenario, in which acceptable delays are realistic, is at very high demands, so that sharing becomes naturally possible with small delays. To achieve such high demands it becomes necessary to be competitive in price and convenience with personal cars. This is the scenario where shared pooled mobility has the potential to positively impact sustainability. However, when it comes to data, many studies have to resort to taxi data [11, 12] or data directly from ride-pooling providers [13, 14]. This, however, likely underestimates the total demand and distorts the spatial distribution.
Here, we thus use logged trips of private cars in Berlin as our data basis for an analysis of ride-pooling feasibility. We use the origin and destination points of this dataset as requests for a ride-pooling simulation. The service is simulated for a range of fleet sizes of the shared pooled mobility service with a focus on commuter trips made between 7 and 8 am. As networks we use a street network covering the whole area of Berlin and a smaller network covering only the city center of Berlin.
## 2 Methods
### 2.1 Ride-Pooling Simulation
The concept of ride-pooling is to bundle similar car trips into one vehicle of a ride-pooling fleet. By this, the occupancy of the vehicles is increased, while the driven distance and the number of necessary vehicles decreases. This concept is visualized in Fig 1.
To analyze ride-pooling systems we use an agent-based ride-pooling simulation [15]. The street networks of Berlin, on which this simulation is executed, are created with OpenStreetMap [16]. We use two networks: The first network includes the whole area of Berlin and has 10405 stops (c.f. Fig 2a). In the city center of Berlin, the public transport fare zone A, we use a denser network with 4696 stops (c.f. Fig 2b).
Fig 1: Concept of ride-pooling. 1a) shows three individual car trips. To transport the five passengers, five vehicles are required. 1b) shows a possible pooling strategy. By this, only a single vehicle is required. The car trips are part of the INRIX Dataset and thus, real car trips from Berlin. The pooling strategy was determined by the ride-pooling simulation.
Fig 2: Stop networks used in this work. 2a) shows the network covering the whole area in Berlin with 10,405 nodes. 2b) shows the network in the city center of Berlin (4696 nodes). The background image showing the map of Berlin was downloaded from [17].
Before the simulation starts, an initial position is determined for every vehicle of the ride-pooling fleet by uniformly drawing positions from the set of all stops. The effect of this strategy is discussed in chapter 4.3. Apart from the network and the initial positions, the following simulation parameters are important:
- Fleet size or number of vehicles
- Maximum pick-up or waiting time
- Maximum delivery delay
- Average speed
The fleet size defines the number of vehicles of the ride-pooling service. The number of seats (capacity) can be defined separately for each vehicle, or a general value is used. The maximum pick-up time is the longest permitted waiting time for a passenger; if it would be exceeded by every vehicle, the request is rejected. Similarly, the maximum delivery delay is the longest permitted excess of the pooled trip duration over the direct driving time; if it cannot be met, the request is also rejected. Furthermore, an average speed of the vehicles is defined. The chosen parameters heavily influence the functionality and efficiency of a ride-pooling service.
During the execution, the simulation processes the requests one by one. For each request, it determines for every vehicle how much additional distance the vehicle would have to drive to serve the request while maintaining the time restrictions. The request is then assigned to the vehicle that can serve it with the minimal additional distance.
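As an illustration of this greedy insertion rule, here is a minimal Python sketch. It is our own simplification, not the code of the simulation framework [15]: the user-supplied `feasible` callback stands in for the waiting-time, delay and capacity checks, and the insertion search is reduced to appending the new stops at the end of a vehicle's plan.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    origin: int       # stop id
    destination: int  # stop id

@dataclass
class Vehicle:
    position: int                             # current stop id
    plan: list = field(default_factory=list)  # planned stop sequence

def route_length(stops, dist):
    # dist[u][v]: shortest-path distance between stops u and v
    return sum(dist[u][v] for u, v in zip(stops, stops[1:]))

def added_distance(vehicle, request, dist, feasible):
    """Extra distance if the request is appended to the vehicle's plan,
    or None if the time/capacity constraints would be violated."""
    old = [vehicle.position] + vehicle.plan
    new = old + [request.origin, request.destination]
    if not feasible(vehicle, new):
        return None
    return route_length(new, dist) - route_length(old, dist)

def dispatch(request, fleet, dist, feasible):
    """Assign the request to the vehicle with minimal additional distance;
    returns None if every vehicle violates the constraints (rejection)."""
    candidates = [(added_distance(v, request, dist, feasible), v) for v in fleet]
    candidates = [(c, v) for c, v in candidates if c is not None]
    if not candidates:
        return None
    cost, best = min(candidates, key=lambda cv: cv[0])
    best.plan += [request.origin, request.destination]
    return best
```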
To run the ride-pooling simulation, service requests are required. In this paper we generate the service requests from real car-trip demand in Berlin. This is further explained in section 2.3, after the introduction of the dataset in section 2.2.
### 2.2 Dataset
In Berlin, around 11.9 million trips were made inside the city (internal traffic of the city of Berlin) each day in the year 2018 [18]. Of these 11.9 million trips, 18% were made by car, resulting in 2.14 million vehicle trips within Berlin every day [19]. The number of car trips is subject to strong temporal fluctuations, which are shown in Fig 3.
Since we primarily study commuter trips in this paper, only trips between 7 and 8 am are considered in the following. This corresponds to the time with the highest traffic volume in the morning in Berlin: 8.8% of all car trips are made in this time slot, i.e., 188,320 car trips between 7 and 8 am.
The dataset we use in this work is made available by the commercial data provider INRIX. Originally it contains 34,208,544 unique data points, including car trips starting from Berlin, ending in Berlin or crossing Berlin. The data was collected in 2017. It contains GPS data from private and commercial vehicles. All trips with origin and/or destination outside of Berlin are excluded for this work.
To use the trips as requests in the ride-pooling simulation, the origins and destinations are mapped to the stops of one of the two ride-pooling stop networks used in this work (cf. Fig 2a and Fig 2b). For the origin and destination of every trip, the nearest stop in the respective network is determined. After that, all requests with the same origin and destination stop are removed. This leaves 769,650 trips inside of Berlin and 158,329 requests in the center of Berlin. Mapping the original origin and destination points to stops introduces a walking distance for each passenger: on average 255 m for the whole area of Berlin, and around 137 m in the city center.
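The mapping itself is a standard nearest-neighbour query. Below is a minimal sketch, assuming the stop and trip coordinates are available as \(n\times 2\) arrays in a metric (projected) coordinate system; the function names and the use of SciPy's `cKDTree` are our choice, not the paper's. Dropping requests that collapse onto a single stop mirrors the filtering described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def map_to_stops(origins, destinations, stop_coords):
    """Snap trip end points to the nearest stop of the network.

    Returns stop indices for origins and destinations plus the induced
    walking distances (straight-line, in the units of the coordinates).
    """
    tree = cKDTree(stop_coords)
    walk_o, stop_o = tree.query(origins)
    walk_d, stop_d = tree.query(destinations)
    keep = stop_o != stop_d   # drop requests that collapse to one stop
    return stop_o[keep], stop_d[keep], walk_o[keep], walk_d[keep]
```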
In order to stay close to the 188,000 trips made between 7 and 8 am in Berlin, we split the trips by quarter. This yields four request sets for the whole area and for the city center of Berlin:
\begin{tabular}{|l|l|l|l|} \hline
**Months** & **Abbreviation** & **Number of Trips Berlin** & **Number of Trips Berlin City Center** \\ \hline
January, February, March & Q1 & 187,329 & 37,462 \\ \hline
April, May, June & Q2 & 193,870 & 39,570 \\ \hline
July, August, September & Q3 & 192,170 & 40,830 \\ \hline
October, November, December & Q4 & 196,281 & 40,467 \\ \hline \end{tabular}

_Tab. 1. Request sets formed from the INRIX dataset in order to represent daily traffic in Berlin and in the city center of Berlin between 7 and 8 am. Only trips with origin and destination in Berlin or in the city center of Berlin are considered._

Fig 3: Daily fluctuation of car traffic in Berlin [19]. From 7 to 8 am, 8.8% of all car trips are made, driven by commuter traffic at this time.
From the request set containing all trips within Berlin, we determine an overall average vehicle speed of 25.32 km/h and an average travel time of 15 minutes and 23 seconds. For the request set containing the trips inside the center of Berlin, we observe an average speed of 18.26 km/h and an average travel time of 14 minutes. These measurements are later used to define some of the parameters of the ride-pooling simulation. An overview of the most important values for Berlin and the center of Berlin is given in Tab. 2.
\begin{tabular}{|l|l|l|} \hline
 & **Berlin** & **Berlin City Center** \\ \hline
**Average speed [km/h]** & 25.3 & 18.3 \\ \hline
**Average duration [min]** & 15.38 & 14 \\ \hline \end{tabular}

_Tab. 2. Average values of the trips, used to define some of the parameters of the ride-pooling simulation._
### 2.3 Definition of the Simulation Parameters
With the dataset it is now possible to define the simulation parameters described in section 2.1.
Studies show that passengers are willing to accept 50% longer trip durations with public transport compared to using a private vehicle [20]. Thus, we set the maximum delivery delay to half of the average trip duration measured from the data. Buses, subways and metro lines in Berlin mostly run at 10-minute intervals; we use this as the basis for the maximum waiting time of the ride-pooling service. The speed of the ride-pooling vehicles is constrained by the speed of general traffic, which between 7 and 8 am is around 18 km/h in the city center and 25 km/h in Berlin (see Tab. 2). Further, we assume that walking to the stop and from the stop to the desired location each take on average one minute in the city center of Berlin and two minutes in the whole area of Berlin. Tab. 3 gives an overview of the selected and fixed parameters:
\begin{tabular}{|l|l|l|} \hline
 & **Berlin** & **Berlin City Center** \\ \hline
**Number of stops** & 10,405 & 4,696 \\ \hline
**Average vehicle speed [km/h]** & 25.3 & 18.3 \\ \hline
**Maximum waiting time [min]** & 6:00 & 6:00 \\ \hline \end{tabular}

_Tab. 3. Overview of the selected and fixed simulation parameters._
### 2.4 Pooling Characteristics
To evaluate the efficiency and functionality of a simulated ride-pooling system, and to make the results comparable to other research, we decided to measure the following characteristics (a computational sketch is given after the list):
* **Share of serviced requests**: Depending on the system parameters and fleet configuration, typically a fraction of the requests cannot be served within the service quality constraints. The share of serviced requests measures how many of the original requests were successfully delivered to their destination within the selected constraints. In some calculations, the rejected requests are assumed to continue to use private vehicles.
* **Relative travel time**: The relative travel time measures how long passengers take to reach their destination using the ride-pooling service compared to traveling with their own car. It includes the driving time, the waiting time at the stop, and the walking times to and from the stops. As the time measurement for the use of the private vehicle, we use the travel times specified in the dataset. For rejected requests, the original travel time from the dataset is used.
* **Relative driven distance**: The relative driven distance is the ratio of the distance driven by the ride-pooling vehicles in the simulation to the actual distance driven when using private vehicles (from the dataset). If the value is smaller than one, less distance is driven with the ride-pooling service. As for the relative travel time, the original distance from the dataset is used for rejected requests.
* **Empty mileage share**: The proportion of the distance driven while the vehicle is empty, relative to the complete distance driven by the ride-pooling vehicles. This measurement is independent of the driven distance of the original car trips.
* **Average vehicle occupancy**: The average occupancy measures the average number of customers simultaneously in each vehicle.
* **Number of empty vehicles**: The number of vehicles of the fleet, which were not used during operation of the system.
The selection of values was influenced by [21].
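The following Python sketch shows how these characteristics can be computed from raw per-request simulation output. The argument names are illustrative (not from [15] or [21]), and one modeling choice is made explicit: the relative travel time is taken as the mean of the per-request time ratios.

```python
import numpy as np

def pooling_characteristics(served, t_pool, t_car, d_car,
                            fleet_km, empty_km, mean_occupancy, n_unused):
    """Characteristics of section 2.4.

    served   boolean array, one entry per request
    t_pool   pooled door-to-door time per request (walk + wait + ride)
    t_car    private-car travel time per request (from the dataset)
    d_car    private-car distance per request (from the dataset)
    fleet_km total distance driven by the pooling fleet
    empty_km fleet distance driven without passengers on board
    """
    # rejected requests keep their private-car time and distance
    t = np.where(served, t_pool, t_car)
    return {
        "share of serviced requests": served.mean(),
        "relative travel time": np.mean(t / t_car),
        "relative driven distance":
            (fleet_km + d_car[~served].sum()) / d_car.sum(),
        "empty mileage share": empty_km / fleet_km,
        "average vehicle occupancy": mean_occupancy,
        "number of empty vehicles": n_unused,
    }
```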
## 3 Results
First, we consider the results of the simulations within the city center of Berlin. We ran simulations with different fleet sizes for every quarter. The measured characteristics for the request set Q1 are shown in Fig. 4.
Figure 4: Results for Q1 with differently sized ride-pooling fleets. 4a) shows the proportion of serviced requests for different fleet sizes; the lowest number of vehicles capable of serving all requests is around 4,500. 4b) shows the average occupancy, 4c) the relative travel time, and 4d) the relative driven distance. 4e) shows the share of empty mileage and 4f) the number of unused vehicles.
We observe a change in the behavior of all six characteristics around a fleet size of 4,500. At this point the fraction of serviced requests approaches 100% (Fig. 4a), the slopes of the decline in occupancy and of the empty mileage change abruptly (Fig. 4b,e), the relative travel time and distance reach constant values (Fig. 4c,d), and the number of idle vehicles increases (Fig. 4f).
We interpret the regime change as a saturation occurring when the _minimal required fleet size_ is reached. We determined this fleet size for every quarter; the results are shown in Tab. 4. At the minimal required fleet size, the decline in occupancy changes sharply as more and more vehicles remain empty (Fig. 4b). Fig. 4c) shows that the relative travel time increases with the number of vehicles. This is because rejected trips are counted as using private vehicles and thus have a relative travel time of 1. As the fraction of pooled trips increases, so does the relative travel time, until it reaches its maximum of 1.34. Once the lowest number of vehicles capable of serving all requests is reached, the relative travel time remains stable. This is due to the implementation of the dispatcher algorithm, which prefers vehicles already in use over empty ones. The same effect holds for the relative driven distance, which decreases until the minimal required fleet size is reached, as visible in Fig. 4d). Fig. 4e) shows the share of empty mileage; we see that empty mileage does not play an important role. Fig. 4f) shows how many vehicles of the service were unused. The phenomenon that some vehicles stay unused even when not all requests are serviced is explained in section 4.3.
For the lowest number of vehicles capable of serving all requests, the characteristics of all quarters are shown in Tab. 4.
Due to the long simulation times for the whole area of Berlin, the results are limited compared to those for the city center. For the request set Q1 and the whole area of Berlin, we obtain the results visualized in Fig. 5.
Fig. 5 shows that the characteristics of a ride-pooling service in Berlin are similar to those in the city center. Fig. 5a) suggests that also for the whole area of Berlin there exists a lowest number of vehicles capable of serving all requests. A further discussion of these results is given in section 4.2.
Figure 5: Characteristics for the whole area of Berlin for different fleet sizes. Due to the long simulation times, we were not able to find a fleet size capable of serving all requests.
## 4 Discussion
### 4.1 Characteristic Values Berlin City Center
Tab. 4 shows that the lowest number of vehicles capable of serving all car trips in the city center of Berlin is 4,688.
Tab. 4 further shows that passengers take on average around 1.34 times as long to reach their destination with ride-pooling compared to driving a private vehicle. For comparison, when using public transportation, customers take 1.9 times as long as when driving [22].
The total driven distance is reduced by around 65%, resulting in 65% less road traffic and correspondingly lower CO2 emissions. The entire motorized mobility demand is thereby met with around 4,500 vehicles instead of 40,000 private vehicles, a reduction of almost 90%. The savings are even larger if one considers that the ride-pooling vehicles can be used throughout the whole day.
The vehicle occupancy is around 2.6 and thus significantly higher than the current occupancy of 1.6 [19]. This holds even though we assume each request to account for only one customer; in practice, some trips in the INRIX dataset will carry more than one passenger. Between the average vehicle occupancy and the relative driven distance, the following relation holds (if all requests are accepted, or the rejected requests are ignored in the calculation of the relative driven distance):
\[average\ vehicle\ occupancy\ =\ \frac{1}{relative\ driven\ distance}\]
With an average vehicle occupancy of 2.6 and a relative driven distance of 0.35, this formula does not hold. The reason is that the ride-pooling simulation only drives shortest paths between two stops, whereas this is not necessarily the case in the dataset: drivers may, for example, choose another route to avoid traffic congestion or construction sites. The data may also be noisy in the sense that drivers deliberately deviate from the shortest path to reach intermediate targets, for example to drop their children off at school.
If we calculate the relative driven distance not with the distances from the dataset but with the shortest paths between all origin-destination pairs, we obtain a new relative driven distance of 0.38, meaning that relatively less distance is saved. Inserting the two values into the relation (1/0.38 ≈ 2.6), we see that it now roughly holds; the remaining difference is due to rounding effects in the calculation.
Empty mileage does not play an important role in this scenario: it amounts to only around 1.75%. This is due to the fact that the number of vehicles is as low as possible; at the same time, aspects such as trips to a depot are ignored. The number of unused vehicles is discussed in more detail in the next section.
### 4.2 Characteristic Values Berlin
In Fig. 5, the results for the whole area of Berlin are shown. Due to the long simulation and data processing times, we were only capable of simulating fleet sizes of up to 10,000 vehicles. 10,000 vehicles are not enough to accept all requests but already serve around 60%. The average occupancy increases to three, and the driven distance is reduced by around 40%. The relative travel time for 10,000 vehicles is around 1.4, and thus higher than in the city center of Berlin. As in the city center, empty mileage can be ignored.
### 4.3 Optimal Fleet Size
As mentioned in section 3, we determined the lowest number of vehicles capable of serving every request for every quarter in the city center of Berlin. However, Fig. 4f) and Tab. 4 show that at this fleet size a considerable number of vehicles remains unused. Even when not all requests are serviced, as with 3,000 vehicles, a few vehicles stay unused (cf. Fig. 4a) and Fig. 4f)).
This phenomenon can be explained by the fact that the starting points of the rejected requests and the positions of the unused vehicles are so far apart that the time constraints cannot be met. This is visualized in Fig. 6.
Fig. 6 shows that the empty vehicles are located in the south of the city center of Berlin, while the origins of the rejected trips lie in the northern parts of the network. If vehicles were to drive from the south to the north to fetch customers, this would violate the waiting time restriction.
We furthermore see a correlation between the locations of empty vehicles and nodes without any trip origin, explaining why the empty vehicles remained unused from the beginning of the simulation. This effect is visualized in Fig. 7.
Fig. 6: Locations of empty vehicles and trip origins of rejected trips for a simulation with 4,000 vehicles in the city center of Berlin. The empty vehicle locations are marked blue, the trip origins of the rejected requests yellow.
This leads to the assumption that the initial locations could be chosen more effectively than by drawing positions uniformly from all nodes. This would require a sophisticated rebalancing algorithm to choose proper initial positions of the vehicles [23].
### 4.4 Distortion of Results due to the Decay Phase of the Simulation and the Vehicle Speed
Rejected requests occur because the time restrictions cannot be met or because no vehicle has free seats. Nevertheless, the average occupancy of the vehicles never reaches six (the number of seats per vehicle). On the one hand, this is because the cars start empty and fill up over time; on the other hand, it is due to the decay phase at the end of the simulation. Since only trips that start and end between 7 and 8 am are taken into account, the number of requests at the end is smaller than at the beginning, and the cars slowly empty toward the end of the simulation. The occupancy over time of a single vehicle, from a simulation in the city center of Berlin with a fleet size of 500 vehicles, is shown in Fig. 8.
Fig. 7: Locations of the unused vehicles (blue), nodes without any request origin (yellow) and nodes with unused vehicles and no request origin (red).
The start-up and decay phases are independent of the simulation duration. This means that for a longer simulated period, the average occupancy of the vehicles increases: if the experiment used to create Fig. 8 is repeated with two hours instead of one, the average occupancy increases from 3.12 to 3.6. All results shown in Fig. 4, Fig. 5 and Tab. 4 are influenced by this behavior.
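The size of this bias can be made explicit by computing the occupancy as a time-weighted average over a restricted window: excluding the start-up and decay phases from the averaging window removes their influence. A minimal sketch (our own illustration, with hypothetical argument names):

```python
import numpy as np

def mean_occupancy(change_times, occupancy, t0=None, t1=None):
    """Time-weighted average occupancy of one vehicle.

    occupancy[i] holds on [change_times[i], change_times[i+1]); the last
    entry is ignored. Restricting [t0, t1] to the saturated middle of the
    simulation excludes the start-up and decay phases.
    """
    ts = np.asarray(change_times, dtype=float)
    occ = np.asarray(occupancy, dtype=float)
    t0 = ts[0] if t0 is None else t0
    t1 = ts[-1] if t1 is None else t1
    clipped = np.clip(ts, t0, t1)   # intervals outside [t0, t1] get weight 0
    return (occ[:-1] * np.diff(clipped)).sum() / (t1 - t0)
```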
Secondly, the relative travel time is strongly influenced by the vehicle speed used in the simulation; it can easily be increased or decreased by changing the average speed. We used the average speed determined from the dataset as the average speed in the simulation, but this is not necessarily the achievable average speed of a potential ride-pooling service covering the two areas of Berlin we used as networks.
### 4.5 Cost Assessment and Comparison of Costs
In this section, we compare the total costs of the private car trips with the total costs of a ride-pooling system that replaces these individual trips. We use an average value of 40,000 travelers in the city center of Berlin between 7 and 8 am (cf. Tab. 1).
The total costs to operate the ride-pooling system between 7 and 8 am are composed of the costs for the required energy and the wages of the drivers. From the simulation we get that on average 4,688 vehicles, and thus 4,688 drivers, are required between 7 and 8 am. With an hourly wage of 18 €, we get total wage costs of 84,384 € for this hour [24]. According to the simulation, the ride-pooling vehicles drive around 54,000 km during the morning peak. With an average consumption of 32.6 kWh/100 km and an average price of 30 ct per kWh (assuming that all ride-pooling vehicles are electric vehicles), we get additional costs of 5,281 € for the required energy [25, 26]. Summarized, we get costs of around
Figure 8: Development of the occupancy of a single vehicle in a simulation with a fleet size of 500 cars.
90,000 € to operate the fleet between 7 and 8 am. If these costs are divided among all users, every trip in the morning would cost around 2.25 € to cover the operational costs, which is comparable to public transport fares. With 251 working days (value for Berlin in 2023 with a 5-day week), we get fare prices of 565 € per passenger per year.
From the dataset we determine a total length of 138,500 km for the individual car trips between 7 and 8 am in the city center of Berlin. Here, we assume that most of the vehicles are combustion engine vehicles (we only considered diesel) with an average consumption of 7 liters per 100 km [27]. With an average price of 1.94 € per liter, we get total costs of 18,866 € for all trips, resulting in an average price of 0.47 € for every driver [28]. For 251 working days we thus get yearly costs of 118 € for the trips in the morning. Therefore, using a private vehicle is 80% cheaper than using ride-pooling in this simplified scenario.
This proportion changes if we include the procurement costs. With an average price of 18,800 € per vehicle (only considering pre-owned vehicles), we get total costs of 752 million € for the private vehicles of the 40,000 potential passengers [29].
As the price of the fleet vehicles we use 42,690 €, the price of a vehicle model often used by ride-pooling operators [30]. For the 4,688 vehicles of the fleet we thus get total costs of 200 million €. Since the vehicles can be used throughout the whole day, only a share of 8.8% has to be covered by the users between 7 and 8 am [19]. This corresponds to costs of 17.6 million €. Divided among the 40,000 users, we get 440 € procurement costs per customer. Thus, the procurement costs customers have to cover are 98% lower for ride-pooling than for private vehicles.
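The arithmetic of this comparison is summarized in the following snippet; all input figures are the ones quoted above (the text rounds the operational total to 90,000 €, giving 2.25 € per trip):

```python
# Morning peak (7-8 am) cost comparison, Berlin city center.
fleet_size   = 4688        # ride-pooling vehicles = drivers
hourly_wage  = 18.0        # € per driver-hour
fleet_km     = 54_000      # km driven by the fleet in the peak hour
kwh_per_km   = 32.6 / 100  # consumption of the electric fleet
eur_per_kwh  = 0.30
users        = 40_000
working_days = 251

wages  = fleet_size * hourly_wage              # 84,384 €
energy = fleet_km * kwh_per_km * eur_per_kwh   # ~5,281 €
fare   = (wages + energy) / users              # ~2.24 € per trip
print(f"operational fare: {fare:.2f} €/trip, "
      f"{fare * working_days:.0f} €/year")

car_km        = 138_500    # private-car distance from the dataset
litres_per_km = 7 / 100    # diesel consumption
eur_per_litre = 1.94
fuel = car_km * litres_per_km * eur_per_litre  # ~18,800 €
print(f"private fuel cost: {fuel / users:.2f} €/trip")
```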
This comparison is of course highly simplified. For example, the assumption that the introduction of a ride-pooling service will cause customers to give up their private vehicles is rather unrealistic. Furthermore, all maintenance costs are ignored, and for the calculation of the average vehicle price only pre-owned vehicles were taken into consideration. Research into customer price sensitivity has shown that customers tend to underestimate procurement costs and mainly compare fares to fuel costs. Nevertheless, the comparison gives a basic impression that, considering all costs, ride-pooling is not necessarily more expensive than the large-scale use of private vehicles. It is also obvious that most of the operational costs of a ride-pooling fleet are due to the wages of the drivers. Therefore, autonomous vehicles are an appealing approach to make ride-pooling systems cheaper and, hence, more widely usable.
## 5 Conclusion
We studied in this paper how many trips in the morning peak (between 7 and 8 am) in Berlin and in the city center of Berlin could be serviced by a ride-pooling system. We further investigated how this influences the relative trip time of the passengers, the relative driven distance, the share of empty mileage and the average occupancy of the vehicles. As data basis we used real car trips, tracked in 2017 by the company INRIX. To study the ride-pooling service, an agent-based ride-pooling simulation was used. We found that in the city center of Berlin it is possible to serve all former car trips with around 4,700 vehicles. For the passengers this results in a 1.34 times longer travel time compared to traveling with their own vehicle. The driven distance was reduced by around 65%, while the average occupancy increased to 2.6. Empty mileage could be ignored: the vehicles drove less than 2% of their distance empty. For the whole area of Berlin we were limited by computational capacities due to the high number of stops and requests. But we were able to show that also for the whole area of Berlin, ride-pooling is capable of pooling a large share of trips and thus reducing the driven distance while increasing the average occupancy of the vehicles. With 10,000 vehicles in the fleet, the driven distance could already be reduced by around 40% while serving 60% of the former car trips. Simulations with more vehicles would be necessary to find a fleet size capable of serving all requests in the larger area.
We conclude that the trip density in Berlin would be high enough to allow efficient ride-pooling with acceptable delays and, even with human drivers, competitive fares. With autonomous vehicles, shared mobility would thus reach prices at or below the fuel costs of private driving.
We identified the initial positions of the vehicles as the main problem for the fleet size: we were not able to find an optimal fleet size that serves all requests while having no unused vehicles. We conclude that rebalancing is necessary in order to obtain optimal initial positions of the vehicles. As a next step, we want to compare our results to pure ride-pooling simulation studies in other cities, like Dublin [31].
## Acknowledgments
The authors gratefully acknowledge the European Regional Development Fund (ERDF), the German Federal Ministry of Education and Research and the Land Brandenburg for supporting this project by providing resources on the high performance computer system at the Potsdam Institute for Climate Impact Research.
Alexander Schmaus acknowledges support from the German Federal Environmental Foundation (Deutsche Bundesstiftung Umwelt).
|
2301.02509 | Primitive 4-generated axial algebras of Jordan type | We show that primitive 4-generated axial algebras of Jordan type are at most
81-dimensional. | Tom De Medts, Louis Rowen, Yoav Segev | 2023-01-06T13:47:09Z | http://arxiv.org/abs/2301.02509v1 | # Primitive 4-generated axial algebras of Jordan type
###### Abstract.
We show that primitive 4-generated axial algebras of Jordan type are at most 81-dimensional.
## 1. Introduction
Axial algebras were introduced in 2015 by Jonathan Hall, Felix Rehren and Sergey Shpectorov [14]. They are non-associative commutative algebras generated by _axes_, i.e., idempotents for which the left multiplication operator is semisimple and such that the resulting eigenspaces multiply according to a given _fusion law_ (see SS2 for precise definitions).
In the easiest interesting case, these multiplication operators admit precisely 3 eigenvalues: \(0\), \(1\) and \(\eta\). A typical example is provided by _Jordan algebras_, where each idempotent gives rise to a _Peirce decomposition_ of the algebra. In this case, we have \(\eta=\frac{1}{2}\), and the fusion law is the following.
We call the axial algebras with a fusion law \(\Phi(\eta)\)_axial algebras of Jordan type_\(\eta\). Other than Jordan algebras themselves, there are other interesting examples of axial algebras of Jordan type (for arbitrary values of \(\eta\neq 0,1\)), namely the _Matsuo algebras_ arising from 3-transposition groups. In this case, the dimension of the algebra is equal to the size of the normal generating set of 3-transpositions of the group. (See Example 2.5 below for details.)
The classification of 3-transposition groups has a long history (see [1, 2] and the references therein). It is a highly non-trivial fact that finitely generated 3-transposition groups are finite. In fact, this is a consequence
\begin{table}
\begin{tabular}{c|c c c} \(*\) & 1 & 0 & \(\eta\) \\ \hline
1 & \(\{1\}\) & \(\emptyset\) & \(\{\eta\}\) \\
0 & \(\emptyset\) & \(\{0\}\) & \(\{\eta\}\) \\ \(\eta\) & \(\{\eta\}\) & \(\{\eta\}\) & \(\{1,0\}\) \\ \end{tabular}
\end{table}
Table 1. The Jordan fusion law \(\Phi(\eta)\)
of the classification of finite simple groups, and a direct proof of this fact would be very valuable. (See [1, Theorem (1.3), p. 153].)
One possible approach for such a direct proof is precisely via the corresponding Matsuo algebras. More generally, we ask the following question. (We refer to Definition 2.3 below for the precise meaning.)
**Question**.: _Let \(A\) be a primitive axial algebra of Jordan type. Assume that \(A\) is generated by a finite set of axes. Can we conclude that \(A\) is finite-dimensional?_
Notice that, by Corollary 2.8 below, a positive answer to this question would show, in particular, that finitely generated \(3\)-transposition groups are finite. In fact, for \(\eta\neq\frac{1}{2}\), it is equivalent.
It is natural to try to answer this question for an increasing number of axes. For \(2\)-generated primitive axial algebras of Jordan type, this is almost trivial: such algebras are at most \(3\)-dimensional. (In fact, much more can be said: [11, Theorem 1.1] gives a complete classification of such algebras.)
For \(3\)-generated algebras, this question was answered affirmatively in the recent paper [12]: such algebras are at most \(9\)-dimensional.
Our main result is the following.
**Main Theorem**.: _Primitive \(4\)-generated axial algebras of Jordan type \(\eta\) are at most \(81\)-dimensional, for any \(\eta.\) Moreover, this result is best possible._
To go from \(3\)-generated to \(4\)-generated primitive axial algebras of Jordan type is a large step that required substantial new ideas. In fact, in our new setup, it is almost a triviality to recover the earlier result from [12] that such \(3\)-generated algebras are at most \(9\)-dimensional. One of the key ideas is that we will almost never use the actual multiplication in the algebra, but instead, we use _sequences of Miyamoto involutions_ (see Definition 3.3 below). These sequences will allow us to formulate many "rewriting rules" that we can use to systematically deal with larger and larger expressions, until we eventually "wrap up" so that we can reduce every possible expression of length larger than \(6\). The precise meaning of this will be explained below and can be seen in Theorem 5.1, which is a more detailed version of our Main Theorem.
It is worth pointing out that going to the next step, primitive \(5\)-generated axial algebras of Jordan type, is expected to be substantially more difficult, because the upper bound on the dimension will be at least \(3^{12}=531441\). (In fact, this is our conjectured upper bound.) In addition, one of the examples (of dimension \(306936\)) arises from the largest sporadic Fischer group \(\mathrm{Fi}_{24}\).
## 2. Primitive axial algebras of Jordan type
Throughout the paper, \(\mathbb{F}\) will be a commutative field with \(\mathrm{char}\,\mathbb{F}\neq 2\). All our algebras will be commutative but non-associative \(\mathbb{F}\)-algebras.
For the definition of fusion laws and axial algebras, we rely on [1].
**Definition 2.1**.:
1. A _fusion law_ is a pair \((X,*)\), where \(X\) is a set and \(*\) is a map from \(X\times X\) to \(2^{X}\), where \(2^{X}\) denotes the power set of \(X\). A fusion law \((X,*)\) is called _symmetric_ if \(x*y=y*x\) for all \(x,y\in X\).
2. The _Jordan fusion law_ is the fusion law with \(X=\{0,1,\eta\}\) (where \(\eta\) is just a symbol) and with \(*\) given by Table 1 above.
**Definition 2.2**.: Let \(\Phi=(X,*)\) be a fusion law.
1. A _\(\Phi\)-decomposition_ of an algebra \(A\) is a direct sum decomposition \(A=\bigoplus_{x\in X}A_{x}\) (as vector spaces) such that \(A_{x}A_{y}\subseteq A_{x*y}\) for all \(x,y\in X\), where \(A_{Y}:=\bigoplus_{y\in Y}A_{y}\) for all \(Y\subseteq X\).
2. A _\(\Phi\)-decomposition algebra_ is a triple \((A,\mathcal{I},\Omega)\) where \(A\) is an \(\mathbb{F}\)-algebra, \(\mathcal{I}\) is an index set and \(\Omega\) is a tuple of \(\Phi\)-decompositions of \(A\) indexed by \(\mathcal{I}\). In other words, for each \(i\in\mathcal{I}\), we have a corresponding \(\Phi\)-decomposition \(A=\bigoplus_{x\in X}A_{x}^{(i)}\) of the algebra \(A\).
**Definition 2.3**.: Let \(\Phi=(X,*)\) be a fusion law with \(1\in X\subseteq\mathbb{F}\).
1. For each \(a\in A\), we write \(\operatorname{ad}_{a}\) for the left multiplication by \(a\), i.e., \(\operatorname{ad}_{a}\colon A\to A\colon x\mapsto ax\).
2. An element \(a\in A\) is called a _\(\Phi\)-axis_ if it is idempotent (i.e., \(a^{2}=a\)) and the decomposition of \(A\) into the eigenspaces for \(\operatorname{ad}_{a}\) is a \(\Phi\)-decomposition.
3. The algebra \(A\) is a _\(\Phi\)-axial algebra_ if it is generated by a set of \(\Phi\)-axes. This makes \(A\) into a \(\Phi\)-decomposition algebra (with \(\mathcal{I}\) identified with the given set of axes).
4. A \(\Phi\)-axial algebra \(A\) is _primitive_ if for each axis \(a\) of the generating \(\Phi\)-axes of \(A\), the \(1\)-eigenspace \(A_{1}^{(a)}\) is \(1\)-dimensional, i.e., is equal to \(\mathbb{F}a\).
5. An _axial algebra of Jordan type_\(\eta\) is a \(\Phi\)-axial algebra for the fusion law \(\Phi=\Phi(\eta)\) as in Table 1.
As we mentioned in the introduction, the two main sources of examples of axial algebras of Jordan type are (1) Jordan algebras, and (2) Matsuo algebras. We give some details.
**Example 2.4**.: Let \(J\) be a Jordan algebra over \(\mathbb{F}\), i.e., \(J\) is a unital commutative non-associative algebra such that \(a^{2}(ab)=a(a^{2}b)\) for all \(a,b\in J\). If \(e\in J\) is an idempotent, then it is an axis for the Jordan fusion law \(\Phi(\frac{1}{2})\); this is the famous _Peirce decomposition_ for Jordan algebras (see, e.g., [1, Chapter III]). In particular, if \(J\) is generated by idempotents, then it is an axial algebra of Jordan type \(\frac{1}{2}\).
**Example 2.5**.: Let \((G,D)\) be a _\(3\)-transposition group_, i.e., \(G\) is a group and \(D\subseteq G\) is a generating set of involutions, closed under conjugation in \(G\), such that the product of any two elements in \(D\) has order at most \(3\). Let
\(\eta\in\mathbb{F}\setminus\{0,1\}\) be arbitrary. Then the _Matsuo algebra_\(M_{\eta}(G,D)\) is the algebra with basis \(D\), with multiplication given by
\[de:=\begin{cases}e&\text{if $d=e$}\\ 0&\text{if $o(de)=2$}\\ \frac{\eta}{2}(d+e-f)&\text{if $o(de)=3$},\text{ where $f=d^{e}=e^{d}$ in $G$}.\end{cases}\]
By [14, Theorem 6.5], \(M_{\eta}(G,D)\) is a primitive axial algebra of Jordan type \(\eta\).
Axial algebras of Jordan type, and more generally any type of decomposition algebras where the fusion law admits a \(\mathbb{Z}/2\)-grading, admit many involutory automorphisms, the so-called _Miyamoto involutions_.
**Definition 2.6**.:
1. A _\(\mathbb{Z}/2\)-grading_ of a fusion law \((X,*)\) is a map \(\theta\colon X\to\mathbb{Z}/2\) such that \(x*y\subseteq\theta^{-1}(\theta(x)+\theta(y))\) for all \(x,y\in X\). For instance, the Jordan fusion law from Table 1 is \(\mathbb{Z}/2\)-graded with \(\theta(0)=\theta(1)=0\) and \(\theta(\eta)=1\).
2. If \((A,\mathcal{I},\Omega)\) is a \(\Phi\)-decomposition algebra for a \(\mathbb{Z}/2\)-graded fusion law \((X,*)\), then for each \(i\in\mathcal{I}\), we define a _Miyamoto involution_ \[\tau_{i}\colon A\to A\colon a_{x}\mapsto(-1)^{\theta(x)}a_{x},\quad\text{ when $a_{x}\in A_{x}^{(i)}$}.\] In other words, \(\tau_{i}\) fixes the \(0\)-graded elements and negates the \(1\)-graded elements with respect to the \(i\)-th decomposition of \(A\).
Corollary 2.8 below is an important motivation for the main result of our paper.
**Proposition 2.7**.: _Let \((G,D)\) be a \(3\)-transposition group. The following are equivalent:_
1. \(G\) _is finite._
2. \(D\) _is finite._
3. \(M_{\eta}(G,D)\) _is finite-dimensional._
Proof.: Of course, (a) implies (b), and (b) and (c) are equivalent because \(M_{\eta}(G,D)\) has dimension \(|D|\). In particular, the dimension of \(M_{\eta}(G,D)\) is independent of the choice of the base field \(\mathbb{F}\) and of \(\eta\in\mathbb{F}\), so to show that (c) implies (a), we may assume that \(\mathbb{F}\) is a finite field.
Then \(A:=M_{\eta}(G,D)\) is finite. By [13, p. 325] (which relies on [12, p. 92, Example (4)]), \(G/Z(G)\) is embedded in \(\operatorname{Aut}(A)\), so \(G/Z(G)\) is a finite group. By a theorem of Schur, [12, (33.9), p. 168], the derived subgroup \(G^{\prime}\) is finite. Since \(G/G^{\prime}\) is an abelian group generated by a finite number of involutions, it is finite, so we conclude that \(G\) is finite.
**Corollary 2.8**.: _The following are equivalent:_
1. _Every finitely generated_ \(3\)_-transposition group_ \((G,D)\) _is finite._
2. _Every primitive axial algebra_ \(A\) _of Jordan type_ \(\eta\neq\frac{1}{2}\) _generated by a finite set of axes_ \(X\) _is finite-dimensional._
Proof.: (a)\(\,\Rightarrow\,\)(b) Let \(A\) and \(X\) be as in (b). For \(x\in X\), let \(\tau_{x}\) be the Miyamoto involution associated with \(x.\) By [10, Theorem (5.4), p. 105], the group \(G=\langle\tau_{x}\mid x\in X\rangle\) is a \(3\)-transposition group. By the assumption, \(G\) is finite. By [10, Corollary (1.2), p. 81], \(A\) is spanned by \(\{x^{g}\mid x\in X,g\in G\}\), so \(A\) is finite-dimensional.
(b)\(\,\Rightarrow\,\)(a) Let \((G,D)\) be a finitely generated \(3\)-transposition group. Then \(G\) is generated by a finite number of elements from \(D\), hence the algebra \(M_{\eta}(G,D)\) is finitely generated. Thus, by the assumption, it is finite-dimensional. Proposition 2.7 then tells us that \(G\) is finite.
In order to get an idea about the complexity of the primitive \(4\)-generated axial algebras of Jordan type, it is useful to look at the list of \(4\)-generated \(3\)-transposition groups first. In particular, this will provide us with an example of such an algebra of dimension \(81\), which is precisely the upper bound that we will obtain in our main result.
**Theorem 2.9**.: _Let \((G,D)\) be a \(3\)-transposition group generated by \(4\) elements from \(D\) (but not by less than \(4\)). Then its central type is one of the following:_
1. \(W(A_{4})\)_, the Weyl group of type_ \(A_{4}\) _(with_ \(|D|=10\)_);_
2. \(W(D_{4})\)_, the Weyl group of type_ \(D_{4}\) _(with_ \(|D|=12\)_);_
3. \(3^{3}\colon\operatorname{Sym}(4)\) _(with_ \(|D|=18\)_);_
4. \(2^{1+6}\colon\operatorname{SU}_{3}(2)^{\prime}\) _(with_ \(|D|=36\)_);_
5. _Hall's_ \(3\)_-transposition group_ \([3^{10}]\colon 2\) _(with_ \(|D|=81\)_) or its affine quotient_ \(3^{3+3}\colon 2\) _(with_ \(|D|=27\)_)._
Proof.: The definition of central type, and the proof of this fact (together with the size of \(D\) in each case) can be found in [10, Proposition (4.2)], where the authors point out that this classification has been proven independently by Zara, Hall and Moori; the first written source seems to be Zara's (unpublished) thesis from 1984.
The unique \(3\)-transposition group in this list attaining the upper bound \(|D|=81\) is particularly interesting because it arises as a \(3\)-transposition subgroup of the sporadic Fischer groups \(\operatorname{Fi}_{23}\) and \(\operatorname{Fi}_{24}\). We give an explicit construction of the resulting Matsuo algebra, based on [1, SS4.1]. In fact, we had implemented this example on a computer to experiment with identities, which is how some of our ideas arose.
**Example 2.10**.: Let \(D\) be the \(4\)-dimensional vector space over the field \(\mathbb{F}_{3}\) (so \(|D|=81\)). We first set
\[(x_{1},x_{2},x_{3},x_{4})\bullet(y_{1},y_{2},y_{3},y_{4})\\ :=\big{(}x_{1}+y_{1},\ x_{2}+y_{2},\ x_{3}+y_{3},\ x_{4}+y_{4}+(x_{ 1}y_{2}-x_{2}y_{1})(x_{3}-y_{3})\big{)}\]
for all \(x_{i},y_{i}\in\mathbb{F}_{3}\). Next, we set
\[d\ast e:=(d\bullet e)\bullet(d\bullet e)\]
for all \(d,e\in D\). For any \(\eta\in\mathbb{F}\setminus\{0,1\}\)--recall that \(\mathbb{F}\) is still our arbitrary base field of characteristic different from \(2\)--we now define an \(\mathbb{F}\)-algebra with basis \(D\), and with multiplication given by
\[de:=\begin{cases}e&\text{ if }d=e\\ \frac{\eta}{2}(d+e-d*e)&\text{ if }d\neq e.\end{cases}\]
Then by combining [1] with Example 2.5, we see that this is precisely the Matsuo algebra corresponding to Hall's \(3\)-transposition group \([3^{10}]\colon 2\).
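The construction is explicit enough to implement directly. Below is a minimal Python sketch (our own illustration, not the implementation mentioned above) of the multiplication table of this 81-dimensional algebra, with basis elements represented as tuples over \(\mathbb{F}_{3}\) and \(\eta\) kept as a rational parameter:

```python
from fractions import Fraction
from itertools import product

D = [v for v in product(range(3), repeat=4)]   # 81 basis axes
assert len(D) == 81

def bullet(x, y):
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3, (x[2] + y[2]) % 3,
            (x[3] + y[3] + (x[0]*y[1] - x[1]*y[0]) * (x[2] - y[2])) % 3)

def star(d, e):
    s = bullet(d, e)
    return bullet(s, s)

def mul(d, e, eta=Fraction(1, 2)):
    """Product of two basis axes as a dict {basis element: coefficient}."""
    if d == e:
        return {d: Fraction(1)}
    f = star(d, e)
    half = Fraction(eta) / 2
    out = {d: half, e: half}
    out[f] = out.get(f, 0) - half
    return out

# sanity checks: axes are idempotent and the product is commutative
assert all(mul(d, d) == {d: Fraction(1)} for d in D)
assert all(mul(d, e) == mul(e, d) for d in D[:9] for e in D[:9])
```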
## 3. Method
From now on, we assume that \(A\) is a primitive axial algebra of Jordan type \(\eta\) generated by a finite set \(S\) of axes.
**Definition 3.1**.: For each \(i\geq 0\), we set
\[S[i]:=\langle\tau_{a_{1}}\tau_{a_{2}}\cdots\tau_{a_{\ell}}(b)\mid\ell\leq i,a_ {1},\ldots,a_{\ell},b\in S\rangle.\]
In particular, \(S[0]=\langle S\rangle\), and the \(S[i]\) form an ascending chain of subspaces of \(A\).
Our goal is to show that \(S[n]=A\) for some \(n\). The following proposition tells us that we can do this by showing that the ascending chain of the \(S[i]\) stabilizes.
**Proposition 3.2**.: _Assume that \(S[n]=S[n+1]\) for some \(n\). Then \(A=S[n]\)._
Proof.: Following [11, p. 81], we define the _closure_ of the set \(S\) of axes to be the smallest set \(C\) of axes of \(A\) containing \(S\) such that for each \(a\in C\), we have \(\tau_{a}(C)\subseteq C\). In fact, \(C=\{\tau_{a_{1}}\tau_{a_{2}}\cdots\tau_{a_{\ell}}(b)\mid\ell\geq 0,a_{1}, \ldots,a_{\ell},b\in S\}\); see, for instance, [12, Lemma 3.5]. It now suffices to observe that if \(S[n]=S[n+1]\), then \(S[n]=S[\ell]\) for all \(\ell\geq n\), hence \(S[n]=\langle C\rangle\). By [11, Cor. (1.2), p. 81], however, \(A\) is spanned by \(C\), and the result follows.
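In the Matsuo algebra of Example 2.10, this stabilization can be observed experimentally. One checks that each Miyamoto involution \(\tau_{d}\) permutes the basis \(D\), sending \(e\) to \(d*e\), so \(\dim S[i]\) is simply the number of axes reachable by words \(\tau_{a_{1}}\cdots\tau_{a_{\ell}}(b)\) with \(\ell\leq i\) and \(a_{1},\ldots,a_{\ell},b\in S\). A small Python sketch (the choice of generators is ours, for illustration):

```python
def bullet(x, y):  # as in the Example 2.10 sketch
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3, (x[2] + y[2]) % 3,
            (x[3] + y[3] + (x[0]*y[1] - x[1]*y[0]) * (x[2] - y[2])) % 3)

def tau(d, e):
    """Miyamoto involution tau_d on basis axes: e -> d * e."""
    s = bullet(d, e)
    return bullet(s, s)

def dim_S(gens, i):
    """dim S[i] = number of axes tau_{a_1}...tau_{a_l}(b), l <= i."""
    seen = level = set(gens)
    for _ in range(i):
        level = {tau(a, x) for a in gens for x in level}
        seen = seen | level
    return len(seen)

gens = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]
print([dim_S(gens, i) for i in range(10)])
# the chain of dimensions is non-decreasing and stabilizes;
# by the Main Theorem its limit is at most 81
```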
From now on, when we refer to an arbitrary _axis of \(A\)_, we will always mean an element of the closure \(C\) of \(S\) (which is indeed always an axis for the same fusion law).
The following two definitions will play a crucial role.
**Definition 3.3**.:
1. We let \[\llbracket a_{1},a_{2},\ldots,a_{\ell}\rrbracket:=\tau_{a_{1}}\tau_{a_{2}} \cdots\tau_{a_{\ell}}\] for all axes \(a_{1},\ldots,a_{\ell}\in A\).
2. For all \(x,y\in A\), we set \[x\ \equiv_{(i)}\ y\iff x-y\in S[i].\] Notice that \(x\ \equiv_{(i)}\ y\) implies \(x\ \equiv_{(j)}\ y\) for all \(j\geq i\), and also implies that \(\llbracket a_{1},\ldots,a_{\ell}\rrbracket x\ \equiv_{(i+\ell)}\llbracket a_{1}, \ldots,a_{\ell}\rrbracket y\) for all \(a_{1},\ldots,a_{\ell}\in S\).
**Remark 3.4**.: The notation \(\llbracket a_{1},\dots,a_{\ell}\rrbracket\) will also be used when the \(a_{i}\) are axes that are not necessarily contained in \(S\). Some care is needed with the use of the equivalence relations \(\ \equiv_{(i)}\) in such a situation, as these relations are always meant with respect to the given generating set \(S\).
By [10, Theorem 4.1], primitive axial algebras of Jordan type always admit a (necessarily unique) normalized symmetric Frobenius form.
**Definition 3.5**.:
1. A bilinear form \((\cdot,\cdot)\colon A\times A\to\mathbb{F}\) is called a _(normalized) Frobenius form_ on \(A\) if \((xy,z)=(x,yz)\) for all \(x,y,z\in A\) and, in addition, \((a,a)=1\) for each axis \(a\in A\).
2. It will be useful to introduce the notation \[\epsilon_{x,y}:=1-\tfrac{2}{\eta}(x,y)\] for all \(x,y\in A\).
**Proposition 3.6**.: _Let \(a\in A\) be an axis and \(x\in A\) be arbitrary. Then_
\[\tau_{a}(x)=x+\tfrac{2}{\eta}(a,x)a-\tfrac{2}{\eta}ax.\]
Proof.: This is [10, Lemma 3.3] combined with the statement from [10, Theorem 4.1] that \((a,x)=\varphi_{a}(x)\).
**Remark 3.7**.: In [10], their Lemma 3.3 is used, in fact, in the proof of their Theorem 4.1 (the existence of the Frobenius form). On the other hand, if we already _assume_ the existence of the Frobenius form to begin with, then there is an easy direct proof of Proposition 3.6 by simply decomposing \(x\) with respect to the eigenspaces for the axis \(a\).
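For the reader's convenience, here is that direct computation spelled out; everything used is contained in Definitions 2.6 and 3.5. Write \(x=\alpha a+x_{0}+x_{\eta}\) with \(x_{0}\in A_{0}^{(a)}\) and \(x_{\eta}\in A_{\eta}^{(a)}\). Since the form is Frobenius, \((a,x_{0})=(a^{2},x_{0})=(a,ax_{0})=0\) and \((a,x_{\eta})=(a^{2},x_{\eta})=(a,ax_{\eta})=\eta(a,x_{\eta})\), so \((a,x_{\eta})=0\) because \(\eta\neq 1\); hence \((a,x)=\alpha(a,a)=\alpha\). Therefore

\[x+\tfrac{2}{\eta}(a,x)a-\tfrac{2}{\eta}ax=x+\tfrac{2\alpha}{\eta}a-\tfrac{2}{\eta}(\alpha a+\eta x_{\eta})=x-2x_{\eta}=\alpha a+x_{0}-x_{\eta}=\tau_{a}(x).\]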
Proposition 3.6 has the following immediate but useful consequences.
**Corollary 3.8**.: _Let \(a,b\in A\) be axes. Then:_
1. \(\llbracket a\rrbracket b-\llbracket b\rrbracket a=\epsilon_{a,b}(b-a)\)_._
2. _If_ \(a\in S\) _and_ \(x\in S[i]\)_, then_ \((-\tfrac{2}{\eta})ax\ \equiv_{(0)}\llbracket a\rrbracket x-x\ \equiv_{(i)} \llbracket a\rrbracket x\)_._
Proof.:
1. By Proposition 3.6, we have \[\tau_{a}(b)-\tau_{b}(a)=\big{(}1-\tfrac{2}{\eta}(a,b)\big{)}(b-a).\]
2. This follows immediately from Proposition 3.6.
We recall the following important fact, which we will be using over and over again, often without explicitly mentioning it.
**Proposition 3.9**.: _We have \(\llbracket a,b,a\rrbracket=\llbracket\tau_{a}(b)\rrbracket\) for all axes \(a,b\in A\)._
Proof.: This follows from [10, Lemma 5.1, p. 103] and the fact that \(\tau_{a}\in\operatorname{Aut}(A)\).
The following result is a first instance of how useful it is.
**Proposition 3.10**.: _Let \(a,b\in S\) and \(x\in A\). Then:_
1. \(\llbracket a,b,a\rrbracket x\ \equiv_{(1)}x-\tfrac{2}{\eta}\tau_{a}(b)x\)
* \(\llbracket a,b,a\rrbracket x-\llbracket b,a,b\rrbracket x\ \equiv_{(1)}\ \epsilon_{a,b}( \llbracket b\rrbracket x-\llbracket a\rrbracket x)\)_. In particular, if_ \(x\in S[i]\) _for some_ \(i\geq 0\)_, then_ \(\llbracket a,b,a\rrbracket x\ \equiv_{(i+1)}\ \llbracket b,a,b\rrbracket x\)_._
* \(\llbracket b,a,b,a\rrbracket x\ \equiv_{(2)}\ \llbracket a,b\rrbracket x+ \epsilon_{a,b}(x-\llbracket b,a\rrbracket x)\)_. In particular, if_ \(x\in S[i]\) _for some_ \(i\geq 2\)_, then_ \(\llbracket b,a,b,a\rrbracket x\ \equiv_{(i)}\ \llbracket a,b\rrbracket x- \epsilon_{a,b}\llbracket b,a\rrbracket x\)_._
Proof.:
* We apply Proposition 3.9 to \(x\) and use Proposition 3.6 on the right-hand side to get \[\llbracket a,b,a\rrbracket x=x+\tfrac{2}{\eta}(\tau_{a}(b),x)\llbracket a \rrbracket b-\tfrac{2}{\eta}\tau_{a}(b)x.\] (3.1) Since \(\llbracket a\rrbracket b\in S[1]\), the result follows.
* Interchanging \(a\) and \(b\) in (i) and subtracting gives, using Corollary 3.8(i), \[\llbracket a,b,a\rrbracket x-\llbracket b,a,b\rrbracket x\ \equiv_{(1)}\ (- \tfrac{2}{\eta})\epsilon_{a,b}(b-a)x.\] By Corollary 3.8(ii), however, \[(-\tfrac{2}{\eta})(bx-ax)\ \equiv_{(0)}\ (\llbracket b\rrbracket x-x)-( \llbracket a\rrbracket x-x)=\llbracket b\rrbracket x-\llbracket a \rrbracket x,\] and the result follows.
* This follows immediately by applying \(\tau_{b}\) on (ii).
**Lemma 3.11**.: _Let \(a,b,c\in S\). Then:_
* \(\llbracket a,b\rrbracket a=\epsilon_{a,b}a+b-\epsilon_{a,b}\llbracket a \rrbracket b\ \equiv_{(0)}\ -\epsilon_{a,b}\llbracket a\rrbracket b\)_._
* \(\llbracket a,b,a\rrbracket c=\alpha c-\alpha\llbracket a\rrbracket b+ \llbracket c,a\rrbracket b\) _where_ \(\alpha=\epsilon_{\tau_{a}(b),c}\in F\)_._
* \(\llbracket a,b,c\rrbracket a\ \equiv_{(0)}\delta\llbracket a\rrbracket b- \epsilon_{a,c}\llbracket a,b\rrbracket c+\llbracket c,a\rrbracket b\) _for some_ \(\delta\in F\)_._
Proof.:
* By Corollary 3.8(i), \[\llbracket a,b\rrbracket a=\llbracket a\rrbracket(\llbracket b\rrbracket a- \llbracket a\rrbracket b+\llbracket a\rrbracket b)=\epsilon_{a,b}\llbracket a \rrbracket(a-b)+b.\]
* Let \(\alpha=\epsilon_{\tau_{a}(b),c}\). By substituting \(\tau_{a}(b)\) for \(a\) and \(c\) for \(b\) in Corollary 3.8(i), we get \[\llbracket\tau_{a}(b)\rrbracket c-\llbracket c,a\rrbracket b=\alpha(c- \llbracket a\rrbracket b).\] The result now follows from Proposition 3.9.
* By Corollary 3.8(i), we have \[\llbracket a,b,c\rrbracket a =\llbracket a,b,a\rrbracket c+\llbracket a,b\rrbracket\left( \llbracket c\rrbracket a-\llbracket a\rrbracket c\right)\] \[=\llbracket a,b,a\rrbracket c+\epsilon_{a,c}\llbracket a,b \rrbracket(a-c),\] so (iii) follows from (i) and (ii).
## 4. Rewriting rules
In this section, we will gradually build up "rewriting rules" that will allow us to simplify certain expressions. As the length of the expressions increases, the proofs become more and more involved.
**Proposition 4.1**.: _Let \(a,b,c,d\in S\). Then:_
1. \(\llbracket a\rrbracket b\ \equiv_{(0)}\ \llbracket b\rrbracket a\).
2. \(\llbracket a,b\rrbracket a\in S[1]\).
3. \(\llbracket a,b,a\rrbracket c\ \equiv_{(1)}\ \llbracket c,a\rrbracket b\).
4. \(\llbracket a,b,c\rrbracket a\ \equiv_{(1)}\ \llbracket c,b\rrbracket a-\epsilon_{a,c}\llbracket a,b\rrbracket c\).
5. \(\llbracket a,b,a,c\rrbracket d\ \equiv_{(1)}\ \llbracket c,d,c,a\rrbracket b\).
6. \(\llbracket a,b,c,d,b\rrbracket a\ \equiv_{(2)}\ \llbracket b,d,c,b \rrbracket a-\epsilon_{a,\tau_{b}(d)}\llbracket a,b,c\rrbracket d\ \equiv_{(3)}\ \llbracket b,d,c,b \rrbracket a\).
7. \(\llbracket a,b,c,a,b\rrbracket d\ \equiv_{(3)}\ \llbracket b,a,d,b,a \rrbracket c\).
Proof.:
1. This follows from Corollary 3.8(i).
2. By (i), we have \(\llbracket a,b\rrbracket a\ \equiv_{(1)}\ \llbracket a,a\rrbracket b=b\). (Of course, this also follows from Lemma 3.11(i).)
3. This follows from Lemma 3.11(ii).
4. This follows from Lemma 3.11(iii) and (i).
5. We have \[\llbracket a,b,a,c\rrbracket d-\llbracket c,d,c,a\rrbracket b=\llbracket\tau_{a }(b)\rrbracket\tau_{c}(d)-\llbracket\tau_{c}(d)\rrbracket\tau_{a}(b),\] which is contained in \(\langle\tau_{a}(b)-\tau_{c}(d)\rangle\leq S[1]\) by Corollary 3.8(i).
6. We have \[\llbracket a,b,c,d,b\rrbracket a=\llbracket a,\tau_{b}(c),\tau_{b}(d) \rrbracket a.\] Now let \(S^{\prime}=\{a,\tau_{b}(c),\tau_{b}(d)\}\) and apply (iv) with respect to this set \(S^{\prime}\) in place of \(S\). Notice that \(S^{\prime}[1]\leq S[2]\), because \[\llbracket\tau_{b}(c)\rrbracket a=\llbracket b,c,b\rrbracket a\in S[2]\quad \text{(by (iii))},\] \[\llbracket\tau_{b}(c)\rrbracket\tau_{b}(d)=\llbracket b,c,b,b\rrbracket d =\llbracket b,c\rrbracket d\in S[2],\] so we see that indeed \(\llbracket x\rrbracket y\in S[2]\) for all \(x,y\in S^{\prime}\). Hence \[\llbracket a,\tau_{b}(c),\tau_{b}(d)\rrbracket a\ \equiv_{(2)}\ \llbracket\tau_{b}(d),\tau_{b}(c) \rrbracket a-\epsilon_{a,\tau_{b}(d)}\llbracket a,\tau_{b}(c)\rrbracket \tau_{b}(d)\] \[=\llbracket b,d,c,b\rrbracket a-\epsilon_{a,\tau_{b}(d)} \llbracket a,b,c\rrbracket d\] so we conclude that indeed \[\llbracket a,b,c,d,b\rrbracket a\ \equiv_{(2)}\ \llbracket b,d,c,b \rrbracket a-\epsilon_{a,\tau_{b}(d)}\llbracket a,b,c\rrbracket d\ \equiv_{(3)}\ \llbracket b,d,c,b \rrbracket a.\]
7. We start from \[\llbracket b,a,c,a,b\rrbracket d =\llbracket\tau_{b}\tau_{a}(c)\rrbracket d\] \[=\llbracket d\rrbracket\tau_{b}\tau_{a}(c)+\epsilon_{d,\tau_{b} \tau_{a}(c)}(d-\tau_{b}\tau_{a}(c))\] \[=\llbracket d,b,a\rrbracket c+\epsilon_{d,\tau_{b}\tau_{a}(c)}(d- \llbracket b,a\rrbracket c)\] \[\equiv_{(0)}\ \llbracket d,b,a\rrbracket c-\epsilon_{d,\tau_{b} \tau_{a}(c)}\llbracket b,a\rrbracket c.\] In particular, \(\llbracket b,a,c,a,b\rrbracket d\in S[3]\). Moreover, applying \(\llbracket b,a\rrbracket\) to this equivalence yields \[\llbracket b,a,b,a,c,a,b\rrbracket d\ \equiv_{(2)}\ \llbracket b,a,d,b,a \rrbracket c-\epsilon_{d,\tau_{b}\tau_{a}(c)}\llbracket b,a,b,a\rrbracket c\] \[\equiv_{(2)}\ \llbracket b,a,d,b,a\rrbracket c,\] (4.1) where the last equivalence holds because by (iii) and (iv), we have \[\llbracket b,a,b,a\rrbracket c\ \equiv_{(2)}\ \llbracket b,c,a\rrbracket b\in S[2].\]
We now apply Proposition 3.10(iii) with \(x=\llbracket c,a,b\rrbracket d\in S[3]\), which gives
\[\llbracket b,a,b,a,c,a,b\rrbracket d \equiv_{(3)}\ \llbracket a,b,c,a,b\rrbracket d-\epsilon_{a,b} \llbracket b,a,c,a,b\rrbracket d\] \[\equiv_{(3)}\ \llbracket a,b,c,a,b\rrbracket d.\]
The claim follows by combining this with (4.1).
For our next rewriting rule in Proposition 4.4, we first need the following lemma.
**Lemma 4.2**.: _Let \(a,b,c,d\in S\) and let \(S^{\prime}=\{a,b,c,\tau_{a}(d)\}\). Then \(S^{\prime}[3]\subseteq S[4]\)._
Proof.: Let \(\llbracket x,y,z\rrbracket w\) be any element with \(x,y,z,w\in S^{\prime}\). Of course, if none of these four elements is equal to \(\tau_{a}(d)\), then \(\llbracket x,y,z\rrbracket w\in S[3]\subseteq S[4]\), and if all four elements are equal to \(\tau_{a}(d)\), then \(\llbracket x,y,z\rrbracket w=\tau_{a}(d)\in S[1]\subseteq S[4]\).
**Case 1**. _Suppose that only one of these four elements is equal to \(\tau_{a}(d)\)._
If \(w=\tau_{a}(d)\), then \(\llbracket x,y,z\rrbracket w=\llbracket x,y,z,a\rrbracket d\in S[4]\). If \(z=\tau_{a}(d)\), then, by Proposition 4.1(iii),
\[\llbracket x,y,z\rrbracket w=\llbracket x,y,a,d,a\rrbracket w\in S[4].\]
If \(y=\tau_{a}(d)\), then, by Proposition 4.1(v),
\[\llbracket x,y,z\rrbracket w=\llbracket x,a,d,a,z\rrbracket w\ \equiv_{(2)} \ \llbracket x,z,w,z,a\rrbracket d.\]
If \(x=a\) or \(z=a\), then \(\llbracket x,y,z\rrbracket w\in S[4]\). If \(w=a\), then \(\llbracket x,y,z\rrbracket w\in S[4]\), by Proposition 4.1(ii). We may thus assume that \(z=b\) and \(w=c\). If \(x=b\), then we see that \(\llbracket x,y,z\rrbracket w\in S[4]\). If \(x=c\), then
\[\llbracket x,y,z\rrbracket w\ \equiv_{(2)}\ \llbracket c,b,c,b,a\rrbracket d\in S [3],\]
by Proposition 3.10(iii).
If \(x=\tau_{a}(d)\), then, assuming without loss that \(y=b\),
\[\llbracket x,y,z\rrbracket w=\llbracket a,d,a,b,z\rrbracket w.\]
If \(z=a\), then by Proposition 4.1(iii), \(\llbracket x,y,z\rrbracket w\in S[4]\). Hence we may assume \(z=c\), and by Proposition 4.1(ii) we may assume that \(w=a\). In this case, Proposition 4.1(iv) shows that \(\llbracket x,y,z\rrbracket w\in S[4]\).
**Case 2**. _Suppose that three of the four elements \(x,y,z,w\) are equal to \(\tau_{a}(d)\)._
If \(x=y=z=\tau_{a}(d)\), then of course \(\llbracket x,y,z\rrbracket w=\llbracket\tau_{a}(d)\rrbracket w=\llbracket a,d,a\rrbracket w\in S[2]\). For the other cases, we simply observe that
\[\llbracket\tau_{a}(d),x,\tau_{a}(d)\rrbracket\tau_{a}(d) =\llbracket\tau_{a}(d),x\rrbracket\tau_{a}(d)=\llbracket a,d,a,x,a \rrbracket d\in S[4],\] \[\llbracket\tau_{a}(d),\tau_{a}(d),x\rrbracket\tau_{a}(d) =\llbracket x,a\rrbracket d\in S[2],\] \[\llbracket x,\tau_{a}(d),\tau_{a}(d)\rrbracket\tau_{a}(d) =\llbracket x,a\rrbracket d\in S[2].\]
**Case 3**. _Exactly two of the four elements \(x,y,z,w\) are equal to \(\tau_{a}(d)\)._
We have
\[\llbracket\tau_{a}(d),\tau_{a}(d),z\rrbracket w =\llbracket z\rrbracket w\in S[1],\] \[\llbracket x,\tau_{a}(d),\tau_{a}(d)\rrbracket w =\llbracket x\rrbracket w\in S[1],\text{ and }\] \[\llbracket x,y,\tau_{a}(d)\rrbracket\tau_{a}(d) =\llbracket x,y,a\rrbracket d\in S[3].\]
If \(x=w=\tau_{a}(d)\) and \(y,z\in\{a,b,c\}\), then, by (vi),
\[\llbracket\tau_{a}(d),y,z\rrbracket\tau_{a}(d)=\llbracket a,d,a,y,z,a \rrbracket d\ \equiv_{(4)}\ \llbracket a,a,z,y,a\rrbracket d\in S[3].\]
Next, if \(x=z=\tau_{a}(d)\) and \(y,w\in\{a,b,c\}\), then, by Lemma 3.11(ii) (with \(\tau_{a}(d)\) in place of \(a\)), \(\llbracket\tau_{a}(d),y,\tau_{a}(d)\rrbracket w\in S[4]\).
Finally, if \(y=w=\tau_{a}(d)\) and \(x,z\in\{a,b,c\}\), then, by Lemma 3.11(i),
\[\llbracket x,\tau_{a}(d),z\rrbracket\tau_{a}(d)\in S[4].\qed\]
The following corollary will play an important role in the proof of Proposition 4.5.
**Corollary 4.3**.: _Let \(a,b,c,d\in S\) and let \(T=\{a,\tau_{a}(b),\tau_{a}(c),d\}\). Then \(T[4]\subseteq S[6]\)._
Proof.: Let \(S^{\prime}=\{a,b,c,\tau_{a}(d)\}\) as in Lemma 4.2 and notice that \(T=\tau_{a}(S^{\prime})\), i.e., \(T\) is obtained from \(S^{\prime}\) by applying \(\tau_{a}\) on each element. By Proposition 3.9, for all \(x_{1},\ldots,x_{k},y\in S^{\prime}\) we have
\[\llbracket\tau_{a}(x_{1}),\ldots,\tau_{a}(x_{k})\rrbracket\tau_{a}(y)= \llbracket a,x_{1},\ldots,x_{k},a\rrbracket\tau_{a}(y)=\tau_{a}(\llbracket x _{1},\ldots,x_{k}\rrbracket y),\]
so \(T[i]=\tau_{a}(S^{\prime}[i])\) for all \(i\).
By Lemma 4.2, we have \(S^{\prime}[3]\subseteq S[4]\). Now
\[S^{\prime}[4] =\llbracket a\rrbracket S^{\prime}[3]\cup\llbracket b\rrbracket S ^{\prime}[3]\cup\llbracket c\rrbracket S^{\prime}[3]\cup\llbracket\tau_{a}(d) \rrbracket S^{\prime}[3]\] \[\subseteq S[5]\cup\llbracket a,d,a\rrbracket S[4],\]
and hence
\[T[4]=\llbracket a\rrbracket S^{\prime}[4]\subseteq\llbracket a\rrbracket S[5 ]\cup\llbracket d,a\rrbracket S[4]\subseteq S[6].\qed\]
**Proposition 4.4**.: _Let \(a,b,c,d\in S\). Then_
\[\llbracket a,b,c,a,b,c\rrbracket d\ \equiv_{(4)}\ \llbracket b,c,a,b,c,a \rrbracket d\ \equiv_{(4)}\ \llbracket c,a,b,c,a,b\rrbracket d.\]
Proof.: Let \(S^{\prime}=\{a,b,c,\tau_{a}(d)\}\). By Lemma 4.2, we have \(S^{\prime}[3]\subseteq S[4]\). We can thus apply Proposition 4.1(vii) with respect to \(S^{\prime}\) to get
\[\llbracket c,b,\tau_{a}(d),c,b\rrbracket a\ \equiv_{(4)}\ \llbracket b,c,a,b,c \rrbracket\tau_{a}(d),\]
hence
\[\llbracket c,b,a,d,a,c,b\rrbracket a\ \equiv_{(4)}\ \llbracket b,c,a,b,c,a \rrbracket d. \tag{4.2}\]
On the other hand, we apply \(\llbracket c,b,a,d\rrbracket\) to the equivalence in Lemma 3.11(iii) (with \(b\) and \(c\) interchanged) to get
\[\llbracket c,b,a,d,a,c,b\rrbracket a\\ \equiv_{(4)}\ \delta\llbracket c,b,a,d,a\rrbracket c-\epsilon_{a,b} \llbracket c,b,a,d,a,c\rrbracket b+\llbracket c,b,a,d,b,a\rrbracket c\]
for some \(\delta\in F\). Now \([\![c,b,a,d,a]\!]c\in S[4]\) by Proposition 4.1(iii). Also, by Proposition 4.1(v) and Proposition 3.10(iii), we have
\[[\![c,b,a,d,a,c]\!]b\ \equiv_{(3)}\ [\![c,b,c,b,c,a]\!]d\in S[4].\]
Thus, by Proposition 4.1(vii),
\[[\![c,b,a,d,a,c,b]\!]a\ \equiv_{(4)}\ [\![c,b,a,d,b,a]\!]c\ \equiv_{(4)}\ [\![c,a,b,c,a,b]\!]d. \tag{4.3}\]
Combining (4.2) and (4.3), we see that
\[[\![b,c,a,b,c,a]\!]d\ \equiv_{(4)}\ [\![c,a,b,c,a,b]\!]d.\]
It now suffices to cyclically permute \(a,b,c\) to also get the other equivalence.
We now come to the final and most challenging rewriting rule, which will effectively put a bound on the dimension of \(4\)-generated primitive axial algebras of Jordan type.
**Proposition 4.5**.: _Let \(a,b,c,d\in S\). Then \([\![d,a,b,c,a,b,c]\!]d\in S[6]\)._
Proof.: Let
\[T=\{\tau_{d}(a),\tau_{d}(b),c,d\}.\]
By Corollary 4.3, we have \(T[4]\subseteq S[6]\). By Proposition 4.4 applied to \(T\), this implies that
\[[\![\tau_{d}(a),\tau_{d}(b),c,\tau_{d}(a),\tau_{d}(b),c]\!]d\ \equiv_{(6)}\ [\![c,\tau_{d}(a),\tau_{d}(b),c,\tau_{d}(a),\tau_{d}(b)]\!]d. \tag{4.4}\]
We will proceed in two steps: We first show that
\[[\![\tau_{d}(a),\tau_{d}(b),c,\tau_{d}(a),\tau_{d}(b),c]\!]d\ \equiv_{(6)}\ [\![d,a,c,b,a,c,b]\!]d, \tag{4.5}\]
and then we show that
\[[\![c,\tau_{d}(a),\tau_{d}(b),c,\tau_{d}(a),\tau_{d}(b)]\!]d\in S[6]. \tag{4.6}\]
Interchanging the role of \(b\) and \(c\), it will then follow from (4.4), (4.5) and (4.6) that \([\![d,a,b,c,a,b,c]\!]d\in S[6]\).
**Step 1**. _Proof of (4.5)._
By Lemma 3.11(i) applied on \([\![d,c]\!]d\), we have
\[[\![\tau_{d}(a),\tau_{d}(b),c,\tau_{d}(a),\tau_{d}(b),c]\!]d\\ =[\![d,a,b,d,c,d,a,b,d,c]\!]d\\ =[\![d,a,b,d,c,d,a,b]\!]c+\epsilon_{c,d}[\![d,a,b,d,c,d,a,b]\!]d \\ -\epsilon_{c,d}[\![d,a,b,d,c,d,a,b,d]\!]c.\]
Now let \(\gamma=-\epsilon_{c,\tau_{d}(b)}\); then by Proposition 4.1(vi) and Proposition 4.1(v), we have
\[[\![d,a,b,d,c,d,a,b,d]\!]c \equiv_{(6)}\ [\![d,a,b,d,d,b,a,d]\!]c+\gamma[\![d,a,b,d,c,d,a]\!]b\] \[\equiv_{(0)}\ \gamma[\![d,a,b,d,c,d,a]\!]b\] \[\equiv_{(4)}\ \gamma[\![d,a,b,a,b,a,d]\!]c\in S[6],\]
by Proposition 3.10(iii).
Also, by Proposition 4.1(iv), we have
\[\llbracket d,a,b,d,c,d,a,b\rrbracket d\] \[\equiv_{(6)}\ \llbracket d,a,b,d,c,b,a\rrbracket d-\epsilon_{b,d} \llbracket d,a,b,d,c,d,a\rrbracket b\] \[\equiv_{(6)}\ \llbracket d,a,b,d,c,b,d\rrbracket a-\epsilon_{b,d} \llbracket d,a,b,a,b,a,d\rrbracket c\quad\text{(by 4.1(i) and 4.1(v))}\] \[\equiv_{(6)}\ \llbracket d,a,d,b,a,d,b\rrbracket c-\epsilon_{b,d} \llbracket d,a,b,a,b,a,c\rrbracket d\quad\text{(by 4.1(vii) and 4.1(i))}\] \[\in S[6],\]
by Proposition 4.4 and Proposition 3.10(iii).
Finally, by Proposition 3.10(ii),
\[\llbracket d,a,b,d,c,d,a,b\rrbracket c\] \[\equiv_{(6)}\ \llbracket d,a,b,c,d,c,a,b\rrbracket c\] \[\equiv_{(6)}\ \llbracket d,a,b,c,d,b,a\rrbracket c-\epsilon_{b,c} \llbracket d,a,b,c,d,c,a\rrbracket b\quad\text{(by 4.1(iv))}\] \[\equiv_{(6)}\ \llbracket d,a,b,c,d,b,c\rrbracket a-\epsilon_{b,c} \llbracket d,a,b,a,b,a,c\rrbracket d\quad\text{(by 4.1(i) and 4.1(v))}\] \[\equiv_{(5)}\ \llbracket d,a,c,b,a,c,b\rrbracket d\quad\text{(by 4.1 (vii) and 3.10(iii)).}\]
This proves (4.5).
**Step 2**. _Proof of (4.6)._
By Lemma 3.11(iii), there exists \(\delta\in F\) with
\[\llbracket c,\tau_{d}(a),\tau_{d}(b),c,\tau_{d}(a),\tau_{d}(b) \rrbracket d\] \[=\llbracket c,d,a,b,d,c,d,a,b\rrbracket d\] \[\equiv_{(6)}\ \delta\llbracket c,d,a,b,d,c,d\rrbracket a-\epsilon_{b,d} \llbracket c,d,a,b,d,c,d,a\rrbracket b\] \[+\llbracket c,d,a,b,d,c,b,d\rrbracket a.\]
Now by Proposition 4.1(iii), \(\llbracket c,d,a,b,d,c,d\rrbracket a\in S[6]\). By Proposition 4.1(v) and Proposition 3.10(iii), we have
\[\llbracket c,d,a,b,d,c,d,a\rrbracket b\ \equiv_{(5)}\ \llbracket c,d,a,b,a,b,a,d \rrbracket c\in S[6].\]
Finally, by Proposition 4.1(vii) and Proposition 4.4, we also have
\[\llbracket c,d,a,b,d,c,b,d\rrbracket a \equiv_{(6)}\ \llbracket c,d,a,d,b,a,d,b\rrbracket c\] \[\equiv_{(6)}\ \llbracket c,d,d,b,a,d,b,a\rrbracket c= \llbracket c,b,a,d,b,a\rrbracket c\in S[6].\]
This proves (4.6) and thus finishes the proof of this proposition.
## 5. 4-generated primitive axial algebras of Jordan type
We are now ready to prove our main result. Although it requires some care to write down the proof, the hard work has already been done in Propositions 4.1, 4.4 and 4.5.
**Theorem 5.1**.: _Assume that \(A\) is generated by a set \(S=\{a,b,c,d\}\) of \(4\) axes. Then \(A=S[6]\) and \(A\) is at most \(81\)-dimensional._
_More precisely, let \(G\) be the group \(\operatorname{Sym}(S)\) of all permutations of \(S\). Define2_
Footnote 2: There is some obvious abuse of notation here: a priori, the group \(G\) does not act on \(A\), so when we write an expression like \(\{\llbracket a,b,c\rrbracket\}^{G}\), we really mean \(\{\llbracket a^{\rho},b^{\rho},c^{\rho}\rrbracket(d^{\rho})\mid\rho\in G\}\).
\[\Gamma_{0} =\{a\}^{G},\] \[\Gamma_{1} =\{\llbracket a\rrbracket b\}^{G},\] \[\Gamma_{2} =\{\llbracket a,b\rrbracket c\}^{G},\] \[\Gamma_{3} =\{\llbracket a,b,c\rrbracket d\}^{G},\] \[\Gamma_{4} =\{\llbracket a,b,a,c\rrbracket d,\ \llbracket a,b,c,a \rrbracket d\}^{G},\] \[\Gamma_{5} =\{\llbracket a,b,c,a,b\rrbracket d\}^{G},\] \[\Gamma_{6} =\{\llbracket a,b,c,a,b,c\rrbracket d\}^{G}.\]
_Then for each \(i\in\{1,\ldots,6\}\), we have \(S[i]=\langle\Gamma_{0},\ldots,\Gamma_{i}\rangle\). In particular, \(A=\langle\Gamma_{0},\ldots,\Gamma_{6}\rangle\)._
_Moreover, there is some redundancy in these spanning sets: The dimension of each of the \(S[i]\) is at most \(4\), \(10\), \(22\), \(34\), \(61\), \(73\) and \(81\), respectively._
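Before turning to the proof, here is a quick, purely mechanical sanity check of the arithmetic behind these bounds (a minimal sketch; the orbit sizes and identification factors are exactly those derived in the proof below):

```python
# Tally of the dimension bounds in Theorem 5.1: each orbit of spanning
# elements contributes (orbit size) / (size of the classes of elements
# identified modulo the previous level).
contributions = [
    (4, 1),   # Gamma_0: the four axes a, b, c, d
    (12, 2),  # Gamma_1: pairs identified by Proposition 4.1(i)
    (24, 2),  # Gamma_2: pairs identified by Proposition 4.1(i)
    (24, 2),  # Gamma_3: pairs identified by Proposition 4.1(i)
    (24, 8),  # Gamma_4, orbit of [[a,b,a,c]]d: 8-tuples via 4.1(v)
    (24, 1),  # Gamma_4, orbit of [[a,b,c,a]]d: no identifications
    (24, 2),  # Gamma_5: pairs identified by Proposition 4.1(vii)
    (24, 3),  # Gamma_6: triples identified by Proposition 4.4
]
bound = 0
for orbit, classes in contributions:
    bound += orbit // classes
    print(bound)  # 4, 10, 22, 34, 37, 61, 73, 81; the final bound is 81
```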
Proof.: For each \(i\leq 6\), let \(T[i]\) be the subspace of \(A\) spanned by \(\Gamma_{0},\ldots,\Gamma_{i}\). Obviously, we have \(T[i]\leq S[i]\) for each \(i\). We will show recursively that for each \(i\leq 6\), \(S[i]=T[i]\), and that \(S[7]=S[6]\). We will, at the same time, compute the maximal possible dimension of each \(T[i]\). Notice that for each \(i\), the subspace \(S[i+1]\) is spanned by \(S[i]\) and all elements obtained by applying the four operations \(\llbracket a\rrbracket\), \(\llbracket b\rrbracket\), \(\llbracket c\rrbracket\) and \(\llbracket d\rrbracket\) on the elements of \(S[i]\). In order to go from \(S[i]=T[i]\) to the next step \(S[i+1]\), it will suffice, by \(G\)-symmetry, to apply these four operations on the given representative of the set \(\Gamma_{i}\).
* Obviously, \(T[0]=\langle a,b,c,d\rangle=S[0]\), and \(\dim S[0]\leq 4\).
* We have \(\llbracket a\rrbracket a=a\in S[0]\), whereas applying any of the other three operations \(\llbracket b\rrbracket\), \(\llbracket c\rrbracket\), \(\llbracket d\rrbracket\) on the representative \(a\in\Gamma_{0}\) results in an element of \(\Gamma_{1}\), so \(S[1]\leq T[1]\). By Proposition 4.1(i), we have \(\llbracket b\rrbracket a\ \equiv_{(0)}\ \llbracket a\rrbracket b\), so the \(12\) possible elements of \(\Gamma_{1}\) come in pairs that are linearly dependent modulo \(S[0]\). Hence \(\dim S[1]\leq 4+12/2=10\).
* We have \(\llbracket a,a\rrbracket b=b\in S[0]\) and \(\llbracket b,a\rrbracket b\in S[1]\) by Proposition 4.1(ii). On the other hand, \(\llbracket c,a\rrbracket b\) and \(\llbracket d,a\rrbracket b\) belong to \(\Gamma_{2}\) and hence to \(T[2]\). Hence \(S[2]\leq T[2]\). By Proposition 4.1(i), we have \(\llbracket a,b\rrbracket c\ \equiv_{(1)}\ \llbracket a,c\rrbracket b\), so the \(24\) possible elements of \(\Gamma_{2}\) come in pairs that are linearly dependent modulo \(S[1]\). Hence \(\dim S[2]\leq 10+24/2=22\).
* We have \(\llbracket a,a,b\rrbracket c=\llbracket b\rrbracket c\in S[1]\), and we have \(\llbracket b,a,b\rrbracket c\in S[2]\) by Proposition 4.1(iii) and \(\llbracket c,a,b\rrbracket c\in S[2]\) by Proposition 4.1(iv). On the other hand, \(\llbracket d,a,b\rrbracket c\) belongs to \(\Gamma_{3}\) and hence to \(T[3]\). Hence \(S[3]\leq T[3]\). By Proposition 4.1(i), we have \(\llbracket a,b,c\rrbracket d\ \equiv_{(2)}\ \llbracket a,b,d\rrbracket c\), so the \(24\) possible elements of \(\Gamma_{3}\) come in pairs that are linearly dependent modulo \(S[2]\). Hence \(\dim S[3]\leq 22+24/2=34\).
* We have \(\llbracket a,a,b,c\rrbracket d=\llbracket b,c\rrbracket d\in S[2]\). On the other hand, \(\llbracket b,a,b,c\rrbracket d\) and \(\llbracket c,a,b,c\rrbracket d\) belong to \(\Gamma_{4}\) and hence to \(T[4]\). Finally, \(\llbracket d,a,b,c\rrbracket d\ \equiv_{(3)}\llbracket d,a,b,d\rrbracket c\in\Gamma_{4}\), so \(\llbracket d,a,b,c\rrbracket d\) belongs to \(\langle S[3],\Gamma_{4}\rangle\leq T[4]\). Hence \(S[4]\leq T[4]\). By Proposition 4.1(v), the \(24\) possible elements of \(\{\llbracket a,b,a,c\rrbracket d\}^{G}\) come in \(8\)-tuples that are pairwise linearly dependent modulo \(S[3]\): \(\llbracket a,b,a,c\rrbracket d\ \equiv_{(1)}\ \llbracket c,d,c,a \rrbracket b\ \equiv_{(3)}\ \llbracket c,d,c,b\rrbracket a\ \equiv_{(1)}\ \llbracket b,a,b,c \rrbracket d\) \(\equiv_{(3)}\ \llbracket b,a,b,d\rrbracket c\ \equiv_{(1)}\ \llbracket d,c,d,b \rrbracket a\ \equiv_{(3)}\ \llbracket d,c,d,a\rrbracket b\ \equiv_{(1)}\ \llbracket a,b,a,d \rrbracket c\). On the other hand, there are no such equivalences between the \(24\) possible elements of \(\{\llbracket a,b,c,a\rrbracket d\}^{G}\). Hence \(\dim S[4]\leq 34+24/8+24=61\).
* First, because \(\llbracket a,b,a,c\rrbracket d\) is \(3\)-equivalent to an element beginning with any of the generators \(a,b,c,d\), we see that applying any of the four operators \(\llbracket a\rrbracket\), \(\llbracket b\rrbracket\), \(\llbracket c\rrbracket\), \(\llbracket d\rrbracket\) on this element will result in an element already contained in \(S[3]\). Next, we apply these operators on \(\llbracket a,b,c,a\rrbracket d\). Of course, we again have \(\llbracket a,a,b,c,a\rrbracket d\in S[3]\). Next, by Proposition 3.10(ii) and Proposition 4.1(iii), we have \(\llbracket b,a,b,c,a\rrbracket d\ \equiv_{(3)}\ \llbracket a,b,a,c,a \rrbracket d\in S[4]\), and by Proposition 4.1(vi), we have \(\llbracket d,a,b,c,a\rrbracket d\in S[4]\). Finally, \(\llbracket c,a,b,c,a\rrbracket d\in\Gamma_{5}\). Hence \(S[5]\leq T[5]\). By Proposition 4.1(vii), the \(24\) possible elements of \(\Gamma_{5}\) come in pairs that are linearly dependent modulo \(S[4]\). Hence \(\dim S[5]\leq 61+24/2=73\).
* We have \(\llbracket a,a,b,c,a,b\rrbracket d=\llbracket b,c,a,b\rrbracket d\in S[4]\). By Proposition 3.10(ii) and Proposition 4.1(v), we have \(\llbracket b,a,b,c,a,b\rrbracket d\ \equiv_{(4)}\ \llbracket a,b,a,c,a \rrbracket d\ \equiv_{(3)}\ \llbracket a,b,b,d,b,a\rrbracket c\in S[4]\). Next, \(\llbracket c,a,b,c,a,b\rrbracket d\in\Gamma_{6}\), and finally, by Proposition 4.1(vii), we also have \(\llbracket d,a,b,c,a,b\rrbracket d\ \equiv_{(4)}\ \llbracket d,b,a,d,b,a \rrbracket c\in\Gamma_{6}\). Hence \(S[6]\leq T[6]\). By Proposition 4.4, the \(24\) possible elements of \(\Gamma_{6}\) come in triples that are pairwise linearly dependent modulo \(S[5]\). Hence \(\dim S[6]\leq 73+24/3=81\).
* We have \(\llbracket a,a,b,c,a,b,c\rrbracket d\in S[5]\), and by Proposition 4.4, it follows that also \(\llbracket b,a,b,c,a,b,c\rrbracket d\) and \(\llbracket c,a,b,c,a,b,c\rrbracket d\) belong to \(S[5]\). Finally, by Proposition 4.5, we also have \(\llbracket d,a,b,c,a,b,c\rrbracket d\in S[6]\). We conclude that \(S[7]=S[6]\), and therefore \(A=S[6]\) |
2309.01622 | Concepts is All You Need: A More Direct Path to AGI | Little demonstrable progress has been made toward AGI (Artificial General
Intelligence) since the term was coined some 20 years ago. In spite of the
fantastic breakthroughs in Statistical AI such as AlphaZero, ChatGPT, and
Stable Diffusion none of these projects have, or claim to have, a clear path to
AGI. In order to expedite the development of AGI it is crucial to understand
and identify the core requirements of human-like intelligence as it pertains to
AGI. From that one can distill which particular development steps are necessary
to achieve AGI, and which are a distraction. Such analysis highlights the need
for a Cognitive AI approach rather than the currently favored statistical and
generative efforts. More specifically it identifies the central role of
concepts in human-like cognition. Here we outline an architecture and
development plan, together with some preliminary results, that offers a much
more direct path to full Human-Level AI (HLAI)/ AGI. | Peter Voss, Mladjan Jovanovic | 2023-09-04T14:14:41Z | http://arxiv.org/abs/2309.01622v1 | # Concepts is AI You Need: A More Direct Path to AGI
###### Abstract
Little demonstrable progress has been made toward AGI (Artificial General Intelligence) since the term was coined some 20 years ago. In spite of the fantastic breakthroughs in Statistical AI such as AlphaZero, ChatGPT, and Stable Diffusion none of these projects have, or claim to have, a clear path to AGI. In order to expedite the development of AGI it is crucial to understand and identify the core requirements of human-like intelligence as it pertains to AGI. From that one can distill which particular development steps are necessary to achieve AGI, and which are a distraction. Such analysis highlights the need for a Cognitive AI approach rather than the currently favored statistical and generative efforts. More specifically it identifies the central role of concepts in human-like cognition. Here we outline an architecture and development plan, together with some preliminary results, that offers a much more direct path to full Human-Level AI (HLAI)/ AGI.
AGI, Cognitive AI, Adaptive AI, Human-Level AI, HLAI, Cognitive Architecture, Third Wave of AI, Intelligence, Concepts, Generalization.
### Requirements of General Intelligence
We expect an AGI to be capable of performing any cognitive task, especially novel ones, at a level comparable to a human being [1]. Thus, a key feature of general intelligence is the ability to learn new knowledge and skills. As far as AGI design is concerned, it is much more important to be able to _acquire_ knowledge than simply _having_ it.
Moreover, AGI has core requirements as to _what_ to learn, and _how_ to learn as follows:
* Be able to learn real-world 4D data that could be noisy, incomplete, or even wrong.
* Such data includes new entities and action sequences, plus their relationships.
* Knowledge must be interpreted, encoded, and evaluated _conceptually_. This means that entities, actions, and generalizations are represented by their (scalar) attributes. In other words, as vectors with a schema. This is essential to facilitate (dis)similarity comparisons, and for forming higher-level abstract concepts.
* The ability to learn complex data such as images and movement _interactively_. This implies having input senses and output actuators of some kind, plus mechanisms to select and extract particular input data (selective attention) [2].
* The system must be able to accumulate new knowledge and skills _incrementally_, integrating with existing _short- and long-term memory_. This requires a robust knowledge representation such as an integrated, high-performance knowledge graph.
* Input needs to be interpreted _contextually_, taking into account prior input and knowledge as well as current goals and priorities.
* Learning must be _life-long_ and _adaptive_ with the ability to change or invalidate existing knowledge and to adjust to new situations and environments.
* Most learning should be _autonomous_ (unsupervised or self-supervised), without a human in the loop.
* The system must operate in _real-time_, and function adequately with limited resources [3].
* Human-like intelligence covers a wide range of _learning modes_, including: instance or one-shot; clustering and association by time and/or space; generalization; aping; stimulus-response; reinforcement; random and structured exploration; human guided; via instructions; as well as zero-shot (implicit inference); explicit reasoning (figuring things out) and study (read, view).
Effective AGI designs must not only implement all of these learning requirements, but also embody methods for action control, reasoning mechanisms, and metacognitive control.
### Cognitive AI

Essential requirements of AGI cannot be met by logic or statistical methods alone [4] - they demand a cognitive approach. A DARPA presentation makes a useful distinction with 'The Three Waves of AI' [5]:
* First Wave: GOFAI, expert systems, Deep Blue.
* Second Wave: DL/ML/RL/Transformers, AlphaZero, ChatGPT.
* Third Wave: Cognitive architectures.
The Cognitive AI approach is _not_ just a mashup of the first two 'Waves', though it freely incorporates insights gained from those earlier methods. It is typically implemented as a cognitive architecture. We describe Cognitive Architectures as systems that encompass and embody all of the essential structures
Figure 1: Timeline of growing capabilities of AI systems.
required for a human-level mind [1]. It also considers how these structures and functions need to work together effectively and function intelligently in diverse, dynamic environments [6].
### Senses and Actuators
We perceive objects and actions via our senses. Subconscious, lower-level processes 'package' input data streams into digestible objects that we become aware of. One could argue that an AGI needs to fully integrate such preprocessing with higher-level cognition. This view certainly has merit; higher-level context influences lower-level focus, selection, and recognition. However, both theoretical considerations and practical experience indicate that effective AGI can be constructed with separate pre-processing mechanisms for visual, tactile, and sound input [7].
An insightful perspective to consider is that both Helen Keller, with severely limited sense perception, and Stephen Hawking, with little dexterity, were able to deliver outstanding intellectual contributions. We could call this the 'Helen Hawking' model of AGI - a powerful cognitive system with very limited sense acuity and dexterity.
From a practical point of view, we posit that a highly effective AGI could be limited to something like PC desktop visual input supplemented by text or sound, plus the ability to manipulate mouse and keyboard. Screen input would potentially provide a real-time window to the real world. A limiting factor may be the lack of direct 3D or depth perception, which blind people obviously obtain via touch and sound.
### From Percepts to Concepts
Returning to the nature of what we (and potential AGIs) perceive, it is crucial to note that the objects and actions are essentially vectors composed of numerous scalar features. All knowledge, actions, and skills that we learn and utilize cognitively can be seen as vectors plus their relevant relationships.
For example, a face can be represented by a number of features - whether via simple, traditional low-dimensional distance measures, or via modern, complex machine learning features [8]. Similarly, actions are represented by various scalar dimensions - e.g., a ball bouncing, rolling, or floating, with values for frequency, amplitude, speed, etc. [9].
Vector representation forms the basis not only for lower-level cognitive functions such as similarity measures, but also ultimately for our uniquely human ability to form highly abstract concepts, and to be able to reason with them.
The utility of vector encoding _and hierarchies_ is amply demonstrated by the power of LLMs [10]. However, this Statistical AI approach suffers from three important limitations:
1. These vectors are based on co-occurrence or prediction-relevance in training data, and not on real-world ontological features.
2. They have fixed dimensionality rather than one most appropriate to each concept.
3. Vectors in LLMs are established during training and do not change during 'inference' - while interacting with users.

These limitations ultimately hamper robust ongoing learning and reasoning.
The proposed Cognitive AI approach does not inherently suffer from these issues. It provides for variable-size vectors that are dynamically adaptive, and are more directly grounded with real-world features.
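To make this concrete, here is a minimal, hypothetical sketch of the idea (the attribute schemas, names, and values are invented for illustration and are not taken from any actual system):

```python
# Concepts as variable-size attribute vectors with an explicit schema.
# Similarity is computed only over the attributes two concepts share,
# so vectors of different dimensionality remain directly comparable.
def similarity(a: dict, b: dict) -> float:
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    # Mean closeness over shared scalar attributes, all scaled to [0, 1].
    return sum(1.0 - abs(a[k] - b[k]) for k in shared) / len(shared)

dog   = {"size": 0.4, "furry": 0.9, "barks": 1.0, "domestic": 0.9}
wolf  = {"size": 0.6, "furry": 1.0, "howls": 1.0, "domestic": 0.1}
rover = {"size": 0.3, "furry": 0.8, "barks": 1.0}   # a concrete entity

print(similarity(rover, dog))    # high: 'Rover' fits the dog concept
print(similarity(rover, wolf))   # lower, over fewer shared attributes
```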
### Knowledge Representation
Core AGI requirements dictate the need for a long-term memory store of vectors representing things like entities, concepts, action sequences, and various relationships. Graph-like data stores are ideally suited for this purpose. They provide the flexibility of encoding a myriad of different complex structures and relationships.
However, performance considerations rule out the use of external graph databases because all recognition, learning, and cognitive functions need to constantly reference and update the graph. Only a custom, fully integrated, memory-based knowledge-graph system can provide the speed required to operate in real-time. Recent benchmark tests have shown a 1000-fold difference in access time between these two approaches (Table 1).
Figure 2: Entity and activity features from senses are stored or recognized as percept vectors. These in turn are integrated into entity (‘Rover’), concept (dog), and abstract category levels (animal).
Traditionally, cognitive architectures have been implemented in a very modular fashion which, generally speaking, is good engineering practice [11]. For AGI, however, we need extremely tight integration between the various cognitive mechanisms. Context, memory, pattern matching, learning, generalization, inference, exploration, action and metacognition constantly interact in complex ways.
A powerful way to achieve this is to have a hyper-optimized graph-based vector datastore act as a foundational substrate for all cognitive functions. Not only does this vector graph serve as long-term memory, but it can also double as short-term memory via suitable activation mechanisms. Such a system has to be carefully designed and built from the ground up to ensure full integration and good performance. Off-the-shelf components or separate modules won't do.
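As a toy illustration of this design (purely hypothetical; the class names and mechanics are invented here and do not describe Aigo or any particular implementation), an in-memory store keeps node lookup at dictionary speed, while an activation value on each node lets the same graph double as short-term memory:

```python
# Toy in-memory knowledge graph: long-term store plus activation-based
# short-term memory on the same nodes. Purely illustrative.
class Node:
    def __init__(self, name, features=None):
        self.name = name
        self.features = features or {}  # attribute vector (schema -> scalar)
        self.edges = {}                 # relation -> set of neighbor Nodes
        self.activation = 0.0           # doubles as short-term memory

class Graph:
    def __init__(self):
        self.nodes = {}                 # name -> Node, O(1) lookup in RAM

    def add(self, name, **features):
        self.nodes.setdefault(name, Node(name, features))
        return self.nodes[name]

    def link(self, src, relation, dst):
        self.add(src).edges.setdefault(relation, set()).add(self.add(dst))

    def activate(self, name, amount=1.0, spread=0.5):
        node = self.nodes[name]
        node.activation += amount       # bring the node into current context
        for neighbors in node.edges.values():
            for n in neighbors:         # one-step spreading activation
                n.activation += amount * spread

g = Graph()
g.link("Rover", "is_a", "dog")
g.link("dog", "is_a", "animal")
g.activate("Rover")                     # 'dog' is now partially active too
```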
\begin{table}
\begin{tabular}{c||c|c|c} \hline \hline Calls/Series & AIGO KG & Neo4j Graph DB & Speed-up \\ \hline
**1** & \(\sim\) 0 ms & 6 ms & \(\sim\)\(\infty\) \\
**10** & \(\sim\) 0 ms & 11 ms & \(\sim\)\(\infty\) \\
**100** & \(\sim\) 0 ms & 83 ms & \(\sim\)\(\infty\) \\
**1,000** & 1 ms & 838 ms & \(\sim\)838x \\
**100,000** & 49 ms & 73,809 ms & \(\sim\)1500x \\
**1,000,000** & 446 ms & 747,017 ms & \(\sim\)1670x \\ \hline \hline \end{tabular}
\end{table}
Table 1: Given a data sample for a Graph DB, the table illustrates how quickly AIGO KG and Neo4j Graph DB can find a node (contact the authors for the experiment details).
Figure 3: Traditional, Modular Cognitive Architecture Design.
### Metacognition and Emotions
The majority of our cognition is subconscious - it does not involve explicit mental control or supervision. However, what sets human intelligence apart is that we are able to think about, and direct, our thinking. This distinction was well articulated by Kahneman's System 1/System 2 model [12]. It is important to note that the distinction is not binary, but rather transitions from one to the other seamlessly.
Higher-level, or System 2, cognition is one aspect of overall awareness and control. The other pertains to the availability of what one could call 'cognitive emotions', mental states such as surprise, certainty, confusion, and boredom [13]. These signals affect both conscious as well as subconscious cognition. They can also be controlled to some extent; much more so in AIs than humans.
Both of these powerful mechanisms need to be an integral part of any workable AGI design.
### AGI Curriculum and Benchmarks
One of the fundamental principles of Cognitive AI is that it should be built with a minimum of hard-coded or fixed functionality. It should be as adaptive as possible, both as far as new knowledge and skills are concerned, as well as being able to adapt to new information and circumstances.
Additionally, its hierarchical, conceptual knowledge base should be as accurate and grounded as possible. These requirements combine to put a large burden on the _quality_ of training - especially for its foundational knowledge. Unlike LLMs, Cognitive AI doesn't inherently need massive amounts of uncurated training data, instead it needs a carefully designed curriculum to create a robust hierarchy of knowledge and skills.
A related difficulty is the design of tests and benchmarks. Existing SOTA benchmarks are not appropriate for evaluating early-stage AGI designs [14]; an early general AI would not be expected to do well on specialized narrow tasks, or on tasks that require a wide range of knowledge.
Figure 4: Outline of Fully Integrated Cognitive Architecture using a Knowledge-Base Substrate.
Performance tests also cannot be too generic; they have to be designed to measure progress in relation to the specific AGI theory involved [15], to the types of senses and actuators used, as well as to the chosen curriculum.
To minimize the risk of designing tests aligned with what the system _can_ do, rather than what it _should_ be able to do, these benchmarks should ideally be developed by an external party that does however have a very good understanding of the overall setup and theory (see 'Benchmarks for Proto-AGI', in preparation at the time of writing this article).
### Practical Implementations
The 'Aigo' project (originally, 'a2i2' Adaptive A.I Inc) has over the past 20 years produced a number of AGI development prototypes (as well as several 'industrial grade' commercial versions) using this approach1. All of these systems utilize a graph-based knowledge substrate into which all cognitive subsystems are integrated. Early models incorporated several sense inputs (vision, sound, touch, etc.) operating in a simulated environment, while later commercial versions focused on speech and text IO (input/output). These conversation-focused implementations demonstrate powerful real-time contextual learning, memory capabilities, as well as reasoning and question answering2.
Footnote 1: Aigo.ai Web Chat Demo: [https://www.youtube.com/watch?v=FLOPdS9tvQg](https://www.youtube.com/watch?v=FLOPdS9tvQg) (Accessed 19.08.2023).
Footnote 2: Aigo.ai Elder Companion Demo: [https://www.youtube.com/watch?v=VVumDGISRng](https://www.youtube.com/watch?v=VVumDGISRng) (Accessed 17.08.2023).
One of our benchmarks (conducted at the end of August 2023) was to test the ability to learn novel facts and answer questions about them. We compared AIGO with Chat GPT-4 (8,000-token context window) and with Claude 2 (100,000-token context length). The AIGO system was pretrained with only a rudimentary real-world ontology of a few thousand general concepts such as person, animal, red, and small. Chat GPT-4 and Claude 2 were used in their standard form and not constrained in any way.
The test involved first feeding 419 natural language statements to each of the three systems. These were simple facts, _some_ of which related to each other (e.g., Tina wants a dog and a cat. Actually, Tina only wants a cat). Finally, we asked 737 questions and scored the answers. We evaluated the responses based on a reasonable human standard. If the response pertains to the topic, answers correctly based on the correct source of information, and is grammatically sound, we consider the answer correct.
The AIGO system scored 88.89%, whereas Claude 2 only managed 35.33%. Chat GPT-4 was unable to perform the test. It scored less than 1%. Details and analysis of the experiment are available elsewhere3 and from the authors.
Footnote 3: Aigo.ai Benchmark: [http://tinyurl.com/2x59ma4d](http://tinyurl.com/2x59ma4d) (Accessed 31.08.2023).
### A Roadmap to AGI
The current Aigo baseline system offers excellent knowledge-graph performance, deep contextual parsing and understanding, real-time adaptive learning, and integrated inference. It does, however, no longer support multi-modal input or output, nor does it have low-level, integrated vector support.
A recently launched Aigo development project revisits multi-modal IO and aims to eliminate various existing rule systems, and to significantly reduce the amount of code. The curriculum and assessments are specially crafted to foster the system's increasing autonomy in knowledge and skill acquisition. The
vast range of currently available LLMs, along with data resources such as Wikipedia, will significantly aid in the development of curated knowledge acquisition.
A roadmap outline to upgrading the system to fully meet the core requirements of AGI includes the following activities:
* Re-integrating multimodal vector pattern learning and matching into the knowledge graph.
* Adding real-time and background abstraction/concept formation.
* Training system to do basic question-answering.
* Training and developing advanced language capabilities.
* Adding multi-modal action and action learning mechanisms.
* Training semi-autonomous incremental knowledge acquisition and validating using multiple sources.
At this point the system will have basic human-level (HLAI) 'High School' capabilities. Further iterative enhancements include:
* Semi-autonomous acquisition of conversation requirements (language-based assistant).
* Developing advanced visual sensing and output (mouse) dexterity.
* Advanced space and time representation and modeling.
* Extending metacognition, focus-and-selection, and other control mechanisms.
* Advanced multi-modal learning, reasoning and problem solving (PC-based assistant).
* Significantly scaling up the amount of embedded common knowledge and high-level reasoning skills.
This will bring the system to 'Graduate' level. Ongoing improvements in autonomous learning, high-level reasoning (including Theory-of-Mind) as well as capacity and performance enhancements will bring Aigo up to full AGI capabilities.
Figure 5: The current architecture's rule systems, as well as various functions currently hard-coded, need to be re-implemented as concept structures ('Brain'). This will more fully integrate this functionality with overall cognition, and also make it fully adaptive.
### Conclusion
Progress towards AGI has been much slower than expected or necessary. A key reason for this is the lack of focus on what human-like cognition really requires, thus missing key properties of high-level intelligence. Crucial features, such as autonomous, incremental, real-time learning and adaptation, cannot be adequately addressed by Statistical AI; they require a Cognitive AI approach. We detail some of these often-overlooked features, and specifically highlight the need for effective _conceptual_ knowledge representation. We introduce 'Aigo', a high-performance, highly integrated cognitive architecture that has over the past 20 years been utilized both for AGI research, as well as for advanced commercial 'Conversational AI' applications. This architecture is being extended to meet all of the core requirements of AGI in order to achieve human-level adaptive autonomous intelligence.
|
2304.09282 | Leveraging Deep Learning Techniques on Collaborative Filtering
Recommender Systems | With the exponentially increasing volume of online data, searching and
finding required information have become an extensive and time-consuming task.
Recommender Systems as a subclass of information retrieval and decision support
systems by providing personalized suggestions helping users access what they
need more efficiently. Among the different techniques for building a
recommender system, Collaborative Filtering (CF) is the most popular and
widespread approach. However, cold start and data sparsity are the fundamental
challenges ahead of implementing an effective CF-based recommender. Recent
successful developments in enhancing and implementing deep learning
architectures motivated many studies to propose deep learning-based solutions
for solving the recommenders' weak points. In this research, unlike the past
similar works about using deep learning architectures in recommender systems
that covered different techniques generally, we specifically provide a
comprehensive review of deep learning-based collaborative filtering recommender
systems. This in-depth filtering gives a clear overview of the level of
popularity, gaps, and ignored areas on leveraging deep learning techniques to
build CF-based systems as the most influential recommenders. | Ali Fallahi RahmatAbadi, Javad Mohammadzadeh | 2023-04-18T20:40:10Z | http://arxiv.org/abs/2304.09282v1 | # Leveraging Deep Learning Techniques on Collaborative Filtering Recommender Systems
###### Abstract
With the exponentially increasing volume of online data, searching and finding required information have become an extensive and time-consuming task. Recommender Systems as a subclass of information retrieval and decision support systems by providing personalized suggestions helping users access what they need more efficiently. Among the different techniques for building a recommender system, Collaborative Filtering (CF) is the most popular and widespread approach. However, cold start and data sparsity are the fundamental challenges ahead of implementing an effective CF-based recommender. Recent successful developments in enhancing and implementing deep learning architectures motivated many studies to propose deep learning-based solutions for solving the recommenders' weak points. In this research, unlike the past similar works about using deep learning architectures in recommender systems that covered different techniques generally, we specifically provide a comprehensive review of deep learning-based collaborative filtering recommender systems. This in-depth filtering gives a clear overview of the level of popularity, gaps, and ignored areas on leveraging deep learning techniques to build CF-based systems as the most influential recommenders.
Recommendation Systems, Deep Learning Architectures, Collaborative Filtering, Survey 2021
[1] Department of Computer Engineering, Karaj Branch, Islamic Azad University, Karaj, Iran.; [email protected]
## 1 Introduction

Deep learning has recently gained prominence over traditional methods because of the recent developments in artificial intelligence and computation power. The strong capabilities of deep learning-based approaches, compared with traditional techniques, for solving complicated problems have attracted significant attention to employing deep learning in various domains such as image processing [2], speech recognition [3], data mining [4], business [5], natural language processing [6], and information filtering systems like recommendation engines [7].
The recommender system's primary intention is to predict the user's tendency toward an item; the item can be a company's product, a stock in a stock market, a friend in a social network [8], a movie, or a photo on a website, etc. [9]. Based on this concept, many researchers proposed various recommenders with diverse functionalities to suggest books, films, music, hotels, friendships, etc. [10; 11].
### 1.1 Traditional Approaches to Build a Recommender System
Recommender systems can mainly be categorized into three types: Content-based, Collaborative Filtering, and Hybrid recommenders [12]. We illustrated this categorization in _Fig. 1_, which shows an overview of the three mentioned criteria. In the following paragraphs, we explain and pinpoint some notable aspects of each type of recommender system.
#### 1.1.1 Content-based
In this technique, the recommender engine considers the features of what users chose in the past to make suggestions for other related items of the dataset. Textual descriptions and tags are typical resources for implementing a content-based recommender [13; 14]. For instance, in a content-based book recommender system, if a user likes a book about deep learning, the system will first analyze the book's available textual properties, such as title, author, genres, complete text, etc. Then, various techniques like keyword-based vector-space structure will be used to find the other most similar books in the dataset with the highest similarity matching score with the chosen deep-learning book [15].
#### 1.1.2 Collaborative filtering
The fundamental hypothesis of Collaborative Filtering is that users who had similar tastes and behaviors in the past will also have similar activities in the future. This strategy uses ratings or other measurable user activities such as positive/negative comments, like/dislike, etc., to find similar users and then provide recommendations based on their similarities. Among all of the mentioned techniques, collaborative
Figure 1: Recommender systems categorization based on their techniques.
filtering is the most well-known and popular method for implementing a recommender system [16; 17]. We describe the Collaborative Filtering technique in detail in the following sections. There are two main approaches to building a Collaborative Filtering system: memory-based and model-based. The memory-based approach utilizes user rates to calculate the correlation among users or items. In a model-based approach, the primary step is to use the dataset to learn a proposed model. In the next step, the model is applied for making predictions. Matrix Factorization is the most common algorithm in building model-based Collaborative Filtering systems [18].
#### 1.1.3 Hybrid recommenders
The critical factor in building an efficient recommender system is to improve accuracy and provide more personalized suggestions. Some studies tried to create hybrid systems based on a mixture of other techniques to benefit from the basic methods, such as collaborative filtering and content-based, and overcome their drawbacks [19; 20]. Recent achievements in deep learning and neural networks also provided new opportunities for building hybrid systems that handle large amounts of data on complex networks [21; 22]. Hybridization can be done in different ways. For instance, the system can provide recommendations based on various features generated by basic recommenders; this method is known as feature combination. Another approach is Switching: the recommender system switches among different techniques to provide recommendations [23].
### 1.2 Related Studies
In the past few years, some studies such as Khan et al. [7], Da'u and Salim [9], Zhang et al. [10], and Batmaz et al. [16] were conducted to review and survey deep learning-based recommender systems. However, to the best of our knowledge, not a single study has specifically focused on leveraging deep learning techniques on collaborative filtering recommenders, the most common technique to build recommender systems [24]. In the following lines, we introduce the mentioned studies by explaining their notable aspects.
Khan et al. [7] provided a comprehensive survey about deep learning-based rating prediction approaches. The authors reviewed different algorithms and architectures by concentrating on rating prediction systems; however, the main difference between that study and our research is that we provide an in-depth review focusing on collaborative filtering recommenders built with deep learning architectures. By emphasizing a systematic literature review (SLR), Da'u and Salim [9] provided a survey about building a recommender system based on deep learning techniques. Zhang et al. [10] mainly focused on the taxonomical classification of reviewed studies and their approaches. Batmaz et al. [16], to help future researchers interested in the topic, categorized reviewed publications based on four dimensions: deep learning models and architecture, possible solutions for the challenges, recommender application domains, and purposive properties.
To provide a thorough study, we also reviewed acclaimed surveys about deep learning architectures, which were not limited to the subject of recommendation systems. For instance, Shrestha and Mahmood [25] proposed a review of deep learning algorithms and architectures by focusing on mathematical concepts of enhancing training operations. Although the study is not written on recommender systems, the authors flawlessly explained details about the structure of different deep learning architectures.
As a contribution, unlike past researches about using deep learning architectures in recommender systems that covered different techniques generally, in this study, we specifically provide a comprehensive review of deep learning-based collaborative filtering recommender systems to guide and assist new researchers interested in the area.
The rest of the paper is organized as follows: Section 2 provides the preliminaries of recommender systems, including traditional approaches and fundamental challenges. Deep learning-based recommender systems are discussed in Section 3. Section 4 discusses the details of our results from some of the essential views applied to the topic. In Section 5, we present our conclusions and future work.
## 2 Background
This section describes the collaborative filtering technique in more detail, as it is the main focus of this study and the most commonly utilized method to build recommender systems. Moreover, in the following, we explain the fundamental challenges ahead of implementing an efficient, accurate recommender.
### 2.1 Collaborative Filtering Recommenders in Detail
In comparison with the content-based approaches, in collaborative filtering, the system can provide recommendations based on the similarity of users' activities (or items' characteristics) without the necessity of analyzing the items [26]. Fig. 2 shows a user-item rating matrix. This is a sample scenario of how a collaborative filtering recommender engine predicts a specific user's rates for an item. In this example, ratings are between one and five. We selected Alice as the target user. The system aims to predict Alice's possible ratings for items that she did not rate. Items can be imagined as movies that she did not watch. Then, the system recommends the items with the highest predicted rates to her.
The following paragraphs clarify the method step by step until providing the recommendation. The first step is calculating the similarity between Alice and the other three users. There are different metrics [27] for calculating similarity in recommender systems, such as:
#### 2.1.1 Jaccard similarity
In the formula below, _Eq. (1)_, the Jaccard similarity of users \(p\) and \(q\) is calculated as the number of items rated by both users \(p\) and \(q\) divided by the size of the union of the items rated by \(p\) and \(q\). The fraction's numerator can be defined as the total number of co-rated items between the users \(p\) and \(q\). The denominator can be defined as the total number of items rated by either of the two users \(p\) and \(q\) [28]:

\[\text{Jaccard}_{p,q}=\frac{\left|I_{p}\cap I_{q}\right|}{\left|I_{p}\cup I_{q}\right|} \tag{1}\]

where \(I_{p}\) and \(I_{q}\) denote the sets of items rated by users \(p\) and \(q\), respectively.
Figure 2: A view of the User-Item matrix.
_Table 1_ shows the result of calculating the Jaccard similarity between Alice and other users in the dataset.
The above results indicate that Alice has the most similarity with Jim. However, the Jaccard similarity's main drawback is that the metric ignores how much two users are similar and only counts the number of co-rated items, whether the ratings are identical or the opposite [29].
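For concreteness, the Jaccard step can be sketched in a few lines of Python. The item sets below are hypothetical, chosen only so that the overlap counts match Table 1 (the real sets come from the matrix in Fig. 2):

```python
# Jaccard similarity over sets of rated items (Eq. 1).
def jaccard(p: set, q: set) -> float:
    return len(p & q) / len(p | q) if p | q else 0.0

# Hypothetical rated-item sets consistent with the counts in Table 1.
rated = {
    "Alice": {1, 2, 3},
    "Bob":   {3, 4, 5},   # 1 co-rated item, union of 5
    "Jim":   {1, 2, 4},   # 2 co-rated items, union of 4
    "Kate":  {4, 5},      # no co-rated items
}
for user in ("Bob", "Jim", "Kate"):
    print(user, jaccard(rated["Alice"], rated[user]))  # 0.2, 0.5, 0.0
```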
#### 2.1.2 Cosine similarity
Cosine similarity measures similarity by calculating the cosine angle between the two rating vectors given by two targeted users. The smaller value of angle represents higher similarity and vice versa [30]. Cosine similarity is calculated as follows:
\[\text{Cos}_{p,q}=\frac{\vec{R}_{p}\cdot\vec{R}_{q}}{\left|\vec{R}_{p}\right|\left|\vec{R}_{q}\right|} \tag{2}\]
In the Cosine similarity formula, Eq. (2), \(\vec{R}_{p}\) and \(\vec{R}_{q}\) respectively represent the rating vectors of users \(p\) and \(q\). In the numerator of the fraction, "\(\cdot\)" indicates the dot product of the two vectors.
To employ the Cosine similarity for the mentioned example, there should be some values for the unrated items. The simplest way to complete this step is by adding zero to the empty cells. Fig. 3 shows the user-item matrix after adding zero values to the empty cells to calculate the Cosine similarity between Alice and other users.
Table 2 shows the result of calculating the Cosine similarity between Alice and other users in the dataset.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **Bob** & **Jim** & **Kate** \\ \hline
**Alice** & 1/5 & 2/4 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Result of the Jaccard similarity between Alice and other users.
Figure 3: User-Item matrix after adding zero values.
In contrast to the Jaccard similarity, the results of the Cosine similarity indicate that Alice decides more similarly to Bob than to Jim. By looking at the rating scores, it can be concluded that the Cosine similarity results are more realistic. However, treating missing ratings the same as negative rates is a disadvantage of the Cosine similarity. In the example, we set all the empty cells to zero; in other words, we assigned an uncertain rate for unrated items, which can be utterly wrong. To clarify the problem, if, in a movie recommender, the system sets zero for unrated movies, the user may give a movie a high rating after watching it.
A solution to overcome this problem is to use the Centered Cosine. The Centered Cosine's main idea is to normalize ratings by subtracting each rate from the average of those ratings for the target user. Based on the explained situation, the concept in Centered Cosine similarity can be considered similar to the Pearson Correlation Coefficient.
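A minimal sketch of the Centered Cosine idea follows (illustrative vectors only): after subtracting each user's mean over the items they actually rated, missing entries stay at zero and no longer act as implicit negative votes.

```python
import numpy as np

def centered_cosine(u, v):
    # Subtract each user's mean over *rated* items; missing entries stay 0.
    def center(x):
        rated = x != 0
        out = np.zeros_like(x, dtype=float)
        out[rated] = x[rated] - x[rated].mean()
        return out
    cu, cv = center(u), center(v)
    denom = np.linalg.norm(cu) * np.linalg.norm(cv)
    return float(cu @ cv / denom) if denom else 0.0

u = np.array([5, 3, 0, 4])   # 0 marks an unrated item
v = np.array([4, 0, 2, 5])
print(centered_cosine(u, v))
```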
#### 2.1.3 Pearson correlation coefficient
Pearson Correlation Coefficient (PCC) [31] is one of the most widespread and popular similarity measures in recommenders [32]. _Eq. (3)_ shows the PCC formula.
\[\text{PCC}_{a,b}=\frac{\sum_{i\in I_{a,b}}\left(r_{a,i}-\overline{r}_{a}\right)\left(r_{b,i}-\overline{r}_{b}\right)}{\sqrt{\sum_{i\in I_{a,b}}\left(r_{a,i}-\overline{r}_{a}\right)^{2}}\sqrt{\sum_{i\in I_{a,b}}\left(r_{b,i}-\overline{r}_{b}\right)^{2}}} \tag{3}\]
In the PCC formula, _Eq. (3)_, \(I_{a,b}\) denotes the set of items co-rated by users \(a\) and \(b\), \(r_{a,i}\) indicates the rating score for item \(i\) from the target user \(a\), and \(r_{b,i}\) denotes the rating score for the same item from the user \(b\). \(\overline{r}_{a}\) and \(\overline{r}_{b}\) mean the average ratings of user \(a\) and user \(b\), based on all items rated by each user.
Fig. 4 shows the modified version of the user-item matrix after subtracting each rate from the average of the ratings in that row. The final results of _Eq. (3)_ are presented in _Table 3_.
\begin{table}
\begin{tabular}{l l l l} \hline & **Bob** & **Jim** & **Kate** \\ \hline
**Alice** & 0.93 & 0.75 & 0 \\ \hline \end{tabular}
\end{table}
Table 2: Result of the Cosine similarity between Alice and other users.
Figure 4: The modified User-Item matrix after subtracting the rates of each row from its average.
Using the results presented in _Table 3_, the difference between Alice and Jim is much clearer. So, Bob is the most similar user to Alice, and, as can be seen, PCC captures the intuition better.
Typically, the second step of collaborative filtering is selecting the Top-N most similar users to the target user. The concept is known as the k-nearest neighbor method, and the main idea is to categorize other users' similarity values based on a predefined threshold [33]. Neighbors whose score is greater than the threshold will be assigned to the Top-N group [34; 35]. The variable \(N\) can be set to different values. It has to be mentioned that all the members of this group must have rated the target item. In the above example, based on the PCC's result, it can be concluded that Bob is the most similar user to Alice in the dataset.
The final step is predicting the target user's rate for the target item. There are different formulas to do this step. However, to complete the example, we chose the average rating prediction model shown in _Eq. (4)_ to provide predictions.
\[r_{x,i}=\frac{1}{k}\sum_{y\in N}r_{y,i} \tag{4}\]
In the equation _Eq. (4)_, \(r_{x,i}\) is the predicted rate for item \(i\) from user \(x\), calculated as the average rating given to item \(i\) by the \(k\) users in the selected neighborhood. Also, the set \(N\) consists of users who rated item \(i\) and are similar to user \(x\). Obviously, based on the total number of users in the presented example, Bob is the only similar user to Alice after selecting the Top-N most similar users. So, the predicted scores for the items unrated by Alice will be similar to Bob's rates.
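Putting the three steps together, the sketch below implements the whole user-based pipeline: PCC similarity (Eq. (3)), Top-N neighbor selection with a similarity threshold, and the plain average prediction of Eq. (4). The ratings dictionary is illustrative and is not the matrix of Fig. 2.

```python
from math import sqrt

def pcc(a, b, ratings):
    # Pearson correlation over the items co-rated by users a and b (Eq. 3).
    common = set(ratings[a]) & set(ratings[b])
    if len(common) < 2:
        return 0.0
    ma = sum(ratings[a].values()) / len(ratings[a])
    mb = sum(ratings[b].values()) / len(ratings[b])
    num = sum((ratings[a][i] - ma) * (ratings[b][i] - mb) for i in common)
    da = sqrt(sum((ratings[a][i] - ma) ** 2 for i in common))
    db = sqrt(sum((ratings[b][i] - mb) ** 2 for i in common))
    return num / (da * db) if da and db else 0.0

def predict(x, item, ratings, k=2, threshold=0.0):
    # Top-N neighbors of x who rated the item, then the average of Eq. (4).
    peers = [y for y in ratings if y != x and item in ratings[y]]
    peers = [y for y in peers if pcc(x, y, ratings) > threshold]
    peers = sorted(peers, key=lambda y: pcc(x, y, ratings), reverse=True)[:k]
    return sum(ratings[y][item] for y in peers) / len(peers) if peers else None

ratings = {                      # illustrative user -> {item: rate} data
    "Alice": {"m1": 5, "m2": 4, "m3": 1},
    "Bob":   {"m1": 5, "m2": 5, "m4": 2},
    "Jim":   {"m1": 1, "m3": 5, "m4": 4},
}
print(predict("Alice", "m4", ratings))  # 2.0: only Bob correlates positively
```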
### 2.2 Fundamental Challenges
Recommender systems are faced with different challenges. The increasing population of online users and numerous items caused difficulties such as sparsity, cold start, and the Grey sheep problem [36; 37]. This section outlines the mentioned challenges in making an efficient and accurate recommender system.
#### 2.2.1 Sparsity
Among the mentioned techniques for building a recommender, the collaborative filtering method has a high dependency on the user's interaction with the system. However, in most datasets, there is a lack of sufficient data about users and items such as rates, comments, reviews, likes, dislikes, etc. Solving the sparsity problem was an appealing subject for many studies [38; 39]. While explicit trust relationships are used in many studies as a reliable approach to alleviating the data sparsity problem, some studies introduced propagation of trust and distrust values as a more practical solution to explore the unstated relationships between users. As a result of these activities, a collaborative filtering recommender has more data to calculate similarities and provide suggestions [40].
\begin{table}
\begin{tabular}{l l l l} \hline \hline & **Bob** & **Jim** & **Kate** \\ \hline
**Alice** & 0.97 & \(-0.57\) & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Result of the Pearson Correlation Coefficient similarity between Alice and other users.
#### 2.2.2 Cold start
Cold start happens when a new user or a new item recently joined a system. There is not enough information about the user's activities in the past or enough interactions and feedback about the new item in this situation. The cold start has an extensive negative effect in collaborative filtering-based recommender systems, since the recommender engine does not have enough information or feedback to calculate similarity [41]. Considering other origins of information such as social networks or contextual and demographic data is an efficient solution to overcome the cold start problem. In this regard, Linked Open Data and DBpedia are two valuable resources to gain more information about users or items [42].
#### 2.2.3 Grey sheep
The problem of Grey Sheep users is a severe challenge of collaborative filtering recommender systems. The "gray sheep" term refers to the users who are not similar to the majority of other users. This issue makes it difficult for recommender engines, especially the collaborative filtering ones, to calculate the similarity between users and provide accurate suggestions [43; 44]. Researchers also tried to propose modern solutions to overcome the Grey Sheep problem with the development of machine learning techniques. For instance, using clustering algorithms is an effective solution to identify Grey Sheeps. Another approach is extracting content-based features from the Grey Sheep user's profile to improve recommendations' accuracy [45].
## 3 Deep learning-based Recommender Systems
In recent years, studying the influence of deep learning in different areas has attained substantial interest. Likewise, in recommender systems, employing deep-learning techniques helped the experts enhance previous achievements and provide more accurate and precise results by prevailing over the fundamental challenges such as data sparsity, cold start, and grey sheep users [45; 46].
### 3.1 Main Deep Learning Architectures in Collaborative Filtering Recommender Systems
In contrast to the past studies about applying deep learning architectures in recommender systems that made a general overview of different deep learning approaches, in this section, we expressly present a comprehensive analysis of deep learning-based collaborative filtering recommender systems.
#### 3.1.1 Restricted Boltzmann machines
Restricted Boltzmann machine (RBM) is a special kind of Boltzmann machine. The RBM makes it possible to detect patterns in the input data by reconstructing them automatically. An RBM is a network built from two layers, named the visible and hidden layers, respectively. Each node in the first layer has a link with all the nodes in the hidden layer. The model is considered restricted because there is no connection between the nodes in the same layer [47]. _Fig. 5_ shows an illustration of the RBM architecture.
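As a minimal illustration of this two-layer structure, here is a toy sketch of one contrastive-divergence training step on a binary interaction vector (it is not the implementation of any of the systems reviewed below):

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3            # e.g., 6 items, 3 latent features
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, lr=0.1):
    """One CD-1 update on a binary visible vector (e.g., liked/not liked)."""
    global W, b_v, b_h
    p_h0 = sigmoid(v0 @ W + b_h)                  # visible -> hidden
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + b_v)                # hidden -> reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_v += lr * (v0 - p_v1)
    b_h += lr * (p_h0 - p_h1)

v = np.array([1, 0, 1, 1, 0, 0], dtype=float)     # one user's interactions
for _ in range(100):
    cd1_step(v)
```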
Louppe [48] used parallel computing techniques with shared memory, distributed computing, and method ensembles to build RBM-based collaborative filtering systems. The author's experimental results indicate that parallel computing can be an effective solution to improve the accuracy of the provided suggestions.
Georgiev and Nakov [49] introduced a joined user-based and item-based collaborative filtering in a unified framework based on RBM. Moreover, the researchers employed real data in the first layer of the RBM architecture instead of multinomial variables. The authors also investigated the probability of mixing the RBM-based method's knowledge and the actual information.
Liu et al. [50] proposed a hybrid model based on RBM architecture and a collaborative filtering approach. The authors used items' categories as the system's input to enhance the system performance and increase the result's accuracy.
Zheng et al. [51] introduced a collaborative filtering Neural Autoregressive Distribution Estimation model named CF-NADE, which provides recommendations using the RBM architecture. The authors showed that leveraging a deep learning network such as the RBM can enhance the traditional and basic collaborative filtering approaches.
Jia et al. [52] proposed a collaborative-based RBM recommender system for exhibition managements and participators in social events. The introduced recommendation framework mixes the data from various references and builds a relationship among the online knowledge and users who participated in the target event.
Du et al. [53] introduced an item-based RBM method for collaborative filtering and applied the deep multilayer RBM network structure to overcome the sparsity problem. The authors considered every item as a separated RBM while each machine has similar properties such as weights and biases. The parameters are learned layer by layer in the deep network. They also used the batch gradient descent algorithm with minibatch to boost the convergence speed.
Wu et al. [54], to enhance the recommendations, considered trust relationships in recommender systems. The authors utilized explicit trust values and user's ratings as input data of the machine and proposed a social recommendation technique based on RBM.
Figure 5: A Restricted Boltzmann Machine architecture.
#### 3.1.2 Autoencoders
An autoencoder is a neural network that takes an unlabeled set of inputs and reconstructs them as accurately as possible after encoding them. The system acts as a feature extraction engine and decides which data features are the most important. Autoencoders are generally shallow networks and consist of three layers: the input, hidden, and output layers. An RBM can be considered a two-layer autoencoder. The system has two general steps: encoding and decoding. Typically, the features used to encode the input into the hidden layer are also used for decoding to produce the results in the output layer. The process of forward propagation from the input layer toward the output layer, and backward propagation in the reverse path, is repeated continuously to achieve acceptable accuracy. _Fig. 6_ shows an illustration of the autoencoder architecture [55].
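A toy numpy sketch of the encode/decode cycle on a single rating vector follows (illustrative only; practical systems train over many users, and the masking of unrated entries shown here is one common convention):

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_hidden = 8, 3
W1 = rng.normal(0, 0.1, (n_items, n_hidden))      # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, n_items))      # decoder weights

def forward(r):
    h = np.tanh(r @ W1)            # encode: compress ratings to 3 features
    return h, h @ W2               # decode: reconstruct all ratings

r = np.array([5, 0, 3, 0, 1, 0, 4, 0], dtype=float)  # 0 = unrated
mask = r != 0                       # learn only from observed ratings

for _ in range(500):                # plain gradient descent on masked error
    h, out = forward(r)
    err = (out - r) * mask
    grad_W2 = np.outer(h, err)
    grad_h = err @ W2.T * (1 - h ** 2)   # tanh derivative
    grad_W1 = np.outer(r, grad_h)
    W1 -= 0.001 * grad_W1
    W2 -= 0.001 * grad_W2

print(forward(r)[1])   # reconstruction; entries at 0s act as predictions
```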
Ying et al. [62] proposed a mixed model of a pair-wise recommender system that considers implicit feedback and the collaborative filtering concept. The authors employed Stacked Denoising Autoencoders (SDAE) to select the characteristic features of the item descriptions. The system utilized a Bayesian framework to combine rates and other data about items.
Wei et al. [63] proposed a recommender system based on collaborative filtering and the SDAE architecture to overcome the cold-start problem. The inputs of the model are the items' textual properties and the users' choices and activities. An extended version of this research is proposed in [64].
Suzuki and Ozaki [65] employed users' ratings as inputs for an autoencoder architecture and computed the similarity between users in hidden layers. The decoded output of the system is a predicted rating used to provide recommendations.
Li and She [66] proposed a Bayesian generative multimedia recommender system based on a collaborative variational autoencoder. The system considers both ratings and content to explore the implicit connections between users and items.
Liang et al. [67] proposed a non-linear probabilistic collaborative filtering recommender for implicit feedback. Technically, the authors used variational autoencoders (VAEs) with a generative structure, a multinomial likelihood, and Bayesian inference for parameter estimation.
Li et al. [68] used different supplemental data, such as item information, product tags, and shopping records, to solve the data sparsity problem. The authors applied the autoencoder structure to every information source separately to achieve better performance.
### Other Deep Learning Architectures in Recommender Systems
To make the study more comprehensive and provide a better understanding of the subject, the following paragraphs briefly describe the other deep learning architectures used in recommender systems, without limiting the references to collaborative filtering approaches.
#### 3.2.1 Recurrent neural networks (RNN)
Recurrent Neural Networks (RNN) are a proper solution for problems related to changes in data patterns over time. The RNN has a feedback module that makes predictions about future input data possible. From a technical point of view, in a feed-forward neural network, signals flow in only one direction from input to output, one layer at a time. In an RNN, the output of a layer is added to the next input and fed back into the same layer, which is typically the only layer in the entire network. This sequential structure, in which the hidden state changes with the incoming information, opens RNNs up to various applications [69]. Fig. 7 shows an illustration of the RNN architecture.
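The recurrence described above can be sketched in a few lines of PyTorch: a single-layer RNN carries a hidden state across a sequence of interaction vectors and uses the final state to predict the next one. All dimensions here are illustrative.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)
head = nn.Linear(32, 16)  # maps the hidden state to a next-interaction prediction

x = torch.randn(4, 10, 16)        # batch of 4 sequences, 10 time steps each
out, h_n = rnn(x)                 # out holds the hidden state at every step
prediction = head(out[:, -1, :])  # use the last hidden state to predict the next input
```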
Although RNNs have been used in many fields, our review indicates that, compared to RBMs and autoencoders, fewer studies have employed RNNs to build collaborative filtering systems. Ko et al. [70] proposed a collaborative RNN recommender system that combines contextual information with latent factors of user preferences to provide more accurate recommendations.
#### 3.2.2 Convolutional neural networks (CNN)
Convolutional Neural Networks (CNN) are among the most dominant deep learning architectures in image processing and machine vision [71; 72]. From a technical view, a CNN is a kind of feed-forward neural network. However, in contrast to typical neural networks, the convolution operation is used instead of ordinary matrix multiplication: the system convolves the input data, usually an array of image pixels, and analyzes it region by region to detect edges and extract visual features. The depth of the network and the matrix dimensions vary with the architecture chosen by the designer [73]. Fig. 8 shows a sample of the CNN architecture.
Zhang et al. [74], to solve the sparsity problem in collaborative filtering recommenders, introduced Collaborative Knowledge Base Embedding (CKE). The system works based on a hybrid model of CNN and autoencoder architectures to identify the visual features of images. The authors also leveraged knowledge-based approaches that consider the content and textual information of the dataset to provide a more robust user-item interaction space.
Figure 8: A sample of the Convolutional Neural Network architecture.
Figure 7: A Recurrent Neural Network architecture.
Low et al. [75] proposed a CNN-based collaborative filtering recommender. The system uses the matrix factorization concept and creates connections between users and items.
Lee et al. [76] proposed a collaborative recommender engine that uses audio-visual features to calculate the similarity between videos. The system works based on the CNN architecture to extract visual cues and can be highly beneficial for video-sharing platforms.
He et al. [77] suggested utilizing a CNN to develop Neural Collaborative Filtering (NCF), naming their model ConvNCF. In the proposed structure, the outer product is used instead of the dot product to represent the relation between users and items, so that the system can capture high-order correlations between embedding dimensions.
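To illustrate the outer-product interaction, the sketch below builds a ConvNCF-style interaction map from user and item embeddings and feeds it to a small CNN. The embedding size and convolution stack are illustrative assumptions, not the exact configuration of [77].

```python
import torch
import torch.nn as nn

class ConvNCFSketch(nn.Module):
    def __init__(self, n_users, n_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # Small CNN over the dim x dim interaction map.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=2, stride=2), nn.ReLU(),
        )
        self.out = nn.Linear(8 * (dim // 4) ** 2, 1)

    def forward(self, users, items):
        u = self.user_emb(users)  # (batch, dim)
        v = self.item_emb(items)  # (batch, dim)
        # Outer product instead of dot product: (batch, 1, dim, dim).
        interaction = torch.einsum("bd,be->bde", u, v).unsqueeze(1)
        features = self.cnn(interaction).flatten(1)
        return self.out(features).squeeze(-1)  # predicted preference score

model = ConvNCFSketch(n_users=1000, n_items=2000)
scores = model(torch.tensor([0, 1]), torch.tensor([5, 9]))
```

Unlike a dot product, which only sums element-wise products, the 2D interaction map lets the convolution layers pick up correlations between different embedding dimensions.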
#### 3.2.3 Multilayer perceptron (MLP)
Multilayer Perceptron (MLP) is one of the basic architectures of neural networks. A simple MLP consists of three layers: input, hidden, and output. Each node in an MLP network is known as a perceptron. The basic structure of an MLP cannot be considered a deep neural network, but employing multiple hidden layers extends the architecture and increases its potential. The scheme of an MLP network is shown in _Fig. 9_. An MLP is one option to convert a linear recommendation technique into a non-linear system. MLP-based systems are usually used for supervised models. Being a feed-forward network trained with backpropagation, the system continuously modifies its weights and biases to improve the accuracy and achieve the expected result [78].
Divan and Alizadeh [79] proposed a hybrid recommendation system based on the MLP network and the collaborative filtering concept. The authors addressed the cold-start problem by leveraging the artificial neural network together with a content-based technique to utilize mutual information from users and items in the dataset. He et al. [80] used an MLP as the neural network architecture to build a collaborative filtering recommender system. The authors modeled the user-item interaction function and, by leveraging the advantages of deep learning, improved basic matrix factorization and collaborative filtering techniques.
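A minimal sketch of the MLP branch of neural collaborative filtering in the spirit of [80]: user and item embeddings are concatenated and passed through stacked hidden layers to predict an interaction probability. The layer sizes and toy labels are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MLPRecommender(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, users, items):
        x = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)  # interaction probability

model = MLPRecommender(n_users=1000, n_items=2000)
loss = nn.BCELoss()(model(torch.tensor([3]), torch.tensor([7])),
                    torch.tensor([1.0]))  # observed interaction as the label
```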
Figure 9: A schema of a Multilayer Perceptron architecture with multiple hidden layers.
#### 3.2.4 Deep belief networks (DBN)
Stacking RBMs together builds a more robust model, known as a Deep Belief Network (DBN), which can solve problems more efficiently. In a DBN, the hidden layer of each RBM is the visible layer of the next RBM, and this relation continues in the same way for the following RBMs [81]. The last layer in a DBN can be used for clustering or classification. The conceptual architecture of a DBN network is shown in _Fig. 10_.
A DBN generally combines both supervised and unsupervised learning steps. Compared with other deep learning architectures, DBNs need less labeled input data, which makes them a successful solution for implementing real-world applications. Moreover, as a DBN benefits from a deep network with multiple hidden layers, it can provide more accurate results, especially compared to shallow nets [82]. Zhao et al. [83], to tackle the sparsity challenge in collaborative filtering recommenders, introduced a hybrid system. The authors employed DBNs to discover user characteristics, and a k-nearest-neighbor technique to select suitable users and execute predictions.
#### 3.2.5 Attentional networks
The attention concept in computer science is inspired by the human ability to concentrate on a specific subset of characteristics to perceive the value of the target segments. With recent developments in deep learning, attentional networks have become one of the popular topics in image processing, speech recognition, natural language processing, etc., and also recommender systems [84; 85].
Bahdanau et al. [86] proposed a machine translation model based on the sequence-to-sequence encoder-decoder approach. _Fig. 11_ shows a graphical view of the proposed model.
Figure 11: Graphical view of proposed attention-aware model for translation [86]
Figure 10: A basic structure of Deep belief networks.
In _Fig. 11_, the system's input is a sentence and the output is its translation. To increase the result's similarity to a human translation, the system tries to emphasize specific parts of the input. The inputs \(x_{1}\) to \(x_{T}\) and the activations \(h_{1}\) to \(h_{T}\) inside the rectangles correspond to the recurrence steps. The weights \(a_{t,1}\) to \(a_{t,T}\) show how much attention is paid to each input when producing the output at step \(t\); their weighted sum feeds the decoder state \(s_{t}\), shown in the boxes at the top of the figure, which generates the output word \(y_{t}\).
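The attention weights discussed above can be computed with a small additive scoring function in the style of [86]: each encoder state is scored against the current decoder state, and the context vector is their weighted sum. The sketch below uses illustrative dimensions.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, enc_dim=32, dec_dim=32, attn_dim=16):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.W_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, dec_state, enc_states):
        # enc_states: (batch, T, enc_dim); dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(
            self.W_enc(enc_states) + self.W_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)                           # (batch, T) alignment scores
        weights = torch.softmax(scores, dim=-1)  # a_{t,1} ... a_{t,T}
        context = (weights.unsqueeze(-1) * enc_states).sum(dim=1)
        return context, weights

attn = AdditiveAttention()
context, weights = attn(torch.randn(2, 32), torch.randn(2, 7, 32))
```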
Tay et al. [87] presented a memory-based attentional network architecture for collaborative metric learning. The authors named their model LRML (Latent Relational Metric Learning). In the proposed system, user-item interactions were used as the information resource for the attention module. Jhamb et al. [88] implemented a contextual recommender system based on an autoencoder neural network architecture and context-driven attention. The authors used the attentional network to encode the contextual features into the hidden representation of the user's characteristics.
#### 3.2.6 Generative adversarial networks (GAN)
A Generative Adversarial Network (GAN) typically consists of two main neural networks: a generator and a discriminator. The generator produces new samples of information, and the discriminator validates the generated data for its authenticity. The GAN architecture has become very popular in deep learning-based image processing systems [73; 89]. Tong et al. [90] proposed a Collaborative Generative Adversarial Network (CGAN) to build a recommender system. The authors used an autoencoder as a generator module that extracts features from user activities on items. Moreover, adversarial training was employed to enhance system efficiency and productivity.
_Fig. 12_ shows a GAN that produces fake images. The network has two inputs: a dataset of real images (top input) and a D-dimensional noise vector (bottom input). The generator component produces fake images. In the next step, samples of real and fake images are validated by the discriminator.
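The generator-discriminator interplay can be summarized as alternating updates, as in the minimal sketch below, which uses fully connected networks on flattened toy "images". This is an illustrative minimum, not the configuration of [90].

```python
import torch
import torch.nn as nn

D_NOISE, D_IMG = 16, 64
G = nn.Sequential(nn.Linear(D_NOISE, 64), nn.ReLU(), nn.Linear(64, D_IMG))
D = nn.Sequential(nn.Linear(D_IMG, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, D_IMG)  # stand-in for a batch of real (flattened) images
for step in range(100):
    # Discriminator step: push real images toward 1 and fakes toward 0.
    fake = G(torch.randn(32, D_NOISE)).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: try to fool the discriminator into predicting 1 for fakes.
    fake = G(torch.randn(32, D_NOISE))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```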
## 4 Discussion
In this section, we discuss the results of our work and analyze the details from some of the most important perspectives that apply to the topic. From _Table 4_, it is observed that autoencoders and Restricted Boltzmann Machines (RBM) are the most popular deep learning architectures for building collaborative filtering-based recommender systems, while other approaches such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Multilayer Perceptron (MLP), attentional networks, and Generative Adversarial Networks (GAN) have not received as much attention in the reviewed studies. Although there could be different reasons for the popularity of autoencoders and RBMs, based on the data provided in _Table 5_, one of the critical commonalities is their training type, which is unsupervised for both. Moreover, their potential in common applications such
Figure 12: A fundamental structure of the Generative adversarial network.
as feature extraction and dimensionality reduction is highly compatible with the structure of collaborative filtering recommender systems.
_Table 5_ provides a condensed review and comparison of the different deep learning architectures. It should be mentioned that some values, such as the examples of common applications presented in the table, could also be implemented in hybrid applications. Likewise, while RBMs are considered generative models and "Unsupervised" is chosen as their training type, they can have components of a discriminative model and carry out their training phase in a supervised manner [25].
| Deep learning Architecture | Publication | Dataset |
| --- | --- | --- |
| Restricted Boltzmann Machines (RBM) | Louppe [48] | Netflix |
| | Georgiev and Nakov [49] | MovieLens |
| | Liu et al. [50] | MovieLens |
| | Zheng et al. [51] | MovieLens, Netflix |
| | Jia et al. [52] | Custom, Renren, Meetup |
| | Du et al. [53] | MovieLens |
| | Wu et al. [54] | MovieLens |
| Autoencoders | Ouyang et al. [56] | MovieLens |
| | Li et al. [57] | MovieLens, Book-Crossing, Advertising |
| | Sedhain et al. [58] | MovieLens, Netflix |
| | Strub and Mary [59] | MovieLens, Jester |
| | Wang et al. [60] | Netflix, CiteULike |
| | Wang et al. [61] | Netflix, CiteULike |
| | Ying et al. [62] | CiteULike |
| | Wei et al. [63; 64] | Netflix |
| | Suzuki and Ozaki [65] | MovieLens |
| | Li and She [66] | CiteULike |
| | Liang et al. [67] | MovieLens, Netflix, Million Song |
| | Li et al. [68] | MovieLens, OfflinePay |
| Recurrent Neural Networks (RNN) | Ko et al. [70] | Brightkite, LastFM |
| Convolutional Neural Networks (CNN) | Zhang et al. [74] | MovieLens, IntentBooks |
| | Low et al. [75] | MovieLens, Pinterest |
| | Lee et al. [76] | MovieLens, YouTube |
| | He et al. [77] | Yelp, Gowalla |
| Multilayer Perceptron (MLP) | Divan and Alizadeh [79] | MovieLens, Netflix |
| | He et al. [80] | MovieLens, Pinterest |
| Deep Belief Networks (DBN) | Zhao et al. [83] | MovieLens |
| Attentional networks | Tay et al. [87] | MovieLens, Netflix, IMDb, LastFM, Books, Delicious, Meetup, Twitter |
| | Jhamb et al. [88] | MovieLens, Meetup |
| Generative Adversarial Networks (GAN) | Tong et al. [90] | MovieLens, Netflix |

Table 4: Literature on using deep learning architectures in collaborative filtering recommender systems.
We also classified the papers based on their datasets. _Table 6_ shows the datasets that researchers frequently used for building deep learning collaborative filtering recommender systems. The values in this table demonstrate that MovieLens [91] is the most common dataset for building collaborative filtering-based recommender systems that use deep learning architectures. MovieLens is a result of the GroupLens research project at the University of Minnesota. It is a movie recommendation website where users can rate movies from 1, the worst score, to 5, maximum satisfaction. Two versions of the dataset were used in the reviewed studies, MovieLens 100K and MovieLens 1M. Respectively, MovieLens 100K includes 100,000 ratings of 1,682 movies from 943 users, and MovieLens 1M includes 1,000,000 ratings of 3,952 movies from 6,040 users.
| Deep learning Architecture | Training Type | Training Algorithm | Common Applications |
| --- | --- | --- | --- |
| Restricted Boltzmann Machines (RBM) | Unsupervised | Gradient Descent | Feature Extraction |
| Autoencoders | Unsupervised | Backpropagation | Encoding; Reducing Dimensions |
| Recurrent Neural Networks (RNN) | Supervised | Gradient Descent / Backpropagation | Natural Language Processing; Translating Languages |
| Convolutional Neural Networks (CNN) | Supervised | Gradient Descent / Backpropagation | Image Processing |
| Multilayer Perceptron (MLP) | Supervised | Gradient Descent / Backpropagation | Stochastic Solution; Fitness Approximation |
| Deep Belief Networks (DBN) | Supervised | Gradient Descent | Classification; Anomaly Detection |
| Attentional Networks | Supervised | Gradient Descent / Backpropagation | Image Processing; Speech Recognition; Natural Language Processing |
| Generative Adversarial Networks (GAN) | Unsupervised | Backpropagation | Generating Data; Reconstructing Data and Images |

Table 5: Deep learning architectures comparison.
_Fig. 13_ shows a pie chart that presents another view of the values given in _Table 6_. To make a better presentation, datasets with a frequency of one are aggregated into one group, so the chart is divided into seven parts. Again, it is clear that MovieLens is the most common dataset for building deep learning collaborative filtering recommenders. Another categorization in this work was done based on the publishing date of the reviewed papers; _Fig. 14_ shows the distribution of these studies by publication year.
As shown in _Fig. 14_, the year 2016 has the largest number of published papers about collaborative filtering-based recommender systems built on deep learning architectures, with 2018 and 2015 in second and third place, respectively. From these values, it can be concluded that there is a
| Title | Frequency |
| --- | --- |
| MovieLens | 22 |
| Netflix | 10 |
| CiteULike | 4 |
| Meetup | 3 |
| Pinterest | 2 |
| LastFM | 2 |
| YouTube | 1 |
| Twitter | 1 |
| IMDb | 1 |
| Jester | 1 |
| Book-Crossing | 1 |
| Brightkite | 1 |
| Books | 1 |
| IntentBooks | 1 |
| Gowalla | 1 |
| Yelp | 1 |
| Delicious | 1 |
| OfflinePay | 1 |
| Million Song | 1 |
| Renren | 1 |
| Advertising | 1 |
| Customized datasets | 1 |

Table 6: Frequency of used datasets in the reviewed papers.
direct relationship between studies on recommender systems and the recent achievements in providing efficient deep learning architectures.
### Why Deep Learning Architectures for Recommendation?
With the tremendous growth in the volume of online data, handling users' requests with traditional information retrieval and decision support systems has become highly challenging. However, as deep learning techniques are efficient at combining multiple sources of information and exploring hidden features and patterns in them, big data has increased the popularity of these systems in recent years [16].
Another noticeable strength of employing deep neural networks, especially in collaborative filtering-based recommender systems, is the capability of transforming sparse, high-dimensional user-item matrices into smaller, denser representations. For instance, Unger et al. [92] employed autoencoders to decrease the dimensionality of environmental features and overcome the sparsity in context-aware recommendation systems.
Extracting visual features with convolutional neural networks as complementary data to users' histories and ratings is an effective technique for solving the cold-start problem [93]. Shin et al. [94] proposed a blog recommender system; to deal with the cold-start problem, the authors combined features derived from textual data and pictures using a CNN.
As mentioned in section 2.2.3, identifying grey sheep users is another challenge in building profitable recommender systems, one that can benefit from the advantages of neural networks for clustering data based on different features. For instance, Rabba [95] proposed a system that detects grey sheep users based on unsupervised learning clustering techniques.
## 5 Conclusion
Nowadays, practical information filtering and personalized recommendations have become increasingly critical, notably in online industries such as e-commerce and customer services. With increasing interest in applying deep learning in different fields, enhancing the performance of recommender systems by utilizing these kinds of approaches has also become increasingly pervasive.
In this study, we provided an extensive review of utilizing and leveraging deep learning architectures in collaborative filtering recommender systems. In contrast to prior works on the subject that reviewed different techniques in general, we specifically provided a comprehensive review of deep learning-based collaborative filtering recommender systems. We chose collaborative filtering as the most common technique for building recommender systems and tried to clarify its relationship with deep learning architectures. Moreover, another analysis was done based on the datasets used in the reviewed research. According to the results, the MovieLens dataset is the most popular dataset for building collaborative filtering recommender systems based on deep learning architectures.
According to the results, autoencoders and Restricted Boltzmann Machines (RBM) are the most popular architectures, while others such as Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN), Multilayer Perceptron (MLP), Deep Belief Networks (DBN), attentional networks, and Generative Adversarial Networks (GAN) have received less attention from researchers. In future work, we are interested in studying the results of applying deep learning in various fields of recommenders and comparing the degree of influence from different perspectives, such as revenue, engagement, etc. |
2303.02370 | Self-Supervised Learning for Place Representation Generalization across
Appearance Changes | Visual place recognition is a key to unlocking spatial navigation for
animals, humans and robots. While state-of-the-art approaches are trained in a
supervised manner and therefore hardly capture the information needed for
generalizing to unusual conditions, we argue that self-supervised learning may
help abstracting the place representation so that it can be foreseen,
irrespective of the conditions. More precisely, in this paper, we investigate
learning features that are robust to appearance modifications while sensitive
to geometric transformations in a self-supervised manner. This dual-purpose
training is made possible by combining the two self-supervision main paradigms,
\textit{i.e.} contrastive and predictive learning. Our results on standard
benchmarks reveal that jointly learning such appearance-robust and
geometry-sensitive image descriptors leads to competitive visual place
recognition results across adverse seasonal and illumination conditions,
without requiring any human-annotated labels. | Mohamed Adel Musallam, Vincent Gaudillière, Djamila Aouada | 2023-03-04T10:14:47Z | http://arxiv.org/abs/2303.02370v3 | # Self-Supervised Learning for Place Representation Generalization across Appearance Changes
###### Abstract
Visual place recognition is a key to unlocking spatial navigation for animals, humans and robots. While state-of-the-art approaches are trained in a supervised manner and therefore hardly capture the information needed for generalizing to unusual conditions, we argue that self-supervised learning may help abstracting the place representation so that it can be foreseen, irrespective of the conditions. More precisely, in this paper, we investigate learning features that are robust to appearance modifications while sensitive to geometric transformations in a self-supervised manner. This dual-purpose training is made possible by combining the two self-supervision main paradigms, i.e. contrastive and predictive learning. Our results on standard benchmarks reveal that jointly learning such appearance-robust and geometry-sensitive image descriptors leads to competitive visual place recognition results across adverse seasonal and illumination conditions, without requiring any human-annotated labels1.
Footnote 1: This work was funded by the Luxembourg National Research Fund (FNR), under the project reference BRIDGES2020/IS/14755859/MEET-A/Aouada, and by LMO ([https://www.lmo.space](https://www.lmo.space)).
## 1 Introduction
Visual Place Recognition (VPR) is central for localizing - _i.e._ estimating the pose of - a camera in a scene [35, 19], with applications ranging from autonomous driving to augmented reality. In practice, VPR is most often framed as an image retrieval task in which the goal is, given a _query_ image, to retrieve one or several images depicting the same place - likely under different conditions - from a _reference_ database [25]. Changes in conditions can correspond to a variety of factors such as changes in viewpoint, presence of occluding and/or dynamic objects and changes in seasonal, illumination or weather conditions. Therefore, recognizing places under different conditions is a challenging task, yet essential for enabling the deployment of more reliable vision-based applications in the real world.
Research in neuroscience has shown that biological intelligence in place recognition relies on a strong ability to create abstract representations of observed places so that they can be foreseen and recognized under different circumstances [51]. At the root of such a mechanism are mental representations of places called _cognitive maps_ [25]. In particular, a key role of cognitive maps is to facilitate generalization of sparse knowledge (_e.g._, a place seen only during day-time) to novel experiences (_e.g._, night-time) [51]. Therefore, this generalization capability requires achieving a sufficient level of abstraction in the place representation so that it does not need to be re-learned from scratch when non-critical visual information changes [51].
Figure 1: **ACM-Net Training Strategy:** Three views are generated from an input image. The Appearance Module in green (top) maps the original and appearance-augmented views into close representation vectors \(\{\mathbf{z}_{0},\mathbf{z}_{1}\}\). The Geometry Module in blue (bottom) predicts the transformation \(\phi\) applied between the original and third views.
State-of-the-art VPR methods have focused on achieving invariance to both environmental conditions and viewpoint changes in image representations, the latter being for recognizing places observed under unprecedented angles [11, 24]. However, we argue that such viewpoint invariance may be detrimental in the process of distinguishing between different places. Moreover, recent works have shown that favoring a more general equivariance in image representations may be more beneficial than seeking _only_ invariance [50, 7, 30].
Unlike supervised learning techniques that eventually end up learning shortcuts from a finite set of labelled data [14] and therefore hardly generalize to unseen conditions, Self-Supervised Learning (SSL) strategies seem closer to the human way of learning [23]. In practice, they are designed to obtain image representations that are sensitive and/or robust to given image transformations without requiring any type of manual annotation. While only a few works have investigated SSL for VPR [12, 42], we herein propose to combine the two main SSL paradigms, _i.e._, Contrastive Learning (CL) [6] and Predictive Learning (PL) [20], to obtain image representations that are both robust to appearance changes and sensitive to geometric transformations. By doing that, our goal is to learn features suitable for visual place recognition under appearance changes.
In this paper, we propose ACM-Net, an _Artificial Cognitive Mapping Network_ for learning abstract place representations generalizable to unseen conditions. More precisely, we leverage self-supervised learning for addressing the lack of knowledge about testing conditions when training the model on reference images with low appearance variability. The place representation abstraction is achieved by contrastive learning: feeding the model with appearance augmentations and teaching it to bring representations of the same place close to each other. To ensure discriminative representations between different places and regularize the CL-based training, we apply geometric transformations to reference images and use a predictive learning framework to classify the representation based on the applied transformation.
**Contributions.** Our contributions are two-fold:
(1) A novel model for Visual Place Recognition under extreme condition changes, ACM-Net, that leverages both contrastive and predictive self-supervised learning approaches.
(2) An evaluation confirming the competitiveness of ACM-Net compared to state-of-the-art approaches on standard benchmarks featuring different conditions (day/night, weather, seasons), among which the very challenging Alderley Dataset [27].
**Paper organization.** The rest of the paper is organized as follows. Relevant work on SSL and VPR is reviewed in Section 2. ACM-Net is presented in Section 3, while experimental evaluation demonstrating the validity of our approach is reported in Section 4. Section 5 concludes the paper and presents future works.
## 2 Related Work
### Self-Supervised Learning
Self-supervised methods aim at learning visual features from large-scale unlabeled images. They are of high interest when experiencing a wide variety of real scenarios and environmental conditions such as in autonomous driving. To learn visual features from unlabeled data, a pretext task is often designed for the network to solve so that it is trained by optimizing an objective function related to the task [20]. The objective function can be applied on network predictions (predictive learning) or directly in the representation space to constrain its topology (contrastive learning). SSL is therefore a way to provide image representations with some desired properties such as sensitivity and robustness to given transformations.
**Predictive Learning.** PL allows for indirectly incorporating inductive biases into image representations based on some subsequent network output [20]. Related pretext tasks range from image colorization [55] to jigsaw puzzle solving [31] and include rotation prediction [15]. In the latter, Gidaris _et al._ propose a pretext task consisting in predicting the angle of a 2D rotation applied to an image [15]. Built on the intuition that a network cannot recognize the rotation that was applied to the image if it is not aware of the concept of the depicted object, the learned features are relevant for a downstream image classification task.
**Contrastive Learning.** CL acts directly on image representations by applying a contrastive loss that takes into accounts cross-relations between batch elements. A general framework for contrastive learning of visual representations, named SimCLR [6], has recently been introduced. This simple framework requires neither specialized architectures [2] nor memory banks [54, 28]. The method, indeed, consists first in sampling two different data augmentations from the same family of augmentations. Then each augmentation is applied to an original image to obtain two correlated views. A base encoder and a projection head are then trained using a contrastive loss that we also leverage in one branch of ACM-Net, to maximize agreement between representations of these two views and minimize agreement with views originating from different images. Since training convergence of CL models may be difficult to achieve, and thus to regularize the training, ScatSimCLR [21] additionally regresses the augmentation parameters for each view. In ACM-Net, the CL training is regularized by adding a separate PL branch.
Combining Predictive and Contrastive Learning.CL aims at inducing invariance to some content-preserving transformations while being distinctive to such content changes. On the other side, PL is mostly used to incorporate sensitivity, and ideally equivariance, to given transformations into representations. While some authors have pointed out the richer information contained in more equivariant representations in comparison with more invariant ones [50, 7], some others have demonstrated that encouraging the network to be invariant to certain transformations while equivariant to other transformations is more efficient than seeking only one of the two properties [33, 49]. For instance, Winter _et al_. [53] have proposed an AutoEncoder-based framework to learn representations that are both robust and sensitive to rotations. Specifically, an encoder maps a rotated image to a more invariant latent representation from which the decoder infers the original image without rotation. In parallel, a second branch seeks equivariance by predicting the rotation angle. Similarly, Feng _et al_. [10] propose to learn features robust to the rotation of the input picture by separating the features into two parts: one part serving for rotation prediction (so-called equivariant features), and one part on which a contrastive loss is applied to penalize discrepancies originating from different rotations (invariant features). Inspired by these methods, our proposed ACM-Net seeks invariance to appearance augmentations through CL and sensitivity to image rotations through PL to be relevant for VPR downstream task.
### Self-Supervised Learning for Visual Place Recognition
As mentioned in Section 2.1, SSL seems particularly suitable for VPR due to its natural way to circumvent the lack of representativity in training data inherent to unpredictable test-time conditions and scenarios. Despite this, only a few methods have been developed to this day. For instance, Tang _et al_. [42] have proposed to disentangle appearance-related and place-related features using a generative adversarial network with two discriminators. However, this type of method may suffer from unstable training. SeqMatchNet [12] is a CL-based method that leverages sequences of video frames in the contrastive loss to robustify image representations for VPR. This work argues that such sequential information is available in most practical cases, and extending our work to image sequences may be considered in future work.
From a larger perspective, Mithun _et al_. [29] use sets of corresponding images (_i.e._ depicting the same place under different conditions) as an additional form of supervision to improve image representations for VPR. On a closely related topic, Thoma _et al_. [43] propose to relax the hard constraints on geo-tags used for weakly-supervised training of image representations. By contrast with both previous works, we don't use any labels and generate in a self-supervised manner pairs of corresponding images. Venator _et al_. [46] learn appearance-invariant local descriptors through SSL to match query and retrieved images. This can be considered as a post-processing step for our method.
## 3 Proposed ACM-Net
Our main goal is to allow the model to learn features that are robust to extreme appearance changes while meaningful for the VPR task. We are, thus, interested in abstracting the image representation enough so that it is sensitive to the geometric information characterizing the place depicted in the picture but agnostic on the environmental conditions under which the place is observed. To achieve that, we incorporate sensitivity and robustness inductive biases into image representations through self-supervised learning strategies.
### Problem Formalization
Following the traditional approach [25], we frame the VPR problem as an image retrieval task, where, given a query image \(\mathbf{q}\) depicting a place \(\mathscr{P}_{\mathbf{q}}\), a representation _a.k.a._ descriptor \(\mathbf{z}_{\mathbf{q}}\) of that image is computed. It is then compared to the descriptors \(\{\mathbf{z}_{i}\}_{i=1..N_{R}}\) of reference images \(\{\mathbf{x}_{i}\}_{i=1..N_{R}}\), where \(N_{R}\) is the size of the reference database. The comparison is done using a given similarity metric (_e.g._, cosine similarity). This inference stage is illustrated in Figure 2.
During the training, the model only has access to reference images that we assume unlabelled. Moreover, the environmental conditions under which the query image is acquired are not necessarily similar to the ones featured in the reference database, making the problem very challenging, even sometimes for human eyes.
### Preliminaries: Sensitivity & Robustness
Our method aims at extracting image features that are robust to appearance and sensitive to geometry at the same time. In mathematical terms, these correspond to the notions of invariance and equivariance, respectively.
From a formal perspective, given \(\mathfrak{G}\) a generic group of transformations and \(\mathfrak{g}\) an element of \(\mathfrak{G}\), we denote by \(\phi^{(\mathrm{I})}_{\mathfrak{g}}\) and \(\phi^{(\mathrm{O})}_{\mathfrak{g}}\), respectively, the actions of \(\mathfrak{g}\) into the input and output spaces of a function \(\mathcal{F}:\mathbb{I}\rightarrow\mathbb{O}\). Therefore, the following definitions hold:
**Definition 1**: \(\mathcal{F}\) _is invariant to \(\mathfrak{G}\) if and only if_
\[\forall\mathfrak{g}\in\mathfrak{G},\forall\mathbf{x}\in\mathbb{I},\quad \mathcal{F}(\phi^{(\mathrm{I})}_{\mathfrak{g}}\mathbf{x})=\mathcal{F}(\mathbf{ x}). \tag{1}\]
**Definition 2**: \(\mathcal{F}\) _is equivariant to \(\mathfrak{G}\) if and only if_
\[\forall\mathfrak{g}\in\mathfrak{G},\forall\mathbf{x}\in\mathbb{I},\quad \mathcal{F}(\phi^{(\mathrm{I})}_{\mathfrak{g}}\mathbf{x})=\phi^{(\mathrm{O})}_ {\mathfrak{g}}\mathcal{F}(\mathbf{x}). \tag{2}\]
Note that invariance is a special case of equivariance when \(\phi^{(\mathrm{O})}_{\mathfrak{g}}=\mathcal{I}\), the identity mapping, \(\forall\mathfrak{g}\in\mathfrak{G}\).
In practice, considering an encoder model \(\mathcal{E}\) for extracting features from an image \(\mathbf{x}\), we seek robustness to any appearance transformation \(\mathcal{T}_{A}\):
\[\forall\mathcal{T}_{A},\forall i\in[1;N_{R}],\quad\mathcal{E}(\mathcal{T}_{A} \mathbf{x}_{i})\approx\mathcal{E}(\mathbf{x}_{i}), \tag{3}\]
and, at the same time, sensitivity to a certain group of geometric transformations \(\mathfrak{G}_{G}\):
\[\forall\mathcal{T}_{G}\in\mathfrak{G}_{G},\forall i\in[1;N_{R}],\quad\mathcal{ E}(\mathcal{T}_{G}\mathbf{x}_{i})\approx\mathcal{T}_{G}^{\prime}\mathcal{E}( \mathbf{x}_{i}), \tag{4}\]
where \(\mathcal{T}_{G}^{\prime}\approx\mathcal{T}_{G}\). The different possible groups of transformations are investigated in Section 4.
### Model Architecture
Our pipeline exploits both CL for encouraging invariance to appearance changes and PL for encouraging equivariance to geometric image augmentations. This hybrid approach is consistent with the _Equivariant Contrastive Learning_ framework proposed in [7]. The overall architecture of the proposed ACM-Net is presented in Figure 2.
At training time, ACM-Net is composed of two branches sharing the weights of an encoder model \(\mathcal{E}\). The first branch, denoted _Appearance Module_, takes as inputs the original image \(\mathbf{x}_{i}\) and an augmented version with modified appearance \(\mathcal{T}_{A}\mathbf{x}_{i}\), then applies a contrastive learning loss in the representation space to bring the two descriptors closer. The second branch, denoted _Geometry Module_, uses rotated versions of the original image, \(\mathrm{R}(n^{\circ})\mathbf{x}_{i}\), and predicts the angle of the rotation \(n\).
**Appearance Module.** The first branch, divided into two sub-branches (see Figure 1), is similar to SimCLR [6] with shared encoder \(\mathcal{E}\) and MultiLayer Perceptron (MLP) \(\mathcal{P}_{A}\) mapping between the image domain and the latent representation space where the contrastive loss is applied. Given original images \(\mathbf{x}_{i}\) along with their augmented versions \(\mathcal{T}_{A}\mathbf{x}_{i}\), the weights of the two networks are learned using a contrastive loss. This loss, formalized in Section 3.4, ensures that the descriptor of each original image, \(\mathcal{P}_{A}(\mathcal{E}(\mathbf{x}_{i}))\), is similar to the descriptor of its corresponding augmented view, \(\mathcal{P}_{A}(\mathcal{E}(\mathcal{T}_{A}\mathbf{x}_{i}))\), while distant from the other descriptors. The intuition behind this module is to force the encoder model \(\mathcal{E}\) to learn features agnostic to the conditions (e.g. illumination, weather, season) under which the place was initially observed.
**Geometry Module.** The second branch is made of the same shared encoder \(\mathcal{E}\) and a prediction MLP \(\mathcal{P}_{G}\) to classify different rotated versions of the original image \(\mathrm{R}(n^{\circ})\mathbf{x}\) according to the rotation angle \(n\). Leveraging a classical cross-entropy loss, the goal of this module is to force the encoder model \(\mathcal{E}\) to learn geometry-aware features that are relevant for place recognition.
Combined together, the use of the two modules aims at disentangling appearance and geometry of input images in their representation to allow for visual place recognition under appearance changes.
The architecture used at test time to compute image descriptors is the encoder \(\mathcal{E}\) followed by projector network \(\mathcal{P}_{A}\) (see Figure 2, right part).
Figure 2: Overview of ACM-Net. **Training Stage:** from an original image \(\mathbf{x}_{1,0}\), augmented versions with a modified appearance \(\mathbf{x}_{1,1}\) and different orientations \((\mathbf{x}_{1,0^{\circ}},\mathbf{x}_{1,90^{\circ}},\mathbf{x}_{1,180^{\circ}},\mathbf{x}_{1,270^{\circ}})\) are generated. Representations of the first two images are brought closer thanks to a contrastive learning framework to achieve appearance robustness. In parallel, original and rotated images are passed through a classification network sharing the same encoder to predict the applied transformation and achieve geometric sensitivity. Note that our method does not rely on any manual annotation. **Inference Stage:** The representations from query and reference images are compared based on similarity measure then the closest \(k\) reference images constitute the image retrieval output.
### Model Loss
_Note: For the sake of clarity, we herein introduce more specific notations for denoting images and their augmented/rotated versions._
To guide our model towards both its invariance and equivariance objectives, we use a combination of contrastive and predictive losses.
Given a random batch of \(N\) reference images \(\mathcal{B}=\{\mathbf{x}_{i,0}\}_{i=1..N}\) corresponding to \(N\) different places, we apply one random appearance transformation to each image. By so doing, we create \(N\) additional images \(\{\mathbf{x}_{i,1}\}_{i=1..N}\). These \(2N\) images constitute the contrastive batch \(\mathcal{B}_{C}=\{\mathbf{x}_{i,j}\}_{i=1..N,j\in\{0,1\}}\) that is fed into the _Appearance_ Module. Furthermore, we also apply rotations of \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\) and \(270^{\circ}\) to each original image. As a result, we create the predictive batch of \(4N\) images \(\mathcal{B}_{P}=\{\mathbf{x}_{i,j^{\circ}}\}_{i=1..N,j\in\Theta_{4}}\), where \(\Theta_{4}=\{0,90,180,270\}\). \(\mathcal{B}_{P}\) is fed into the Geometry Module.
Contrastive loss.The contrastive batch \(\mathcal{B}_{C}\) contains \(N\)_positive_ pairs of images \((\mathbf{x}_{i,0},\mathbf{x}_{i,1})\) depicting the same place, the rest being _negative_ pairs corresponding to different places. We use NT-Xent loss [6] that leverages positive samples, and is based on the cosine similarities between the obtained image representations \(\mathbf{z}_{..}=\mathcal{P}_{A}(\mathcal{E}(\mathbf{x}_{..}))\), expressed as
\[\mathrm{s}(\mathbf{z}_{i,j},\mathbf{z}_{k,l})=\frac{\mathbf{z}_{i,j}\cdot \mathbf{z}_{k,l}}{\|\mathbf{z}_{i,j}\|\|\mathbf{z}_{k,l}\|}, \tag{5}\]
where \(\cdot\) is the dot product.
Specifically, the contrastive loss is defined as
\[\mathcal{L}_{C}=\frac{1}{2N}\sum_{i=1}^{N}\ell_{0\to 1}(i)+\ell_{1\to 0}(i), \tag{6}\]
where
\[\ell_{a\to b}(i)=-\mathrm{log}\frac{\mathrm{exp}(\mathrm{s}(\mathbf{z}_{i,a}, \mathbf{z}_{i,b})/\tau)}{\sum_{k=1}^{N}\mathds{1}_{k\neq i}\sum_{j=0}^{1} \mathrm{exp}(\mathrm{s}(\mathbf{z}_{i,a},\mathbf{z}_{k,j})/\tau)}, \tag{7}\]
with \(\tau\) denoting a temperature parameter that controls the strength of penalties on pairs of non-corresponding images [47] and \(\mathds{1}_{k\neq i}\) being equal to 1 if \(k\neq i\), and 0 otherwise.
The contrastive loss aims at making representations of the same place under different conditions similar to each other, while forcing representations of different places to be different.
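For concreteness, a PyTorch implementation of Eqs. (6)-(7) might look as follows. This is a sketch that follows the equations as written (the denominator for anchor \(i\) runs over both views of every other image), not the authors' released code.

```python
import torch
import torch.nn.functional as F

def nt_xent(z0, z1, tau=0.01):
    """Contrastive loss of Eqs. (6)-(7). z0, z1: (N, d) descriptors of the
    original and appearance-augmented views of the same N places."""
    N = z0.shape[0]
    z = F.normalize(torch.cat([z0, z1], dim=0), dim=1)  # (2N, d)
    sim = z @ z.T / tau                                 # scaled cosine similarities
    idx = torch.arange(N)
    # Positive pairs: views (i, 0) <-> (i, 1), i.e. rows i and i + N.
    pos = torch.cat([sim[idx, idx + N], sim[idx + N, idx]])
    # Per Eq. (7), the denominator for anchor i sums over both views of all
    # images k != i, so mask out every view pair belonging to the same place.
    keep = ~torch.eye(N, dtype=torch.bool).repeat(2, 2)
    denom = torch.logsumexp(sim.masked_fill(~keep, float("-inf")), dim=1)
    # -log(exp(pos) / sum) = logsumexp(denominator) - pos, averaged over 2N anchors.
    return (denom - pos).mean()
```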
Predictive loss.The predictive batch \(\mathcal{B}_{\mathcal{P}}\) contains four rotated views of each place. The task of this branch is to predict the rotation angle for each of the \(4N\) pictures. We frame this as a classification problem with 4 classes corresponding to \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\) and \(270^{\circ}\) rotation angles. The predictive loss is therefore the standard cross-entropy loss:
\[\mathcal{L}_{P}=-\sum_{i=1}^{N}\sum_{j\in\Theta_{4}}\mathrm{c}(\mathbf{x}_{i, j})\cdot\mathrm{log}(\widetilde{\mathbf{z}}_{i,j}), \tag{8}\]
where \(\widetilde{\mathbf{z}}_{i,j}=\mathrm{Softmax}(\mathcal{P}_{G}(\mathcal{E}( \mathbf{x}_{i,j})))\in\mathbb{R}^{4}\) is the prediction, \(\mathrm{log}()\) the element-wise natural logarithm, \(\cdot\) the dot product and \(\mathrm{c}(\mathbf{x}_{i,j})\in\mathbb{R}^{4}\) the groundtruth with elements equal to 0 except the \(n\)th element equal to 1 if the true rotation is \((n-1)\times 90^{\circ}\).
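A sketch of how the predictive batch \(\mathcal{B}_{P}\) and the loss of Eq. (8) can be assembled in PyTorch; `cross_entropy` applies the softmax of Eq. (8) internally. The `encoder` and `predictor` arguments stand in for \(\mathcal{E}\) and \(\mathcal{P}_{G}\).

```python
import torch
import torch.nn.functional as F

def rotation_loss(encoder, predictor, images):
    """Predictive loss of Eq. (8): classify the rotation applied to each image.
    `images` has shape (N, C, H, W); rotations are multiples of 90 degrees."""
    views, labels = [], []
    for k in range(4):  # 0, 90, 180, 270 degrees
        views.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.shape[0],), k, dtype=torch.long))
    batch = torch.cat(views)            # the predictive batch of 4N images
    target = torch.cat(labels)
    logits = predictor(encoder(batch))  # (4N, 4) rotation-class scores
    return F.cross_entropy(logits, target)
```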
Overall loss.The final loss is the combination of the contrastive loss for appearance robustness and predictive loss for geometry sensitivity:
\[\mathcal{L}=\mathcal{L}_{C}+\lambda\,\mathcal{L}_{P}, \tag{9}\]
where \(\lambda\) is a weighting factor to balance the two terms.
Figure 3: Examples of augmentations leveraged by ACM-Net. Top row (a): an original input batch from the Oxford RobotCar v2 dataset, (b) pixel-level augmentations for appearance changes, (c) random rotations applied to the original images.
## 4 Experimental Evaluation
### Datasets
**The Nordland dataset [40]:** records a 728 km long train journey connecting the cities of Trondheim and Bodo in Norway. It contains four long traversals, once per season, with diverse visual conditions. The dataset has 35768 images per season with one-to-one correspondences between them. We follow the dataset partition proposed by Olid _et al_. [32] with test set made of 3450 photos from each season.
**The Alderley dataset [27]:** records an 8 km drive through the suburb of Alderley in Brisbane, Australia. The dataset contains two sequences: the first one was recorded during a clear morning, while the second one was collected on a stormy night with low visibility, which makes it a very challenging benchmark. The dataset contains 14,607 images per sequence, and each place has two images, one from each sequence. We train our approach on the day sequence and test on the night sequence.
**The Oxford RobotCar Seasons v2 dataset [44]:** is based on the RobotCar dataset [26], which depicts the city of Oxford, UK. It contains images acquired from three cameras mounted on a car. There are 10 sequences corresponding to 10 different traversals carried out under very different weather and seasonal conditions. The rear camera images of the _overcast-reference_ traversal (6954 images) are used as a basis for reference training images, to which we add 1906 rear camera images from other traversals following the _v2_ train/test split. These additional images cover different environmental conditions but only a subset of places (not full traversals). The test set contains 1872 images from all traversals except _overcast-reference_, without overlap with training images.
### Evaluation
The evaluation on both the Nordland and Alderley datasets uses the recall R@N measure, i.e., the proportion of successfully localized query images when considering the first \(N\) retrievals. If at least one of the top \(N\) reference images is within a tolerance window around the query's ground-truth correspondence, the query image is deemed successfully localized. The tolerance window is set to two frames before and after the ground-truth correspondence, so that the window contains 5 pictures. Following the common approach for Nordland [3, 17, 16], images of the winter sequence are used as queries, while the summer sequence is used as reference.
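This protocol amounts to a few lines of code. The sketch below assumes the one-to-one frame correspondence of Nordland, where the ground truth of query \(i\) is reference frame \(i\); the toy data are illustrative.

```python
import numpy as np

def recall_at_n(retrieved, n, tolerance=2):
    """`retrieved[i]` holds reference indices ranked by similarity for query i,
    whose ground-truth correspondence is reference frame i (one-to-one data).
    A query counts as localized if any of its top-n retrievals falls within
    `tolerance` frames of the ground truth (a window of 5 frames)."""
    hits = 0
    for i, ranked in enumerate(retrieved):
        if any(abs(r - i) <= tolerance for r in ranked[:n]):
            hits += 1
    return hits / len(retrieved)

# Toy usage: 3 queries with ranked retrieval lists; queries 0 and 2 are hits.
print(recall_at_n([np.array([0, 7]), np.array([9, 3]), np.array([4, 1])], n=1))
```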
For RobotCar-Seasons v2, we follow the PatchNetVLAD [16] approach and utilize the 6-DoF pose of the best-matched reference picture as prediction of the query's pose. Since we don't compute any pose, our image retrieval method is not comparable with pose estimation methods such as MegLOC [34].
### Implementation details
**Encoder model \(\mathcal{E}\).** We use ResNet50 [18] as the backbone, with pre-training on ImageNet using the Timm library [52]. The last classification layer is discarded so that the model is only used for the feature extraction.
**Rotation predictor \(\mathcal{P}_{G}\).** We use a simple 1-layer perceptron with layer normalization and ReLU activation.
**Projector \(\mathcal{P}_{A}\).** We use a simple 1-layer perceptron with batch normalization and ReLU activation. The dimension of the output (_i.e._, the image descriptor) is 1024.
| Data Augmentation Type | Probability |
| --- | --- |
| Planckian Jitter | 0.8 |
| Color Jiggle | 0.5 |
| Plasma Brightness | 0.5 |
| Plasma Contrast | 0.3 |
| Gray scale | 0.3 |
| Box Blur | 0.5 |
| Channel Shuffle | 0.5 |
| Motion Blur | 0.3 |
| Solarize | 0.5 |

Table 1: List of data augmentations applied to the images on-the-fly during training. We also set a probability for each one of them.
Table 2: Quantitative results on the Nordland dataset. Best results are in **bold**. Second best results are in _italic_.
| Method | Alderley Day/Night |
| --- | --- |
| NetVLAD [1] | 3.35 |
| CIM [9] | 7.82 |
| Patch-NetVLAD [16] | 7.99 |
| SeqSLAM [27] | 9.90 |
| Retrained NetVLAD [41] | 15.8 |
| AFD [41] | 21.0 |
| ACM-Net (Ours) | **25.2** |

Table 3: Quantitative results on the Alderley dataset. Best result is in **bold**.
Appearance Augmentations.Following domain generalization approaches, our model leverages numerous pixel-level data augmentations to instill an appearance-invariance bias in the model. The list of pixel-level augmentations for appearance modification is provided in Table 1, while examples of such augmentations are provided in Figure 3. The chosen set of variations empirically achieved good performance, whereas other tested combinations were less favourable. We use the Kornia [37] library for self-supervised data augmentation.
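A possible Kornia pipeline matching Table 1 is sketched below; the class names follow recent Kornia releases, and the extra parameters (jitter strengths, blur kernel) are illustrative assumptions, since only the probabilities are specified here.

```python
import torch
import kornia.augmentation as K

# Probabilities follow Table 1; the remaining parameters are illustrative.
augment = torch.nn.Sequential(
    K.RandomPlanckianJitter(p=0.8),
    K.ColorJiggle(0.2, 0.2, 0.2, 0.1, p=0.5),
    K.RandomPlasmaBrightness(p=0.5),
    K.RandomPlasmaContrast(p=0.3),
    K.RandomGrayscale(p=0.3),
    K.RandomBoxBlur(p=0.5),
    K.RandomChannelShuffle(p=0.5),
    K.RandomMotionBlur(kernel_size=9, angle=35.0, direction=0.5, p=0.3),
    K.RandomSolarize(p=0.5),
)

images = torch.rand(8, 3, 224, 224)  # a batch of RGB images in [0, 1]
augmented = augment(images)          # applied on-the-fly, on GPU if moved there
```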
Geometric Augmentations.Our training strategy encourages information about rotations to be retained in the image representation rather than guaranteeing strict equivariance. However, in practice, we observe that the average cosine similarity between representations of rotated views (referred to as the _equivariant measure_ in [7]) tends to 0 (_i.e._, a 90\({}^{\circ}\) angle) when the dedicated module is added (see Table 4). Moreover, the choice of this particular group of geometric transformations is the outcome of experiments whose results are presented in Figure 4. In particular, it shows that the best performance is achieved with the cyclic group of 90\({}^{\circ}\) rotations, compared to the groups of 2D affine transformations, 2D projective transformations, and 2D rotations.
Model training.The model is trained for 1000 epochs using the Adam optimizer [22] and a batch size of 64. Although contrastive learning usually requires a larger batch size [5], using the Adam optimizer allowed us to obtain good results with a smaller one. A learning rate of 0.003 gave the best performance with this optimizer. The temperature parameter \(\tau\) is set to 0.01 and the loss factor \(\lambda\) is set to 1 in our experiments.
Inference.Prior to the inference stage, we pass the set of reference images through the Appearance Module of the trained model, \(\mathcal{E}\rightarrow\mathcal{P}_{A}\rightarrow L_{2}\)-normalization, and thus build a reference descriptor bank. A k-nearest-neighbor search based on cosine similarity is then performed to find the closest references to the query image.
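The descriptor-bank construction and cosine-similarity search just described amount to the following sketch, where `encoder` and `projector` stand in for the trained \(\mathcal{E}\) and \(\mathcal{P}_{A}\).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_bank(encoder, projector, references):
    """Descriptor bank: encoder -> projector -> L2 normalization."""
    z = projector(encoder(references))
    return F.normalize(z, dim=1)                       # (N_R, d)

@torch.no_grad()
def retrieve(encoder, projector, bank, query, k=5):
    q = F.normalize(projector(encoder(query)), dim=1)  # (1, d)
    sims = q @ bank.T                                  # cosine similarities
    return sims.topk(k, dim=1).indices                 # top-k reference indices
```

Since all descriptors are L2-normalized, the dot product equals the cosine similarity, so the top-k search is a single matrix multiplication.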
### Results
Tables 2, 3 and 5 show the results of ACM-Net along with other approaches on the three previously described datasets: partitioned Nordland, Alderley Day/Night and RobotCar-Seasons datasets.
The results demonstrate that our method outperforms, by a large margin, standard baselines such as NetVLAD [1] and even local feature-based methods such as SuperGlue [38]. It outperforms Patch-NetVLAD [16] on the Nordland dataset (Table 2) and competes with it on RobotCar-Seasons v2 (Table 5), despite the fact that Patch-NetVLAD leverages multi-scale descriptors whereas we rely on a single global descriptor. Only the transformer-based architecture TransVPR [48] achieves higher performance than ACM-Net. We note, however, that our model is based on simple ConvNet and MLP elements that can be upgraded to improve the performance. Finally, it is worth noting that we achieve state-of-the-art results on the very challenging Alderley dataset (Table 3).
Qualitative results are presented in Figure 5 (Nordland dataset) and 6 (Alderley). More qualitative results are included in supplementary materials. One can see examples of queries and best retrieved images, along with GradCAM [39] activations. These visualizations demonstrate that ACM-Net, even if trained without any labels, was able to learn features meaningful for outdoor localization tasks such as skylines for instance.
We focused our study on learning global visual representations that are robust to appearance changes and suitable for VPR. Our results demonstrate that it is possible to learn a model relying on contrastive self-supervision for robustness to appearance changes while being able to perceive the geometric structure of the input image by enforcing geometric prediction.
Future work could make the learned representation even more robust to condition variations so that extreme cases, such as those depicted in the Nordland or Alderley datasets, can be overcome. However, directly using our descriptors on datasets featuring strong viewpoint variations between reference and query images (for a given place) may lead to limited performance. It is worth noting that this assumption has not been tested yet. Furthermore, seeking equivariance to more generic camera motions (not only roll angle variations) may be beneficial to learn more geometry-aware features.
## 5 Conclusions
In this paper, we introduced a novel method for visual place recognition under strong appearance changes. To achieve that, our self-supervised ACM-Net has the advantage of not relying on any form of human supervision. In practice, it learns appearance-robust and geometry-sensitive features that can then be directly used as abstract place representations for visual place recognition. Extensive experimental validation demonstrates the validity and efficiency of our approach. Future work will focus on seeking equivariance to 3D geometric transformations via view synthesis.
Figure 5: Visual Grad-CAM activation of input query winter image, along with retrieved summer image from the Nordland dataset.
Figure 6: Visual Grad-CAM activation of input query night image, along with retrieved day image from the Alderley dataset.
Table 5: Quantitative results on the RobotCar-Seasons v2 dataset under day conditions (dawn, dusk, overcast-summer, overcast-winter, rain, snow, sun) and night conditions (night, night-rain), reported as the percentage of queries localized within translation and orientation error thresholds (0.25 m / 0.5 m / 5.0 m and 2 / 5 / 10 deg). |
2308.10406 | Toward a global phase diagram of the fractional quantum anomalous Hall
effect | Recent experiments on the twisted semiconductor bilayer system $t$MoTe$_2$
have observed integer and fractional quantum anomalous Hall effects, which
occur in topological moir\'e bands at zero magnetic field. Here, we present a
global phase diagram of $t$MoTe$_2$ throughout the filling range $0< n\leq 1$
substantiated by exact diagonalization calculations. At a magic angle, we find
that the system resembles the lowest Landau level (LLL) to a remarkable degree,
exhibiting an abundance of incompressible fractional quantum anomalous Hall
states and compressible anomalous composite Fermi liquid states. Away from the
magic angle, particle-hole symmetry is strongly broken. Some LLL-like features
remain robust near half-filling, while others are replaced, predominantly by
charge density waves near $n=0$ and anomalous Hall Fermi liquids near $n=1$.
Among LLL-like phases, we find the anomalous composite Fermi liquid at
$n=\frac{1}{2}$ to be most robust against deviations from the magic angle.
Within the band-projected model, we show that strong particle-hole asymmetry
above the magic angle results from interaction-enhanced quasiparticle
dispersion near $n=1$. Our work sets the stage for future exploration of
LLL-like and beyond-LLL phases in fractional quantum anomalous Hall systems. | Aidan P. Reddy, Liang Fu | 2023-08-21T01:06:15Z | http://arxiv.org/abs/2308.10406v2 | # Toward a global phase diagram of the fractional quantum anomalous Hall effect
###### Abstract
Recent experiments on the twisted semiconductor bilayer system \(t\)MoTe\({}_{2}\) have observed integer and fractionally quantized anomalous Hall effects, which occur in topological moire bands at zero magnetic field. Here, we present a global phase diagram of \(t\)MoTe\({}_{2}\) throughout the filling range \(0\leq n\leq 1\), substantiated by exact diagonalization calculations. At a magic angle, we find that the system resembles the lowest Landau level (LLL) to a remarkable degree, exhibiting an abundance of incompressible fractional quantum anomalous Hall states and compressible anomalous composite Fermi liquid states. Away from the magic angle, particle-hole symmetry is strongly broken. Some LLL-like features remain robust near half filling, while others are replaced, predominantly by charge density waves near \(n=0\) and anomalous Hall Fermi liquids near \(n=1\). Among LLL-like phases, we find the anomalous composite Fermi liquid at \(n=\frac{1}{2}\) to be most robust against deviations from the magic angle. Within the band-projected model, we show that strong particle-hole asymmetry above the magic angle results from interaction enhanced quasiparticle dispersion near \(n=1\). Our work sets the stage for future exploration of LLL-like and beyond-LLL phases in fractional quantum anomalous Hall systems.
## I Introduction
Twisted transition metal dichalcogenide homobilayers (\(t\)TMDs) host topological moire bands that exhibit spin-valley locking and spin/valley contrasting Chern numbers [1]. Owing to band topology and narrow bandwidth, small-twist-angle bilayer MoTe\({}_{2}\) and WSe\({}_{2}\) are predicted to support integer and fractional quantum anomalous Hall (QAH) states [2; 3; 4; 5]. These are chiral topological states that spontaneously break time-reversal symmetry and exhibit integer and fractionally quantized anomalous Hall conductance \(\sigma_{\rm AH}=Ce^{2}/h\) at zero magnetic field respectively. Recently, the appearance of integer QAH states in \(t\)WSe\({}_{2}\) was evidenced by electronic compressibility measurements that show incompressible states with \(C=1\) persisting down to zero magnetic field at filling factors \(n=1\) and \(3\)[6]. In \(t\)MoTe\({}_{2}\), optically detected Landau fan diagrams reveal signatures of integer as well as fractional QAH states, with \(C=-1\), \(-\frac{2}{3}\) and \(-\frac{3}{5}\) respectively [7; 8].
Very recently, for the first time, the fractionally quantized anomalous Hall effect was observed through transport measurements on \(t\)MoTe\({}_{2}\)[9]. This observation provides convincing evidence of a topological phase hosting fractionally charged quasiparticles at zero magnetic field, opening a new frontier in topological physics and quantum materials research.
Following the experimental breakthrough, recent theoretical works have studied various FQAH states in \(t\)MoTe\({}_{2}\) at specific odd-denominator filling fractions [10; 11; 12; 13]. Using numerical exact diagonalization, our recent work [10] showed that the FQAH state at \(n=\frac{2}{3}\) appears robustly over a broad range of twist angles, eventually succumbing to a metal as the twist angle increases. In contrast, the FQAH state at \(n=\frac{1}{3}\) competes with an insulating charge density wave state that is favored at experimentally studied twist angles \(\theta>\sim 2.3^{\circ}\). These conclusions are strongly supported by recent experiments (see below) [8].
The formation of FQAH states in \(t\)TMDs results from (1) exchange-induced spontaneous spin/valley polarization and (2) strong correlation in spin/valley polarized Chern bands at partial filling. Broadly speaking, when a Chern band is sufficiently flat, its wavefunctions sufficiently resemble those of the lowest Landau level (LLL) [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24], and the topological gap to higher bands is sufficiently large, the system can be approximately mapped to the partially filled Landau level through the band-projected Hamiltonian. If this mapping holds faithfully, the existence of FQAH states follows straightforwardly from the well-known fractional quantum Hall states in Landau levels. The key question is: to what extent is the phase diagram of \(t\)MoTe\({}_{2}\) as a function of band filling similar to the Landau level case? And in what aspect is it fundamentally different?
In this work, we map out the global phase diagram for \(t\)TMDs throughout the filling range \(n\leq 1\). We find that, in the vicinity of a "magic angle", the system closely resembles the lowest Landau level (LLL), an abundance of FQAH states appear at Jain sequence filling fractions, and the phase diagram is nearly symmetric about \(n=\frac{1}{2}\), which hosts an anomalous composite Fermi liquid state (ACFL) [25; 13], see Fig. 1(a). At larger twist angles, particle-hole symmetry is strongly broken, leading to a coexistence of phases that are familiar and others that are foreign to the LLL. While the \(n=\frac{1}{2}\) ACFL state and some of the FQAH states survive beyond the magic angle, we find a charge density wave state at \(n=\frac{1}{3}\) and a time-reversal-breaking Fermi liquid phase in the filling factor range \(\frac{2}{3}<n<1\), see Fig. 1(b). The general trend is that above the magic angle, our system bears a closer resemblance to the LLL near half filling than away from it.
Our findings demonstrate the remarkable robustness of ACFL state at \(n=\frac{1}{2}\) with respect to the twist angle and establish its central role as the parent of adjacent FQAH states in the phase diagram. Our work also reveals the "anomalous Hall Fermi liquid" phase exhibiting
an unquantized, filling-dependent anomalous Hall conductivity, which has no counterpart in the LLL. We discuss the origin of similarities and differences between the many-body phase diagram of topological bands in \(t\)TMDs and that of the LLL. The observable consequences of our phase diagram for fractional quantum anomalous Hall systems are also described and compared to recent experiments on \(t\)MoTe\({}_{2}\).
## II Charge gap phase diagram of twisted TMD homobilayers
The resemblance of the system's many-body phase diagram near the magic angle to that of the LLL has a deep origin. The continuum model for AA-stacked, K-valley TMD homobilayers describes holes in a moire-periodic scalar potential and a layer "Zeeman" field that couples to the layer pseudospin and carries a skyrmion texture [1]. In the vicinity of a magic angle, several LLL-like features appear in the lowest moire band at the single-particle level: the bandwidth nearly vanishes and the Berry curvature becomes nearly uniform [2]; the deviation from the so-called trace condition is minimized [5; 10; 25]; and the general bound on the topological band gap is closest to being saturated [26]. Based on our large-scale density functional theory (DFT) calculations, it is found that \(\theta_{m}\approx 2^{\circ}\) for twisted bilayer MoTe\({}_{2}\)[10] and \(\theta_{m}\approx 1.5^{\circ}\) for twisted bilayer WSe\({}_{2}\)[2]. We note that the use of different DFT parameters [1; 11; 27] results in some variation in the magic angle.
At the many-body level, our previous exact diagonalization (ED) calculations indeed find that the energy gaps of the \(n=\frac{1}{3}\) and \(\frac{2}{3}\) FQAH states are both maximized near a magic angle \(\theta_{m}\)[10]. Here, we present a comprehensive ED study of \(t\)TMDs, both near and above the magic angle, throughout the filling range \(n\leq 1\), finding a plethora of incompressible and compressible states. A defining physical observable of an incompressible state is a finite charge gap in the thermodynamic limit. Here we study the charge gap \(\Delta_{c}(N)=\mu_{N}^{+}-\mu_{N}^{-}\) with \(\mu_{N}^{\pm}=\pm(E_{GS}(N\pm 1)-E_{GS}(N))\) where \(N\) is the number of holes and \(E_{GS}(N)\) is the ground state energy at fixed \(N\). Note that the charge gap is distinct from the neutral gap, the difference between the ground and first excited state energies at fixed particle number. Our results are obtained by exact diagonalization on a finite-size torus of the continuum model for \(t\)MoTe\({}_{2}\) projected to the lowest moire band using the continuum model parameters of Ref. [10] and a Coulomb interaction \(V(r)=\frac{e^{2}}{\epsilon r}\). Given that previous studies have shown the ground state over a wide range of fillings \(n\leq 1\) to be fully spin/valley polarized [4; 10], all ED calculations in this work are performed in the fully spin/valley polarized sector. Our model and methods are described in the Supplementary Material of Ref. [10].
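For concreteness, the charge gap bookkeeping can be written in a few lines of Python. The sketch below is our illustration only; the ground-state energies are hypothetical placeholders, not outputs of the ED calculations in this work.

```python
def charge_gap(E_gs: dict, N: int) -> float:
    """Charge gap Delta_c(N) = mu_N^+ - mu_N^- from ground-state energies.

    mu_N^+ = E_GS(N + 1) - E_GS(N): cost of adding one particle.
    mu_N^- = E_GS(N) - E_GS(N - 1): cost saved by removing one particle.
    """
    mu_plus = E_gs[N + 1] - E_gs[N]
    mu_minus = E_gs[N] - E_gs[N - 1]
    return mu_plus - mu_minus

# Hypothetical ground-state energies (arbitrary units) at fixed hole number:
E_gs = {8: -10.10, 9: -11.45, 10: -12.62}
print(charge_gap(E_gs, 9))  # > 0 signals an incompressible state at N = 9
```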
In Fig. 1, we show the charge gaps at the Jain sequence filling fractions \(n=\frac{p}{2p+1}\) where \(p=1,2,3\) and their particle-hole conjugates at two twist angles \(\theta=2^{\circ}\), \(2.7^{\circ}\)
Figure 1: Schematic phase diagrams of \(t\)MoTe\({}_{2}\) with respect to \(n\), the number of holes per moire unit cell, at angles near (a) and greater than (b) the magic angle \(\theta_{m}\). (c,d) Charge gap at several Jain sequence filling fractions at corresponding representative twist angles and two interaction strengths. In (c,d), blue denotes FQAH, red denotes CDW, and black denotes undetermined. Data for fractions with denominator 3, 5, and 7 are obtained on systems with 27, 25, and 28 moiré unit cells respectively (see Supplementary material). (F/IQAH: fractional/integer quantum anomalous Hall, ACFL: anomalous composite Fermi liquid, CDW: charge density wave, AHFL: anomalous Hall Fermi liquid.)
and two interaction strengths \(\epsilon^{-1}=0.1,0.2\). At \(\theta=2^{\circ}\), representative of the system near the magic angle \(\theta_{m}\), the charge gap is positive at all \(p\), decreases with increasing \(p\), and is nearly particle-hole symmetric under \(n\to 1-n\). By inspecting the many-body spectra, we confirm that these incompressible states are all FQAH states (see Supplementary Material). Remarkably, the decreasing charge gaps of the FQAH states as the filling approaches \(\frac{1}{2}\) closely resemble the Jain sequence of fractional quantum Hall states in the LLL, despite being at zero magnetic field.
In contrast, at \(\theta=2.7^{\circ}\), representative of a broad range of angles \(\theta>\theta_{m}\), the charge gap exhibits strong particle-hole asymmetry, see Fig. 1(d). The largest charge gap is found at \(n=\frac{1}{3}\). However, the \(n=\frac{1}{3}\) state here is an insulating charge density wave rather than an FQAH state [5; 10]. On the other hand, the \(n=\frac{2}{3}\) state, which has a smaller charge gap, is an FQAH state with the topological order of the particle-hole conjugate of the \(\frac{1}{3}\) Laughlin state in the LLL. Compared to that at \(n=\frac{2}{3}\), other FQAH states are more fragile when \(\theta\) exceeds \(\theta_{m}\). For \(\epsilon^{-1}=0.1\), the charge gap at \(n=\frac{4}{7}\) becomes negative and that at \(n=\frac{3}{5}\) is positive but very small. When the interaction strength increases to \(\epsilon^{-1}=0.2\), the charge gaps at both filling fractions increase and are both positive. Compared to the FQAH state at \(n=\frac{2}{3}\), stronger interactions are necessary to stabilize FQAH states at these filling fractions when \(\theta>\theta_{m}\). Extensive charge gap data, many-body spectra, and discussion thereof are presented in the Supplemental Material.
The charge gap phase diagram shows that, throughout a wide range of twist angles, our system closely resembles the LLL near \(n=\frac{1}{2}\) in that it hosts a sequence of incompressible FQAH states analogous to the Jain sequence in the LLL. This motivates us to study the compressible state at \(n=\frac{1}{2}\).
## III Anomalous Composite Fermi Liquid at Half Filling
Previously, we showed that metallic states analogous to the composite Fermi liquids of the LLL but at zero magnetic field exist at \(n=\frac{1}{2}\) and \(n=\frac{3}{4}\), which we dubbed "anomalous composite Fermi liquids" (ACFL) [13]. Here, we extend our earlier study to a larger system (with 28 moire unit cells) and a broader range of angles beyond \(\theta=2^{\circ}\). Fig. 2 shows the many-body energy spectra at \(n=\frac{1}{2}\) for three twist angles \(\theta=2^{\circ}\), \(3^{\circ}\), and \(3.5^{\circ}\). Near the magic angle, the ground states come in quasi-degenerate pairs as they do in the half-filled LLL (where they are related by center-of-mass magnetic translations [28]), and their many-body momenta also match those of the half-filled LLL on a torus of identical geometry, showing that the \(n=\frac{1}{2}\) state is a composite Fermi liquid.
As the twist angle increases from \(2^{\circ}\) to \(3.5^{\circ}\), no ground state level crossing occurs, suggesting that the system at \(n=\frac{1}{2}\) remains in the same composite Fermi liquid phase throughout. In contrast, for the same interaction strength \(\epsilon^{-1}=0.1\), the FQAH state at \(n=\frac{2}{3}\) undergoes a phase transition at \(\theta\approx 3^{\circ}\) where a level crossing between the FQAH ground state manifold and excited states occurs [10]. This observation suggests that the ACFL state at \(n=\frac{1}{2}\) is more resilient against departure from the magic angle than the FQAH state at \(n=\frac{2}{3}\). In the Supplemental Material, we provide similar evidence for the \(n=\frac{1}{4},\frac{3}{4}\) ACFL states near the magic angle, and show that, as twist angle increases, both these ACFL states undergo level-crossing transitions before the ACFL state at \(n=\frac{1}{2}\).
We also calculate the momentum distribution function \(n(\mathbf{k})=\frac{1}{N_{GS}}\sum_{i}\left\langle\Psi_{i}\right|c_{\mathbf{k}}^{\dagger}c_{\mathbf{k}}\left|\Psi_{i}\right\rangle\) (where \(c_{\mathbf{k}}^{\dagger}\) creates a hole in a moire Bloch state and \(i\) runs over a set of \(N_{GS}\) degenerate many-body ground states), shown in Fig. 2. At \(\theta=2^{\circ}\), \(n(\mathbf{k})\approx 0.5\) is nearly constant. This shows that the state is not an ordinary Fermi liquid, in which \(n(\mathbf{k})\) would exhibit an abrupt drop across a Fermi surface. At larger angles, \(n(\mathbf{k})\) varies throughout the Brillouin zone but still does not exhibit any signatures of an ordinary Fermi liquid.
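For readers less familiar with this diagnostic, the following Python sketch (our illustration; the orbital count, particle number, and random "ground states" are placeholders for actual ED eigenvectors) evaluates \(n(\mathbf{k})\) in an occupation-number basis, where \(c_{\mathbf{k}}^{\dagger}c_{\mathbf{k}}\) is diagonal.

```python
import numpy as np
from itertools import combinations

n_orb, n_part = 6, 3
basis = list(combinations(range(n_orb), n_part))   # sets of occupied orbitals

rng = np.random.default_rng(2)
gs = rng.normal(size=(2, len(basis)))              # N_GS = 2 placeholder states
gs /= np.linalg.norm(gs, axis=1, keepdims=True)

n_k = np.zeros(n_orb)
for psi in gs:
    for amp, occ in zip(psi, basis):
        for k in occ:                              # c_k^dag c_k is diagonal here
            n_k[k] += abs(amp) ** 2
n_k /= len(gs)                                     # average over N_GS states
print(n_k, n_k.sum())                              # n_k sums to n_part = 3
```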
Since \(n(\mathbf{k})\to 1-n(\mathbf{k})\) under a particle-hole transformation, the ground state properties of the ACFL state beyond the magic angle are clearly _not_ particle-hole symmetric. (Whether there exists an "emergent" particle-hole symmetry at low energy is an open question.) At \(\theta=3.5^{\circ}\), the spectrum is on the verge of a level crossing involving the higher-energy partner of the quasi-degenerate ACFL ground states, indicating a phase transition out of the ACFL. The nature of the many-body ground state following this transition is an interesting open question for future work.
So far we have seen that (1) near a magic angle, the system resembles the LLL to a remarkable extent; and
Figure 2: Low-lying many-body energy spectra (top) and ground state momentum distribution function \(n(\mathbf{k})\) (bottom) at \(n=\frac{1}{2}\) and several twist angles. In each case \(\epsilon^{-1}=0.1\) and the lowest 5 energy levels in each momentum sector are shown.
(2) above this magic angle, some LLL-like features remain in the neighborhood of half filling, while other new phases appear at fillings away from half filling. The similarity of our many-body phase diagram at the magic angle with the LLL supports the recent argument that the wavefunctions of the magic-angle moire band can be approximately mapped to those of the LLL[24]. Equally important is the departure from the LLL analog at larger twist angles, to which we now turn our attention.
## IV Microscopic origin of particle-hole asymmetry
A generic band-projected Hamiltonian takes the form
\[H=\sum_{\mathbf{k}}\varepsilon(\mathbf{k})c_{\mathbf{k}}^{\dagger}c_{\mathbf{k}}+ \frac{1}{2}\sum_{\mathbf{kpq}}V_{(\mathbf{k+q})(\mathbf{p-q})\mathbf{kp}}c_{\mathbf{k+q}}^{\dagger}c _{\mathbf{p-q}}^{\dagger}c_{\mathbf{p}}c_{\mathbf{k}} \tag{1}\]
where spin is neglected. Therefore, it is determined entirely by the dispersion \(\varepsilon(\mathbf{k})\) and the interaction matrix elements \(V_{\mathbf{k^{\prime}p^{\prime}kp}}\). The latter are determined in turn by the band's single-particle wavefunctions and the two-body interaction potential. Any deviation between the phase diagrams of given band-projected model and the LLL originates from a deviation in these features.
Motivated by our observation of strong asymmetry between \(n\) and \(1-n\) states above the magic angle as shown in Fig. 1, we now consider the band-projected Hamiltonian under the particle-hole transformation. Under the particle-hole transformation \(d^{\dagger}(\mathbf{r})=c(\mathbf{r})\) or, equivalently, \(d_{\mathbf{k}}^{\dagger}=c_{-\mathbf{k}}\), where \(c_{\mathbf{k}}\) annihilates the particle in Bloch state \(\ket{\mathbf{k}}\), Eq. 1 can be rewritten in terms of hole operators as
\[H=\sum_{\mathbf{k}}\tilde{\varepsilon}(\mathbf{k})d_{\mathbf{k}}^{\dagger}d_{ \mathbf{k}}+\frac{1}{2}\sum_{\mathbf{kpq}}\tilde{V}_{(\mathbf{k+q})(\mathbf{p-q})\mathbf{kp}}d_{ \mathbf{k+q}}^{\dagger}d_{\mathbf{p-q}}^{\dagger}d_{\mathbf{p}}d_{\mathbf{k}} \tag{2}\]
where we neglect a constant energy shift. The full-band Slater determinant state \(\ket{\Psi_{f}}=\left(\prod_{\mathbf{k}}c_{\mathbf{k}}^{\dagger}\right)\ket{0}\) is the vacuum for holes, \(d_{\mathbf{k}}\ket{\Psi_{f}}=0\). The interaction matrix elements of holes are related to those of electrons as
\[\tilde{V}_{\mathbf{k}^{\prime}\mathbf{p}^{\prime}\mathbf{k}\mathbf{p}}=V^{*}_{(-\mathbf{k}^{\prime})(-\mathbf{p}^{\prime})(-\mathbf{k})(-\mathbf{p})}. \tag{3}\]
\(\tilde{\varepsilon}(\mathbf{k})\) describes the energy-momentum dispersion of a single particle removed from an otherwise full band,
\[\tilde{\varepsilon}(\mathbf{k})=-\varepsilon(-\mathbf{k})-\Sigma(-\mathbf{k}). \tag{4}\]
Here, \(\Sigma(\mathbf{k})\) is a self-energy term coming from the interaction between the electron at \(\mathbf{k}\) and all others in the full band (see Supplemental Material for definition). In the lowest Landau level, both \(\varepsilon(\mathbf{k})\) and \(\Sigma(\mathbf{k})\) are \(\mathbf{k}\)-independent, producing particle-hole symmetry. More generally, applying a time reversal transformation \(H\to\mathcal{T}H\mathcal{T}^{-1}\) (where \(\mathcal{T}d_{\mathbf{k}}\mathcal{T}^{-1}=d_{-\mathbf{k}}\) and \(\mathcal{T}i\mathcal{T}^{-1}=-i\)) to Eq. 2 and comparing to Eq. 1 shows that the condition \(\tilde{\varepsilon}(\mathbf{k})=\varepsilon(-\mathbf{k})\) is required to achieve particle-hole symmetry. Any particle-hole asymmetry between states at filling fractions \(n\) and \(1-n\) in the phase diagram of a band-projected model has its origin in violation of this condition.
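The condition just stated is easy to test numerically once \(\varepsilon(\mathbf{k})\) and \(\Sigma(\mathbf{k})\) are tabulated. The sketch below is our illustration (the grid size and toy dispersions are placeholders); constant energy shifts are discarded since they only move the chemical potential.

```python
import numpy as np

def minus_k(a):
    """Map f(k) -> f(-k) on a periodic grid (index i -> -i mod N per axis)."""
    return np.roll(a[::-1, ::-1], shift=(1, 1), axis=(0, 1))

def quasiparticle_dispersion(eps, sigma):
    """Eq. (4): eps~(k) = -eps(-k) - Sigma(-k)."""
    return -minus_k(eps) - minus_k(sigma)

def ph_violation(eps, sigma):
    """Max deviation from eps~(k) = eps(-k), ignoring constant shifts."""
    d = quasiparticle_dispersion(eps, sigma) - minus_k(eps)
    return np.max(np.abs(d - d.mean()))

# Toy check: a flat band with k-independent self-energy is particle-hole
# symmetric (as in the LLL), so the violation vanishes.
eps = np.zeros((12, 12))
sigma = -0.5 * np.ones((12, 12))
print(ph_violation(eps, sigma))  # 0.0
```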
To shed light on the approximate particle-hole symmetry near the magic angle and lack thereof at larger angles, we now calculate \(\tilde{\varepsilon}(\mathbf{k})\) in \(t\)MoTe\({}_{2}\) and compare it with the bare energy dispersion \(\varepsilon(\mathbf{k})\). A note on terminology before proceeding. Since the moire band of interest is a valence band, we define the filling factor \(n\) as the number of holes (i.e. carrying positive charge) per moire unit cell relative to charge neutrality. These holes are the elementary charge carriers in the moire superlattice. The \(n=1\) IQAH state is formed when holes completely fill the topmost moire valence band of one spin/valley. Removing a hole from the \(n=1\) state creates a quasiparticle carrying negative charge, which we refer to as an electron quasiparticle. The energy dispersion of a hole at charge neutrality is simply \(\varepsilon(\mathbf{k})\) as determined by the continuum model, whereas the energy dispersion of an electron quasiparticle in the QAH state, \(\tilde{\varepsilon}(\mathbf{k})\), is affected by interaction-induced self-energy as stated above.
In Fig. 3, we compare the hole \(\varepsilon(\mathbf{k})\) and electron quasiparticle \(\tilde{\varepsilon}(\mathbf{k})\) dispersions in the lowest \(t\)MoTe\({}_{2}\) moire band at two representative twist angles, \(\theta=2.0^{\circ}\) and \(2.7^{\circ}\). At \(\theta=2^{\circ}\), the self-energy has an insignificant influence on the electron quasiparticle dispersion, so \(\tilde{\varepsilon}(\mathbf{k})\approx-\varepsilon(-\mathbf{k})\). On the other hand, at \(\theta=2.7^{\circ}\), the self-energy approximately
Figure 3: Comparison of hole dispersion \(\varepsilon(\mathbf{k})\) at \(n=0\) and electron quasiparticle dispersion \(\tilde{\varepsilon}(\mathbf{k})\) at \(n=1\) of the lowest moiré band assuming full valley polarization at \(\theta=2.0^{\circ}\) and \(\theta=2.7^{\circ}\). Note that \(\tilde{\varepsilon}(\mathbf{k})\) depends on the two-body interaction potential, for which we use a Coulomb interaction with \(\epsilon^{-1}=0.1\). (b) Color plots of the electron quasiparticle dispersion. All dispersions are shifted by a constant to be centered about zero energy.
doubles the electron quasiparticle bandwidth.
The physical origin of the enhanced dispersion for electron quasiparticles above the magic angle is as follows. We have directly confirmed that the \(\mathbf{k}\)-dependent part of the self-energy at this twist angle is dominated by the Fock term, which is an interaction-potential-weighted average of quantum distances between intra-unit-cell wavefunctions \(\ket{u_{\mathbf{k}}}\) and \(\ket{u_{\mathbf{k}+\mathbf{q}}}\):
\[\Sigma^{F}(\mathbf{k})=-\int\frac{d^{2}q}{(2\pi)^{2}}\,V(\mathbf{q})\,|\langle u_{\mathbf{k}+\mathbf{q}}|u_{\mathbf{k}}\rangle|^{2}. \tag{5}\]
As we showed in Ref. [10], the Bloch states in the lowest moire band are strongly layer polarized except in the vicinity of \(\gamma\), where they are strongly layer hybridized. The change in the wavefunctions' layer character at \(\gamma\) causes a peak in the Fock self-energy that adds constructively with the bare energy dispersion \(\varepsilon(\mathbf{k})\) and thereby enhances the quasiparticle bandwidth at \(n=1\). In contrast, at smaller twist angles, the self-energy adds destructively with the bare dispersion, leading to narrowed quasiparticle bandwidth shown in the Supplemental Material.
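A discretized evaluation of Eq. (5) takes only a few lines. The sketch below is our illustration (random normalized vectors stand in for the actual Bloch wavefunctions, the interaction is a placeholder, and embedding phases for \(\mathbf{k}+\mathbf{q}\) outside the first Brillouin zone are ignored).

```python
import numpy as np

def fock_self_energy(u, V):
    """Discretized Eq. (5) on an N x N momentum grid (BZ average).

    u: (N, N, dim) normalized intra-unit-cell Bloch vectors u_k.
    V: (N, N) interaction V(q) sampled on the same grid.
    """
    N = u.shape[0]
    sigma = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            u_kq = np.roll(u, shift=(-i, -j), axis=(0, 1))          # u_{k+q}
            form = np.abs(np.einsum("abc,abc->ab", u_kq.conj(), u)) ** 2
            sigma -= V[i, j] * form
    return sigma / N**2  # overall area factors omitted in this sketch

# Toy usage with random normalized two-component Bloch vectors:
rng = np.random.default_rng(0)
u = rng.normal(size=(8, 8, 2)) + 1j * rng.normal(size=(8, 8, 2))
u /= np.linalg.norm(u, axis=-1, keepdims=True)
print(fock_self_energy(u, np.ones((8, 8))).shape)  # (8, 8)
```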
Near the magic angle, we find that the quasiparticle dispersions \(\varepsilon(\mathbf{k})\) near \(n=0\) and \(\tilde{\varepsilon}(\mathbf{k})\) near \(n=1\) are both narrow, so that the particle-hole symmetry condition \(\tilde{\varepsilon}(\mathbf{k})=\varepsilon(-\mathbf{k})\) is only weakly violated; this is consistent with the approximate particle-hole symmetry of the magic-angle many-body phase diagram (see Fig. 1). Above the magic angle, we find in contrast that (1) \(\varepsilon(\mathbf{k})\) is broad to begin with, (2) \(\tilde{\varepsilon}(\mathbf{k})\) is approximately twice as broad due to the strong \(\mathbf{k}\)-dependence of the Fock energy, and (3) \(\varepsilon(\mathbf{k})\) has two degenerate minima at the corners of the moire Brillouin zone, whereas \(\tilde{\varepsilon}(\mathbf{k})\) has a unique minimum at its center. These properties are consistent with the enhanced particle-hole asymmetry of the above-magic-angle many-body phase diagram shown in Fig. 1.
## V Fermi liquid and unquantized anomalous Hall effect
With this understanding of the microscopic origin of particle-hole asymmetry in the band-projected model for \(t\)MoTe\({}_{2}\) above the magic angle, and in particular of the broadened quasiparticle dispersion \(\tilde{\varepsilon}(\mathbf{k})\) in the \(n=1\) QAH state, we now study its consequences at finite doping \(n=1-\delta\) with \(\delta>0\). For small doping \(\delta\), the low-energy physics of our system maps to a uniform electron gas. Provided that the density of electron quasiparticles \(\delta\) is not too low, it is natural to expect that its ground state is a Fermi liquid. The reduction of the electron quasiparticle mass by the interaction-induced self-energy near \(n=1\) is also beneficial to the formation of a Fermi liquid [29]. From band-projected ED calculations, we indeed find a fully spin/valley-polarized, metallic state with a filling-dependent anomalous Hall conductivity in the carrier density range \(\frac{2}{3}<n<1\) that we refer to as an _anomalous Hall Fermi liquid_.
Fig. 4 shows the momentum distribution function \(n(\mathbf{k})\) at three filling fractions \(n=0.89,0.83,0.78\). Unlike the ACFL state at \(n=\frac{1}{2}\), at these fillings \(n(\mathbf{k})\) drops sharply across a circle centered at \(\gamma\), indicating the presence of a Fermi surface expected from the quasiparticle dispersion \(\tilde{\varepsilon}(\mathbf{k})\). Moreover, the degenerate ground states' many-body momenta match those expected from simply filling electrons according to the quasiparticle dispersion \(\tilde{\varepsilon}(\mathbf{k})\). In Fig. 4 (d), for instance, the sixfold ground state degeneracy at \(n=\frac{7}{9}\) (\(\delta=\frac{2}{9}\)) comes from adding \(36\times\frac{2}{9}=8\) electrons to the 7 Bloch states closest to \(\gamma\) and one of the 6 momenta in the next available shell. Similar data obtained from other finite system geometries are also shown in the Supplemental Material.
In contrast to the Fermi liquid phase close to \(n=1\), correlation effects are much stronger at low fillings close to \(n=0\) because the effective mass of holes is much larger than that of electron quasiparticles, as seen in Fig. 3. Our finding of Fermi liquids stabilized by interaction-enhanced dispersion near unity filling echoes previous studies of other Chern band systems [29; 30; 31].
Having established its existence, we now study the properties of the ferromagnetic Fermi liquid state in \(t\)MoTe\({}_{2}\). In Fig. 5, we show the zero-temperature, intrinsic anomalous Hall conductance
\[\sigma_{AH}=\frac{1}{2\pi}\frac{e^{2}}{h}\int d^{2}\mathbf{k}\,\theta(\varepsilon( \mathbf{k})-\varepsilon_{F})F(\mathbf{k}) \tag{6}\]
where \(F(\mathbf{k})\) is the Berry curvature of the lowest moire band in the presence of full spin/valley polarization. We
Figure 4: Bloch state occupation numbers (see main text) of the many-body ground states at several carrier densities \(\frac{2}{3}\leq n<1\)\(\theta=2.70^{\circ}\), \(\epsilon^{-1}=0.1\) obtained on a torus with 36 moiré unit cells.
note that this formula is applicable only when the system is in a Fermi liquid phase. Near the magic angle, a relatively uniform Berry curvature distribution leads to \(\sigma_{AH}\approx n\frac{e^{2}}{h}\). In contrast, at larger twist angles, the Berry curvature has a hotspot near the band maximum, as shown in Fig. 5(b). This leads to a rapid reduction in the anomalous Hall conductivity as the filling is reduced from \(n=1\) and the system enters the anomalous Hall Fermi liquid phase, before it rises to the quantized value \(\sigma_{AH}=\frac{2e^{2}}{3h}\) in the \(n=\frac{2}{3}\) FQAH state.
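Eq. (6) can likewise be evaluated by a Brillouin-zone sum. In the sketch below (our illustration; the grid, toy dispersion, and uniform Berry curvature are placeholders, and we adopt the occupation convention exactly as written in Eq. (6)), a uniform curvature with Chern number 1 reproduces \(\sigma_{AH}\approx n\,e^{2}/h\).

```python
import numpy as np

def sigma_ah(eps, berry_F, eps_F, dk2):
    """Discretized Eq. (6) in units of e^2/h; dk2 is the k-space cell area."""
    occupied = eps > eps_F   # occupation convention as written in Eq. (6)
    return (dk2 / (2.0 * np.pi)) * berry_F[occupied].sum()

# Toy check: uniform Berry curvature integrating to 2*pi over the zone
# (Chern number 1) with half of the band occupied gives ~0.5 e^2/h.
N, A_BZ = 50, 1.0
berry_F = np.full((N, N), 2.0 * np.pi / A_BZ)
eps = np.linspace(0.0, 1.0, N * N).reshape(N, N)   # toy dispersion
print(sigma_ah(eps, berry_F, eps_F=0.5, dk2=A_BZ / N**2))  # ~0.5
```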
In addition to the QAH states at \(n=1,\frac{2}{3},\frac{3}{5}\), the recent transport experiment on \(t\)MoTe\({}_{2}\) reveals intriguing behavior as a function of carrier density [9]. Notably, it is found that (1) \(R_{xy}\gg R_{xx}\) throughout the filling range \(\frac{1}{2}\leq n\leq\frac{2}{3}\); (2) as the band filling increases from \(\frac{2}{3}\) to 1, \(R_{xy}\) drops rapidly, is comparable to \(R_{xx}\) over an extended filling range, and eventually rises to the quantized value \(\frac{h}{e^{2}}\) at \(n=1\). Our numerical results support the conclusion that the system is an anomalous Hall Fermi liquid in the window \(\frac{2}{3}<n<1\).
## VI Discussion and outlook
In this work, we have begun to map out a global phase diagram of the fractional quantum anomalous Hall effect that occurs in partially filled topological bands of \(t\)TMDs. With an exact diagonalization study, we have shown that near a magic angle \(\theta_{m}\) the phase diagram bears remarkable resemblance to that of the LLL, hosting anomalous composite Fermi liquid phases at \(n=\frac{1}{2}\), \(\frac{1}{4}\), \(\frac{3}{4}\) and FQAH phases at Jain sequence filling fractions \(n=\frac{1}{3}\), \(\frac{2}{5}\), \(\frac{3}{7}\), \(\frac{4}{7}\), \(\frac{3}{5}\), and \(\frac{2}{3}\). Our results suggest that yet-unseen FQAH states may be found in \(t\)MoTe\({}_{2}\) devices tuned to the right twist angle.
Above the magic angle, we find the phase diagram to be strongly particle-hole asymmetric. The FQAH state at \(n=\frac{1}{3}\) gives way to a charge density wave state. In contrast, an anomalous Hall Fermi liquid appears at \(\frac{2}{3}<n<1\). We find that the ACFL state at half filling is particularly robust against deviations from the magic angle, surviving to even larger angles than the \(n=\frac{2}{3}\) FQAH state. Our phase diagram explains the recent observations of (1) a highly incompressible trivial state at \(n=\frac{1}{3}\) and (2) a filling-dependent anomalous Hall effect in the metallic region \(\frac{2}{3}<n<1\) in \(t\)MoTe\({}_{2}\) devices that show an \(n=\frac{2}{3}\) FQAH state.
While the ED calculations presented in this work are performed specifically for \(t\)MoTe\({}_{2}\), our main conclusions as stated above are expected to hold qualitatively for a broader class of Chern band systems with band-projected Hamiltonians that can be approximately mapped to that of the LLL. These include Chern bands from periodically modulated Landau levels [23], skyrmion lattices in semiconductor-magnet heterostructures [22], and graphene moire superlattices [32; 33; 34; 19; 30], which host fractional Chern insulator states at large magnetic field [35; 36].
The discovery of integer and fractional quantum Hall effects in the two-dimensional electron gas at high magnetic field ushered in a revolution of topological quantum physics that remains fruitful over forty years later [37]. The possibility of realizing analogous topological quantum fluids in Chern bands was demonstrated by proof-of-principle model studies [38; 39; 40; 41; 42]. Of particular interest and fundamental importance is the quantized anomalous Hall effect at zero magnetic field, which requires the synergy between band topology, magnetic order and spontaneous time reversal symmetry breaking. Thanks to innovation in moire quantum materials and advanced theoretical guidance, the fractionally quantized anomalous Hall effect has finally been observed [7; 8; 9]. In the same spirit as a partially-filled Landau level, a partially-filled Chern band can exhibit a symphony of distinct phases as a function of filling factor, each bringing its own novelty as an impetus to extend the frontier of condensed matter physics.
Recent experiments have demonstrated the ability to induce phase transitions out of QAH states by applying displacement field [7; 8; 9]. These phase transitions have received limited theoretical and numerical attention thus far and provide an interesting direction for future work. Pressure, an additional _in situ_ tuning knob that has been demonstrated numerically to further stabilize FQAH states, also deserves experimental attention [5].
While our global phase diagram contains many prominent features supported by our numerical study and/or experiment, it is by no means complete. Further
Figure 5: (a) Intrinsic anomalous Hall conductance of a fully spin/valley polarized Fermi gas in the lowest moiré band as a function of hole density at several twist angles. The white background denotes the region hosting the anomalous Hall Fermi liquid phase studied in this work at angles larger than the magic angle \(\theta_{m}\sim 2^{\circ}\). (b) Berry curvature distribution of the lowest moiré band.
study of the various phases we identify, using other numerical methods with access to larger system sizes such as the density matrix renormalization group, can provide valuable insight, as it already has in the case of the ACFL [25]. Theoretical studies indicate that the IQAH state at \(n=1\) may be energetically outcompeted at larger angles [12; 43], and the possibility of competing phases at fractional filling not captured in the band-projected model also deserves further investigation.
_Acknowledgements-_ We thank Hart Goldman, Nisarga Paul, Ahmed Abouelkomsan, and Emil Bergholtz for related collaborations. This work is supported by the Air Force Office of Scientific Research (AFOSR) under Award No. FA9550-22-1-0432 and the Simons Investigator award from the Simons Foundation. The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper. |
2301.04178 | Analysis of a hadron beam in five-dimensional phase space | We conduct a detailed measurement and analysis of a hadron beam in
five-dimensional phase space at the Spallation Neutron Source Beam Test
Facility. The measurement's resolution and dynamic range are sufficient to
image sharp, high-dimensional features in low-density regions of phase space.
To facilitate the complex task of feature identification in the
five-dimensional phase space, we develop several analysis and visualization
techniques, including non-planar slicing. We use these techniques to examine
the transverse dependence of longitudinal hollowing and longitudinal dependence
of transverse hollowing in the distribution. This analysis strengthens the
claim that low-dimensional projections do not adequately characterize
high-dimensional phase space distributions in low-energy hadron accelerators | Austin Hoover, Kiersten Ruisard, Alexander Aleksandrov, Sarah Cousineau, Alexander Zhukov | 2023-01-10T19:15:18Z | http://arxiv.org/abs/2301.04178v3 | # Analysis of a hadron beam in five-dimensional phase space
###### Abstract
We conduct a detailed measurement and analysis of a hadron beam in five-dimensional phase space at the Spallation Neutron Source Beam Test Facility. The measurement's resolution and dynamic range are sufficient to image sharp, high-dimensional features in low-density regions of phase space. To facilitate the complex task of feature identification in the five-dimensional phase space, we develop several analysis and visualization techniques, including non-planar slicing. We use these techniques to examine the transverse dependence of longitudinal hollowing and longitudinal dependence of transverse hollowing in the distribution. This analysis strengthens the claim that low-dimensional projections do not adequately characterize high-dimensional phase space distributions in low-energy hadron accelerators.
## I Introduction
The beam intensity in hadron linear accelerators is limited by space-charge-driven halo formation -- the emergence of a low-density region of phase space far from a dense core [1, 2, 3] -- and consequent uncontrolled beam loss [4]. In megawatt-class accelerators, the halo density (in two-dimensional phase space) is typically four to six orders of magnitude below the peak density [5]. No simulation has reproduced measurements at this level of detail. Although the relevant physics is assumed to be modeled correctly, there remain significant uncertainties in the simulation inputs -- the electromagnetic fields throughout the accelerator and the initial distribution of particles in six-dimensional phase space [6, 7, 8].
We denote the phase space distribution by \(f(x,x^{\prime},y,y^{\prime},\phi,w)\); \(x\) and \(y\) are the transverse positions, \(x^{\prime}=dx/ds\) and \(y^{\prime}=dy/ds\) are the transverse slopes, \(s\) is the position along the reference trajectory, \(\phi\) is the deviation from the longitudinal position of the synchronous particle (in units of RF degrees), and \(w\) is the deviation from the kinetic energy of the synchronous particle. The distribution is typically reconstructed from the set of measured two-dimensional projections \(\{f(x,x^{\prime}),f(y,y^{\prime}),f(\phi,w)\}\), where each projection is obtained by integrating over the unlisted coordinates:
\[f(x,x^{\prime}) =\iiiint f(x,x^{\prime},y,y^{\prime},\phi,w)\,dy\,dy^{\prime}\,d\phi\,dw,\] \[f(y,y^{\prime}) =\iiiint f(x,x^{\prime},y,y^{\prime},\phi,w)\,dx\,dx^{\prime}\,d\phi\,dw, \tag{1}\] \[f(\phi,w) =\iiiint f(x,x^{\prime},y,y^{\prime},\phi,w)\,dx\,dx^{\prime}\,dy\,dy^{\prime}.\]
Given only this information, the reconstruction must take the following maximum-entropy form [9]:
\[f(x,x^{\prime},y,y^{\prime},\phi,w)=f(x,x^{\prime})f(y,y^{\prime})f(\phi,w). \tag{2}\]
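To make the content of Eq. (2) concrete, the following Python sketch (our illustration, not code used in this work; the bin count and Gaussian projections are placeholders for measured data) builds the product-form reconstruction as an outer product of the three two-dimensional projections.

```python
import numpy as np

def maxent_reconstruction(f_xxp, f_yyp, f_phiw):
    """Eq. (2): f(x, x', y, y', phi, w) = f(x, x') f(y, y') f(phi, w)."""
    return np.einsum("ab,cd,ef->abcdef", f_xxp, f_yyp, f_phiw)

# Placeholder 8-bin Gaussian projections standing in for measured data:
u = np.exp(-0.5 * np.linspace(-3.0, 3.0, 8) ** 2)
proj = np.outer(u, u)
proj /= proj.sum()

f6 = maxent_reconstruction(proj, proj, proj)
print(f6.shape, np.isclose(f6.sum(), 1.0))  # (8, 8, 8, 8, 8, 8) True
```

By construction, this reconstruction contains no inter-plane correlations, which is precisely the limitation examined in this paper.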
Direct high-dimensional measurements have been demonstrated, albeit at low resolution and dynamic range [10]. The most immediate and straightforward use of such measurements is as a seed for macro-particle simulations. In this case, no analysis of the initial distribution is required. Alternatively, these measurements may be analyzed to identify features in high-dimensional phase space -- features invisible to typical diagnostics. This task is critical to fully understanding the limitations of Eq. (2) when predicting subsequent beam evolution. Additionally, explaining the origin of high-dimensional features may elucidate the dynamics upstream of the measurement plane. This is the path taken in the present study, which builds upon the following work.
The first six-dimensional phase space measurement characterized a 2.5 MeV H\({}^{-}\) ion beam generated by a radio-frequency quadrupole (RFQ) at the Spallation Neutron Source (SNS) Beam Test Facility (BTF) using four transverse slits, a dipole-slit energy spectrometer, and a bunch shape monitor (BSM) [10]. The resolution (\(\approx 11\) points per dimension) and dynamic range (\(\approx 10^{1}\)) were relatively low, even with 32 hours of measurement time; therefore, as part of a preliminary investigation, lower-dimensional scans were used to examine smaller regions of phase space. Masking the beam in the transverse plane before measuring the energy distribution -- measuring \(f(w\mid x\)=\(x^{\prime}\)=\(y\)=\(y^{\prime}\)=0) -- revealed a bimodal energy distribution near the transverse core. Importantly, this feature was not visible in the full projection \(f(w)\), which was unimodal. The correlation's five-dimensional nature was briefly explored by varying the number of slits inserted into the beam and by varying the location of a single slit with the others held fixed; both led to pronounced changes in the energy distribution. Repeating the measurement at different beam intensities demonstrated that space charge drives this dependence.
The transverse-longitudinal correlations observed in [10] were subsequently studied in [11]. The dependence of the longitudinal phase space on \(x\) and \(x^{\prime}\) was mapped by measuring \(f(x,x^{\prime},\phi,w\mid\bar{y}\)=0), where \(\bar{y}\) is the BSM wire position (corresponding approximately to \(y^{\prime}\) at the measurement plane). The measurements were also compared to an RFQ simulation, which predicted a similar dependence
of the energy distribution on the transverse coordinates. Following the argument in [10] that the longitudinal hollowing develops in the MEBT, particle-in-cell simulations were used in [12] to explore the longitudinal hollowing of a Gaussian beam during free expansion. These simulations illuminated the fact that hollowing is a natural consequence of charge redistribution caused by nonlinear space charge forces. However, in the "realistic" beam generated by the RFQ simulation, the correlations were already present at the end of the RFQ and showed little evolution in the MEBT. Therefore, it was concluded that this feature likely develops in the RFQ.
In this paper, we continue to refine our image of the initial phase space distribution in the BTF. In particular, we obtain a nearly complete description of the distribution by measuring \(f(x,x^{\prime},y,y^{\prime},w)\). This five-dimensional measurement, described in Section II, captures all significant inter-plane correlations in the initial beam1 and provides unprecedented detail: the resolution and dynamic range are sufficient to image sharp, high-dimensional features in low-density regions of phase space. To facilitate the complex task of feature identification in the five-dimensional phase space, we develop several analysis and visualization techniques in Section III, including non-planar slicing. In Section III.1, these techniques are used to re-examine the longitudinal hollowing described above. In Section III.2, we pivot to the transverse phase space and its dependence on the longitudinal coordinates, reporting a transverse hollowing that likely develops in the MEBT and is independent of the longitudinal hollowing in the RFQ. In Section IV, we discuss the use of five-dimensional measurements in future research.
Footnote 1: The lack of longitudinal focusing in the BTF results in rapid debunching; a strong linear correlation between the phase \(\phi\) and energy \(w\) develops before the first measurement station.
## II Five-dimensional phase space measurement
A detailed description of the BTF is available in [13]. The system consists of an RF-driven H\({}^{-}\) ion source, 65 keV low-energy beam transport (LEBT), and 402.5 MHz radio-frequency quadrupole (RFQ), all identical to the components in the SNS. These are followed by a 2.5 MeV medium-energy beam transport (MEBT) which is longer than the SNS design and contains no re-bunching cavities. The lattice ends with a 9.5-cell FODO transport line.
The BTF houses two measurement stations. The first is located 1.3 meters downstream of the RFQ; the second is located after the FODO line. Each station consists of four transverse slits (two horizontal, two vertical) and a 90-degree dipole bend followed by a scintillating screen, as shown in Fig. 1. In this setup, it is possible to measure the five-dimensional distribution \(f(x,x^{\prime},y,y^{\prime},w)\) using the screen and three upstream slits: one horizontal slit selects \(y\); two vertical slits select \(x\) and \(x^{\prime}\); \(y^{\prime}\) is a function of \(y\) and the vertical position on the screen, \(w\) is a function of \(x\), \(x^{\prime}\), and the horizontal position on the screen. The transformation from slit-screen coordinates to phase space coordinates is given in Eq. (A1). The measurement is efficient: two dimensions are measured in a single shot. The reduction in the number of scanning slits affords a higher resolution (\(>64\) points per dimension) and dynamic range (\(>10^{3}\)) than the six-dimensional measurement.
We will primarily examine a single measurement in this paper. A rectilinear scan pattern was employed with a linear correlation between \(x\) and \(x^{\prime}\) to align with the \(x\)-\(x^{\prime}\) distribution. The corners of the \(x\)-\(x^{\prime}\) grid were clipped, leading to a moderate reduction in scan time. The scan was performed as a series of "sweeps" in which the vertical slits were held stationary while the horizontal slit was moved continuously across the beam. During each sweep, the screen image was saved on each beam pulse (5 Hz repetition rate) in addition to scalar quantities such as the slit positions and beam current. The camera integral (image brightness) and beam current during the measurement are displayed in Fig. 2. The average current during the beam pulse was -25.57 mA, as measured by a beam current monitor (BCM04 in Fig. 1).
Images from the sweep containing the maximum camera integral are shown in Fig. 3, which corresponds to one spike in the inset panel of Fig. 2. All images were cropped, thresholded, and downscaled by a factor of three
Figure 1: Layout of the first 3.6 meters of the SNS-BTF MEBT, starting from the end of the RFQ. Shown are six quadrupoles (QH01, QV02, QH03, QV04, QH05, QV06), two vertical slits (VT04, VT06), two horizontal slits (HZ04, HZ06), a beam current monitor (BCM04), a 90-degree dipole (DH1), and a view screen (VS06).
Figure 2: Camera integral (from screen VS06) and beam current (from current monitor BCM04) during the five-dimensional measurement.
using local averaging. The resulting points and scalar values in five-dimensional slit-screen space were then linearly interpolated on a regular grid in five-dimensional phase space. After cropping, this procedure yielded a five-dimensional image of shape 69 \(\times\) 88 \(\times\) 69 \(\times\) 65 \(\times\) 55, with pixel dimensions 0.22 mm \(\times\) 0.21 mrad \(\times\) 0.37 mm \(\times\) 0.20 mrad \(\times\) 3.35 keV. This resolution approaches the limit dictated by the 0.2 mm slit widths.
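The image processing and gridding steps above can be summarized with a short Python sketch (our illustration; the array sizes, thresholds, and coordinate ranges are placeholders, and we assume the slit-screen readings have already been converted to phase space coordinates via the transformation of Eq. (A1)).

```python
import numpy as np
from scipy.interpolate import griddata

def downscale(image, factor=3, threshold=0.0):
    """Threshold, then downscale a 2D image by averaging factor x factor blocks."""
    image = np.where(image < threshold, 0.0, image)
    ny, nx = (s - s % factor for s in image.shape)
    blocks = image[:ny, :nx].reshape(ny // factor, factor, nx // factor, factor)
    return blocks.mean(axis=(1, 3))

print(downscale(np.random.rand(120, 90)).shape)  # (40, 30)

# Linear interpolation of scattered 5D measurements onto a regular grid.
# `points` stands in for (x, x', y, y', w) coordinates; `values` for densities.
rng = np.random.default_rng(1)
points = rng.uniform(-1.0, 1.0, size=(300, 5))
values = np.exp(-0.5 * (points**2).sum(axis=1))
axes = [np.linspace(-0.8, 0.8, 5)] * 5
mesh = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
f5 = griddata(points, values, mesh, method="linear", fill_value=0.0)
print(f5.shape)  # (5, 5, 5, 5, 5)
```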
We judge the uncertainty in the measurement to be relatively small: (i) The view screen could support an energy resolution of 0.3 keV (given the field of view and raw image resolution). This is slightly less than the estimated 0.4 keV resolution limit dictated by the finite vertical slit widths [11]. (ii) The \(y^{\prime}\) resolution limit is dominated by the width of the upstream horizontal slit, which contributes a point spread of 0.1 mrad. This is halved from the slit-slit geometry, as the slit-screen distance is more than twice the slit-slit distance. (iii) Slit-screen measurements of \(f(y,y^{\prime})\) agree with slit-slit measurements. (iv) The drift and variance in the beam current during the scan are negligible (Fig. 2), implying a small pulse-to-pulse variation of the phase space density. (v) The interpolation grid has nearly the same dimensions as the measurement grid. (vi) A two-dimensional projection of the five-dimensional measurement agrees with a separate two-dimensional measurement down to three orders of magnitude (Fig. 4). (vii) A fractional \(10^{-3.27}\) threshold relative to the global peak pixel value was applied to all images to remove noise. (viii) Only small systematic errors arise from the dipole strength calibration, image pixel size calibration, and lattice model geometry [11]. More quantitative uncertainty analyses of BTF measurements are contained in [14; 11].
## III Results
### Revisiting the dependence of the energy distribution on the transverse coordinates
Identifying and visualizing features in high-dimensional distributions is not straightforward [15]. Although metrics are available to compare two distributions to each other [16; 17; 18; 9; 9], it can be difficult to correlate these values with physical features. Visual inspection is a powerful tool but requires the distribution to be projected onto a one- or two-dimensional subspace.
The orthogonal one- and two-dimensional projections of the measured distribution are shown in a _corner plot_ in Fig. 5. No sharp features are visible, and all linear inter-plane correlations are negligible. One notable feature in the \(x\)-\(x^{\prime}\) and \(y\)-\(y^{\prime}\) projections is that the Twiss parameters in the core and tails/halo are dissimilar, which suggests
Figure 4: Logarithmic contours of \(f(x,x^{\prime})\) obtained from a seven-hour five-dimensional measurement (black) and seven-minute two-dimensional measurement (red) performed two weeks apart.
Figure 5: Corner plot of the measured five-dimensional phase space distribution. One-dimensional projections are displayed on the diagonal subplots. Two-dimensional projections are displayed on the off-diagonal subplots.
Figure 3: Processed camera images during one sweep. The vertical slits (\(x_{1}\), \(x_{2}\)) are held fixed while the horizontal slit (\(y_{1}\)) is scanned.
that a matched core could lead to a mismatched halo. Some of these projections can be examined with a much larger dynamic range, as demonstrated in [20].
The projections in Fig. 5 represent averages over large regions of phase space and do not fully describe the distribution.2 It is therefore critical to observe _partial projections_[10; 11], where a partial projection is a projection of the distribution within some constrained region of phase space. When the region is small, the information loss is minimized, and a local description of the distribution follows; many such regions must be compared to build a global description. The selected region may generally be called a _slice_. Slices are typically _planar_; in an \(n\)-dimensional space, a planar slice selects an \((n-m)\)-dimensional region defined by the intersection of \(m\) orthogonal \((n-1)\)-dimensional planes. In practice, infinitely thin slices are not possible; for example, in the measurement described here, the slice width is limited by the physical slit widths. Thus, a planar slice is more practically defined as the intersection of orthogonal \(n\)-dimensional slabs.
Footnote 2: It is helpful to observe the wealth of information contained in the two-dimensional projections in Fig. 5 relative to the one-dimensional projections. This suggests that the information lost during the transition from five/six dimensions to two dimensions could be significant.
There is significant and largely unexplored freedom here, both in the slice construction and in the visualization of the resulting partial projections. We will revisit the previously observed longitudinal hollowing in the transverse core of the beam to accentuate this freedom. As mentioned in Section I, this feature has thus far been examined by observing the energy distribution within a planar slice centered on the origin in transverse phase space, collapsing the slice dimensions one by one as in Fig. 6. We suggest two approaches to more comprehensively visualize this feature in five-dimensional phase space.3
Footnote 3: Each subplot in Fig. 6 represents a different subspace, ranging from two-dimensional to five-dimensional from left to right (neglecting the finite slice widths).
The first approach leverages the fact that an \(n\)-dimensional image is an \((n-2)\)-dimensional array of two-dimensional images. When \(n=3\), the images can be arranged in a row. When \(n=4\), the images can be arranged in a grid [11]. In Fig. 7, we follow this approach to examine the slice \(f(x^{\prime},y,y^{\prime},w\mid x\)=0). In the main panel, the \(y\)-\(w\) distribution is plotted as a function of \(x^{\prime}\) and \(y^{\prime}\). The bimodal energy distribution is visible near \(x^{\prime}=y^{\prime}=0\) (seventh row/column) but quickly disappears as one moves away from the sharp peak in the \(x^{\prime}\)-\(y^{\prime}\) distribution. The \(y\)-\(w\) distribution in these low-density regions is somewhat complex and challenging to interpret.4 We also display the three-dimensional and two-dimensional marginal distributions on the bottom/right panels of the figure. These marginal distributions highlight the information lost by integrating over momentum space. The energy hollowing is still present in the marginal distributions, but is not as pronounced; this is consistent with Fig. 6.
Footnote 4: Note that a linear correlation exists between \(y\) and \(y^{\prime}\); this explains the shifting location of the first-order \(y\) moment as \(y^{\prime}\) varies.
We stress that Fig. 7, which we call a _slice matrix plot_, still excludes a significant amount of information. First, only a fraction of the indices along the sliced dimensions are shown. Second, since the distribution is five-dimensional, one is tasked with observing a three-dimensional array of \(y\)-\(w\) images; thus, one should vary the slice location along the fifth dimension (\(x\), in this case). Third, a separate set of figures can be produced for each of the ten pair-wise relationships in the data set. These considerations can lead to a proliferation of figures, and the problem is worse in six dimensions. Nonetheless, the combination of several slice matrix plots for one or more carefully selected four-dimensional slices can be an effective tool to reveal the internal structure of a high-dimensional distribution.
A second approach utilizes non-planar slices. Consider a slice of a distribution \(f(x_{1},x_{2},\ldots,x_{n})\) defined by the intersection of \(m\) perpendicular slabs, where \(1<m<n\) and slab \(i\in[1,m]\) is defined by \(|x_{i}|<=\Delta_{i}/2\) for finite width \(\Delta_{i}\). Let us refer to the \(x_{1}\)-\(\ldots\)-\(x_{m}\) plane as subspace \(A\) and the \(x_{m+1}\)-\(\ldots\)-\(x_{n}\) plane as subspace \(B\). In subspace \(A\), the intersection defines an \(m\)-dimensional box of volume \(V_{A}=\prod_{i}^{m}\Delta_{i}\). Instead of a box, one might consider an ellipsoid (perhaps defined by the covariance matrix of \(f(x_{1},\ldots,x_{m})\)) or a more general boundary (perhaps defined by the density contours of \(f(x_{1},\ldots,x_{m})\)). It is also possible to nest two such boundaries and select the region between them; we call this a _shell slice_. Fig. 8 illustrates these options. In all cases, if the volume enclosed by the boundary goes to zero, we recover an \((n-m)\)-dimensional planar slice. Note that for planar slices, it is generally advantageous to minimize \(V_{A}\), but it may be advantageous to inflate the volume of non-planar slices.
Non-planar slices are well-suited to illuminate features in subspace \(B\) that depend on the distance from the origin in subspace \(A\). In particular, they are natural choices for demarcating the core and halo regions of the distribution [21]. There are many possibilities when applying these slices in six dimensions. (For example, one
Figure 6: Energy distribution within planar slices in transverse phase space. Each slice is obtained by fixing the indices along the specified axes of the five-dimensional image. Each profile is normalized by area. (This figure mirrors Fig. 5 in [10].)
could select only those particles within the root-mean-square (RMS) ellipse in the two-dimensional longitudinal phase space and outside the \(10^{-3}\) density contour in the four-dimensional transverse phase space, isolating the transverse halo in the longitudinal core.) In the case at hand, the energy distribution appears to have a radial dependence in transverse phase space, but it is clear that the transverse distribution does not have ellipsoidal symmetry. Therefore, we let the density contours of \(f(x,x^{\prime},y,y^{\prime})\) define the slices. Each curve in Fig. 9
Figure 7: Dependence of the \(y\)-\(w\) distribution on \(x^{\prime}\) (columns) and \(y^{\prime}\) (rows) near \(x=0\). Upper left: \(f(x^{\prime},y,y^{\prime},w\mid x\)=0); upper right: \(f(y,y^{\prime},w\mid x\)=0); lower left: \(f(x^{\prime},y,w\mid x\)=0); lower right: \(f(y,w\mid x\)=0). The color scale is linear and is not shared between frames. The axis limits are shared. 13/88 indices are selected along the \(x^{\prime}\) axis, and 13/65 indices are selected along the \(y^{\prime}\) axis. Each image is centered on the \(y\)-\(w\) origin. The sliced dimension label \(x^{\prime}\) is located at \(x^{\prime}=0\), with \(x^{\prime}\) increasing from left to right; the sliced dimension label \(y^{\prime}\) is located at \(y^{\prime}=0\), with \(y^{\prime}\) increasing from bottom to top.
is the energy distribution within a shell defined by the region between two such nested contours.
Fig. 9 is compact but useful in describing the extent of the hollow energy core in the \(x\)-\(x^{\prime}\)-\(y\)-\(y^{\prime}\) plane. The energy distribution transitions smoothly from unimodal to bimodal when moving from the low- to high-density contours. If the core is defined as the region in which \(f(x,x^{\prime},y,y^{\prime})>10^{-2}\), then the first slice selects the region outside the core, and subsequent slices select regions inside the core. (For reference, the 0.22 contour encloses one-fifth of the beam particles.) The two-dimensional projections of the lowest-density slice are shown in the top half of Fig. 9. This slice essentially forms a contour-shaped shell around the beam core in the four-dimensional transverse phase space.
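The contour shell slices of Fig. 9 are equally simple to express. In the sketch below (our illustration; a toy Gaussian replaces the measured image, and the contour level and width are placeholders), transverse bins are selected by their normalized four-dimensional density, and the surviving part of the image is projected onto the energy axis.

```python
import numpy as np

def shell_energy_profile(f5, level, width=0.01):
    """Energy profile within a contour shell slice of f(x, x', y, y')."""
    f4 = f5.sum(axis=-1)
    f4 = f4 / f4.max()                        # normalize density to [0, 1]
    shell = (f4 >= level) & (f4 <= level + width)
    profile = f5[shell].sum(axis=0)           # sum over selected 4D bins
    return profile / profile.sum()

# Toy 5D Gaussian image with assumed axis order (x, x', y, y', w):
t = np.linspace(-2.0, 2.0, 12)
w = np.linspace(-2.0, 2.0, 25)
X = np.meshgrid(t, t, t, t, w, indexing="ij")
f5 = np.exp(-0.5 * sum(v**2 for v in X))
print(shell_energy_profile(f5, level=0.5, width=0.05).shape)  # (25,)
```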
Since non-planar slices naturally identify the beam core and halo in high-dimensional phase space, they may be useful in future analyses, especially when the distribution lacks ellipsoidal symmetry. (One extension of the analysis shown here would be to vary the thickness of the shells -- averaging over a larger/smaller volume. Another extension would be to define the slices in a three-dimensional space; for example, viewing the \(y\)-\(w\) distribution within contour slices in \(x\)-\(x^{\prime}\)-\(y^{\prime}\) space.)
### Charge redistribution and core hollowing in the transverse plane
We now explore the transverse phase space distribution and its dependence on the longitudinal parameters. The five-dimensional measurement has revealed an asymmetric, longitudinally dependent hollowing of the transverse charge distribution, shown in Fig. 10.
Some insight into the \(x\)-\(y\) distribution can be gained by considering the four-dimensional transverse phase space. To this end, Fig. 11 shows the dependence of \(x\)-\(x^{\prime}\) on the vertical coordinates, and Fig. 12 shows the dependence of \(y\)-\(y^{\prime}\) on the horizontal coordinates, both within a central energy slice.
It is clear from these figures that a hollow \(x\) or \(y\) distribution is associated with the nonlinear tails of an "s"-shaped \(x\)-\(x^{\prime}\) or \(y\)-\(y^{\prime}\) distribution. The asymmetric \(x\)-\(y\) hollowing is explained as follows: after integration over \(y^{\prime}\) (bottom row in Fig. 11), the \(x\)-\(x^{\prime}\) distribution near \(y=0\) is oriented such that the \(x\) projection is bimodal; after integration over \(x^{\prime}\) (bottom row of Fig. 12), the \(y\)-\(y^{\prime}\) distribution near \(x=0\) is oriented such that the \(y\) projection is not bimodal.
The main panels of Fig. 11 and Fig. 12 indicate that there are inter-plane relationships in the transverse phase space distribution that are hidden by full projections. The orientation of the \(x\)-\(x^{\prime}\) distribution depends on the vertical phase space coordinates, and vice versa: the vertical distribution is diverging (converging) inside (outside) the \(x\)-\(x^{\prime}\) core. The shape of the \(x\)-\(x^{\prime}\) distribution depends on the vertical phase space coordinates, and vice versa: the "s" shape in one phase plane is most distinct near the origin in the other phase plane.
One curious feature is the apparent "splitting" of phase space near the beam edge. This is visible in both Fig. 11 and Fig. 12 (for example, the frames at (row, column) = (6, 2), (3, 4) in Fig. 11). This apparently exotic splitting is a straightforward consequence of using planar slices
Figure 8: Several possible slice geometries. Each slice selects the shaded region of space.
Figure 10: Dependence of the \(x\)-\(y\) distribution on \(w\). The color scale is linear and is not shared between subplots. Faint dashed lines indicate the location of each slice on the energy axis. The full energy projection \(f(w)\) is shown on the bottom subplot.
Figure 9: Bottom: energy distribution within contour shell slices in the \(x\)-\(x^{\prime}\)-\(y\)-\(y^{\prime}\) plane. The slice at level \(l\) selects the region \(l\leq f(x,x^{\prime},y,y^{\prime})\leq l+0.01\), with \(f(x,x^{\prime},y,y^{\prime})\) normalized to the range \([0,\,1]\). Top: two-dimensional transverse projections of the lowest density slice.
to examine a four-dimensional phase space distribution with nonlinear inter-plane correlations. It should also be noted that this is a minor feature of the distribution, accentuated only by the variable color scale per subplot: the peak density in frame (3, 4) is less than 1% of the peak density across all frames.
We suggest that the transverse hollowing in the BTF is driven by nonlinear space charge forces in the MEBT, after the RFQ, and is independent of the longitudinal hollowing that develops in the RFQ. This suggestion is
Figure 11: Dependence of the \(x\)-\(x^{\prime}\) distribution on \(y\) (columns) and \(y^{\prime}\) (rows) near \(w=0\). Upper left: \(f(x,x^{\prime},y,y^{\prime}\mid w=0)\); upper right: \(f(x,x^{\prime},y^{\prime}\mid w=0)\); lower left: \(f(x,x^{\prime},y\mid w=0)\); lower right: \(f(x,x^{\prime}\mid w=0)\). The one-dimensional projection onto the \(x\) axis is plotted as a white line. The color scale is linear and is not shared between frames. The axis limits are shared. 13/69 indices are selected along the \(y\) axis, and 13/65 indices are selected along the \(y^{\prime}\) axis. Each image is centered on the \(x\)-\(x^{\prime}\) origin. The sliced dimension label \(y\) is located at \(y=0\), with \(y\) increasing from left to right; the sliced dimension label \(y^{\prime}\) is located at \(y^{\prime}=0\), with \(y^{\prime}\) increasing from bottom to top.
based on particle-in-cell simulations of the beam evolution, described below.
Our simulation procedure is described in detail in [11]; we mention only the basic parameters here. The input bunch at the MEBT entrance was predicted using a PARMTEQ [22] model of the RFQ. The input to the PARMTEQ simulation was based on two-dimensional phase space measurements in the LEBT at 50 mA beam current. The RFQ vane voltage was increased by 9% over the design value of 83 kV based on preliminary
Figure 12: Dependence of the \(y\)-\(y^{\prime}\) distribution on \(x\) (columns) and \(x^{\prime}\) (rows) near \(w=0\). Upper left: \(f(x,x^{\prime},y,y^{\prime}\mid w=0)\); upper right: \(f(x^{\prime},y,y^{\prime}\mid w=0)\); lower left: \(f(x,y,y^{\prime}\mid w=0)\); lower right: \(f(y,y^{\prime}\mid w=0)\). The one-dimensional projection onto the \(y\) axis is plotted as a white line. The color scale is linear and is not shared between frames. The axis limits are shared. 13/69 indices are selected along the \(x\) axis, and 13/88 indices are selected along the \(x^{\prime}\) axis. Each image is centered on the \(y\)-\(y^{\prime}\) origin. The sliced dimension label \(x\) is located at \(x=0\), with \(x\) increasing from left to right; the sliced dimension label \(x^{\prime}\) is located at \(x^{\prime}=0\), with \(x^{\prime}\) increasing from bottom to top.
results from x-ray spectrometry, which increased both transverse emittances by approximately 7% at the RFQ exit. The predicted RFQ transmission was 84%, resulting in a 42 mA beam current in the MEBT (Footnote 5). A PyORBIT [23] model was used to propagate the bunch 1.3 meters from the RFQ exit to the first horizontal slit (HZ04), a distance including four quadrupole magnets for which a hard-edge model was used. Space charge kicks were applied every 2.5 millimeters using an FFT Poisson solver on a \(64\times 64\times 64\) mesh with \(8.6\times 10^{6}\) macro-particles. Fig. 13a shows the simulated evolution, along with an RMS-equivalent Gaussian distribution in Fig. 13b and an RMS-equivalent Waterbag distribution in Fig. 13c.
Footnote 5: The transmission of the RFQ used in this study is lower than its design value due to gradual performance degradation during fifteen years of operation in the SNS. The exact reason for this degradation is unknown.
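The simulation just described follows the standard split-operator pattern of drift steps interleaved with external-field and space-charge kicks. The toy sketch below is entirely our own illustration, a one-dimensional sheet-beam model with invented parameters; it is not the PyORBIT model or its API, and the 1D gridded field stands in for the 3D FFT Poisson solve:

```python
import numpy as np

# Toy split-operator transport loop with gridded space-charge kicks
# every 2.5 mm. Illustrative only; all parameter values are made up.
rng = np.random.default_rng(0)
N, L, ds = 50_000, 1.3, 0.0025            # particles, length [m], step [m]
k0, K = 40.0, 2e-4                        # focusing [1/m^2], toy perveance
x = rng.normal(0.0, 1e-3, N)              # position [m]
xp = rng.normal(0.0, 1e-3, N)             # divergence [rad]

def emittance(x, xp):
    return np.sqrt(np.mean(x * x) * np.mean(xp * xp) - np.mean(x * xp) ** 2)

eps0 = emittance(x, xp)
edges = np.linspace(-8e-3, 8e-3, 65)      # 64-cell charge deposition grid
for _ in range(int(L / ds)):
    x += xp * ds / 2                      # half drift
    xp -= k0 * x * ds                     # external (linear) focusing kick
    rho, _ = np.histogram(x, bins=edges)  # deposit charge on the grid
    E = K * (np.cumsum(rho) / N - 0.5)    # 1D sheet-beam self-field
    idx = np.clip(np.digitize(x, edges) - 1, 0, 63)
    xp += E[idx] * ds                     # nonlinear space-charge kick
    x += xp * ds / 2                      # half drift

print(f"rms emittance growth factor: {emittance(x, xp) / eps0:.3f}")
```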
A detailed study of the beam dynamics is beyond the scope of this paper; we briefly note the following conclusions drawn from the simulations.
1. The transverse hollowing is qualitatively reproduced.
2. The transverse hollowing develops in the MEBT regardless of the correlations that develop in the RFQ. This is supported by repeating the simulation after decorrelating the initial bunch by randomly permuting \(x\)-\(x^{\prime}\), \(y\)-\(y^{\prime}\), and \(\phi\)-\(w\) coordinate pairs (a minimal sketch of this permutation is given just after this list).
3. The transverse hollowing is driven by nonlinear space charge forces. This is supported by comparing Fig. 13b and Fig. 13c: the hollowing (and resulting emittance growth) in the Waterbag distribution is reduced relative to the less-uniform Gaussian distribution. Similar projected phase space densities have been observed in the simulated transport of an out-of-equilibrium four-dimensional Waterbag distribution in an alternate-gradient focusing channel; see Fig. 5 in [24]. Of course, the details of the evolution are sensitive to the initial beam perveance, emittance, and the lattice focusing strength.
4. The asymmetry in the \(x\)-\(y\) hollowing is primarily due to the vertical beam waist in the early MEBT. The round initial beam, which is diverging horizontally and converging vertically, passes a vertical waist before the first quadrupole, then expands in both planes. The horizontal emittance grows most rapidly just after this waist, while the vertical emittance shrinks, presumably due to coupling between the planes. If the initial \(x\) and \(y\) beam divergences are exchanged (\(x^{\prime}\rightarrow-x^{\prime}\), \(y^{\prime}\rightarrow-y^{\prime}\)), the hollowing is seen in \(y\), not \(x\), with an associated larger vertical emittance growth. The dependence on the exact pattern of alternate-gradient focusing is weak.
5. The second-order moments disagree with the measurement -- for example, the RMS emittances differ by over 15% -- even if the beam current is artificially decreased to the measured value of 25.5 mA. This is expected based on previous longitudinal benchmarks [11]. A more detailed comparison with measurements is contained in [25].
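The decorrelation test in item 2 is simple to state precisely. The following is a minimal sketch (our own, not the paper's analysis code), where `coords` is an \((N,6)\) array of macro-particle coordinates ordered \((x,x^{\prime},y,y^{\prime},\phi,w)\):

```python
import numpy as np

# Independently permute the (x, x'), (y, y'), and (phi, w) pairs across
# particles. Each two-dimensional phase space distribution is preserved
# exactly, while all correlations between the three planes are destroyed.
def decorrelate(coords, seed=0):
    rng = np.random.default_rng(seed)
    out = np.empty_like(coords)
    for pair in ([0, 1], [2, 3], [4, 5]):
        perm = rng.permutation(len(coords))
        out[:, pair] = coords[perm][:, pair]
    return out
```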
The simulations described above support the claim that the transverse hollowing is driven by nonlinear space charge forces in the MEBT. It is difficult to verify this experimentally without a current-attenuating grid immediately after the RFQ. Instead, we repeated a five-dimensional measurement at a lower beam current extracted from the ion source. This mirrors previous efforts to verify the space-charge-dependence of the longitudinal hollowing [10]. Fig. 14 shows that no transverse hollowing occurs at this lower beam current. Although the low-current five-dimensional distribution is not hollow, it is rich in structure, presumably due to the lack of smoothing by strong space charge. We leave the investigation of this low-current distribution as future work.
## IV Discussion
In summary, we have used five-dimensional measurements to enhance our image of the initial phase space distribution in the SNS-BTF. We developed several high-dimensional visualization techniques and used them to re-examine the longitudinal hollowing in the transverse core. We also reported a transverse hollowing in the longitudinal core and explained its origin: simulations suggest that this feature is driven by nonlinear space charge forces in the MEBT, independent of the longitudinal hollowing that develops in the RFQ. We examined both features in considerable detail, leveraging the resolution and dynamic range of the five-dimensional measurement. Neither feature is visible in the two-dimensional projections of the distribution. Our data is further evidence that the three phase planes -- \(x\)-\(x^{\prime}\), \(y\)-\(y^{\prime}\), \(\phi\)-\(w\) -- are (nonlinearly) correlated in real beams.
A longstanding goal in accelerator physics is to predict the beam evolution at the halo level, which we expect will hinge on (i) improving the accuracy of the accelerator lattice model and (ii) generating a more realistic initial bunch. Five-dimensional measurements at the end of the BTF beamline will serve as precise benchmarks and help address (i). To address (ii), direct six-dimensional phase space measurements are the gold standard. However, their demonstrated resolution and dynamic range are quite low.6 In the BTF, five-dimensional measurements may be able to serve as a proxy for six-dimensional
measurements. For reasons described in Footnote 1, it is likely that a reconstruction from \(\{f(x,x^{\prime},y,y^{\prime},w),\)\(f(\phi,w)\}\) would be quite accurate. The reconstruction would ideally be treated using the principle of entropy maximization (MENT) [26]; a six-dimensional MENT solver could be adapted from one of several existing algorithms [27, 9]. Alternative reconstruction approaches which incorporate low-resolution six-dimensional measurements may also be possible [28]. In our case, it may suffice to sample from the five-dimensional distribution,
Figure 14: No transverse hollowing is apparent at the center of the energy distribution in the low-current (7 mA) measurement. This figure is equivalent to Fig. 10, which shows the 26 mA case.
Figure 13: Simulated transport of a 42 mA bunch in the BTF MEBT from the RFQ exit to the measurement plane (HZ04). (a) PARMTEQ-generated initial distribution. (b) Gaussian distribution, RMS-equivalent to (a) in the \(x\)-\(x^{\prime}\), \(y\)-\(y^{\prime}\), and \(z\)-\(w\) planes. (c) Waterbag distribution, RMS-equivalent to (a) in the \(x\)-\(x^{\prime}\), \(y\)-\(y^{\prime}\), and \(z\)-\(w\) planes. The top three rows show snapshots of \(f(x,y\mid z\approx 0)\), \(f(x,x^{\prime}\mid z\approx 0)\), and \(f(y,y^{\prime}\mid z\approx 0)\) at three locations in the lattice. Each distribution was normalized such that \(\langle xx\rangle=\langle yy\rangle=1\) and \(\langle xx^{\prime}\rangle=\langle yy^{\prime}\rangle=0\), where \(\langle\dots\rangle\) represents the average over the distribution. Each set of contour lines was obtained by binning the coordinates on a \(75\times 75\) grid, then smoothing the resulting image using a Gaussian filter with \(\sigma=1.25\). Each set of contour lines ranges from 0.005 to 1.0 as a fraction of the peak density. The bottom two rows display the evolution of the RMS beam sizes (\(\tilde{x}=\sqrt{\langle xx\rangle}\), \(\tilde{y}=\sqrt{\langle yy\rangle}\), \(\tilde{z}=\sqrt{\langle zz\rangle}\)) and relative growth in RMS emittances (\(\varepsilon_{x}=\sqrt{\langle xx\rangle\langle x^{\prime}x^{\prime}\rangle-\langle xx^{\prime}\rangle^{2}}\), \(\varepsilon_{y}=\sqrt{\langle yy\rangle\langle y^{\prime}y^{\prime}\rangle-\langle yy^{\prime}\rangle^{2}}\), \(\varepsilon_{z}=\sqrt{\langle zz\rangle\langle ww\rangle-\langle zw\rangle^{2}}\)). Here we use the position \(z\) instead of the phase \(\phi\).
then assume a linear relationship between \(\phi\) and \(w\), plus some phase width.
Our work may also be useful for high-dimensional phase space tomography -- the reconstruction of a four- or six-dimensional distribution from two-dimensional projections. There are various challenges in extending tomographic algorithms to six dimensions, mainly due to memory limitations, but also due to uncertainty in the set of transformations necessary to accurately reconstruct a high-dimensional distribution [29; 30; 31; 32; 33; 34; 9]. The accuracy of reconstruction algorithms has primarily been evaluated by comparing the two-dimensional projections of the reconstruction to the ground truth; it is an open question whether the high-dimensional features presented herein can be recovered. Direct measurements could serve as valuable benchmarks. Although the manipulations necessary for six-dimensional tomography are not possible in the BTF, our five-dimensional measurement data [35] could be used as a benchmark in a simulated reconstruction.
Finally, we note that the high-dimensional analysis and visualization techniques described here could be applied to fully-correlated distributions generated by particle-in-cell simulations. In this case, the dynamic range and resolution are determined by the number of macroparticles.
## V Acknowledgements
The authors acknowledge the contribution of the SNS operators in enabling long (16+ hours) periods of continuous measurement. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics. This manuscript has been authored by UT Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. This research used resources at the Spallation Neutron Source, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan ([http://energy.gov/downloads/doe-public-access-plan](http://energy.gov/downloads/doe-public-access-plan)).
## Appendix A Transformation from slit-screen coordinates to phase space coordinates
The following transformation from five-dimensional slit-screen coordinates to phase space coordinates is obtained by assuming linear optics in the measurement region [14]:
\[\begin{split} x&=x_{1},\\ y&=y_{1},\\ x^{\prime}&=\frac{x_{2}-x_{1}}{L_{1}},\\ y^{\prime}&=\frac{y_{3}-y_{1}}{L_{1}+L_{2}+\rho+L _{3}},\\ \delta&=\frac{1}{\rho+L_{3}}\left(x_{3}+\frac{L_{3}} {\rho}x-\left(\rho-\frac{(L_{1}+L_{2})L_{3}}{\rho}\right)x^{\prime}\right). \end{split} \tag{10}\]
\(L_{1}\) is the slit-slit spacing (HZ04-HZ06, VT04-VT06); \(L_{2}\) is the slit-dipole drift length (VT06-DH1); \(L_{3}\) is the dipole-screen drift length (DH1-VS06); \(\rho\) is the dipole bend radius; \(x_{1}\) is the position of the first vertical slit (VT04); \(x_{2}\) is the position of the second vertical slit (VT06); \(y_{1}\) is the position of the horizontal slit (HZ04); \(y_{3}\) is the vertical position on the screen; \(x_{3}\) is the horizontal position on the screen; \(1+\delta=p/p_{0}\), where \(p\) is the momentum and \(p_{0}\) is the momentum of the synchronous particle. It is then straightforward to compute the energy deviation \(w\) from \(\delta\).
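For convenience, the transformation can be transcribed directly into code (our own helper; the drift lengths and bend radius are apparatus constants that must be supplied from the beamline geometry, and all lengths are assumed to share one unit):

```python
# Direct transcription of the slit-screen to phase space transformation
# above, assuming linear optics in the measurement region. Works
# elementwise on scalars or NumPy arrays.
def slit_screen_to_phase_space(x1, x2, y1, x3, y3, L1, L2, L3, rho):
    x = x1
    y = y1
    xp = (x2 - x1) / L1
    yp = (y3 - y1) / (L1 + L2 + rho + L3)
    delta = (x3 + (L3 / rho) * x
             - (rho - (L1 + L2) * L3 / rho) * xp) / (rho + L3)
    return x, xp, y, yp, delta
```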
|
2308.08249 | The asymptotic behavior of the Bergman kernel on pseudoconvex model
domains | In this paper, we investigate the asymptotic behavior of the Bergman kernel
at the boundary for some pseudoconvex model domains. This behavior can be
described by the geometrical information of the Newton polyhedron of the
defining function of the respective domains. We deal with not only the finite
type cases but also some infinite type cases. | Joe Kamimoto | 2023-08-16T09:33:58Z | http://arxiv.org/abs/2308.08249v1 | # The asymptotic behavior of the Bergman kernel on pseudoconvex model domains
###### Abstract.
In this paper, we investigate the asymptotic behavior of the Bergman kernel at the boundary for some pseudoconvex model domains. This behavior can be described by the geometrical information of the Newton polyhedron of the defining function of the respective domains. We deal with not only the finite type cases but also some infinite type cases.
## 1. Introduction
Let \(\Omega\) be a domain in \(\mathbb{C}^{n}\) and let \(A^{2}(\Omega)\) be the Hilbert space of the \(L^{2}\)-holomorphic functions on \(\Omega\). The _Bergman kernel_\(B_{\Omega}(z)\) of \(\Omega\) (on the diagonal) is defined by \(B_{\Omega}(z)=\sum_{\alpha}|\phi_{\alpha}(z)|^{2}\), where \(\{\phi_{\alpha}\}\) is a complete orthonormal basis of \(A^{2}(\Omega)\). Throughout this paper, we assume that the boundary \(\partial\Omega\) of \(\Omega\) is always \(C^{\infty}\)-smooth.
Since the behavior of the Bergman kernel at the boundary plays essentially important roles in the study of several complex variables and complex geometry, many interesting results about its behavior have been obtained.
In the case of strictly pseudoconvex domains, a beautiful asymptotic expansion of the Bergman kernel was given by C. Fefferman [11] and Boutet de Monvel and Sjostrand [4]. In the case of weakly pseudoconvex domains, many kinds of important results have been obtained (see the references in [15], [5], etc.). In particular, in the two-dimensional finite type case, an asymptotic expansion analogous to that of Fefferman was recently given by Hsiao and Savale [14]. On the other hand, in the higher dimensional case, comparably strong and general results do not seem to be available. In [15], the author investigated a special case of pseudoconvex model domains and computed some asymptotic expansion of the Bergman kernel. The purpose of this paper is to generalize the results in [15].
In [15], only the finite type case was dealt with, while more general cases will be considered in this paper; for example, some infinite type cases can be also dealt with. Some two-dimensional infinite type cases have been precisely investigated in [2], [3]. We will consider higher dimensional cases, which are more complicated.
From the results in [15], [7], [6], [8], [5], it might be recognized that the information of the boundary from the viewpoint of singularity theory is valuable for the exact analysis of the Bergman kernel in the higher dimensional weakly pseudoconvex case.
In particular, the _Newton polyhedron_ determined from the boundary contains fruitful information for the singularity of the Bergman kernel at the boundary.
One of the difficulties of the analysis in the infinite type case is caused by the existence of non-zero _flat functions_ (see Section 2.1). Notice that flat functions do not affect the geometry of the Newton polyhedron, and their influence on the singularity of the Bergman kernel is subtle. However, this influence cannot always be neglected. In the main theorem (Theorem 2), we will give a condition on the geometry of the Newton polyhedron that determines when the above influence of flat functions is negligible in a certain sense.
This paper is organized as follows. In Section 2, we state the main theorem and explain its significance. In Section 3, we exhibit an integral formula of the Bergman kernel given by F. Haslinger [12], [13], on which our analysis is based. In Section 4, we show that the singularity of the Bergman kernel can be completely determined by the local geometry of the boundary. In our analysis, it is necessary to consider various kinds of \(C^{\infty}\) functions, but the \(C^{\infty}\) class contains many troublesome functions. In [18], [19], a certain class of \(C^{\infty}\) functions, called the \(\hat{\mathcal{E}}\)-class, is introduced by the use of Newton polyhedra, which is easy to deal with. Moreover, we also introduce a class analogous to the \(\hat{\mathcal{E}}\)-class in the complex variables case in Section 5. In Section 6, we investigate the singularity of the Bergman kernel in the \(\hat{\mathcal{E}}\)-case. In Section 7, the main theorem is proved by the use of the results in the \(\hat{\mathcal{E}}\)-case. The behavior of some Laplace type integrals is a key ingredient in the analysis in Section 6. The work of Varchenko in [23] (see also [1]) concerning local zeta functions and oscillatory integrals plays crucial roles in the investigation of the above behavior. In Section 8, we explain some results in [9], [18], [19], which generalize the above Varchenko's results, and apply these results to the analysis of the Bergman kernel. In the last section, we will explain some important words and concepts.
_Notation and symbols._
* We denote by \(\mathbb{N}\), \(\mathbb{Z}\), \(\mathbb{R}\), \(\mathbb{C}\) the set consisting of all natural numbers, integers, real numbers, complex numbers, respectively. Moreover, we denote by \(\mathbb{Z}_{+}\), \(\mathbb{R}_{+}\) the set consisting of all nonnegative integers, real numbers, respectively. For \(s\in\mathbb{C}\), \(\mathrm{Im}(s)\) expresses the imaginary part of \(s\).
* Let \(\alpha:=(\alpha_{1},\ldots,\alpha_{n})\), \(\beta:=(\beta_{1},\ldots,\beta_{n})\in\mathbb{Z}_{+}^{n}\). Multi-indices will be used as follows. For \(x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\), define \[x^{\alpha}=x_{1}^{\alpha_{1}}\cdots x_{n}^{\alpha_{n}},\quad\|x\|_{\mathbb{R}}=\sqrt{x_{1}^{2}+\cdots+x_{n}^{2}}.\] For \(z=(z_{1},\ldots,z_{n}),\ \bar{z}=(\bar{z}_{1},\ldots,\bar{z}_{n})\in\mathbb{C}^{n}\), define \[z^{\alpha}:=z_{1}^{\alpha_{1}}\cdots z_{n}^{\alpha_{n}},\ \bar{z}^{\beta}:=\bar{z}_{1}^{\beta_{1}}\cdots\bar{z}_{n}^{\beta_{n}},\ |z|^{2\alpha}:=|z_{1}|^{2\alpha_{1}}\cdots|z_{n}|^{2\alpha_{n}},\] \[\|z\|=\sqrt{|z_{1}|^{2}+\cdots+|z_{n}|^{2}}.\]
* For a positive number \(R\), we denote \[D_{\mathbb{R}}(R)=\{x\in\mathbb{R}^{n}:\|x\|_{\mathbb{R}}<R\},\quad D(R)=\{z\in \mathbb{C}^{n}:\|z\|<R\}.\]
## 2. Main results
### Newton data
In this paper, many concepts in _convex geometry_ play useful roles. We will explain the exact meanings of necessary words in convex geometry in Section 9.2 (see also [24]).
Let \(F\) be a real-valued \(C^{\infty}\) function defined near the origin in \(\mathbb{C}^{n}\). Let
\[\sum_{\alpha,\beta\in\mathbb{Z}^{n}_{+}}C_{\alpha\beta}z^{\alpha}\bar{z}^{ \beta}=\sum_{\alpha,\beta\in\mathbb{Z}^{n}_{+}}C_{\alpha\beta}z_{1}^{\alpha_{ 1}}\cdots z_{n}^{\alpha_{n}}\bar{z}_{1}^{\beta_{1}}\cdots\bar{z}_{n}^{\beta_{ n}}\]
be the Taylor series of \(F\) at the origin. The _support of \(F\)_ is the set \(S_{F}=\{\alpha+\beta\in\mathbb{Z}^{n}_{+}:C_{\alpha\beta}\neq 0\}\) and the _Newton polyhedron of_\(F\) is the integral polyhedron
\[\mathcal{N}_{+}(F)=\text{ the convex hull of the set }\bigcup\{\alpha+\beta+ \mathbb{R}^{n}_{+}:\alpha+\beta\in S_{F}\}\text{ in }\mathbb{R}^{n}_{+}.\]
We say that \(F\) is _flat_ if \(\mathcal{N}_{+}(F)=\emptyset\) and that \(F\) is _convenient_ if \(\mathcal{N}_{+}(F)\) intersects all the axes. For a compact face \(\gamma\) of \(\mathcal{N}_{+}(F)\), the \(\gamma\)-part of \(F\) is defined by
\[F_{\gamma}(z)=\sum_{\alpha+\beta\in\gamma\cap\mathbb{Z}^{n}_{+}}C_{\alpha\beta }z^{\alpha}\bar{z}^{\beta}.\]
We define a quantity \(\rho_{F}\) (\(\in\mathbb{Z}_{+}\)) as follows. When \(F\) is convenient, let
\[\rho_{F}:=\max\{\rho_{j}(F):j=1,\ldots,n\},\]
where
\[\rho_{j}(F):=\min\{t\geq 0:(0,\ldots,\overset{(j)}{t},\ldots,0)\in\mathcal{N}_{ +}(F)\}.\]
When \(F\) is not convenient, let \(\rho_{F}:=\infty\).
Hereafter, we assume that \(F\) is not flat. The _Newton distance of \(F\)_ is the nonnegative number
\[d_{F}:=\min\{t\geq 0:(t,\ldots,t)\in\mathcal{N}_{+}(F)\}.\]
Since \(F\) is not flat, there exists the minimum proper face of the Newton polyhedron \(\mathcal{N}_{+}(F)\) containing the point \(P_{F}:=(d_{F},\ldots,d_{F})\), which is called the _principal face_ of \(\mathcal{N}_{+}(F)\) and is denoted by \(\gamma_{*}\). The codimension of \(\gamma_{*}\) is called the _Newton multiplicity_ of \(F\), which is denoted by \(m_{F}\) (i.e., \(m_{F}=n-\dim(\gamma_{*})\)). In particular, when \(P_{F}\) is a vertex of \(\mathcal{N}_{+}(F)\), \(\gamma_{*}\) is the point \(P_{F}\) and \(m_{F}=n\). When \(\gamma_{*}\) is compact, the _principal part_ of \(F\) is defined by
\[F_{*}(z)=\sum_{\alpha+\beta\in\gamma_{*}\cap\mathbb{Z}^{n}_{+}}C_{\alpha\beta}z^{\alpha}\bar{z}^{\beta}\]
(i.e., the principal part of \(F\) is the \(\gamma_{*}\)-part of \(F\)).
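To illustrate these definitions, consider the hand-computed example \(F(z_{1},z_{2})=|z_{1}|^{6}+|z_{1}|^{2}|z_{2}|^{4}\) (compare the first example in Remark 1 (3) below). Its support is \(S_{F}=\{(6,0),(2,4)\}\), so \(\mathcal{N}_{+}(F)\) is the convex hull of \(((6,0)+\mathbb{R}_{+}^{2})\cup((2,4)+\mathbb{R}_{+}^{2})\). The segment joining \((6,0)\) and \((2,4)\) is a compact edge lying on the line \(\alpha_{1}+\alpha_{2}=6\), and the point \((t,t)\) first meets this edge at \(t=3\). Hence

\[d_{F}=3,\quad\gamma_{*}=\text{the edge joining }(6,0)\text{ and }(2,4),\quad m_{F}=2-\dim(\gamma_{*})=1,\quad F_{*}(z)=|z_{1}|^{6}+|z_{1}|^{2}|z_{2}|^{4}.\]

Since \(\mathcal{N}_{+}(F)\) does not intersect the \(\alpha_{2}\)-axis, \(F\) is not convenient and \(\rho_{F}=\infty\).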
### Main results
Let \(U\) be a complete pseudoconvex Reinhardt domain in \(\mathbb{C}^{n}\) (possibly \(U=\mathbb{C}^{n}\)). Let \(F\) be a real-valued \(C^{\infty}\) function on \(U\) satisfying the following conditions
(A) \(F(z)=0\) if and only if \(z=0\), and \(F\) is not flat at the origin;
(B) \(F\) is a plurisubharmonic function on \(U\);
(C) \(F(e^{i\theta_{1}}z_{1},\ldots,e^{i\theta_{n}}z_{n})=F(z_{1},\ldots,z_{n})\) for any \(\theta_{j}\in\mathbb{R}\) and \(z\in U\);
(D) If \(U\) is unbounded, then there are some positive numbers \(c\), \(\beta\), \(L\) such that \(F(z)\geq c\|z\|^{\beta}\) for \(z\in U\setminus D(L)\).
We will mainly deal with pseudoconvex model domains in \(\mathbb{C}^{n+1}\) of the form
\[\Omega_{F}=\{(z_{0},z_{1},\ldots,z_{n})\in\mathbb{C}\times U:\operatorname{Im }(z_{0})>F(z_{1},\ldots,z_{n})\}.\]
Note that the condition (D) implies that the dimension of \(A^{2}(\Omega_{F})\) is infinity.
In the case of the domain \(\Omega_{F}\), the finite type condition can be easily seen in the information of the Newton polyhedron of \(F\). Let \(\Delta_{1}(\partial\Omega_{F},0)\) be the D'Angelo type of \(\partial\Omega_{F}\) at the origin.
**Lemma 1** ([16]).: \(\Delta_{1}(\partial\Omega_{F},0)=\rho_{F}\)_. In particular, the following two conditions are equivalent._
1. \(\partial\Omega_{F}\) _is of finite type at_ \(0\) _(i.e.,_ \(\Delta_{1}(\partial\Omega_{F},0)<\infty\)_);_
2. \(F\) _is convenient (i.e.,_ \(\mathcal{N}_{+}(F)\) _intersects all axes)._
Let \(B_{\Omega_{F}}(z_{0},z)=B_{\Omega_{F}}(z_{0},z_{1},\ldots,z_{n})\) be the Bergman kernel (on the diagonal) of \(\Omega_{F}\). Since we are interested in the behavior of the restriction of the Bergman kernel of \(\Omega_{F}\) to the vertical line, we define
\[\mathcal{B}_{F}(\rho):=B_{\Omega_{F}}(0+i\rho,0,\ldots,0)\quad\text{ for }\rho>0.\]
The behavior of \(\mathcal{B}_{F}(\rho)\), as \(\rho\) tends to \(0\), can be exactly expressed by the use of the geometry of the Newton polyhedron \(\mathcal{N}_{+}(F)\) of \(F\).
**Theorem 1** ([15]).: _If \(\partial\Omega_{F}\) is of finite type at \(0\), then_
\[\mathcal{B}_{F}(\rho)=\frac{\Psi(\rho)}{\rho^{2/d_{F}+2}(\log\rho)^{m_{F}-1}}, \tag{1}\]
_where \(\Psi(\rho)\) admits the asymptotic expansion:_
\[\Psi(\rho)\sim\sum_{j=0}^{\infty}\sum_{k=a_{j}}^{\infty}C_{jk}\frac{\rho^{j/m} }{(\log(1/\rho))^{k}}\quad\text{as $\rho\to 0$,} \tag{2}\]
_where \(m\) is a positive integer, \(a_{j}\) are integers and \(C_{jk}\) are real numbers (the exact meaning of the above asymptotic expansion is explained in Section 9.1, below)._
_Furthermore, the first coefficient of the above expansion can be determined by the use of the Newton data of \(F\) as follows:_
\[\mathcal{B}_{F}(\rho)=\frac{C(F_{*})}{\rho^{2/d_{F}+2}(\log\rho)^{m_{F}-1}}\cdot( 1+o(\rho^{\varepsilon}))\quad\text{as $\rho\to 0$,} \tag{3}\]
_where \(\varepsilon\) is a positive constant and \(C(F_{*})\) is a positive constant depending only on the principal part \(F_{*}\) of \(F\)._
In this paper, we improve the above theorem in some sense as follows.
**Theorem 2**.: _Suppose that there exists a \(C^{\infty}\) function \(F_{0}\) defined near the origin in \(\mathbb{C}^{n}\) such that \(F_{0}\) satisfies the condition (C) and the \(\hat{\mathcal{E}}\)-condition at the origin (see Section 5) and \(F(z)-F_{0}(z)\) is a nonnegative flat function. If the principal face \(\gamma_{*}\) of the Newton polyhedron of \(F\) is compact, then_
\[\mathcal{B}_{F}(\rho)=\frac{C(F_{*})}{\rho^{2/d_{F}+2}(\log\rho)^{m_{F}-1}} \cdot(1+o(\rho^{\varepsilon}))\quad\text{as $\rho\to 0$,} \tag{4}\]
_where \(\varepsilon\) is a positive constant and \(C(F_{*})\) is a positive constant depending only on the principal part \(F_{*}\) of \(F\)._
_Remark 1_. (1) Note that the finite type condition is equivalent to the convenience of \(F\) by Lemma 1. When \(F\) is convenient, \(F\) itself satisfies the \(\hat{\mathcal{E}}\)-condition (see [18], [19]) and the principal face of \(\mathcal{N}_{+}(F)\) is always compact. Therefore, the assumption of Theorem 2 is weaker than that of Theorem 1. Indeed, Theorem 2 can be applied to some infinite type cases.
(2) In the two-dimensional case (i.e. \(\Omega_{F}\subset\mathbb{C}^{2}\)), Lemma 1 implies that the finite type condition is equivalent to the nonflatness of \(F\). Since we assume the nonflatness of \(F\) in (A), the advantage of Theorem 2 can only be seen when the dimension is higher than two.
(3) Theorem 2 can be applied to the following examples, which are in the infinite type case.
* When \(F(z_{1},z_{2})=|z_{1}|^{6}+|z_{1}|^{2}|z_{2}|^{4}+e^{-1/|z_{2}|^{2}}\) near the origin, \[\mathcal{B}_{F}(\rho)=\frac{C(F_{*})}{\rho^{8/3}}\cdot(1+o(\rho^{\varepsilon} ))\quad\text{as $\rho\to 0$.}\] (Note that \(F_{*}(z_{1},z_{2})=|z_{1}|^{6}+|z_{1}|^{2}|z_{2}|^{4}\).)
* When \(F(z_{1},z_{2})=|z_{1}|^{6}+|z_{1}|^{2}|z_{2}|^{2}+e^{-1/|z_{2}|^{2}}\) near the origin, \[\mathcal{B}_{F}(\rho)=\frac{C(F_{*})}{\rho^{3}\log\rho}\cdot(1+o(\rho^{ \varepsilon}))\quad\text{as $\rho\to 0$.}\] (Note that \(F_{*}(z_{1},z_{2})=|z_{1}|^{2}|z_{2}|^{2}\).)
* When \(F(z_{1},z_{2})=|z_{1}|^{2}|z_{2}|^{2}+e^{-1/|z_{1}|^{2}}+e^{-1/|z_{2}|^{2}}\) near the origin, \[\mathcal{B}_{F}(\rho)=\frac{C(F_{*})}{\rho^{3}\log\rho}\cdot(1+o(\rho^{\varepsilon }))\quad\text{as $\rho\to 0$.}\] (Note that \(F_{*}(z_{1},z_{2})=|z_{1}|^{2}|z_{2}|^{2}\).)
* When \(F(z_{1},z_{2},z_{3})=|z_{1}|^{8}+|z_{2}|^{8}+|z_{1}|^{2}|z_{2}|^{2}|z_{3}|^{2}+e^{-1/|z_{3}|^{2}}\) near the origin, \[\mathcal{B}_{F}(\rho)=\frac{C(F_{*})}{\rho^{3}(\log\rho)^{2}}\cdot(1+o(\rho^{\varepsilon}))\quad\text{as $\rho\to 0$.}\] (Note that \(F_{*}(z_{1},z_{2},z_{3})=|z_{1}|^{2}|z_{2}|^{2}|z_{3}|^{2}\).)
We remark that \(e^{-1/|z_{j}|^{2}}\) (\(j=1,2,3\)) in the above examples can be replaced by any flat functions which are positive away from the origin. (A small computational sketch for determining \(d_{F}\) from the support points is given at the end of this remark.)
(4) Our method in the proof of Theorem 2 cannot generally show the existence of an asymptotic expansion of the form (2). We guess that the pattern of the asymptotic expansion might not be expressed as in the form (2) in general from some observation of the strange phenomena in [20], [21], [22], which are seen in the analytic continuation of local zeta functions.
(5) In the case where the principal face is noncompact, there exist many examples in which the behavior of the Bergman kernel \(\mathcal{B}_{F}(\rho)\) as \(\rho\to 0\) is different from (4). For example, in the case where \(F(z_{1},z_{2})=|z_{1}|^{2}+e^{-1/|z_{2}|^{p}}\) near the origin, \(\mathcal{B}_{F}(\rho)\) locally satisfies
\[\frac{c_{1}|\log\rho|^{1/p}}{\rho^{3}}\leq\mathcal{B}_{F}(\rho)\leq\frac{c_{2} |\log\rho|^{1/p}}{\rho^{3}}\]
near \(\rho=0\), where \(c_{1},c_{2}\) are positive constants ([17]). Notice that the logarithmic functions appear in the numerators in the above estimates. Observing the above estimates, we can see that the behavior of \(\mathcal{B}_{F}(\rho)\) depends on \(p\), which is an information of the flat term \(e^{-1/|z_{2}|^{p}}\). In the noncompact principal face case, the information of the Newton polyhedron of \(F\) cannot always determine the singularity of the Bergman kernel completely.
(6) Of course, the constant \(\varepsilon\) in (3), (4) does not depend on \(\rho\). The equations of similar type to (3), (4) will be often seen in this paper. In these equations, the constant \(\varepsilon\) can be chosen by the use of the geometry of the respective Newton polyhedron.
Since \(\rho^{\varepsilon}=o((\log(1/\rho))^{-1})\) holds for any \(\varepsilon>0\), \(o(\rho^{\varepsilon})\) in the equations in (3), (4) and in the examples in Remark 1 (3) can be replaced by \(o((\log(1/\rho))^{-1})\).
(7) It is desirable to show that the conditions (A-C) of \(F\) imply the existence of \(F_{0}\) in the assumption of the theorem (in other words, the existence assumption on \(F_{0}\) could then be removed). Even if the existence of \(F_{0}\) is not known, we can give
the following estimate:
\[\mathcal{B}_{F}(\rho)\leq\frac{C(F_{*})}{\rho^{2/d_{F}+2}(\log\rho)^{m_{F}-1}} \cdot(1+C\rho^{\varepsilon})\quad\text{for $\rho\in(0,\delta)$},\]
where \(C(F_{*})\) is as in the theorem and \(C,\varepsilon,\delta\) are positive constants. This can be easily seen from the proof of Theorem 2.
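As a computational aside (our own sketch, not part of the paper's arguments), the Newton distance can be computed mechanically from finitely many support points, since \((t,\ldots,t)\in\mathcal{N}_{+}(F)\) exactly when some convex combination of the support points is coordinatewise at most \((t,\ldots,t)\); this is a small linear program. NumPy and SciPy are assumed, and the helper name is ours:

```python
import numpy as np
from scipy.optimize import linprog

# Newton distance d_F = min{ t : (t,...,t) in N_+(F) } computed from
# finitely many support points. (t,...,t) lies in N_+(F) exactly when
# some convex combination of the support points is <= (t,...,t)
# componentwise, which is a linear program in (lambda, t).
def newton_distance(support):
    P = np.asarray(support, dtype=float)      # shape (N, n)
    N, n = P.shape
    c = np.r_[np.zeros(N), 1.0]               # minimize t
    A_ub = np.c_[P.T, -np.ones(n)]            # sum_i lam_i p_ij <= t, each j
    A_eq = np.r_[np.ones(N), 0.0][None, :]    # sum_i lam_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return res.x[-1]

print(newton_distance([(6, 0), (2, 4)]))  # 3.0, as in the example of Section 2.1
print(newton_distance([(2, 2)]))          # 2.0 for |z1|^2 |z2|^2
```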
## 3. Integral formula of the Bergman kernel
Let \(U\) be a domain in \(\mathbb{C}^{n}\) and let \(F:U\to\mathbb{R}_{+}\) be a \(C^{\infty}\)-smooth plurisubharmonic function. The weighted Hilbert space \(H_{\tau}(U)\) (\(\tau>0\)) consists of all holomorphic functions \(\psi:U\to\mathbb{C}\) such that
\[\int_{U}|\psi(z)|^{2}e^{-2\tau F(z)}dV(z)<\infty,\]
where \(dV(z)\) denotes the Lebesgue measure on \(\mathbb{C}^{n}\). We only consider the case where \(H_{\tau}(U)\) is a nontrivial Hilbert space with a reproducing kernel. (When \(U\) is bounded, this is obvious. When \(U\) is unbounded, the condition (D) in Section 2.2 implies the above nontriviality.) We denote the above kernel (on the diagonal) by \(K_{F}(z;\tau)\). We remark that the function \(\tau\mapsto K_{F}(z;\tau)\) is continuous for fixed \(z\in U\) from the result in [10]. F. Haslinger [12], [13] showed that the Bergman kernel \(B_{\Omega_{F}}(z_{0},z)\) of the domain \(\Omega_{F}\) can be expressed by the use of \(K_{F}(z;\tau)\) as follows.
\[B_{\Omega_{F}}(z_{0},z)=\frac{1}{2\pi}\int_{0}^{\infty}e^{-2\rho\tau}K_{F}(z; \tau)\tau d\tau, \tag{5}\]
where \(\rho\) is the imaginary part of \(z_{0}\).
In the case where \(F\) safisfies the conditions (C), (D) in Section 2.2, we can take a complete orthonormal system for \(H_{\tau}(U)\) as
\[\left\{\frac{z^{\alpha}}{c_{\alpha}(\tau)}:\alpha\in\mathbb{Z}_{+}^{n}\right\},\quad\text{ with }c_{\alpha}(\tau)^{2}=\int_{U}|z|^{2\alpha}e^{-2\tau F(z)}dV(z).\]
Therefore, \(K_{F}(z;\tau)\) can be expressed as
\[K_{F}(z;\tau)=\sum_{\alpha\in\mathbb{Z}_{+}^{n}}\frac{|z|^{2\alpha}}{c_{ \alpha}(\tau)^{2}}. \tag{6}\]
From (5), (6), \(\mathcal{B}_{F}(\rho)\) can be expressed as
\[\mathcal{B}_{F}(\rho)=\frac{1}{2\pi}\int_{0}^{\infty}e^{-2\rho\tau}\frac{\tau} {c_{0}(\tau)^{2}}d\tau. \tag{7}\]
In order to see the behavior of \(\mathcal{B}_{F}(\rho)\) as \(\rho\to 0\), it suffices to investigate that of \(c_{0}(\tau)^{2}\) as \(\tau\to\infty\).
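As a quick illustration of (7), carried out by hand in the simplest case \(n=1\), \(U=\mathbb{C}\), \(F(z_{1})=|z_{1}|^{2}\), we have

\[c_{0}(\tau)^{2}=\int_{\mathbb{C}}e^{-2\tau|z_{1}|^{2}}dV(z_{1})=\frac{\pi}{2\tau},\qquad\mathcal{B}_{F}(\rho)=\frac{1}{2\pi}\int_{0}^{\infty}e^{-2\rho\tau}\,\frac{2\tau^{2}}{\pi}\,d\tau=\frac{1}{4\pi^{2}\rho^{3}}.\]

This agrees with Theorem 1: here \(d_{F}=2\) and \(m_{F}=1\), so the predicted singularity is \(\rho^{-(2/d_{F}+2)}=\rho^{-3}\) with no logarithmic factor.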
## 4. Localization lemma
Let \(U\) be an open neighborhood of the origin in \(\mathbb{C}^{n}\) and let \(F:U\to\mathbb{R}\) be a nonnegative \(C^{\infty}\) function with \(F(0)=0\).
Let \(R\) be a positive number such that \(D(R)\subset U\). Let \(\varphi:\mathbb{R}^{n}\to[0,1]\) be a \(C^{\infty}\) function such that \(\varphi\) identically equals \(1\) on \(D_{\mathbb{R}}(R/2)\) and its support is contained in \(D_{\mathbb{R}}(R)\). Let \(\Phi:\mathbb{C}^{n}\to[0,1]\) be a \(C^{\infty}\) function defined by \(\Phi(z_{1},\ldots,z_{n})=\varphi(|z_{1}|,\ldots,|z_{n}|)\).
We define an integral of the form
\[\tilde{\mathcal{B}}_{F}(\rho)=\frac{1}{2\pi}\int_{0}^{\infty}e^{-2\rho\tau} \frac{\tau}{\tilde{c}_{0}(\tau)^{2}}d\tau\quad\text{ for }\rho>0, \tag{8}\]
where
\[\tilde{c}_{0}(\tau)^{2}=\int_{U}e^{-2\tau F(z)}\Phi(z)dV(z). \tag{9}\]
Since there exists a positive number \(a\) such that \(F(z)\leq a\|z\|\) on \(D(R)\), we see
\[\begin{split}\tilde{c}_{0}(\tau)^{2}&\geq\int_{D(R/ 2)}e^{-2\tau F(z)}dV(z)\\ &\geq c\int_{0}^{R/2}e^{-2a\tau x}x^{2n-1}dx\geq\frac{c}{(2a\tau )^{2n}}\int_{0}^{1}e^{-s}s^{2n-1}ds=\frac{C}{\tau^{2n}},\end{split} \tag{10}\]
for \(\tau\geq 1\), where \(c,C\) are positive constants independent of \(\tau\).
By the use of the estimate (10), the integral \(\tilde{\mathcal{B}}_{F}(\rho)\) in (8) can be considered as a \(C^{\infty}\) function defined on \((0,\infty)\), which is again denoted by \(\tilde{\mathcal{B}}_{F}(\rho)\).
Since \(\tilde{c}_{0}(\tau)^{2}\leq c_{0}(\tau)^{2}\), the relationship between \(\tilde{\mathcal{B}}_{F}(\rho)\) and the Bergman kernel \(\mathcal{B}_{F}(\rho)\) can be seen: \(\tilde{\mathcal{B}}_{F}(\rho)\geq\mathcal{B}_{F}(\rho)\). The following proposition shows that the singularities of \(\mathcal{B}_{F}(\rho)\) and \(\tilde{\mathcal{B}}_{F}(\rho)\) at the origin are essentially the same.
**Proposition 1**.: _If \(F\) satisfies the conditions (A-D) in Section 2.2, then \(\tilde{\mathcal{B}}_{F}(\rho)-\mathcal{B}_{F}(\rho)\) can be real-analytically extended to an open neighborhood of \(\rho=0\)._
Proof.: From the integral expressions in (7) and (8), we have
\[\mathcal{B}_{F}(\rho)-\tilde{\mathcal{B}}_{F}(\rho)=\frac{1}{2\pi}\int_{0}^{ \infty}e^{-2\rho\tau}\left(\frac{1}{c_{0}(\tau)^{2}}-\frac{1}{\tilde{c}_{0}( \tau)^{2}}\right)\tau d\tau.\]
Therefore, the proposition can be easily seen by the use of the following lemma.
**Lemma 2**.: _There exist positive numbers \(L,C,q,\epsilon\) such that_
\[\left|\frac{1}{\tilde{c}_{0}(\tau)^{2}}-\frac{1}{c_{0}(\tau)^{2}}\right|\leq C \tau^{q}e^{-2\epsilon\tau}\quad\text{ for }\tau\geq L.\]
Proof.: Let
\[e(\tau):=c_{0}(\tau)^{2}-\tilde{c}_{0}(\tau)^{2}=\int_{U}e^{-2\tau F(z)}(1-\Phi(z ))dV(z).\]
From the conditions (A), (C) in Section 2.2, there exist positive numbers \(c,\epsilon\) such that
\[F(z)\geq\epsilon+c(|z_{1}|^{\beta}+\dots+|z_{n}|^{\beta})\quad\text{for }z\in U \setminus D(R/2).\]
By the use of the above inequality, we obtain
\[\begin{split}|e(\tau)|&\leq\int_{U\setminus D(R/2) }e^{-2\tau F(z)}dV(z)\\ &\leq(2\pi)^{n}e^{-2\epsilon\tau}\int_{\mathbb{R}_{+}^{n}}e^{-2c \tau(x_{1}^{\beta}+\dots+x_{n}^{\beta})}\left(\prod_{j=1}^{n}x_{j}\right)dx \leq C\tau^{-2n/\beta}e^{-2\epsilon\tau}.\end{split} \tag{11}\]
Applying (10), (11) to the right hand side of the equation
\[\frac{1}{\tilde{c}_{0}(\tau)^{2}}-\frac{1}{c_{0}(\tau)^{2}}=\frac{e(\tau)}{ \tilde{c}_{0}(\tau)^{4}(1+e(\tau)/\tilde{c}_{0}(\tau)^{2})},\]
we can get the estimate in the lemma.
## 5. The \(\hat{\mathcal{E}}\)-condition
In this section, we introduce some classes of \(C^{\infty}\) functions, which are defined by the use of Newton polyhedra. When \(F\) belongs to such a class, the behavior of \(\tilde{\mathcal{B}}_{F}(\rho)\) is relatively easy to understand (see Theorem 3, below).
### Newton polyhedra in the real case
Let \(f\) be a real-valued \(C^{\infty}\) function defined near the origin in \(\mathbb{R}^{n}\). Let
\[\sum_{\alpha\in\mathbb{Z}_{+}^{n}}c_{\alpha}x^{\alpha}=\sum_{\alpha\in \mathbb{Z}_{+}^{n}}c_{\alpha}x_{1}^{\alpha_{1}}\cdots x_{n}^{\alpha_{n}} \tag{12}\]
be the Taylor series of \(f\) at the origin. The Newton polyhedron of \(f\) is the integral polyhedron
\[\mathcal{N}_{+}(f)=\text{the convex hull of the set }\bigcup\{\alpha+\mathbb{R}_{+}^{n}: \alpha\in S(f)\}\text{ in }\mathbb{R}_{+}^{n},\]
where \(S(f):=\{\alpha\in\mathbb{Z}_{+}^{n}:c_{\alpha}\neq 0\}\). In the real case, we also say that \(f\) is _flat_ if \(\mathcal{N}_{+}(f)=\emptyset\) and that \(f\) is _convenient_ if \(\mathcal{N}_{+}(f)\) intersects all the axes.
### The \(\hat{\mathcal{E}}\)-condition in the real case
Let \(f\) be a \(C^{\infty}\) function defined near the origin in \(\mathbb{R}^{n}\). We say that \(f\)_admits the \(\gamma\)-part_ on an open neighborhood \(V\) of the origin in \(\mathbb{R}^{n}\) if for any \(x\in V\) the limit
\[\lim_{t\to 0}\frac{f(t^{a_{1}}x_{1},\ldots,t^{a_{n}}x_{n})}{t^{l}}\]
exists for _all_ pairs \((a,l)=(a_{1},\ldots,a_{n},l)\in\mathbb{Z}_{+}^{n}\times\mathbb{Z}_{+}\) defining \(\gamma\) (i.e., \(\{x\in\mathcal{N}_{+}(f):\sum_{j=1}^{n}a_{j}x_{j}=l\}=\gamma\)). It is known in [18] that when \(f\) admits the \(\gamma\)-part, the above limits take the same value for any \((a,l)\), which is denoted by \(f_{\gamma}(x)\). We consider \(f_{\gamma}\) as a function on \(V\), which is called the \(\gamma\)-_part_ of \(f\). For a compact face \(\gamma\) of \(\mathcal{N}_{+}(f)\), \(f\) always admits the \(\gamma\)-part near the origin and \(f_{\gamma}(x)\) equals the polynomial \(\sum_{\alpha\in\gamma\cap\mathbb{Z}_{+}^{n}}c_{\alpha}x^{\alpha}\), where \(c_{\alpha}\) are as in (12).
**Definition 1** ([18], [19]).: We say that \(f\) satisfies the \(\hat{\mathcal{E}}\)-_condition at the origin_ if \(f\) admits the \(\gamma\)-part for every proper face \(\gamma\) of \(\mathcal{N}_{+}(f)\).
_Remark 2_.: (1) If \(f\) is real analytic near the origin or \(f\) is convenient, then \(f\) satisfies the \(\hat{\mathcal{E}}\)-condition (see [18], [19]). In particular, in the one-dimensional case, nonflat functions satisfy the \(\hat{\mathcal{E}}\)-condition.
(2) For example, \(f(x_{1},x_{2})=x_{1}^{2}+e^{-1/x_{2}^{2}}\) does not satisfy the \(\hat{\mathcal{E}}\)-condition.
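To verify (2) directly from the definitions above: here \(S(f)=\{(2,0)\}\), so \(\mathcal{N}_{+}(f)=(2,0)+\mathbb{R}_{+}^{2}\), and the vertical edge \(\gamma=\{(2,t):t\geq 0\}\) is a proper face defined by the pair \((a,l)=((1,0),2)\). For this pair and \(x_{2}\neq 0\),

\[\frac{f(tx_{1},x_{2})}{t^{2}}=x_{1}^{2}+\frac{e^{-1/x_{2}^{2}}}{t^{2}}\longrightarrow\infty\quad\text{as }t\to 0,\]

so \(f\) does not admit the \(\gamma\)-part.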
Every \(C^{\infty}\) function \(f\) can be decomposed into an \(\hat{\mathcal{E}}\)-function and a flat function.
**Proposition 2**.: _For any \(C^{\infty}\) function \(f\) defined near the origin in \(\mathbb{R}^{n}\), there exists a \(C^{\infty}\) function \(f_{0}\) satisfying the \(\hat{\mathcal{E}}\)-condition at the origin such that \(f-f_{0}\) is flat at the origin._
Proof.: Since the proposition is obvious when \(f\) does not vanish at the origin or \(f\) is flat at the origin, we will only consider the other cases.
Let \(p=(p_{1},\ldots,p_{n})\in\mathbb{Z}_{+}^{n}\) be a vertex of the Newton polyhedron of \(f\). We inductively define \(C^{\infty}\) functions \(R_{0},R_{1},\ldots,R_{n}\) defined near the origin as follows.
Let \(R_{0}(x)=f(x)\). Let \(k\) be an integer with \(1\leq k\leq n\). If \(p_{k}\geq 1\), then there exists a \(C^{\infty}\) function \(R_{k}\) defined near the origin such that
\[R_{k-1}(x)=\sum_{j=0}^{p_{k}-1}c_{kj}(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{n })x_{k}^{j}+x_{k}^{p_{k}}R_{k}(x), \tag{13}\]
where \(c_{kj}\) are \(C^{\infty}\) functions of \((n-1)\) variables. Note that (13) is the Taylor expansion of \(R_{k-1}\) with respect to the variable \(x_{k}\). On the other hand, if \(p_{k}=0\), then set \(R_{k}(x)=R_{k-1}(x)\).
By the use of the equations for \(k=1,\ldots,n\), \(f\) can be expressed as
\[f(x)=P(x)+x^{p}R_{n}(x), \tag{14}\]
where \(P\) is written as a sum of monomials, each multiplied by a \(C^{\infty}\) function of \((n-1)\) variables, and \(x^{p}=x_{1}^{p_{1}}\cdots x_{n}^{p_{n}}\). It is easy to see that \(R_{n}(0)=c_{p}\), where \(c_{p}\) is as in (12) (i.e., \(c_{p}\) is the coefficient of the term containing \(x^{p}\) of the Taylor series of \(f\)). Since \(p\) is a vertex of the Newton polyhedron of \(f\), \(c_{p}\) does not vanish, which implies \(R_{n}(0)\neq 0\). It then follows from Proposition 6.3 in [18] that \(x^{p}R_{n}(x)\) satisfies the \(\hat{\mathcal{E}}\)-condition.
Now, let us construct a \(C^{\infty}\) function \(f_{0}\) as in the proposition by induction on \(n\). Note that every non-flat \(C^{\infty}\) function of one variable satisfies the \(\hat{\mathcal{E}}\)-condition. Let \(f\) be a non-flat \(C^{\infty}\) function of \(n\) variables with \(f(0)=0\). Then \(f\) can be expressed as in (14), where \(P\) takes the form of a sum of monomials, each multiplied by a \(C^{\infty}\) function of \((n-1)\) variables. Now assume that every \(C^{\infty}\) function of \((n-1)\) variables can be decomposed into an \(\hat{\mathcal{E}}\)-function and a flat function. Then \(P\) admits a decomposition of the same type, and so a desired \(\hat{\mathcal{E}}\)-function \(f_{0}\) can be obtained.
### The \(\hat{\mathcal{E}}\)-condition in the complex case
Now, let us consider a nonnegative \(C^{\infty}\) function \(F\) defined near the origin in \(\mathbb{C}^{n}\) with \(F(0)=0\). If \(F\) satisfies the condition (C), then there exists a \(C^{\infty}\) function \(f\) defined near the origin in \(\mathbb{R}^{n}\) such that \(f(|z_{1}|,\ldots,|z_{n}|)=F(z_{1},\ldots,z_{n})\). We say that \(F\) satisfies the \(\hat{\mathcal{E}}\)-condition at the origin, if \(f\) satisfies the \(\hat{\mathcal{E}}\)-condition at the origin. It is easy to see that the \(\hat{\mathcal{E}}\)-condition of \(F\) is independent of the choice of the function \(f\).
**Proposition 3**.: _For any \(C^{\infty}\) function \(F\) defined near the origin in \(\mathbb{C}^{n}\) satisfying the condition (C), there exists a \(C^{\infty}\) function \(F_{0}\) satisfying the condition (C) and the \(\hat{\mathcal{E}}\)-condition at the origin such that \(F-F_{0}\) is flat at the origin._
_Remark._ It is desirable to show that the additional conditions (A), (B) imply the existence of \(F_{0}\) such that \(F-F_{0}\) is a _nonnegative_ flat function. If this implication can be shown, then the existence of \(F_{0}\) can be removed in the assumption in Theorem 2.
## 6. Behavior of \(\tilde{\mathcal{B}}_{F}(\rho)\) in the \(\hat{\mathcal{E}}\)-case
In order to prove Theorem 2, it suffices to consider the case where \(F\) satisfies the \(\hat{\mathcal{E}}\)-condition.
**Theorem 3**.: _Suppose that \(F\) is a \(C^{\infty}\) function with \(F(0)=0\) satisfying the condition (C) in Section 2.2 and the \(\hat{\mathcal{E}}\)-condition at the origin. If the principal face of \(\mathcal{N}_{+}(F)\) is compact, then_
\[\tilde{\mathcal{B}}_{F}(\rho)=\frac{C(F_{*})}{\rho^{2+2/d_{F}}(\log\rho)^{m_{F }-1}}\cdot(1+o(\rho^{\varepsilon}))\quad\text{ as $\rho\to 0$,}\]
_where \(d_{F}\) and \(m_{F}\) are as in Section 2.1, \(\varepsilon\) is a positive constant, and \(C(F_{*})\) is a positive constant depending only on the principal part \(F_{*}\) of \(F\)._
Proof.: Recall the expression of \(\tilde{\mathcal{B}}_{F}(\rho)\):
\[\tilde{\mathcal{B}}_{F}(\rho)=\frac{1}{2\pi}\int_{0}^{\infty}e^{-2 \rho\tau}\frac{\tau}{\tilde{c}_{0}(\tau)^{2}}d\tau\] \[\text{with }\ \tilde{c}_{0}(\tau)^{2}=\int_{\mathbb{C}^{n}}e^{-2 \tau F(z)}\Phi(z)dV(z).\]
This theorem can be directly shown by applying the following lemma to the above integral formulas.
**Lemma 3**.: _Under the same assumptions on \(F\) as those of Theorem 3, \(\tilde{c}_{0}(\tau)^{2}\) satisfies_
\[\tilde{c}_{0}(\tau)^{2}=c(F_{*})\tau^{-2/d_{F}}(\log\tau)^{m_{F}-1}\cdot(1+o( \tau^{-\varepsilon}))\quad\text{ as }\tau\to\infty, \tag{15}\]
_where \(\varepsilon\) is a positive constant and \(c(F_{*})\) is a positive constant depending only on the principal part \(F_{*}\) of \(F\)._
The proof of the above lemma will be given in Section 8.
## 7. Proof of Theorem 2
Let us give a proof of Theorem 2. We assume that \(F\) is a \(C^{\infty}\) function satisfying the conditions (A-D) in Section 2.2. Moreover, let \(F_{0}\) be as in Theorem 2. It is easy to see that \(F_{0}\) also satisfies the conditions (B), (C).
For a positive number \(M\), we define
\[F_{1}(z)=F_{0}(z)+\sum_{j=1}^{n}|z_{j}|^{2M}.\]
Since the principal face of \(\mathcal{N}_{+}(F_{0})\) is compact, we can choose the positive number \(M\) so large that the principal face of \(\mathcal{N}_{+}(F_{1})\) is the same as that of \(\mathcal{N}_{+}(F_{0})\). Moreover, since \(F_{1}\) is convenient, \(F_{1}\) also satisfies the \(\hat{\mathcal{E}}\)-condition at the origin (see Remark 2 (1)). It is easy to see that there exists a positive number \(R\) such that
\[F_{0}(z)\leq F(z)\leq F_{1}(z)\quad\text{ for }z\in D(R). \tag{16}\]
For \(j=0,1\), we define
\[\tilde{\mathcal{B}}_{F_{j}}(\rho)=\frac{1}{2\pi}\int_{0}^{\infty}e^{-2\rho \tau}\frac{\tau}{g_{j}(\tau)}d\tau\quad\text{ for }\rho>0,\]
where
\[g_{j}(\tau)=\int_{\mathbb{C}^{n}}e^{-2\tau F_{j}(z)}\Phi(z)dV(z),\]
where \(\Phi\) is as in Section 4. From the relationship in (16), we see
\[\tilde{\mathcal{B}}_{F_{0}}(\rho)\leq\tilde{\mathcal{B}}_{F}(\rho)\leq\tilde{ \mathcal{B}}_{F_{1}}(\rho). \tag{17}\]
Since it is easy to see that \(F_{0},F_{1}\) satisfy the assumptions in Theorem 3, we have
\[\tilde{\mathcal{B}}_{F_{j}}(\rho)=\frac{C(F_{*})}{\rho^{2+2/d_{F}}(\log\rho)^{m_{ F}-1}}\cdot(1+o(\rho^{\varepsilon_{j}}))\quad\text{ as }\rho\to 0, \tag{18}\]
for \(j=0,1\), where \(\varepsilon_{0},\varepsilon_{1}\) are positive numbers. Therefore, (17), (18) imply that
\[\tilde{\mathcal{B}}_{F}(\rho)=\frac{C(F_{*})}{\rho^{2+2/d_{F}}(\log\rho)^{m_{ F}-1}}\cdot(1+o(\rho^{\varepsilon}))\quad\text{ as }\rho\to 0,\]
where \(\varepsilon\) is the minimum of \(\varepsilon_{0},\varepsilon_{1}\). From the localization lemma in Proposition 1, we can obtain the equation (4) in the theorem.
## 8. Asymptotic analysis of some Laplace integrals
The purpose of this section is to give a proof of Lemma 3. For this purpose, we will investigate the behavior of a Laplace integral of the form
\[L_{f}(\tau)=\int_{\mathbb{R}_{+}^{n}}e^{-2\tau f(x)}\varphi(x)\left(\prod_{j= 1}^{n}x_{j}\right)dx, \tag{19}\]
for large \(\tau>0\), where \(f\), \(\varphi:V\to\mathbb{R}_{+}\) are nonnegative and nonflat \(C^{\infty}\) functions defined on a small open neighborhood \(V\) of the origin, with \(f(0)=0\), \(\varphi(0)=1\), and the support of \(\varphi\) contained in \(V\). Since the compact support of \(\varphi\) implies the convergence of the above integral, \(L_{f}\) can be considered as a \(C^{\infty}\) function defined on \((0,\infty)\).
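Before developing the general machinery, the leading asymptotics derived below (Theorem 5) can be checked numerically in a toy case. The following sketch is our own, with the smooth cutoff \(\varphi\) replaced for simplicity by the indicator of the unit square:

```python
import numpy as np
from scipy import integrate

# Toy numerical check of the leading asymptotics of L_f(tau) for
# f(x1, x2) = x1^2 x2^2, where d_f = 2 and m_f = 2, so the prediction
# is L_f(tau) ~ c * tau^{-1} * log(tau).
def L(tau):
    integrand = lambda y, x: np.exp(-2.0 * tau * (x * y) ** 2) * x * y
    val, _ = integrate.dblquad(integrand, 0.0, 1.0, 0.0, 1.0)
    return val

for tau in [1e2, 1e3, 1e4]:
    # The ratio tends (slowly) to 1/8; the correction is O(1/log(tau)).
    print(f"tau = {tau:.0e}: L(tau) * tau / log(tau) = "
          f"{L(tau) * tau / np.log(tau):.4f}")
```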
The analysis in this section is based on the studies in [8], [18], [19], which deal with oscillatory integrals and local zeta functions.
### Newton data in the real case
Let \(f\) be a nonflat real-valued \(C^{\infty}\) function defined near the origin in \(\mathbb{R}^{n}\). Let \(f\) admit the Taylor series \(\sum_{\alpha\in\mathbb{Z}_{+}^{n}}c_{\alpha}x^{\alpha}\) at the origin and let \(\mathcal{N}_{+}(f)\) be the Newton polyhedron of \(f\) defined in Section 5.1. The _Newton distance_ of \(f\) is a nonnegative number
\[d_{f}=\min\{t\geq 0:(t,\ldots,t)\in\mathcal{N}_{+}(f)\}.\]
The minimum proper face of \(\mathcal{N}_{+}(f)\) containing the point \(P_{f}:=(d_{f},\ldots,d_{f})\) is called the _principal face_ of \(\mathcal{N}_{+}(f)\), and is denoted by \(\gamma_{*}\). The codimension of \(\gamma_{*}\) is called the _Newton multiplicity_, which is denoted by \(m_{f}\). When \(\gamma_{*}\) is compact, the _principal part_ of \(f\) is defined by \(f_{*}(x)=\sum_{\alpha\in\gamma_{*}\cap\mathbb{Z}_{+}^{n}}c_{\alpha}x^{\alpha}\).
### Meromorphic extension of some local zeta functions
Let us consider an integral of the form
\[Z_{f}(s)=\int_{\mathbb{R}_{+}^{n}}f(x)^{s}\varphi(x)\left(\prod_{j=1}^{n}x_{j} \right)dx\quad\text{ for }s\in\mathbb{C}, \tag{20}\]
where \(f,\varphi\) are the same as in (19). The convergence of the integral easily implies that \(Z_{f}\) can be regarded as a holomorphic function on the right half-plane, which is called a _local zeta function_ and is again denoted by \(Z_{f}\).
The analytic continuation of the above local zeta function plays a crucial role in the investigation of the behavior of the Laplace integral as \(\tau\to\infty\).
**Theorem 4**.: _Suppose that_
* \(f\) _satisfies the_ \(\hat{\mathcal{E}}\)_-condition (see Section_ 5_);_
* \(f\) _is Newton nondegenerate (see Section_ 9.3_);_
* _The principal face of_ \(\mathcal{N}_{+}(f)\) _is compact._
_Then \(Z_{f}\) can be analytically continued as a meromorphic function to the whole complex plane, which will again be denoted by \(Z_{f}\). More precisely, the set of the poles of \(Z_{f}\) is contained in the set \(\{-j/m:j\in\mathbb{N}\}\), where \(m\) is a positive integer._
_Furthermore, the rightmost pole of \(Z_{f}\) is located at \(s=-2/d_{f}\) and its order is \(m_{f}\), where \(d_{f},m_{f}\) are as in Section 8.1. The coefficient of the leading term of the Laurent expansion of \(Z_{f}\) at \(s=-2/d_{f}\)_
\[c_{Z}(f_{*})=\lim_{s\to-2/d_{f}}\left(s+\frac{2}{d_{f}}\right)^{m_{f}}\cdot Z_ {f}(s)\]
_is a positive constant depending only on the principal part \(f_{*}\) of \(f\)._
Proof.: The above theorem is a special case of Theorems 10.1 and 10.8 in [19]. Indeed, \(f\) satisfies the conditions (b) and (c) in Theorem 10.8.
_Remark._ The idea of the proof is based on the method of Varchenko in [23], [1] (see also [8], [18], [19]). In his method, the toric resolution of singularities based on the geometry of Newton polyhedra plays an essential role.
### Asymptotic behavior of some Laplace integrals
From the information about the analytic continuation of \(Z_{f}\) in Theorem 4, we can see the behavior of the Laplace integral in (19) as \(\tau\to\infty\).
**Theorem 5**.: _Suppose that \(f,\varphi\) satisfy the same conditions as in Theorem 4. Then \(L_{f}\) admits the asymptotic expansion: for any \(N\in\mathbb{N}\), there exists a positive constant
\(C_{N}\) such that_
\[\left|L_{f}(\tau)-\sum_{j=0}^{N}\sum_{k=1}^{n}c_{jk}\tau^{-j/m}(\log\tau)^{k-1} \right|<C_{N}\tau^{-N/m-\varepsilon}, \tag{21}\]
_for \(\tau\geq 2\), where \(m\) is a positive integer determined by \(\mathcal{N}_{+}(f)\), \(\varepsilon\) is a positive number and \(c_{jk}\) are constants. In particular, the first term of the expansion can be expressed as_
\[L_{f}(\tau)=c_{L}(f_{*})\tau^{-2/d_{f}}(\log\tau)^{m_{f}-1}\cdot(1+o(\tau^{- \varepsilon}))\quad\text{ as }\tau\to\infty,\]
_where \(d_{f}\) and \(m_{f}\) are as in Section 8.1 and \(c_{L}(f_{*})\) is a positive number depending only on the principal part \(f_{*}\) of \(f\)._
Proof.: Define the fiber integral \(H_{f}:\mathbb{R}_{+}\to\mathbb{R}\) as
\[H_{f}(u)=\int_{W_{u}}\varphi(x)\left(\prod_{j=1}^{n}x_{j}\right)\omega,\]
where \(W_{u}:=\{x\in\mathbb{R}_{+}^{n}:f(x)=u\}\) and \(\omega\) is the surface element on \(W_{u}\), which is determined by \(df\wedge\omega=dx_{1}\wedge\cdots\wedge dx_{n}\).
It is easy to see that the Laplace integral \(L_{f}\) and the local zeta function \(Z_{f}\) can be represented by the use of \(H_{f}\) as follows.
\[L_{f}(\tau)=\int_{0}^{\infty}e^{-2\tau u}H_{f}(u)du, \tag{22}\]
\[Z_{f}(s)=\int_{0}^{\infty}u^{s}H_{f}(u)du. \tag{23}\]
Applying the inverse formula of the Mellin transform to (23), we have
\[H_{f}(u)=\frac{1}{2\pi i}\int_{r-i\infty}^{r+i\infty}Z_{f}(s)u^{-s-1}ds,\]
where \(r>0\) and the integral contour follows the line \(\operatorname{Re}(s)=r\) upwards. The meromorphic extension of \(Z_{f}\) in Theorem 4 implies that the deformation of the integral contour as \(r\) tends to \(-\infty\) gives an asymptotic expansion of \(H_{f}(u)\) as \(u\to 0\) by the residue formula. For any \(N\in\mathbb{N}\), there exists a positive constant \(B_{N}\) such that
\[\left|H_{f}(u)-\sum_{j=0}^{N}\sum_{k=1}^{n}b_{jk}u^{j/m}(\log u)^{k-1}\right|< B_{N}u^{N/m+\varepsilon}, \tag{24}\]
for \(u\in(0,1/2)\), where \(m\) is a positive integer determined by \(\mathcal{N}_{+}(f)\), \(\varepsilon\) is a positive number and \(b_{jk}\) are constants. We remark that the above deformation of the contour can be justified. By applying the asymptotic expansion (24) to (22), we can obtain (21).
In particular, we have
\[H_{f}(u)=c_{H}(f_{*})u^{2/d_{f}-1}(\log u)^{m_{f}-1}\cdot(1+o(u^{\varepsilon})) \quad\text{ as }u\to 0. \tag{25}\]
Substituting (25) into (22), we have
\[L_{f}(\tau)=c_{L}(f_{*})\tau^{-2/d_{f}}(\log\tau)^{m_{f}-1}\cdot(1+o(\tau^{- \varepsilon}))\quad\text{ as }\tau\to\infty.\]
### Proof of Lemma 3
Using the polar coordinates \(z_{j}=x_{j}e^{i\theta_{j}}\), with \(x_{j}\geq 0,\theta_{j}\in\mathbb{R}\), for \(j=1,\ldots,n\), we have
\[\tilde{c}_{0}(\tau)^{2} =\int_{U}e^{-2\tau F(z)}\Phi(z)dV(z)\] \[=(2\pi)^{n}\int_{\mathbb{R}_{+}^{n}}e^{-2\tau f(x)}\left(\prod_{j =1}^{n}x_{j}\right)\varphi(x)dV(x),\]
where \(\varphi\) is as in Section 4 and \(f\) is a \(C^{\infty}\) function defined near the origin satisfying \(f(|z_{1}|,\ldots,|z_{n}|)=F(z_{1},\ldots,z_{n})\).
From [16], the conditions (A-C) imply that \(F_{\gamma}\) is positive on \((\mathbb{R}\setminus\{0\})^{n}\) for every compact face \(\gamma\) of \(\mathcal{N}_{+}(F)\), which implies the Newton nondegeneracy of \(f\) (see Section 9.3). Moreover, it follows from the assumptions on \(F\) that the principal face of \(f\) is compact and \(f\) satisfies the \(\hat{\mathcal{E}}\)-condition. Therefore, since \(f\) satisfies all the assumptions in Theorem 5, we obtain
\[\tilde{c}_{0}(\tau)^{2}=C(f_{*})\tau^{-2/d_{f}}(\log\tau)^{m_{f}-1}\cdot(1+o( \tau^{-\varepsilon})),\]
where \(C(f_{*})\) is a positive constant depending only on \(f_{*}\) and \(\varepsilon\) is a positive number. Since \(d_{f}=d_{F}\) and \(m_{f}=m_{F}\), we can obtain (15) in the lemma.
## 9. Appendix
### Asymptotic expansion in (2)
Theorem 1 implies that the singularity of the Bergman kernel can be expressed by the following complicated asymptotic expansion
\[\Psi(\rho)\sim\sum_{j=0}^{\infty}\sum_{k=a_{j}}^{\infty}C_{jk}\frac{\rho^{j/m }}{(\log(1/\rho))^{k}}\quad\text{as }\rho\to 0,\]
where \(m\) is a positive integer, \(a_{j}\) are integers and \(C_{jk}\) are real numbers.
Let us explain the exact meaning of the above asymptotic expansion. For any \(N\in\mathbb{N}\), there exists a positive number \(C_{N}\) such that
\[\left|\Psi(\rho)-\sum_{j=0}^{N}C_{j}(\zeta)\rho^{j/m}\right|<C_{N}\rho^{N/m+ \varepsilon}\quad\text{ for }\rho\in(0,\delta),\]
where \(\zeta=(\log(1/\rho))^{-1}\) and every \(C_{j}(\zeta)\) satisfies that, for any \(M\in\mathbb{N}\) with \(M\geq a_{j}\), there exists a positive number \(\tilde{C}_{M}\) such that
\[\left|C_{j}(\zeta)-\sum_{k=a_{j}}^{M}C_{jk}\zeta^{k}\right|<\tilde{C}_{M}\zeta^ {M+\tilde{\varepsilon}}\quad\text{ for }\zeta\in(0,\delta).\]
Here \(\varepsilon\), \(\tilde{\varepsilon}\), \(\delta\) are some positive numbers. Note that \(\zeta\to 0\) if and only if \(\rho\to 0\).
### Convex geometry
Let us explain fundamental notions in the theory of convex polyhedra which are necessary for our investigation. Refer to [24] for a general theory of convex polyhedra.
For \((a,l)\in\mathbb{R}^{n}\times\mathbb{R}\), let \(H(a,l)\) and \(H_{+}(a,l)\) be a hyperplane and a closed half-space in \(\mathbb{R}^{n}\) defined by
\[H(a,l)=\{x\in\mathbb{R}^{n};\langle a,x\rangle=l\},\] \[H_{+}(a,l)=\{x\in\mathbb{R}^{n};\langle a,x\rangle\geq l\},\]
respectively. Here \(\langle a,x\rangle:=\sum_{j=1}^{n}a_{j}x_{j}\).
A (_convex rational_) _polyhedron_ is an intersection of closed half-spaces: a set \(P\subset\mathbb{R}^{n}\) presented in the form \(P=\bigcap_{j=1}^{N}H_{+}(a^{j},l_{j})\) for some \(a^{1},\dots,a^{N}\in\mathbb{Z}^{n}\) and \(l_{1},\dots,l_{N}\in\mathbb{Z}\).
Let \(P\) be a polyhedron in \(\mathbb{R}^{n}\). A pair \((a,l)\in\mathbb{Z}^{n}\times\mathbb{Z}\) is said to be valid for \(P\) if \(P\) is contained in \(H_{+}(a,l)\). A face of \(P\) is any set of the form \(F=P\cap H(a,l)\), where \((a,l)\) is valid for \(P\). Since \((0,0)\) is always valid, we consider \(P\) itself as a trivial face of \(P\); the other faces are called _proper faces_. Conversely, it is easy to see that any face is a polyhedron. Considering the valid pair \((0,-1)\), we see that the empty set is always a face of \(P\). Indeed, \(H_{+}(0,-1)=\mathbb{R}^{n}\), but \(H(0,-1)=\emptyset\).
The dimension of a face \(F\) is the dimension of its affine hull (i.e., the intersection of all affine flats that contain \(F\)), which is denoted by \(\dim(F)\). The faces of dimensions \(0,1\) and \(\dim(P)-1\) are called vertices, edges and facets, respectively.
When \(P\) is the Newton polyhedron of some non-flat \(C^{\infty}\) function, \(\gamma\) is a compact face if and only if every valid pair \((a,l)=(a_{1},\dots,a_{n},l)\) defining \(\gamma\) satisfies \(a_{j}>0\) for any \(j\).
### Newton nondegeneracy condition
We say that \(f\) is _Newton nondegenerate_ if the gradient of the \(\gamma\)-part of \(f\) has no zero in \((\mathbb{R}\setminus\{0\})^{n}\) for every compact face \(\gamma\) of the Newton polyhedron \(\mathcal{N}_{+}(f)\). This concept is very important in the study of singularity theory.
When \(\gamma\) is a compact face of \(\mathcal{N}_{+}(f)\) with the valid pair \((a_{1},\dots,a_{n},l)\) defining \(\gamma\), the following Euler identity holds:
\[lf_{\gamma}(x)=a_{1}x_{1}\frac{\partial f_{\gamma}}{\partial x_{1}}(x)+\dots+ a_{n}x_{n}\frac{\partial f_{\gamma}}{\partial x_{n}}(x).\]
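This identity is a consequence of quasihomogeneity: since \(\gamma\subset H(a,l)\), every monomial \(x^{\alpha}\) appearing in \(f_{\gamma}\) satisfies \(\langle a,\alpha\rangle=l\), hence

\[f_{\gamma}(t^{a_{1}}x_{1},\ldots,t^{a_{n}}x_{n})=t^{l}f_{\gamma}(x)\quad\text{ for }t>0,\]

and differentiating in \(t\) at \(t=1\) yields the identity above.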
It follows from this identity that if \(f_{\gamma}\) has no zero in \((\mathbb{R}\setminus\{0\})^{n}\) for every compact face \(\gamma\), then \(f\) is Newton nondegenerate.
_Acknowledgement_ The author greatly appreciates that the referee carefully read this paper and gave many valuable comments. This work was supported by JSPS KAKENHI Grant Numbers JP20K03656, JP20H00116.
|
2303.06116 | A Framework for Synthetic Power System Dynamics | The paper is published in Chaos. Please refer to the Chaos version from now
on.
Anna Büttner, Anton Plietzsch, Mehrnaz Anvari, Frank Hellmann; A framework
for synthetic power system dynamics. Chaos 1 August 2023; 33 (8): 083120.
https://doi.org/10.1063/5.0155971
Information on power grids is confidential and thus real data is often
inaccessible. This necessitates the use of synthetic power grid models in
research. So far the models used, for example, in machine learning had to be
very simple and homogeneous to produce large ensembles of robust grids. We
present a modular framework to generate synthetic power grids that considers
the heterogeneity of real power grid dynamics but remains simple and tractable.
This enables the generation of large sets of synthetic grids for a wide range
of applications. We also include the major drivers of fluctuations on
short-time scales. The synthetic grids generated are robust and show good
synchronization under all evaluated scenarios, as should be expected for
realistic power grids. This opens the door to future research that studies
grids under severe stress due to extreme events which could lead to
destabilization and black-outs. A software package that includes an efficient
Julia implementation of the framework is released as a companion to the paper. | Anna Büttner, Anton Plietzsch, Mehrnaz Anvari, Frank Hellmann | 2023-03-10T18:05:56Z | http://arxiv.org/abs/2303.06116v2 | # A Framework for Synthetic Power System Dynamics
###### Abstract
Information on power grids is confidential and thus real data is often inaccessible. This necessitates the use of synthetic power grid models in research. So far the models used, for example, in machine learning had to be very simple and homogeneous to produce large ensembles of robust grids. We present a modular framework to generate synthetic power grids that considers the heterogeneity of real power grid dynamics but remains simple and tractable. This enables the generation of large sets of synthetic grids for a wide range of applications. We also include the major drivers of fluctuations on short-time scales. The synthetic grids generated are robust and show good synchronization under all evaluated scenarios, as should be expected for realistic power grids. This opens the door to future research that studies grids under severe stress due to extreme events which could lead to destabilization and black-outs. A software package that includes an efficient Julia implementation of the framework is released as a companion to the paper.
## 1 Introduction
Synthetic power grids have become an important tool for studying the dynamics of power systems. Traditionally, most dynamical simulation studies in the engineering literature were performed using benchmark test cases, such as the "New England" IEEE 39-Bus System [1] or the IEEE Reliability Test System-1996 [2]. The advantage of this approach is that models and parameters can be specified in great detail and the test cases are therefore highly realistic. Further, the use of standardized benchmark test cases guarantees a certain comparability of different dynamic models and analytical methods. However, for many emerging research questions this approach can be quite limiting and the use of automatically generated synthetic grid models might be beneficial. This is, for instance, the case when the power system in a specific region should be studied but the detailed topology and parameters of the real grid are not publicly accessible. Often there is enough data or knowledge available to generate a synthetic grid that resembles the main properties of a real grid to a reasonable degree. An example is the algorithm by Birchfield et al. [3, 4] that generates realistic transmission network topologies from spatial load distributions based on geographic population data. The algorithm is expanded in [5] to also enable transient stability analysis of the synthetic
power grids. Besides the transmission system, synthetic grids are also required for studying mid- and low-voltage grids as their exact structure is often unknown [6]. For German medium and low-voltage grids, the _DingO_ model [7] is an extensive and well-documented option for generating both topologies and supply and demand distributions. _DingO_ is part of the larger research project _open eGo_ and is an open-source software that uses freely available data.
Another important use case for synthetic power grid models is to generate large data sets of synthetic test cases that can be used to investigate the system dynamics with methods of machine learning [8, 9]. A number of studies have shown that the network topology of grids has a direct influence on their dynamic stability [10, 11, 12]. However, most of these studies are based on very simplistic component models and unrealistically homogeneous parameters. Graph-Neural-Networks have been shown to be a powerful method that could potentially extend these stability analyses to more realistic power grid models [9, 13, 14]. The training of such neural networks requires large data sets of realistic grids, that are for example generated by a synthetic grid model.
Finally, synthetic grid models will be crucially important for the investigation of dynamic effects of future power grids. Within the next decades, the power system will undergo a fundamental transformation as new transmission infrastructure is built and conventional machines are replaced by renewable energy sources (RES). A major challenge is that the exact dynamical behavior of generation units is widely unknown as renewable generation units are connected to the grid via inverters with various control schemes. In order to maintain stability in such inverter-based grids, a certain share of these controls have to be grid-forming. Today, most RES are still equipped with grid-following control schemes and hence, there is a lack of practical knowledge on the collective dynamical behavior of a large number of grid-forming generation units. It is therefore of great importance to do simulation studies of these systems to ensure that new technology being integrated into the grid does not lead to unexpected collective effects and blackouts [15]. Unfortunately, there is a lack of both benchmark test cases as well as synthetic power grid models for studying such inverter-based grids.
In this paper, we present a modular framework for generating synthetic grids that are suited for dynamic power system studies. We give an overview of all necessary steps from the generation of grid topologies, to the definition and parametrization of component models and the calculation of the steady state. The paper is accompanied by a software repository that provides an implementation of all algorithms described in this paper. Our approach is modular in the sense that users can easily adapt each step in the grid generation process to their own needs, e.g. by providing their own specific grid topologies or by using different dynamic models for the generating units in the system. We focus on extra high voltage (EHV) level transmission grids, which in the continental European transmission grid includes the 380 kV-400 kV and the 220 kV voltage levels. Collective dynamical effects are traditionally studied in the highest grid layer [16], which is why we can rely on a comprehensive foundation there. In principle, the approach presented extends to all grid layers though.
The framework is designed to be capable of efficiently generating large numbers of synthetic grids with very limited input data. At the same time the component models and parameters have a comparatively high level of realism: Generator and inverter models feature voltage dynamics, the active power production and demand are heterogeneous and the parametrization of line admittances is according to data of the German transmission grid. The framework is therefore well-suited for applying machine learning methods, e.g. to predict dynamical stability from the structural properties of the grid.
Another important feature is the possibility to model power grids with high shares of inverter-based generation units. For this, we bypass the problem that the exact dynamical models of such systems are still uncertain by using a technology and control scheme neutral model [17] that has been shown to reproduce the behavior of a large class of different inverter controls. However, we also point out open research questions for improving the modeling of future power grids.
## 2 Synthetic Power Grid Framework
The modular framework introduced in this paper adopts the following structure: First, a topology or network structure for the synthetic power grid is generated. Then active power set points for the nodes in the network are defined. The next step is to specify the node and line models in order to populate the networks with dynamics. Then an operation point that fulfills certain stability criteria is determined. In the last step, we validate the synthetic grids and ensure that the dynamic network properties are similar to those of real power grids, which are carefully planned.
For the analysis of the resulting grids, we also provide stochastic models that characterize fluctuation processes typical at the time scales of interest.
In the following, we will describe the default settings of each step of the algorithm. As the framework is modular, each step can be swapped out for another approach as long as it adheres to the general structure.
Most of the steps presented here have been used in research projects before; however, they are now combined, for the first time, into a comprehensive package that is available for further research. In particular, it is a first step towards a synthetic model of future power grids with high integration of RES. Each section contains a summary of a step in the framework as well as a critical analysis of the state of the art. In the respective sections, we give an outlook and show which additional work could be done to improve the model, particularly for the representation of future power grids.
### Grid Topology
The default topologies in our framework are generated using the random growth algorithm introduced in [18]. We choose this model as it is conceptually straightforward to generate a large number of interesting and plausible topologies and as it has low computational complexity, which is convenient for generating large ensembles of synthetic test cases. However, it is at the conceptual end of the synthetic grid spectrum. If the interest is to study dynamics on more realistic topologies, other models should be employed.
The algorithm of [18] generates synthetic networks that resemble EHV real-world power grids with respect to the exponentially decaying degree distribution and the mean degree. The algorithm includes first an initialization phase, where a spatially embedded minimum spanning tree is generated, and then a growth phase. The growth phase includes a heuristic target function for the trade-off between the total line length, which determines the costs, and the smallest number of edges that would need to be removed to disconnect the grid into two parts, which influences the redundancy.
The default parameters of the growth algorithm have been set to \([N_{0},p,q,r,s]=[1,1/5,3/10,1/3,1/10]\), as employed in [12], where \(N_{0}\) is the initial number of nodes in the minimum spanning tree, \(p\), \(q\) are the probabilities for generating a new redundant line, \(s\) is the probability of splitting an existing line, and \(r\) is the exponent for the trade-off between redundancy and cost.
Since distribution grids typically exhibit rather different network structures (mostly radial and ring topologies [7]) these parameters have to be adapted when the growth algorithm should be used for modeling lower voltage levels.
For the default step, we assume that there is no correlation between the grid topology and the positioning of generation units in future grids. We thus assume that the transmission system topology will remain very similar to today's, even if the position of generation units will be correlated with the renewable energy potentials and the location of generation thus changes. This is not entirely realistic, and future studies should consider that the grid will be expanded and adapted to the new supply sources. However, such changes are expensive and time-consuming [19] and thus likely to be limited. To properly incorporate these aspects, a synthetic geographical model, potentially incorporating economic optimization, such as [20], is needed.
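To give a concrete impression of this step, the following minimal Python sketch mimics the decision logic of such a random growth process. It is a simplified schematic, not a faithful reimplementation of [18]: the cost/redundancy trade-off (parameter \(r\)) and the line-splitting step (probability \(s\)) are omitted, and all numerical details are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def grow_topology(n_final, p=1/5, q=3/10):
    """Schematic growth loop: each new node attaches to its spatially
    nearest neighbor; with probability p it receives a second
    (redundant) line, and with probability q a redundant line is added
    between two random existing nodes. The cost/redundancy trade-off
    and the line-splitting step of [18] are omitted here."""
    pos = [rng.uniform(size=2)]            # node positions in the unit square
    edges = set()
    for _ in range(n_final - 1):
        x = rng.uniform(size=2)
        d = np.linalg.norm(np.array(pos) - x, axis=1)
        new = len(pos)
        pos.append(x)
        edges.add((int(np.argmin(d)), new))             # attach to nearest node
        if new > 1 and rng.random() < p:                # redundant line for new node
            edges.add((int(np.argsort(d)[1]), new))
        if new > 1 and rng.random() < q:                # redundant line elsewhere
            a, b = rng.choice(new + 1, size=2, replace=False)
            edges.add((int(min(a, b)), int(max(a, b))))
    return np.array(pos), sorted(edges)

pos, edges = grow_topology(100)
deg = np.bincount(np.array(edges).ravel(), minlength=len(pos))
print(f"{len(edges)} lines, mean degree {deg.mean():.2f}")
```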
### Active Power Distribution
In order to correctly represent the dynamics of the power grid, a realistic distribution of power in the grid is required. For this purpose, the _ELMOD-DE_[21] data set, an open-source, spatially distributed, nodal dispatch model for the German transmission system, is consulted. Following [22], which also analyses the data set, we examine the net power \(\Delta P\) at each node given in the data set.
The _ELMOD_ data set includes a time series for the total demand \(P_{tot}\) in all of Germany. The demand is distributed to the individual nodes by introducing the nodal load share \(ls_{m}\), which specifies the proportion of the consumption of a node \(m\) in the total demand \(P_{tot}\). Two different types of load scenarios are distinguished: off-peak and on-peak. Egerer et al. [21] define on-peak and off-peak as the highest and lowest load level, meaning the maximum and minimum of \(P_{tot}\), respectively. The data set gives the load shares \(ls_{m}\) for both the off-peak and the on-peak scenario. In the following, we will always work with the off-peak scenario. The consumption at a node \(P_{con,m}\) is then given by:
\[P_{con,m}=P_{tot}\cdot ls_{m}. \tag{1}\]
The _ELMOD_ data set includes the installed capacity for each generation unit \(c_{j}\), which is the maximum power output the unit \(j\) can produce. As multiple power plants can be connected to a single node, the nodal capacity \(C_{m}\) is given by the sum of all capacities at the node \(C_{m}=\sum_{j}c_{j}\). Typically, the full capacity of a generation unit is not available. In addition to the approach by Taher et al. [22], we also include the availability factors \(a^{tech}\) for each technology during the off-peak scenario. The nodal availability \(A_{m}\) is then given by:
\[A_{m}=\sum_{j}c_{j}\cdot a^{tech}. \tag{2}\]
The total available power is defined as \(A_{tot}=\sum_{m}A_{m}\). As there is no data about how much power each node generates at a given time point, we follow the approach given in [22] and reduce the nodal availability \(A_{m}\) by the factor \(x=\frac{P_{tot}}{A_{tot}}\), such that generation and consumption are balanced. The nodal generation \(P_{gen,m}\) is thus given by: \(P_{gen,m}=A_{m}\cdot x\). Finally, we can define the net nodal power \(\Delta P_{m}\) as:
\[\Delta P_{m}=P_{gen,m}-P_{con,m}. \tag{3}\]
Figure 1 shows a histogram of \(\Delta P_{m}\). It can be seen that the distribution is bimodal and asymmetric and that the power generation is heavy-tailed. The heavy tail of the generation side can be explained by the structure of today's power grid, where the power is mostly produced by a smaller number of large generators. In the _ELMOD_ data set, 301 nodes are classified as net consumers, while only 137 are net generators. For a future RES-heavy scenario, one can replace the capacities and availabilities above with a model for the deployment of wind and solar renewable resources.
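The following Python sketch reproduces the computation of eqs. (1)-(3) on placeholder data; the load shares and availabilities are random stand-ins for the _ELMOD-DE_ entries, and the numerical values (total demand, availability distribution) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 438                              # number of nodes in ELMOD-DE (301 + 137)
P_tot = 35_000.0                     # total off-peak demand [MW], illustrative
ls = rng.dirichlet(np.ones(n))       # nodal load shares, sum to one
P_con = P_tot * ls                   # eq. (1): nodal consumption
A = rng.gamma(shape=0.5, scale=300.0, size=n)   # nodal availability [MW], stand-in
x = P_tot / A.sum()                  # balancing factor x = P_tot / A_tot
P_gen = A * x                        # nodal generation
dP = P_gen - P_con                   # eq. (3): net nodal power
print(f"net generators: {(dP > 0).sum()}, net consumers: {(dP <= 0).sum()}")
```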
Following [22] the active power \(P\) of each node is sampled from a bimodal distribution, given by:

\[p(P)=\frac{1}{2\sigma\sqrt{2\pi}}\left(\exp\left(-\frac{(P-P_{0})^{2}}{2\sigma^{2}}\right)+\exp\left(-\frac{(P+P_{0})^{2}}{2\sigma^{2}}\right)\right) \tag{4}\]

In this work we will use \(P_{0}=\overline{\Delta P_{380}}\approx 131\,\mathrm{MW}\).
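Sampling from (4) amounts to drawing from an equal-weight mixture of two Gaussians centered at \(\pm P_{0}\). A minimal Python sketch, with the width \(\sigma\) as a free parameter whose value is not fixed by the text above:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_active_power(n, P0=131.0, sigma=50.0):
    """Draw n active power set points from the equal-weight Gaussian
    mixture of eq. (4); sigma is a free width parameter."""
    sign = rng.choice([-1.0, 1.0], size=n)
    return rng.normal(loc=sign * P0, scale=sigma)

P_set = sample_active_power(100)
P_set -= P_set.mean()   # one simple way to re-balance generation and load
```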
The topologies used here mimic extra high voltage (\(380\,\mathrm{kV}\)) transmission grids. All following calculations are performed in a per-unit system (p.u.), meaning that an appropriate base power \(P_{base}\) and base voltage \(V_{base}\) have to be chosen. As this work only examines the highest voltage layer of the grid, the base voltage is simply chosen as \(V_{base}=380\,\mathrm{kV}\). To define the base power for the \(380\,\mathrm{kV}\) level, we extract all nodes that are connected to \(380\,\mathrm{kV}\) lines and calculate the mean \(\overline{\Delta P_{380}}\approx 131\,\mathrm{MW}\). Based on the available data, we choose \(P_{base}=100\,\mathrm{MW}\) as the base power for the synthetic power grids.
The _ELMOD_ data set represents the current load and capacity distribution, which means that renewables are still in the minority. The analysis shown here is suitable for the distribution of active power in synthetic grids which should represent the status quo, as most buses are either generation-heavy or load-heavy. For this work, we will adopt the bimodal model which was introduced in [22]. How this distribution will change due to the increasing share of RES, but also due to changing consumption, remains an open research question. A promising possibility is to base the distribution of active power supply on the renewable potentials of geographical areas. For this purpose, established software packages, such as atlite [23], could be consulted. On the consumption side, new sectors with additional loads will be connected to the electric grid, for example, electric cars or hydrogen production.
Figure 1: Histograms of the net nodal generation and consumption in the _ELMOD-DE_[21] data set during the off-peak scenario. The distribution is bimodal and asymmetric. The power generation shows a heavy tail.
Furthermore, it should also be taken into account that the power set points in the grid change over time due to the evolution of the demand over the day and year. Typically these set points are updated every 15 minutes based on a cost optimization procedure. It would be valuable to study a grid and its dynamics under different load scenarios. Moreover, the demand is not constant between two dispatch times but fluctuates, as studied, for example, in [24]. In section 3.2 we will apply the model for realistic demand fluctuations derived in [24] to our power grids. Future work could also consider that the generation is typically dispatched via a cost optimization approach.
### Power Grid Model
On the most abstract level, we will mathematically describe power grids as systems of differential-algebraic equations (DAEs). The constraints mostly appear in the load models. Semi-explicit DAEs are defined as:
\[\dot{x} =f(x,y) \tag{5}\] \[0 =g(x,y) \tag{6}\]
where equations (5) and (6) represent the differential and algebraic equations, respectively. The vector \(x\) holds the differential variables, whose derivatives appear in the DAE, while the vector \(y\) holds the algebraic variables, whose derivatives do not appear.
The specific models for the nodes and lines as well as for the networks are introduced in the following sections.
#### 2.3.1 Node Models
Our synthetic grids will consist of grid-forming components, for example, power plants and novel types of inverters that contribute to grid stability, and of components without grid-forming capabilities, such as loads or grid-following inverters, that have to rely on an already stable grid. For this work, we have decided to use elementary nodal models to depict components with and without grid-forming abilities, which are able to cover a large range of dynamical actors.
In this work, PQ-buses [25] are used to represent the components without grid-forming behavior. The PQ-bus locally fixes the active and reactive power of node \(m\):
\[0=(P_{set,m}+\mathrm{i}Q_{set,m})-u_{m}\cdot i_{m}^{*}. \tag{7}\]
where \(P_{set,m}\) and \(Q_{set,m}\) are the active and reactive power set points of the node, and \(u_{m}\) and \(i_{m}\) are respectively the voltage and current of node \(m\). The model can depict either loads or sub-networks of consumers and renewable energy producers who are connected to the grid via grid-following inverters. The PQ-bus (7) is a constraint equation as given in equation (6) and forces us to use the DAE description of the power grids.
To represent grid-forming components we use the normal form, a technology-neutral model for grid-forming actors that has been introduced in [17]. The normal form captures the most important nonlinearities of grid-forming components. Various models of grid-forming components, such as droop-controlled inverters [26] and synchronous machine models [27], have been mapped to the normal form. So far, the normal form has been validated by numerical simulations and lab measurements of a grid-forming inverter. A normal form at node \(m\) with a single internal variable, the frequency \(\omega_{m}\), is given by:
\[v_{m}^{2} =u_{m}u_{m}^{*}\] \[\dot{\omega}_{m} =A^{x,m}+B^{x,m}\delta\omega_{m}+C^{x,m}\delta v_{m}^{2}+G^{x,m} \delta P_{m}+H^{x,m}\delta Q_{m} \tag{8}\] \[\frac{\dot{u}_{m}}{u_{m}} =A^{u,m}+B^{u,m}\delta\omega_{m}+C^{u,m}\delta v_{m}^{2}+G^{u,m} \delta P_{m}+H^{u,m}\delta Q_{m}\]
where \(u_{m}\) is the complex voltage. \(\delta P_{m}\) and \(\delta Q_{m}\) represent the deviations of the active and reactive power from their set points. \(\delta v_{m}^{2}\) is the deviation of the squared voltage magnitude \(v_{m}^{2}\) from the squared voltage set point. The remaining coefficients are the modeling parameters: all differences between the various models the normal form can represent are absorbed in this parametrization. The parameters \(A^{x,m}\) and \(A^{u,m}\) are zero when the system is, as in our case, defined in the co-rotating reference frame.
The free parameters of the normal form can be obtained by approximating other models; moreover, it is also possible to derive them from experimental data, which has been done in [17] for a specific type of inverter in a lab. For the example provided in this work we will use a normal form approximation of a droop-controlled inverter [26], whose parameters can be derived analytically.
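The following Python sketch evaluates the right-hand side of (8) for a single node in the co-rotating frame (so that \(\delta\omega_{m}=\omega_{m}\) and \(A^{x,m}=A^{u,m}=0\)). The coefficient names in the parameter dictionary are our own shorthand for the symbols in (8); their (possibly complex) values must come from a concrete parametrization, e.g. the droop-controlled inverter mentioned above.

```python
import numpy as np

def normal_form_rhs(u_m, omega_m, P_m, Q_m, p):
    """Right-hand side of the normal form (8) for node m in the
    co-rotating frame. The dictionary p holds the set points and the
    (possibly complex) coefficients B, C, G, H of (8)."""
    dv2 = np.abs(u_m)**2 - p["v_set"]**2          # delta v_m^2
    dP, dQ = P_m - p["P_set"], Q_m - p["Q_set"]   # delta P_m, delta Q_m
    domega = (p["Bx"] * omega_m + p["Cx"] * dv2
              + p["Gx"] * dP + p["Hx"] * dQ)
    du = u_m * (p["Bu"] * omega_m + p["Cu"] * dv2
                + p["Gu"] * dP + p["Hu"] * dQ)    # from (du/dt)/u = ...
    return du, domega
```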
The exact ability of the normal form to cover all needed dynamics is the subject of current research. We expect that the normal form also captures the dynamics of sub-networks which include grid-forming components. Future work will include measurements on different types of inverters and deriving the parameters of the normal form from the data. This is a crucial step towards studying the dynamics and stability of realistic future power grids, which will consist of a variety of interacting grid-forming inverters.
In addition to the models for grid-forming and grid-following components, we introduce a slack bus [25] into the synthetic power grid model. The slack bus locally fixes the voltage \(u_{m}\) of node \(m\):
\[0=u_{set,m}-u_{m}. \tag{9}\]
where \(u_{set,m}\) is the set point voltage. The voltage magnitude \(|u_{set,m}|\) of the slack is typically set to 1 p.u. and its voltage angle is \(\phi_{m}=0^{\circ}\). The slack bus is used as the reference for all other buses in the system. While solving the load flow problem, the active and reactive power of the slack are free to change to compensate for the power imbalance in the network. Therefore it is assumed that the slack bus has a large amount of energy stored which can be released quickly. The slack bus is typically considered to be a large power plant or battery, a connection point to a higher grid layer, or another part of the power system which is not modeled explicitly.
#### 2.3.2 Line Model
For this work, the Pi-model (see, for example, [28]) is used. In the Pi-model the impedance \(Z_{km}=\frac{1}{Y_{km}}\) is placed in the center of the line. The capacitance between the line and the ground is also taken into account by introducing the shunt admittance \(Y_{sh,km}\), which is placed, in parallel, at both ends of the line. The current on the line connecting nodes \(k\) and \(m\) is then given by [28]:
\[I_{km} =Y_{km}(U_{k}-U_{m})+Y_{sh,km}U_{k} \tag{10}\] \[I_{mk} =Y_{km}(U_{m}-U_{k})+Y_{sh,km}U_{m} \tag{11}\]
where \(Y_{km}\) is the admittance of the line connecting nodes \(k\) and \(m\), and \(U_{k}\) and \(U_{m}\) are the complex nodal voltages. The admittance \(Y_{km}\) and the shunts are calculated according to the _dena_ model of standard 380 kV overhead power lines [29] given in table 1. The reactance \(X\) is specified for the nominal frequency of 50 Hz, which is why we use a static line model here. The admittances are calculated according to:
\[k_{c} =\frac{cables}{cables_{typical}} \tag{12}\] \[k_{w} =\frac{wires}{wires_{typical}}\] (13) \[Y_{km} =\frac{k_{c}k_{w}}{(R+jX)\,l_{km}}\] (14) \[Y_{sh,km} =\frac{(j\omega C_{sh})\,k_{c}k_{w}}{2}\,l_{km} \tag{15}\]
where \(l_{km}\) is the line length in kilometers. For consistency, we fix the grid frequency \(\omega\) in the shunt admittance \(Y_{sh,km}\) to the nominal frequency. The coefficients \(k_{c}\) and \(k_{w}\) give the ratio between the actual numbers of cables and wires in the line and the typical numbers \(cables_{typical}\) and \(wires_{typical}\) [30]. The typical numbers of cables and wires are 3 and 4, respectively, for transmission lines at the 380 kV level in Germany [30]. In the default version of the algorithm, we assume that all transmission lines have the typical number of cables and wires. Section 2.5.4 introduces an additional step in the algorithm where probabilistic power flow scenarios are considered and the line capacities are increased, by adding a new cable to the existing line \(mk\), if a load scenario leads to an overload in line \(mk\). This ensures that the line parameters are well-suited for the load flow used.
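A minimal Python sketch of eqs. (12)-(15), using the per-kilometer parameters of table 1; the example length is the mean EHV line length discussed below.

```python
import numpy as np

# dena standard 380 kV overhead line parameters per km (table 1)
R, X, C_sh = 0.025, 0.25, 13.7e-9
OMEGA = 2 * np.pi * 50.0   # nominal grid frequency, fixed in Y_sh

def line_admittances(l_km, cables=3, wires=4):
    """Series admittance (14) and per-end shunt admittance (15) of an
    overhead line of length l_km."""
    kc = cables / 3      # eq. (12), typical number of cables is 3
    kw = wires / 4       # eq. (13), typical number of wires is 4
    Y = kc * kw / ((R + 1j * X) * l_km)
    Y_sh = 1j * OMEGA * C_sh * kc * kw * l_km / 2
    return Y, Y_sh

Y, Y_sh = line_admittances(37.13)   # mean EHV line length, cf. table 2
```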
To calculate the line properties, the lengths of the transmission lines are needed. As the model of [18] generates an embedded topology but does not provide a spatial scale, we need an additional step to determine this scale. This is done by requiring that the line lengths of the synthetic grids resemble the line lengths of real EHV grids.
| Voltage level | \(R\) [\(\Omega\)/km] | \(X\) [\(\Omega\)/km] | \(C_{sh}\) [nF/km] |
| --- | --- | --- | --- |
| 380 kV | 0.025 | 0.25 | 13.7 |

Table 1: Standard overhead line parameters according to [29] for the typical number of cables and wires.
The line lengths \(l_{mk}\) in kilometers are obtained by converting the euclidean distances \(d_{mk}\) of the lines, which are generated by the random growth model [18]. The conversion factor \(c\) is given by the mean length \(\langle l\rangle\) of overhead lines in the extra high voltage (EHV) level, i.e. voltage levels of \(220\,\mathrm{kV}\) and above, divided by the mean euclidean distance \(\langle d\rangle\):
\[c =\frac{\langle l\rangle}{\langle d\rangle} \tag{16}\] \[l_{mk} =cd_{mk}. \tag{17}\]
Additionally, we used the length of the shortest line in the EHV level, \(l_{min}\), as a threshold: the admittances of lines that are shorter than \(l_{min}\) are set to the admittance corresponding to this threshold length.
The mean line length was determined from the _SciGRID_ data set [30], which consists of openly available geographic data of the German power grid. At the time of the creation of the data set the coverage of the EHV level in Germany was around 95% [30], which thus offers an excellent basis for such a study.
The _ELMOD_ data set [21] also offers a network topology that is based on network plans by the transmission system operators (TSOs) and OpenStreetMap data. Since the data in _SciGRID_ is much better documented and that study deals much more intensively with the network topology, we base our transmission line lengths on _SciGRID_. Still, for completeness, we also analyze the data from _ELMOD_. A comparison between _SciGRID_, _ELMOD_ and our synthetic grids, which are based on _SciGRID_, is given in table 2.
In table 2 it can be seen that the mean line length, as well as the standard deviation of the line length, of _SciGRID_ and _ELMOD_ match well. Furthermore, the synthetic grid line lengths show a standard deviation that matches _SciGRID_ as well as _ELMOD_. The most significant difference between the two data sets is the minimum line length \(l_{min}\), which is about \(0.4\,\mathrm{km}\) in _ELMOD_ and about \(60\,\mathrm{m}\) in _SciGRID_. For the reasons that were stated above, we have adopted \(l_{min}\) from _SciGRID_.
Future work would also include not only analyzing the mean and standard deviation of the length but also matching the distributions of line lengths (see fig. 5 in the Appendix). This goes beyond the random growth algorithm [18] which is currently used, and would require an algorithm that considers line lengths and node locations. A preliminary study [31] on extending the algorithm with different node positioning rules has been performed, but it does not deal with recovering the correct line length distribution.

| | \(\langle l\rangle\) [km] | \(\sigma_{l}\) [km] | \(l_{min}\) [km] |
| --- | --- | --- | --- |
| _SciGRID_ [30] | 37.13 | 36.59 | 0.06 |
| _ELMOD-DE_ [21] | 40.98 | 35.54 | 0.42 |
| Synthetic Grids | 37.13 | 34.6 | 0.06 |

Table 2: Comparison of transmission line lengths between different models. The values for the synthetic grid were calculated by generating 10000 different topologies. The mean line length is given by \(\langle l\rangle\), the standard deviation of the line length by \(\sigma_{l}\) and the minimal line length by \(l_{min}\).
#### 2.3.3 Network Models
Combining the nodal and line models we obtain the full network model. The power grid topology is given by a graph \(\mathcal{G}=(N,E)\), where \(N\) is the set of nodes and \(E\) the set of edges [32]. In power systems, the nodes are the buses and the edges are the transmission lines connecting them. The nodal admittance matrix \(Y\) is needed to calculate the power flow within the network. It is defined as:
\[Y_{km}=\begin{cases}y_{k}+\sum_{j=1,j\neq k}^{N}y_{kj}&k=m\\ -y_{km}&k\neq m\end{cases} \tag{18}\]
where \(y_{km}\) stands for the admittance between two nodes \(k\) and \(m\). The self-admittance of node \(k\) is given by \(y_{k}\) and includes all shunt admittances connected to node \(k\)[25]. Ohm's law defines the current injected at the nodes, \(I=YU\), and the power flow of the network is given by [33] \(S_{k}=u_{k}i_{k}^{*}=P_{k}+\mathrm{i}Q_{k}\), where \(S_{k}\) is the apparent power at node \(k\) and \(P_{k}\) and \(Q_{k}\) are the real and reactive power injected at \(k\), respectively.
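A minimal Python sketch assembling the admittance matrix of eq. (18) from a line list and evaluating the nodal power injections; the data layout of the line list is our own choice.

```python
import numpy as np

def admittance_matrix(n, lines):
    """Assemble the nodal admittance matrix of eq. (18). Each entry of
    lines is a tuple (k, m, y, y_sh) of node indices, series admittance
    and per-end shunt admittance; the shunts enter the self-admittance
    on the diagonal."""
    Y = np.zeros((n, n), dtype=complex)
    for k, m, y, y_sh in lines:
        Y[k, k] += y + y_sh
        Y[m, m] += y + y_sh
        Y[k, m] -= y
        Y[m, k] -= y
    return Y

def nodal_injections(Y, U):
    """Apparent power S_k = u_k * i_k^* = P_k + iQ_k at every node,
    with the currents given by Ohm's law I = YU."""
    return U * np.conj(Y @ U)
```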
### Operation Point and Reactive Power
Finding an operation point for synthetic power grids is challenging, as power systems are generally non-linear and multi-stable. Typically, only fixed points where all frequencies are synchronous and the voltage magnitudes of all nodes are close to 1 p.u. are physically meaningful for power grids. Furthermore, the AC load flow has no guarantee of convergence, and for many synthetic grid models there is no prior information about the reactive power at the nodes.
Reactive power planning is considered to be one of the most intricate problems in power grid planning [34]. The review article [34] gives an excellent overview of the objectives and constraints that are considered in reactive power planning. Instead of implementing one of the complex established models presented in [34], we use a straightforward method to solve the reactive power flow. We employ the voltage stability objective, which is also a standard objective according to [34], and assume that it has to be met perfectly. This requires adjusting the reactive powers at the nodes to match the objective using an ancillary AC power flow. As the synthetic grids generated in this work have fewer than 10000 nodes, our approach still leads to feasible power flow solutions. Once the grids become bigger, a more in-depth reactive power planning algorithm, such as [35], will be needed to find feasible operation points. The reactive powers of the nodes are adjusted to control the voltage magnitudes in the power grid [25].
To find the nodal reactive powers we generate an ancillary power grid with the same topology, line models and active powers as the full power grid. The ancillary power grid consists purely of PV buses, where all nodes are constrained to have voltage magnitudes of \(V_{m}=1\) p.u. and the same active power that they should generate in the actual power grid. The reactive powers of the ancillary grid are found by using the power flow calculation of PowerModels.jl [36] and a root-finding algorithm to find a steady state. The operation point of the ancillary grid is used as the initial guess for the operation point search of the actual grid.
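As a rough stand-in for this step (the framework itself relies on PowerModels.jl), the following Python sketch pins all voltage magnitudes to 1 p.u., solves the active power balance for the voltage angles, and reads off the required reactive powers; the slack handling and solver choice here are simplifications.

```python
import numpy as np
from scipy.optimize import fsolve

def ancillary_reactive_power(Y, P_set, slack=0):
    """Flat-voltage ancillary load flow: all voltage magnitudes are
    fixed to 1 p.u. (PV buses); P_set is an array of active power set
    points and the slack node absorbs the mismatch."""
    n = len(P_set)
    free = [m for m in range(n) if m != slack]

    def mismatch(theta_free):
        theta = np.zeros(n)
        theta[free] = theta_free
        U = np.exp(1j * theta)                  # |U| = 1 p.u. everywhere
        S = U * np.conj(Y @ U)
        return S.real[free] - P_set[free]       # active power mismatch

    theta = np.zeros(n)
    theta[free] = fsolve(mismatch, np.zeros(n - 1))
    U = np.exp(1j * theta)
    return (U * np.conj(Y @ U)).imag            # nodal reactive powers Q_m
```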
### Validators
Real-life power grids are planned carefully to lead to stable operations. Synthetic processes can never fully capture this planning stage. To handle this challenge we will use a rejection sampling approach. Synthetic power grids whose dynamics do not satisfy the stability properties of real-life power grids are rejected. In this section, we will introduce a set of validators that review the stability of the synthetic power grids in their operation point.
The default settings we provide lead to very stable grids, and few or no rejections are observed. Besides their conceptual importance, the practical value of the validators lies in the ability to develop new topological and dynamical models for situations that put more stress on the grid.
To assess our default settings, we generated a set of synthetic networks of different sizes and studied the number of rejections. We generated power grids ranging from 100 to 1300 nodes with a step size of 25 nodes. For each grid size we generated 100 power grids and can report that no grid was rejected.
#### 2.5.1 Voltage Magnitude
Firstly, we verify that the nodal voltage magnitudes fulfill the standard of the EN 50160 report [37]. The report specifies that the 10-minute average root mean square voltage has to stay within bounds of \(\pm 10\%\) for 95% of the week. We ensure this by validating that all nodal voltage magnitudes are \(V\approx 1\) p.u. in the operation point. If the set points of the system and the parametrization have been chosen properly, the voltage condition should be fulfilled. Even if the reactive power is chosen to ensure a stable power flow with good voltage magnitudes, incorrectly specified control dynamics or machine parameters can still lead to a violation of the voltage conditions in the operating point. Thus, the verification of the voltage condition is essential in order to catch such mistakes.
#### 2.5.2 Line Loading Stability Margin
In a stable operation of the power grid, no line is overloaded. There are different thresholds for the allowed loading of a transmission line. In this work, we will focus on the physically possible limit of the line \(P_{max}\).
The power flow transferred over a line connecting node \(m\) and \(k\), neglecting the reactive power flow and line losses, is given by:
\[P_{mk}=\frac{V_{m}V_{k}}{X_{mk}}\sin(\theta_{mk}) \tag{19}\]
where \(V_{m}\) and \(V_{k}\) are the nodal voltage magnitudes, \(X_{mk}\) is the line reactance and \(\theta_{mk}\) is the difference of the voltage angles of nodes \(m\) and \(k\). The transferred power becomes maximal when \(\theta_{mk}=\frac{\pi}{2}\), which means that the physically possible limit of the line is \(P_{max}=\frac{V_{m}V_{k}}{X_{mk}}\). To ensure a stable power system, transmission lines are operated well below this limit and a so-called stability margin \(sm\) is introduced [38]. This means that the actual transferred power of a line, \(P_{rated}\), must stay below the threshold \(P_{rated}\leq P_{max}(1-sm)\). In this study, we choose \(sm=0.3\) as suggested in [38]. If any line loading in our power grid violates this threshold, we reject the power grid.
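This validator reduces to a simple comparison; a minimal Python sketch:

```python
def line_loading_ok(V_m, V_k, X_mk, P_rated, sm=0.3):
    """Line loading validator: the transferred power P_rated must stay
    below the physical limit P_max = V_m * V_k / X_mk reduced by the
    stability margin sm."""
    P_max = V_m * V_k / X_mk
    return abs(P_rated) <= P_max * (1 - sm)
```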
#### 2.5.3 Small Signal Stability Analysis
Since the grids we consider in this work are described by DAEs, we cannot simply study the eigenvalues of the Jacobian in the equilibrium to determine the linear stability of the system. Instead, we perform a small signal stability analysis for DAEs according to [39].
In this approach, the eigenvalues of the so-called reduced Jacobian, or state matrix \(J_{red}\) are examined. The reduced Jacobian is set up by decomposing the full Jacobian matrix \(J\) into the following blocks:
\[J=\begin{bmatrix}\partial_{x}f&\partial_{y}f\\ \partial_{x}g&\partial_{y}g\end{bmatrix} \tag{20}\]
where \(\partial_{x}f\) is an abbreviation for the matrix of partial derivatives of the right-hand side of the differential equations \(f\) with respect to the differential variables \(x\), and \(\partial_{y}g\) gives the matrix of the partial derivatives of the algebraic equations \(g\) with respect to the algebraic variables \(y\).
Following [39], the reduced Jacobian is defined as:
\[J_{red} =\partial_{x}f-D \tag{21}\] \[D =\partial_{y}f\left(\partial_{y}g\right)^{-1}\partial_{x}g \tag{22}\]
where \(D\) is the degradation matrix. The eigenvalues of \(J_{red}\) can then be examined as usual, meaning that power grids whose reduced Jacobian \(J_{red}\) has eigenvalues with positive real parts are classified as linearly unstable. Power grids whose operation point is linearly unstable would not exist in reality and therefore have to be rejected before any further investigations are performed.
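A minimal Python sketch of eqs. (20)-(22), taking the four Jacobian blocks as inputs; note that in practice a zero eigenvalue stemming from the global phase invariance of the power grid equations may have to be treated separately, which is not shown here.

```python
import numpy as np

def small_signal_stable(fx, fy, gx, gy):
    """Small signal stability of the DAE (5)-(6): build the reduced
    Jacobian J_red = f_x - f_y (g_y)^{-1} g_x of eqs. (21)-(22) and
    check that no eigenvalue has a positive real part."""
    D = fy @ np.linalg.solve(gy, gx)   # degradation matrix, eq. (22)
    J_red = fx - D
    eig = np.linalg.eigvals(J_red)
    return bool(np.all(eig.real <= 0)), eig
```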
#### 2.5.4 Probabilistic Capacity Expansion
So far we have only ensured that the synthetic power grids are stable under a single power set point that was drawn from the probability distribution (4), or any other source. However, real power grids do not operate under a single set point; rather, the set points are updated regularly, e.g. in Germany a new demand plan is implemented every 15 minutes. Therefore, it is important to also verify the stability of the grid under different set points. In principle, one can simply run all validators on a sample of set points. In this work, we only focus on the capacity of lines, as this is most directly affected by the demand, and ensure that there is always enough line capacity to cover the expected load cases.
For now, we resort to a simplistic approach and sample completely new set-points from the bimodal distribution (4) but double the mean power \(P_{0}\) in order to study the system
under more stress. A more realistic analysis of high-stress power flow scenarios would require an extensive investigation of the space of expected set points and is therefore beyond the scope of this paper.
For each new scenario, we calculate the load flow in the grid and then analyze the line loading as given in section 2.5.2. If a line is overloaded, we add three additional cables to the line to increase its admittance as in equation (14). This approach is repeated for \(N\) different scenarios. So far, no new cables were added in any of the performed simulations. This is to be expected since, for example, in the _SciGRID_[30] data set, more than 90% of the EHV transmission lines have the typical number of cables. It is nevertheless important to validate the grid under different load scenarios to ensure its stability. Furthermore, this capacity evaluation could become important once more realistic load scenarios are evaluated, which in the future could include the weather-dependent time series generated by Atlite [23].
While these validators cover the most basic functioning of the grid, further conditions can also be considered. A natural extension for future work would be to add N-1 stability as a condition that the grids need to satisfy.
## 3 Nodal fluctuations
Due to the increasing share of variable renewable energy sources (RES), i.e. wind and solar energy, power grids are exposed to new sources of fluctuations. RES fluctuate strongly on different time scales [40, 41] and, in particular, exhibit intermittent fluctuations on short time scales [42].
Along with supply-side fluctuations, recent studies of high-resolution recorded electricity consumption demonstrate intermittent fluctuations on the demand-side [43, 24, 44] as well. Thus, demand can be another major driver of fluctuations in power grids. To generate synthetic power grids that imitate the dynamics of real power systems at such short time scales, one should add fluctuations from both the supply and demand side to the node models explained in section 2.3.1.
Here we introduce stochastic processes that generate fluctuating wind and solar power, as well as demand time series. These models have been derived to ensure that these synthetic time series have the same short time-scale stochastic characteristics as empirically observed in real data. Therefore, one can confidently use the synthetic time series for further research in power grids, and consider the response of power systems to these fluctuations. We assume that the nodes with grid-forming capabilities can handle the fluctuations internally, but that nodes without grid-forming capabilities feed them into the grid. This means we use the synthetic time series to drive time-dependent demand at the PQ-buses. The effects on the grid's frequency and voltage are illustrated in section 4.
### Supply fluctuations
The intermittent nature of wind speed and solar irradiance, along with their turbulence-like behavior, transfers to wind and solar power and, consequently, to power grids; this has been widely discussed [40, 42, 45, 46]. As demonstrated in these studies, wind and solar power are non-Gaussian time series and, indeed, have heavy-tailed probability distribution functions (PDFs). Extreme fluctuations, such as a 90% reduction in power within just a few seconds, occur often in RES. These fluctuations can present additional challenges for maintaining the energy balance in power systems. A deep insight into the characteristics of RES, and their effects on power grids, allows us to deploy future control strategies that can cope with these extreme fluctuations.
Here, we employ a non-Markovian Langevin-type stochastic process [47], as well as a jump-diffusion model [48], to generate wind and solar power, respectively, with the same short time-scale characteristics as the empirical data sets. The Langevin-type model used here is:
\[\dot{P}_{wind}(t)=P_{wind}(t)(\Gamma-\frac{P_{wind}(t)}{P_{0}})+\sqrt{\kappa P _{wind}^{2}(t)}n(t) \tag{23}\]
where, \(\Gamma\) and \(P_{0}\) are constant parameters, and \(\kappa\) is a parameter with which one can tune the intensity of the noise \(n\) (the exact values of these parameters in our simulations are given in Section 4). The noise \(n\) is obtained from the following Langevin equation:
\[\dot{n}(t)=-\gamma n(t)+\zeta(t) \tag{24}\]
where \(\zeta\) is a Gaussian noise with \(\langle\zeta(t)\rangle=0\) and \(\langle\zeta(t)\zeta(t^{\prime})\rangle=\delta(t-t^{\prime})\). The jump-diffusion model emulating short time-scale fluctuations in the solar power is:
\[dP_{solar}(t)=D^{(1)}(P_{solar},t)dt+\sqrt{D^{(2)}(P_{solar},t)}dw(t)+\eta dJ(t) \tag{25}\]
where \(D^{(1)}\) and \(D^{(2)}\) are the drift and diffusion coefficients, respectively. In eq. (25), \(dw\) is the Wiener process and \(dJ\) is the Poisson process with jump size \(\eta\), which is assumed to be a normally distributed random number, i.e. \(\eta\sim N(0,\sigma_{\eta})\). The Poisson process also involves a jump rate, which we call \(\lambda\). The advantage of the jump-diffusion model is that it is a non-parametric model, i.e. all parameters are derived from the empirical data sets (for more details see [48]).
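The following Python sketch integrates both supply models with a simple Euler-Maruyama scheme. All parameter values are illustrative placeholders, and the drift and diffusion of the solar model are given simple parametric forms (mean reversion, constant diffusion) that merely stand in for the non-parametric estimates of [48].

```python
import numpy as np

rng = np.random.default_rng(7)

def wind_power(steps, dt, Gamma=1.0, P0=1.0, kappa=0.05, gamma=1.0):
    """Eqs. (23)-(24): P_wind driven by the colored noise n, which is
    an Ornstein-Uhlenbeck process integrated alongside."""
    P, n = np.empty(steps), 0.0
    P[0] = P0
    for t in range(1, steps):
        n += -gamma * n * dt + np.sqrt(dt) * rng.standard_normal()
        P[t] = P[t-1] + (P[t-1] * (Gamma - P[t-1] / P0)
                         + np.sqrt(kappa * P[t-1]**2) * n) * dt
    return P

def solar_power(steps, dt, theta=2.0, mu=0.6, sigma2=0.02,
                lam=0.5, sigma_eta=0.1):
    """Eq. (25) with Poisson jumps: a mean-reverting drift D1 and a
    constant diffusion D2 stand in for the non-parametric estimates."""
    P = np.empty(steps)
    P[0] = mu
    for t in range(1, steps):
        dW = np.sqrt(dt) * rng.standard_normal()
        dJ = rng.poisson(lam * dt)              # number of jumps in this step
        P[t] = (P[t-1] + theta * (mu - P[t-1]) * dt
                + np.sqrt(sigma2) * dW + rng.normal(0.0, sigma_eta) * dJ)
    return P
```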
### Demand fluctuations
Standard load profiles used to balance energy in the grid in advance have a time resolution of 15 minutes. Shorter time scales are balanced by control mechanisms rather than by trading. To study the dynamics at short time scales, the load profiles are thus of limited use. Instead, we can consider empirical measurements of loads that have a high enough resolution to reveal short-term fluctuations, such as [43, 24, 49].
Here, we apply the superstatistics model introduced in [24] to generate the short time-scale fluctuations of the demand side and add them to the dynamic model of the load in the PQ-bus. Following the superstatistical approach, the demand fluctuations are obtained by taking the 2-norm of several Gaussian processes plus a constant offset \(\mu_{MB}\):

\[P^{fluc}(t)=\sqrt{(z_{1}(t))^{2}+(z_{2}(t))^{2}+\cdots+(z_{J}(t))^{2}}+\mu_{MB} \tag{26}\]
where we use \(J=3\) as discussed in [24], and \(z_{i}(t)\) is obtained from the following Langevin equation:

\[dz_{i}(t)=-\gamma z_{i}(t)dt+\epsilon dw_{i} \tag{27}\]
where \(dw_{i}\) is the increment of a Wiener process; the resulting process \(z_{i}\) has mean 0 and stationary standard deviation \(\sigma=\epsilon/\sqrt{2\gamma}\). We employ the same parameter values \(\mu_{MB}\), \(\gamma\), and \(\epsilon\) as reported in [24].
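A minimal Python sketch of eqs. (26)-(27), using the parameter values \([\gamma,\epsilon,\mu_{MB}]=[0.016,33.81,0.03]\) reported in [24]; the time units and step size follow that reference and are otherwise our assumption.

```python
import numpy as np

rng = np.random.default_rng(5)

def demand_fluctuations(steps, dt, J=3, gamma=0.016, eps=33.81, mu_MB=0.03):
    """Superstatistics model of eqs. (26)-(27): the 2-norm of J
    Ornstein-Uhlenbeck processes plus the offset mu_MB."""
    z = np.zeros(J)
    P = np.empty(steps)
    for t in range(steps):
        z += -gamma * z * dt + eps * np.sqrt(dt) * rng.standard_normal(J)
        P[t] = np.sqrt(np.sum(z**2)) + mu_MB
    return P

P_fluc = demand_fluctuations(steps=10_000, dt=0.1)
```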
It should be noted that the stochastic time series we have introduced here are based on empirical measurements of power grid actors that are typically not directly connected to the highest level of the power grid. As not all producers and consumers connected to a particular bus are perfectly correlated the fluctuations would be somewhat attenuated in reality. Unfortunately few or no measurements of the actual correlations of fluctuations exist, and we have to leave this point to future work.
## 4 Simulation Examples
In this section we generate a fully electrified synthetic power grid, whose structure is shown in figure 2, and study its behavior in response to the three different fluctuation processes introduced in section 3. The synthetic grid considered here consists of 100 nodes with an equal share of grid-following and grid-forming inverters. We expect that future power grids will have a high share of variable renewable energies and, therefore, we consider multi-node fluctuations in this example. We assume that the grid-forming inverters are equipped with sufficiently large storage units. Hence, the RES fluctuations are only fed into the grid via the grid-following inverters.
The fluctuations \(P_{fluc,i}(t)\) are added to the set points \(P_{set,i}\) of the nodes. This results in the following equation for the active power \(P_{i}\) at node \(i\):
\[P_{i}(t)=P_{set,i}+P_{fluc,i}(t). \tag{28}\]
For the different processes we will analyze two edge cases: first, completely correlated fluctuations, meaning that all nodes have the same fluctuating time series \(P_{fluc}(t)\); and second, completely uncorrelated fluctuations, where all nodes have different fluctuating time series.
In order to compare the results we will study two performance measures, the synchronization norm \(||\mathcal{L}||_{sync}\)[50] and the \(L_{2}\) norm of the average deviation from the nominal grid frequency \(||\mathcal{L}||_{dev}\)[51]:
\[||\mathcal{L}||_{sync} =\sqrt{\frac{1}{T}\int_{0}^{T}\frac{1}{N}\sum_{m=1}^{N}\left( \omega_{m}(t)-\frac{1}{N}\sum_{k=1}^{N}\omega_{k}(t)\right)^{2}dt} \tag{29}\] \[||\mathcal{L}||_{dev} =\sqrt{\frac{1}{T}\int_{0}^{T}\frac{1}{N}\sum_{m=1}^{N}\left( \omega_{m}(t)-\omega_{0}\right)^{2}dt} \tag{30}\]
where \(\omega_{0}\) is the nominal grid frequency. The indices \(m,k\) run over all \(N\) grid-forming inverters as the grid-following inverters have no internal frequency dynamics (7).
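Both norms are straightforward to evaluate on simulated trajectories. A minimal Python sketch of the discrete-time versions of (29) and (30); the array layout is our own choice.

```python
import numpy as np

def performance_norms(omega, dt, omega0=0.0):
    """Discrete versions of eqs. (29)-(30). omega has shape
    (timesteps, N) and holds the frequencies of the N grid-forming
    inverters; omega0 is the nominal frequency (0 in the co-rotating
    frame)."""
    T = omega.shape[0] * dt
    bulk = omega.mean(axis=1, keepdims=True)        # network-average frequency
    L_sync = np.sqrt(dt / T * np.mean((omega - bulk)**2, axis=1).sum())
    L_dev = np.sqrt(dt / T * np.mean((omega - omega0)**2, axis=1).sum())
    return L_sync, L_dev
```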
Figure 2: Network structure of a synthetic power grid. Triangular and circular nodes depict grid-following and grid-forming inverters, respectively.

The synchronization norm (29) measures the synchronicity in the power grid. A large synchronization norm expresses a lack of synchronization. However, the synchronization norm neglects any fluctuation of the so-called bulk [52], i.e. the joint response of the entire power grid at synchronous frequencies. Therefore, the authors of [51] introduce the deviation norm \(||\mathcal{L}||_{dev}\), which measures the contribution of the bulk to the fluctuations. In [51] it has been shown that the bulk is the dominant contributor in response to single-node renewable energy fluctuations.
The results are summarized in tables 3 and 4. In all cases it can be seen that the deviation norm \(||\mathcal{L}||_{dev}\) is larger than the synchronization norm. This indicates that the bulk fluctuations are the main contributors for multi-node renewable energy fluctuations as well. This holds for all fluctuation processes and for both edge cases, correlated and uncorrelated fluctuations. Furthermore, we can see that the deviation norm is smaller for the uncorrelated case than for the correlated case, which is to be expected. Moreover, it can be seen that the synchronization norm is very small in all cases, which implies that the networks have a high degree of synchronicity under renewable energy fluctuations.
In the following, we will discuss the results for the demand fluctuations in more detail. The results for the solar and wind fluctuations can be found in the appendix.
Figures 3 and 4 show the results for the correlated and uncorrelated demand fluctuations, respectively. In this example we use the coefficients for the stochastic process introduced in [24], which have been extracted from the NOVAREF data set [53], a collection of high-resolution demand profiles. In a transmission grid, the number of consumers is significantly higher than in the data sets analyzed in [24]. We expect that the fluctuations at a single node will average out to some extent because the demand of all consumers is not fully correlated. The actual fluctuations should therefore be smaller, and the results presented here should be considered a pessimistic estimate. This explains why the frequency response for the uncorrelated fluctuations shown in figure 4 is relatively severe and occasionally even surpasses \(0.1\,\mathrm{Hz}\).
| | \(||\mathcal{L}||_{sync}\) | \(||\mathcal{L}||_{dev}\) [51] |
| --- | --- | --- |
| Wind Fluctuations | 0.001 | 0.874 |
| Demand Fluctuations | 0.002 | 1.952 |
| Solar Fluctuations | 0.033 | 0.686 |

Table 3: Performance measures for completely correlated fluctuations.

| | \(||\mathcal{L}||_{sync}\) | \(||\mathcal{L}||_{dev}\) [51] |
| --- | --- | --- |
| Wind Fluctuations | 0.001 | 0.153 |
| Demand Fluctuations | 0.002 | 0.417 |
| Solar Fluctuations | 0.027 | 0.099 |

Table 4: Performance measures for completely uncorrelated fluctuations.

For all fluctuation processes considered in this work we find that the voltage magnitudes of the nodes stay close to the set point of 1 p.u., which is to be expected as we simulate active power fluctuations, which couple to the frequency.
This example demonstrates that we are able to generate robust and stable synthetic grids. The grid does not lose synchrony even under fluctuations that are stronger than they would be in reality, as we have not considered averaging effects at the nodes.
This opens the door to future research that studies grids under severe stress, possibly from compound events in which multiple stressors occur at once. Extreme scenarios that can destabilize grids include the loss of multiple lines (grids are built to be N-1 stable) and special weather conditions that can locally deplete storage, forcing grid-forming inverters to compromise on their grid-forming capabilities and to inject fluctuations as well.
Figure 3: Results for completely correlated demand fluctuations at the nodes. The figure on the left shows the active powers of the grid-following inverters. The frequency response of the grid-forming inverters is shown in the figure on the right side. The parameters \([\gamma,\epsilon,\mu_{MB}]=[0.016,33.81,0.03]\), as in [24], were used to generate the demand fluctuations.
## 5 Conclusions
In this work, a framework to generate synthetic power grid models for studying collective dynamical effects has been introduced. For the first time, the following established methods are combined to obtain synthetic power grids: topologies [18]; active power set-points [21, 22] and short-term fluctuations; node [17] and line models; and finally an operation point that is validated to fulfill certain stability criteria [38, 45]. Each element in the framework can be substituted as long as it adheres to the general structure, thus making the approach modular. For the default elements we have chosen methods which have already been used in various research projects. We have reviewed these established approaches and draw attention to possible improvements in the respective sections, in particular in order to investigate electricity grids with a high share of renewable energy. In particular, we have identified two elements that need improvement: the generation of network topologies and the distribution of active power supply.
An essential contribution of this work is a set of validators that ensure that the different parts of the system work well together to provide a plausible stable operating state. This will be a key technical tool for developing new models in the future.
Figure 4: Results for completely uncorrelated demand fluctuations at the nodes. The figure on the left shows the active powers of the grid-following inverters. The frequency response of the grid-forming inverters is shown in the figure on the right side. The parameters \([\gamma,\epsilon,\mu_{MB}]=[0.016,33.81,0.03]\), as in [24], were used to generate the demand fluctuations.

The topologies created with the random growth model [18] cannot reflect the distribution of transmission line lengths in the empirical _SciGRID_ data-set [30]. The model has been designed to resemble network properties, such as the degree distribution, of real EHV power grids. However, the positioning of the nodes is random, which does not reflect the growth of power grids realistically. We assume that it is possible to correct the length distribution by introducing an additional step in the algorithm that considers the geographical location of the nodes. Furthermore, we have assumed that the transmission system topology will remain very similar to today's. Future studies should consider how the energy transition influences the topology, as, for example, RES are connected to the grid differently than large power plants, and the grid evolves to adapt to the new locations.
The major issue in the distribution of active power supply for our synthetic model is that _ELMOD-DE_ [21] specifies scenarios which reflect the current power supply. As we are interested in studying future dynamics as well, a new method for drawing active power is needed. Atlite [23] is a software tool that generates weather-dependent power generation potentials and time series for renewable energy technologies. These potentials and time series are promising tools that could be used to update the active power supply in our model. Further, as the time series depend on the weather, they could also be used to study the synthetic grid under multiple supply scenarios.
Besides the generation of the synthetic grid dynamics in stable operation points, we also include the major drivers of fluctuations at short time scales. We have implemented the three major drivers of short-term fluctuations in future power grids: solar, wind and demand. As an example, we study a fully synthetic power grid under these fluctuations. We have decided to add the fluctuations only to the components without grid-forming capabilities, as grid-forming components will usually be equipped with sufficient storage. We find that the synthetic grid shows good synchronicity under all three fluctuation scenarios. We saw that there is a relevant contribution of the joint response of synchronous frequencies.
It remains a challenge to find a balance between the simplicity and tractability of the model and its realism. We have outlined a wide range of points at which realism can be increased. In its current state, the complete model is already well suited to drive new research directions. This includes developing methods to study compound and extreme events that particularly stress the system. More immediately, it will allow us to advance the study of dynamic power grid stability using graph neural networks [9, 13, 14]. It enables, for the first time, the generation of a large and robust set of heterogeneous DAE models that will challenge the GNN models in completely new ways and allow us to take one step closer to predicting the dynamic stability of real power grids.
|
2308.04473 | Opponents and proponents of the war in Ukraine in Russian social media:
who are they? | Understanding the personality of Russians who support the war in Ukraine is
one of the key steps to understanding how this war became possible. However,
during the war, traditional sociological methods are not always applicable.
Social media provides an alternative source of what is inside people's heads.
In this paper, I compare the political identities, values, and interests of
social media users in Russia who hold a strong position for or against the war
in Ukraine. I collect data from VK, the most popular Russian social media
platform, and analyze self-filled profile information as well as the groups
that the users subscribed to. I found that proponents of the war tend to have a
weaker political identity (self-identified as "moderate") compared to
opponents, who specify it more precisely (often, but not limited to,
"liberal"). Additionally, the values of the proponents more frequently align
with those promoted by the Russian government, such as orthodoxy and family.
Despite these differences, pro-war and anti-war users share many common
interests, as evidenced by their subscriptions to the same groups focused on
music, history, and sport. When asked to state the most important trait in
people (a field users can fill in VK), the most frequent answer for both groups
is "kindness and honesty". The analysis results, in addition to contributing to
the understanding of public opinion in Russia, can be utilized for predicting
one's position on the war based on their social media profile. | Alesya Sokolova | 2023-08-08T12:21:49Z | http://arxiv.org/abs/2308.04473v1 | # Opponents and proponents of the war in Ukraine in Russian social media: who are they?
###### Abstract
Understanding the personality of Russians who support the war in Ukraine is one of the key steps to understanding how this war became possible. However, during the war, traditional sociological methods are not always applicable. Social media provides an alternative source of what is inside people's heads. In this paper, I compare the political identities, values, and interests of social media users in Russia who hold a strong position for or against the war in Ukraine. I collect data from VK, the most popular Russian social media platform, and analyze self-filled profile information as well as the groups that the users subscribed to. I found that proponents of the war tend to have a weaker political identity (self-identified as "moderate") compared to opponents, who specify it more precisely (often, but not limited to, "liberal"). Additionally, the values of the proponents more frequently align with those promoted by the Russian government, such as orthodoxy and family. Despite these differences, pro-war and anti-war users share many common interests, as evidenced by their subscriptions to the same groups focused on music, history, and sport. When asked to state the most important trait in people (a field users can fill in VK), the most frequent answer for both groups is "kindness and honesty". The analysis results, in addition to contributing to the understanding of public opinion in Russia, can be utilized for predicting one's position on the war based on their social media profile.
**Keywords:** War in Ukraine, Russia, Social media, Political position, Identity,
Prediction
## 1 Introduction
On February 24, 2022, Russia invaded Ukraine, leading to thousands of deaths and millions of displaced people in Ukraine, tightening repression and thousands of arrested dissenters in Russia, and economic and social upheaval in the rest of the world. The question of how many Russians support the invasion, and why they do so, became a significant part of public discussion.
The traditional methods to measure public opinion, such as polls, during times of war and under repressive pressure are skewed by respondents' fears. Experiments show that a significant percentage of Russian citizens are likely to be hiding their true views about the conflict [1]. Additionally, research shows that opponents of the war and the regime were more likely to be concerned about responding to pollsters and making political statements than supporters [2, 3]. However, survey data from three independent projects (Levada Center [4], the Chronicles project [5], and Russian Field [6]) indicates similar distributions of responses on crucial issues, which indicates some relevance of this survey data [2]. Such data shows that consistent war supporters and opponents make up approximately only a quarter of the population each, while the position of the remaining part tends to lean towards the support of the war, but is mixed [5, 6].
Another way to study public opinion and perception of the war in Ukraine is through in-depth interviews. In contrast to polls, which can capture a general trend of the attitude to the war, in-depth interviews allow tracking of precise narratives and complex perceptions, which often cannot fit into standardized sets of intelligible positions. Such a method confirms that most of the supporters of the war do not have a consistent and strong political view and, moreover, are often depoliticized [7, 8, 9].
Both of these approaches do not usually focus on the values and personalities of respondents, which may be one of the key steps to understanding how their position developed. Meanwhile, social media provides us with an alternative source of data, from which the information on public opinion can be extracted [10, 11], and that was extensively used to study political phenomena [12, 13].
In this paper, to extract the information about personalities of proponents and opponents of the war, I use data from VK, the largest Russian social media platform, which is frequently used in studies on political and social movements in Russia [14, 15, 16, 17].
This social media platform, however, has some peculiarities that affect the contingent of its users. For the last decade, it has been under the specific attention of the Centre for Combating Extremism (Centre E), a unit within the Russian Ministry of Internal Affairs heavily criticized for repressing opposition activists [18, 19, 20]. For publications, reposts, comments, and likes posted on their VK pages, many Russian citizens have been sentenced to fines, suspended sentences, and imprisonment. For this reason, many opposition users stopped using it years before the full-scale invasion of Ukraine. However, it still remained a platform for political communication and protest coordination for some fraction of users [15, 16]. Then, several days after the start of the full-scale invasion, Russia adopted new laws designed to criminalize public anti-war statements [21], pushing many of the remaining opposition users to leave VK or at least to erase any signs of opposition views there. Additionally, Russia blocked other popular social
media platforms - Instagram and Facebook - and declared their parent company Meta an extremist organization [22], and many pro-war users moved from there to VK.
For these reasons, VK is far from being a representative source of the statistics on the number of supporters and opponents of the war. However, in my analysis, I do not focus on the comparison of amounts of pro-war and anti-war activity. Instead, I focus on the views, values, and interests of people. While it is still possible to find enough active opponents of the war on VK, the mentioned peculiarities should not affect the reliability of the results of this study.
For the current research, I collect a dataset of the profiles of the users who clearly express a position for or against the war in the posts they write on VK. I analyze their self-filled profile fields such as "political views", "religion", "personal priority" and "important in others". Additionally, I analyze the groups they subscribed to, to identify and compare their interests.
Since I consider only those who are confident in their position enough to write a post on it, my analysis is more focused on the consistent supporters and opponents of the war (who, according to the Chronicles project [5], make up 22% and 20% of the total population correspondingly), than on those whose position is not so strongly defined.
## 2 Methods
### Dataset
For collecting a dataset of the users who support (pro-war class) and do not support (anti-war class) the war, I used VK API. Firstly, I found pro-war and anti-war posts. Then, I added the suitable profiles of the authors of the posts to the dataset.
To find pro-war (anti-war) posts, I used three types of search requests:
* Pro-war (anti-war) hashtags
* Offensive language that supporters (opponents) of the war use with respect to opponents (supporters)
* Terms to describe the war in Ukraine.
The detailed description of the search requests can be found in Appendix A. The date of publishing of the posts was from May 1, 2022, to January 6, 2023.
Then, I collected the profiles of the authors of the pro-war (anti-war) posts. From this set of profiles, I removed the following:
* Profiles that went to both pro-war and anti-war classes (e.g. if the user uses both pro-war and anti-war hashtags in their posts)
* Users who have no group subscriptions
* Users whose country of residence (according to the profile) is not Russia, or who did not set it
* Users who went to anti-war class, but used pro-war hashtags that were not used for search (for the detailed description, see A).
The final dataset contains 10551 profiles, 7284 of which used pro-war vocabulary, and 3267 - anti-war. For each profile in the dataset, the following information was extracted (the standard VK fields with a choice of answers):
* Sex (female or male)
* Date of birth
* City
* Political views (apathetic, communist, socialist, moderate, liberal, conservative, monarchist, ultraconservative, libertarian)
* Religion (Judaism, Orthodoxy, Catholicism, Protestantism, Islam, Buddhism, Confucianism, Secular Humanism, Pastafarianism, other)
* Personal priority (family and children, career and money, entertainment and leisure, science and research, improving the world, personal development, beauty and art, fame and influence)
* Important in others (intellect and creativity, kindness and honesty, health and beauty, wealth and power, courage and persistence, humor and love for life).
Also, subscriptions to the groups ("publics", as they are called in VK) were extracted for each user.
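As a rough illustration of this collection step, the sketch below queries VK's public HTTP API with the `requests` library. The method names (`newsfeed.search`, `users.get`, `users.getSubscriptions`) follow the public VK API; the token, the example hashtag and the field handling are simplified assumptions rather than the exact collection code.

```python
import requests

API = "https://api.vk.com/method/"
TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
V = "5.131"

def vk(method, **params):
    params.update(access_token=TOKEN, v=V)
    return requests.get(API + method, params=params).json()["response"]

# 1) Find posts matching one search request (see Appendix A for the full list).
posts = vk("newsfeed.search", q="#ZaMir", count=200)["items"]
author_ids = {p["owner_id"] for p in posts if p["owner_id"] > 0}  # users only

# 2) Pull profile fields and group subscriptions for every author.
# The "personal" field holds political views, religion, life_main
# (personal priority) and people_main (important in others).
profiles = vk("users.get", user_ids=",".join(map(str, author_ids)),
              fields="sex,bdate,city,country,personal")
dataset = []
for u in profiles:
    if u.get("country", {}).get("title") != "Россия":
        continue  # keep only users whose profile country is Russia
    subs = vk("users.getSubscriptions", user_id=u["id"])
    u["group_ids"] = subs["groups"]["items"]
    dataset.append(u)
```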
A potential problem with this dataset could be fake accounts ("trolls") created by companies associated with the government to promote its interests [23]. To check whether the accounts I found actually belong to real people, I compared the distribution of activity counters of the users allocated to the pro-war class with those allocated to the anti-war class and found that they look natural in both cases (see Appendix B). This makes it unlikely that a significant fraction of the accounts was created artificially to promote some position. This result is not surprising: the goal of trolls is to imitate public activity ("astroturfing"), and for this, writing comments and putting "likes" is more efficient than writing posts on a personal page. Additionally, trolls tend not to use public accounts (so their posts and other personal information are not accessible from search or the VK API) [24].
### Considering bias in the distributions
To check the accuracy of the classification of posts as pro-war or anti-war based on keywords, I read 311 posts manually. From those classified as anti-war (157 posts), 95 (60%) posts actually contained anti-war positions, 43 (27%) contained pro-war positions, and from 19 (12%) I could not determine the position. From those classified as pro-war (154 posts), 144 (93.5%) posts contained pro-war positions, 2 (1.3%) - anti-war positions, and from 8 (5.2%) I could not determine the position.
I assumed that if the position is undefined from the posts, then the user supports or opposes the war with equal probability. Then, the fraction of actual opponents of the war in the class of anti-war users is 67%, and the fraction of proponents of the war in the class of pro-war users is 96%.
To calculate the error of these values, I quadratically summed up half of the fraction of undefined posts with the statistical error of the binomial distribution. As a result, the fraction of the anti-war users in the anti-war class is
\[f_{\text{anti-war}}=0.67\pm 0.07, \tag{1}\]
and the fraction of pro-war users in the corresponding class is
\[f_{\text{pro-war}}=0.96\pm 0.03. \tag{2}\]
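These purity fractions and their errors can be reproduced in a few lines; the sketch below implements the stated procedure (a binomial error combined in quadrature with half the undefined fraction) on the post counts given above.

```python
import numpy as np

def class_purity(n_match, n_opposite, n_undefined):
    """Fraction of correctly classified users in a class.

    Undefined posts are assumed to split 50/50 between positions; the
    error combines the binomial error with half the undefined fraction
    in quadrature, as described in the text.
    """
    n = n_match + n_opposite + n_undefined
    f = (n_match + 0.5 * n_undefined) / n
    err = np.hypot(np.sqrt(f * (1 - f) / n), 0.5 * n_undefined / n)
    return f, err

print(class_purity(95, 43, 19))   # anti-war class -> ~(0.67, 0.07), Eq. 1
print(class_purity(144, 2, 8))    # pro-war class  -> ~(0.96, 0.03), Eq. 2
```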
Knowing these values, the initial distributions can be corrected to remove the bias caused by inaccurate classification. If \(\tilde{p}_{\text{anti-war}}(x)\) and \(\tilde{p}_{\text{pro-war}}(x)\) are the distributions of some parameter in the biased dataset (e.g. x = {"political views" = "liberal"}, and \(\tilde{p}_{\text{anti-war}}(x)\) is a fraction of the users in anti-war class who specified their political views as liberal), and \(p_{\text{anti-war}}(x)\) and \(p_{\text{pro-war}}(x)\) are the actual unbiased distributions, then:
\[\tilde{p}_{\text{anti-war}}(x) =f_{\text{anti-war}}p_{\text{anti-war}}(x)+(1-f_{\text{anti-war}}) p_{\text{pro-war}}(x), \tag{3}\] \[\tilde{p}_{\text{pro-war}}(x) =f_{\text{pro-war}}p_{\text{pro-war}}(x)+(1-f_{\text{pro-war}})p_ {\text{anti-war}}(x). \tag{4}\]
From these equations, the unbiased distributions can be easily expressed in terms of dataset distributions:
\[p_{\text{anti-war}}(x) =\frac{f_{\text{pro-war}}\tilde{p}_{\text{anti-war}}(x)-(1-f_{ \text{anti-war}})\tilde{p}_{\text{pro-war}}(x)}{f_{\text{anti-war}}+f_{\text{ pro-war}}-1}, \tag{5}\] \[p_{\text{pro-war}}(x) =\frac{f_{\text{anti-war}}\tilde{p}_{\text{pro-war}}(x)-(1-f_{ \text{pro-war}})\tilde{p}_{\text{anti-war}}(x)}{f_{\text{anti-war}}+f_{\text{ pro-war}}-1}. \tag{6}\]
All the distributions in Section 3 are corrected with Equation 5 and Equation 6.
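A direct implementation of this correction is straightforward; the helper below simply inverts the 2x2 mixing of Equations 3 and 4, which is exactly what Equations 5 and 6 express. The example values passed to it are illustrative.

```python
F_ANTI, F_PRO = 0.67, 0.96  # class purities from Equations 1 and 2

def unbias(p_anti_obs, p_pro_obs, f_anti=F_ANTI, f_pro=F_PRO):
    """Recover unbiased distributions from biased ones (Equations 5-6)."""
    denom = f_anti + f_pro - 1.0
    p_anti = (f_pro * p_anti_obs - (1 - f_anti) * p_pro_obs) / denom
    p_pro = (f_anti * p_pro_obs - (1 - f_pro) * p_anti_obs) / denom
    return p_anti, p_pro

# e.g., hypothetical observed fractions of some answer in each class:
print(unbias(0.40, 0.05))
```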
### Machine learning for predicting support of the war
The information about the user subscriptions can be used to train a model for predicting positions about the war in Ukraine. To do this, I used the method described in [25], where the personality traits and political positions were predicted based on the likes on Facebook. The only difference is that I use subscriptions instead of likes.
I constructed a matrix \(A\), the rows of which are users, and the columns are accounts that are followed by more than 20 people from the dataset. Each element of the matrix \(a_{ij}\) is equal to 0 or 1, depending on whether user \(i\) is subscribed to account \(j\). Users subscribed to less than two accounts were excluded from the sample. After that, some random pro-war users were excluded from the sample, so that the number of elements in pro-war and anti-war classes was the same.
For the resulting matrix, dimensionality reduction using Singular Value Decomposition (SVD) was applied, with a final dimension of 100. This reduced the matrix size from (5514, 11110) to (5514, 100). Each row of this matrix still corresponds to one user, while the columns now represent some combination of subscriptions.
Finally, the users, whose class was checked by reading the posts manually, were excluded from the training dataset. These users are to be used for the test of the accuracy of the model. The training was done with logistic regression.
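A compact version of this training procedure, using scipy and scikit-learn, could look as follows; the construction of the subscription matrix and the handling of the held-out test users are simplified.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

def train_position_model(user_subs, labels, n_accounts, dim=100):
    """user_subs: per-user lists of followed-account column indices;
    labels: 1 = pro-war, 0 = anti-war (classes balanced beforehand)."""
    pairs = [(i, j) for i, subs in enumerate(user_subs) for j in subs]
    rows, cols = zip(*pairs)
    A = csr_matrix((np.ones(len(pairs)), (rows, cols)),
                   shape=(len(user_subs), n_accounts))
    svd = TruncatedSVD(n_components=dim, random_state=0)
    X = svd.fit_transform(A)            # (n_users, 100), as in the text
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return svd, clf                     # apply svd.transform before clf
```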
## 3 Results
### Demography
I start the comparison of profiles of pro-war and anti-war users with demography.
Gender is a mandatory field in VK, so it is filled in by all the users, and its distribution in our sample is shown in Figure 1(a). Among the supporters of the war, there are more women, which is surprising since most surveys [6, 26] indicate that women support the war less than men, and this has been a consistent result since the beginning of the Russian invasion. Therefore, it is worth talking about who writes posts about their position more often instead of who supports the war. Apparently, women write pro-war posts more actively than men.
The city is indicated by 2863 (88%) users from the anti-war class and 6569 (90%) from the pro-war one. As shown in Figure 1(b), half of the opponents of the war in VK (51%) live in Moscow and Saint Petersburg, while among pro-war users this fraction is only 18%. In contrast, 27% of supporters of the war live in small localities (with a population of less than 100k), while only 6% of anti-war users live in such places. This is consistent with the common belief that large cities are more oppositional. One should note that this result can be biased, because some people could have left Russia or moved to a different region without changing their location in VK.
Let's now move on to the age distribution. The date of birth is filled in by 1564 (48%) users from the anti-war class and 3958 (54%) from the pro-war one. Figure 1(c) shows a 3-year floating average of the users' age. Note that this data reflects the demographics of VK users rather than of the population as a whole; therefore, the absolute values do not allow making any conclusions, and one can only compare the age difference between supporters and opponents of the war. The average age of people supporting the war is 45 years, and the median is also 45. For opponents of the war, these values are 42 and 40 years, respectively. As expected [27], the supporters of the war turned out to be older than the opponents.

Figure 1: **(a)** Distribution of users by gender, **(b)** city and **(c)** age. All the distributions are corrected as described in subsection 2.2.
### Views and values
The next characteristics I analyzed were the similarities and differences of self-filled user views and values between supporters and opponents of the war.
I start with political views (see Figure 2(a)) - they are indicated by 995 (14%) people from the pro-war class and 662 (20%) from the anti-war. Among anti-war-minded people, the percentage of people identified as liberals is significantly larger than among pro-war, for whom it is one of the least popular political views. This result is totally consistent with the popular worldview in Russia, according to which liberal views are incompatible with pro-government ideology [28, 29].
Figure 2: Distribution of user views: **(a)** political views, **(b)** religion, **(c)** personal priority, **(d)** important in others. All the distributions are corrected as described in subsection 2.2.
Among pro-war users, the most popular views are "moderate". It is possible that this variant is a default option for people who do not have any strong political orientation. Therefore, it may indicate lower interest in politics or even political apathy of the supporters of the war. It confirms the result that Russian propaganda derives its effectiveness from political apathy rather than its ability to persuade [30].
Interestingly, in terms of the ratio of socialists and communists, as well as monarchists, there is almost no difference between anti-war and pro-war users. The result that the supporters of the war are no greater fans of the USSR than the opponents is also confirmed in subsection 3.3.
Religious views are indicated in their profile by 482 (15%) users with an anti-war position and 728 (10%) with a pro-war. Only 20% of the anti-war users define their religion as Orthodox, while among the supporters of the war, this number reaches almost 65%.
The most popular answer among the war opponents in our sample is the free-form option, which is shown as "Other" in the diagram. Here are some examples of the answers in this field (translated from Russian): "God exists", "priest of the Orthodox Church", "God is not exactly what the church says", "believe in love", "not religious", "believe in myself ". Thus, it is difficult to determine the religiosity of this category of users.
According to surveys [31], 71% of Russians consider themselves Orthodox. However, only 22% of the respondents attended religious services more than once a year, and 49% believe in life after death. Therefore, Orthodox identity does not necessarily mean following the Orthodox religion. Considering the result that the number of "Orthodox" people among the supporters of the war in VK is more than 3 times higher than among opponents, one can assume that Orthodox identity may be based on supporting the values and political course of the Russian Orthodox Church, which leans in line with the government course.
The next profile fields that I analyzed are "personal priority" (indicated in 589 (18%) anti-war and 999 (14%) pro-war users) and "important in others" (indicated in 597 (18%) and 1066 (15%) users, respectively). Interestingly, anti-war and pro-war users do not differ a lot in these parameters. The most significant difference is that "family and children" are more important for war supporters (about half of the pro-war users chose this option), while war opponents are slightly more likely to choose "self-development" (38%). This difference is especially interesting in the context of the rhetoric of the Russian government and Putin, who actively advocates for "traditional values", the main component of which is a strong family [32].
### Subscriptions and interests
The next thing I explored is how the VK-publics a user follows are related to support or non-support of the war.
For each public, I calculated what percentage of users from the sample are subscribed to it (i.e. popularity), which is shown on the vertical axis at Figure 3(a), and the fraction of the public subscribers who support and do not support the war (which is shown on the horizontal axis).
I denote the fraction of users from the anti-war class subscribed to the public \(i\) as \(\tilde{p}^{i}_{\text{anti-war}}=\frac{N^{\text{subscribed}}_{\text{anti-war}}}{N^{\text{total}}_{\text{anti-war}}}\), and correspondingly \(\tilde{p}^{i}_{\text{pro-war}}=\frac{N^{\text{subscribed}}_{\text{pro-war}}}{N^{\text{total}}_{\text{pro-war}}}\). These values are corrected with Equation 5 and Equation 6 to get the unbiased distributions \(p^{i}_{\text{anti-war}}\) and \(p^{i}_{\text{pro-war}}\). Then, these distributions are used to get \(x_{i}\) and \(y_{i}\) for each public \(i\) in Figure 3(a):
\[x_{i}=\frac{p^{i}_{\text{anti-war}}}{p^{i}_{\text{anti-war}}+p^{i}_{\text{pro-war}}}, \tag{7}\]

\[y_{i}=\frac{p^{i}_{\text{anti-war}}+p^{i}_{\text{pro-war}}}{2}. \tag{8}\]

Figure 3: Distribution of subscriptions of the users. **(a)** VK-publics, ranged by their popularity and degree of pro-war/anti-war orientation. Each point is a public. The size of a point is the total number of subscribers of this public. On the vertical axis is the fraction of the users from the selection subscribed to this public (in other words, its popularity within the selection). On the horizontal axis is the fraction of anti-war users in this public (e.g., x = 0.75 means that among all the subscribers of this public from the selection, there are 75% anti-war users and 25% pro-war). X and Y coordinates are calculated with Equation 7 and Equation 8. Some publics are highlighted in magenta, to give examples of their names (translated from Russian). **(b)** VK-publics classified into topics (see Appendix C), where each point corresponds to a public. The horizontal axis is the same as in (a). The size of the point here is the popularity within the selection (this parameter is on the vertical axis in (a)). The weighted average positions of the publics on the horizontal axis for each topic are shown in magenta, as well as the standard error of the distribution.
The further to the right a public is in Figure 3(a), the more opponents of the war are subscribed to it, and the fewer supporters. The higher it is, the more popular it is with users of both categories. Publics that are in the middle are equally popular among both supporters and opponents of the war.
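In code, these coordinates follow directly from Equations 7 and 8 once the bias correction of subsection 2.2 is applied; the sketch below assumes boolean subscription matrices per class and reuses the `unbias` helper sketched earlier.

```python
import numpy as np

def public_coordinates(sub_anti, sub_pro, unbias):
    """sub_*: boolean arrays of shape (n_users_in_class, n_publics).

    Returns, per public, x (anti-war share among subscribers, Eq. 7)
    and y (mean popularity, Eq. 8).
    """
    p_anti, p_pro = unbias(sub_anti.mean(axis=0), sub_pro.mean(axis=0))
    x = p_anti / (p_anti + p_pro)
    y = (p_anti + p_pro) / 2.0
    return x, y
```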
All publics can be approximately divided into 3 groups. On the right are opposition accounts: "TV channel Rain", "Navalny", "Meduza", etc. On the left are pro-war political accounts: "United Russia", "Russian Ministry of Defense", "Putin V.V.", etc. And in the middle - publics on neutral topics. It is worth noting here that many (but not all) opposition publics in VK are blocked for people located in Russia, which may affect the results of the analysis.
An interesting observation here is an absence of clustering: most publics are followed by both supporters and opponents of the war. Therefore, the pro-war and anti-war users (even those who have a strong position enough to write a post on it) often coexist in the same information fields, and consume the same content, when it is not connected to the war.
Figure 3(b) shows interests, highlighted by keywords (Appendix C) in the names of publics, and their average position on the axis of support/non-support of the war (the same horizontal axis as in Figure 3(a)). For the details of calculating the average position, see Appendix D. The wider the magenta line, the greater the spread - for example, among the publics about country houses there are publics whose subscribers almost exclusively support the war, but also publics where the fraction of supporters and opponents is almost equal.
This analysis shows that many interests are shared by both anti-war and pro-war users. For example, history, sports, and music are similarly interesting to both user classes. Interestingly, an almost equal number of supporters and opponents of the war are interested in the USSR, which complements the result from subsection 3.2 showing that there are almost equal numbers of communists and socialists among pro-war and anti-war users.
However, apart from the obvious political interests, there are some topics that users of one of the groups are more interested in than the other. For example, war supporters are more interested in family and cooking, which is in line with previous results showing that family and children are more important to war supporters. At the same time, opponents of the war are more often interested in learning English and traveling.
### Predicting position based on subscriptions
Machine learning can be used to determine whether a person supports the war or not, based on information about their subscriptions. Details on how it works can be found in subsection 2.3. The accuracy of the model was determined on the users (with more than 2 subscriptions)
whose posts were read and classified manually (72 anti-war posts and 160 pro-war ones).
It turned out that it is possible to predict that the user adheres to the anti-war position in 90% of cases based on their subscriptions. For pro-war users, the fraction of correctly guessed positions is 70%.
## 4 Conclusion
Understanding the personality of the pro-war Russians is essential for understanding the roots of the war, as well as for finding ways to prevent similar situations in the future. Social media provide an extensive source of information on people's personalities and can be successfully utilized for this purpose.
My analysis shows that war supporters on VK are more interested in what is similar to the "traditional values" in terms of the Russian government: family and children are more important to them, and they usually consider themselves Orthodox. Meanwhile, opponents of the war are more interested in learning English, travelling, books and science.
However, despite different political views and different interests on some issues, supporters and opponents of the war have some commonalities. They constantly coexist in the same information space: they read the same content about history, sports, and music. Representatives of both groups - both supporters and opponents of the war - most often value kindness and honesty in people, and this value does not depend on whether the person supports the war.
Regarding political beliefs, supporters of the war most often define their political views on VK as "moderate", which may indicate a weak political identity or apoliticality, while opponents of the war can often determine their political orientation more precisely, with the majority of them being liberals, but also with significant fractions of socialists and conservatives. Additionally, opponents and supporters of the war are equally interested in the topic of the USSR, and the number of communists and socialists among these categories is the same.
The extra practical result of the analysis is that based on the user subscriptions, the machine learning model can predict the position regarding the war. For anti-war users, the accuracy of such prediction is 90%, while for pro-war users it is 70%.
Acknowledgments. I am grateful to Novaya gazeta Europe and Teplitsa for organizing the hackathon Projector 2023, as well as to my team at this hackathon, where I had a chance to present the results of the current research for the first time and to improve it through insights gained in discussions with experts and team members, and for the inspiration that it gave me. I am also thankful to Leonid Yuldashev and Equalitie for useful discussions and support.
## Statements and declarations
Data availability. The anonymized data collected during the current study, as well as the code for analysis, are available in the Zenodo repository, [https://doi.org/10.5281/zenodo.8125674](https://doi.org/10.5281/zenodo.8125674).

Funding. There was no funding for this research.

Competing interests. There is no conflict of interest.
## Appendix A Keywords for data collection
Table A1 contains search words used for dataset collection.
The profiles, initially classified as anti-war, but whose posts contained the hashtags from Table A2, were removed from the dataset, since they are likely pro-war or their polarity is undefined.
## Appendix B Comparison of activity counters between classes
All the user activity counters, such as the number of videos, audio tracks, photos, friends, gifts, groups, followers, pages (meaning subscriptions to group pages, "publics"), and subscriptions (to other user pages), follow the power law that is usual for such parameters [34, 35, 36], which looks linear on a double-logarithmic scale for large values (Figure B1). The maximum number of some of these counters in VK is restricted to 10000 (friends, audios, and groups).
Despite some differences in user counters between users from pro-war and anti-war classes, both distributions look natural, making it unlikely that a significant fraction of the accounts were created artificially specifically for promoting the pro-war position.
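A comparison like Figure B1 can be produced with logarithmic binning; a minimal sketch (assuming raw counter arrays for each class) is given below.

```python
import numpy as np
import matplotlib.pyplot as plt

def compare_counter(counts_anti, counts_pro, name):
    """Plot a counter distribution for both classes on a log-log scale."""
    bins = np.logspace(0, 4, 40)  # several counters are capped at 10000
    for counts, label in ((counts_anti, "anti-war"), (counts_pro, "pro-war")):
        hist, edges = np.histogram(counts, bins=bins, density=True)
        centers = np.sqrt(edges[1:] * edges[:-1])  # geometric bin centers
        plt.loglog(centers, hist, marker="o", ls="none", label=label)
    plt.xlabel(name)
    plt.ylabel("density")
    plt.legend()
    plt.show()
```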
\begin{table}
\begin{tabular}{l l l} \hline \hline Keyword – original & Keyword – translated & Notes \\ \hline \#ZaMir & \#ForPeace & "Z" as a military symbol 1 \\ \#3aMir & \#ForPeace & – \\ \#3aHir & \#ForOurs & Meaning: "for our people" \\ \#MirBmecte & \#WeAreTogether & – \\ \#ZaIreismelenta & \#ForPresident & “Z" as a military symbol 1 \\ \#ZaPiecnino & \#ForRussia & “Z" as a military symbol 1 \\ \#3aPoschio & \#ForRussia & – \\ \#ZaPiecnino & \#ForRussia & – \\ \#ZaPiecnino & \#UR & “United Russia” (political party) \\ \#ZaPiecnino & \#UnitedRussia & “United Russia” (political party) \\ \#ZaPiecnino & \#MariupollsRussianCity & – \\ \#ZaPiecnino & \#YesVictory & Opposite to "no war" \\ \#ZaPiecnio & \#ForVictory & “Z" as a military symbol 1 \\ \#ZaPiecnio & \#ForVictory & “Z" as a military symbol 1 \\ \#WremmHomatr & \#TimeToHelp & – \\ \#FeonZ & \#HeroesZ & “Z" as a military symbol 1 \\ \#CBO & \#SMO & Special military operation \\ \#LHP & \#DPR & Donetsk People’s Republic \\ \#THP & \#LPR & Lugansk People’s Republic \\ \#ZaHanniPradja & \#TruthIsWithUs & “Z" as a military symbol 1 \\ \#ZaIparab & \#ForTruth & “Z" as a military symbol 1 \\ \#ZaIparab & \#ForTruth & – \\ \#ZaIparab & \#JackieChan & See 2 \\ \hline \hline \end{tabular}
\end{table}
Table A2: Keywords identifying that the author of the post initially classified as anti-war should be removed from the dataset
## Appendix C Topics of publics
To define the topics of the publics, I used keywords. A public was considered to belong to a topic if its name contained at least one of the keywords in any letter case. E.g., a public was considered to be about history if there was "history" or "historical" in its name. The full list of the keywords for the topics is given in Table C3.
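The matching rule itself is a one-liner; the sketch below shows it with a hypothetical excerpt of the keyword lists (the actual lists are in Table C3).

```python
# Hypothetical excerpt of Table C3; the full lists are in the appendix.
TOPIC_KEYWORDS = {
    "history": ["history", "historical"],
    "music": ["music"],
    "sport": ["sport"],
}

def topics_of(public_name):
    """Return every topic whose keyword occurs in the name, in any case."""
    name = public_name.lower()
    return [topic for topic, words in TOPIC_KEYWORDS.items()
            if any(word in name for word in words)]

print(topics_of("Historical Photos"))  # -> ['history']
```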
## Appendix D Weighted average
To calculate the average pro-war/anti-war ratio for publics on a particular topic, I use the equation for weighted average:
\[\bar{x}=\frac{\Sigma x_{i}y_{i}}{\Sigma y_{i}},\] (D1)
where \(x_{i}\) and \(y_{i}\) are calculated with Equation 7 and Equation 8 correspondingly.
The square of the standard error of the weighted average is
\[SE^{2}=\frac{n}{(n-1)(\Sigma y_{i})^{2}}(\Sigma(y_{i}x_{i}-\bar{y}\bar{x})^{2} -2\bar{x}\Sigma(y_{i}-\bar{y})(y_{i}x_{i}-\bar{y}\bar{x})+\bar{x}^{2}\Sigma(y_ {i}-\bar{y})^{2}),\] (D2)
where \(n\) is a number of elements (publics in the topic). |
2301.08742 | Unifying Consciousness and Time to Enhance Artificial Intelligence | Consciousness is a sequential process of awareness which can focus on one
piece of information at a time. This process of awareness experiences causation
which underpins the notion of time while it interplays with matter and energy,
forming reality. The study of Consciousness, time and reality is complex and
evolving fast in many fields, including metaphysics and fundamental physics.
Reality composes patterns in human Consciousness in response to the
regularities in nature. These regularities could be physical (e.g.,
astronomical, environmental), biological, chemical, mental, social, etc. The
patterns that emerged in Consciousness were correlated to the environment, life
and social behaviours followed by constructed frameworks, systems and
structures. The complex constructs evolved as cultures, customs, norms and
values, which created a diverse society. In the evolution of responsible AI, it
is important to be attuned to the evolved cultural, ethical and moral values
through Consciousness. This requires the advocated design of self-learning AI
aware of time perception and human ethics. | Mahendra Samarawickrama | 2023-01-10T11:15:41Z | http://arxiv.org/abs/2301.08742v1 | # Unifying Consciousness and Time to Enhance Artificial Intelligence
###### Abstract
Consciousness is a sequential process of awareness which can focus on one piece of information at a time. This process of awareness experiences causation which underpins the notion of time while it interplays with matter and energy, forming reality. The study of Consciousness, time and reality is complex and evolving fast in many fields, including metaphysics and fundamental physics. Reality composes patterns in human Consciousness in response to the regularities in nature. These regularities could be physical (e.g., astronomical, environmental), biological, chemical, mental, social, etc. The patterns that emerged in Consciousness were correlated to the environment, life and social behaviours followed by constructed frameworks, systems and structures. The complex constructs evolved as cultures, customs, norms and values, which created a diverse society. In the evolution of responsible AI, it is important to be attuned to the evolved cultural, ethical and moral values through Consciousness. This requires the advocated design of self-learning AI aware of time perception and human ethics.
Keywords:Consciousness, Time, AI, Relativity, Quantum Mechanics, Reality, Responsible AI
## 1 Introduction
The notion of time is an integral part of consciousness [1]. Consciousness experiences causation, or changes in reality/the environment, and thereby perceives time. Therefore, in our previous publication [2], we assumed that consciousness is a sequential process which is aware of a single piece of information at a time. Even though the brain processes the sensory data of five sensors (i.e., sight, sound, smell, taste, and touch) in parallel in the neural network, the awareness of causation is a sequential process following cause and effect. See the illustration of this idea in Figure 1.
Figure 1: The interplay of five sensors, brain and consciousness. The brain processes sensory information in parallel. However, the awareness of causation (i.e., consciousness) is a sequential process focusing on a single piece of information at a time. This sequential process of awareness in consciousness operates fast and consistently, which underpins our perception of reality.

The assumption of sequential awareness in consciousness enables mapping the perception of time into consciousness. Based on the theory of relativity [3], the perception of time is relative to the frame of reference. Einstein assumed that the speed of light is constant in all frames of reference, and time is derived based on that fundamental assumption. In our paper, we defined the shortest time needed to be aware of reality as a consciousness cycle. Based on relativity, this consciousness cycle is also subject to dilation, like relativistic time
\[T_{v}=\frac{T_{0}}{\sqrt{1-\frac{v^{2}}{c^{2}}}}\,, \tag{1}\]
where, \(T_{v}\) is the dilated period of the consciousness cycle related to the rest period of the consciousness cycle \(T_{0}\). Note that the \(\sqrt{1-\frac{v^{2}}{c^{2}}}\) is the _Lorentz factor_, where \(v\) is the relative velocity between inertial reference frames, and \(c\) is the speed of light in a vacuum. Then, we mathematically modelled [2] how consciousness would interplay with matter and energy, forming reality, which can be adapted to understand limitations and opportunities in AI consciousness. This paper extends our discussion towards the time perception of artificial intelligence systems (AIS).
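For illustration, Equation 1 can be evaluated directly; the sketch below computes the dilated consciousness-cycle period for a few relative velocities, with an assumed (purely illustrative) rest period \(T_{0}\).

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum, m/s

def dilated_period(T0, v):
    """Equation 1: dilated consciousness-cycle period at relative speed v."""
    return T0 / np.sqrt(1.0 - (v / C) ** 2)

T0 = 0.1  # assumed rest period of one consciousness cycle, in seconds
for frac in (0.0, 0.5, 0.9, 0.99):
    print(f"v = {frac:.2f}c -> T_v = {dilated_period(T0, frac * C):.4f} s")
```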
## 2 The motion of time in perception and reality
Humans, like any other life forms, experience time through causation. Patterns are composed in the human consciousness in response to the regularities in nature [4]. Since the beginning of human civilisation, humans have learnt and evolved complex concepts and constructs by incorporating time emerged through patterns in the consciousness. The earth's rotation around itself determines the day, and orbiting around the sun determines the year. The Moon takes about one month to orbit the earth. The tilt of the earth's spin axis with respect to its orbital plane causes the weather seasons. These environmental patterns cause many biological patterns and lifestyle patterns in human life. To predict and organise these patterns effectively, humans introduce standard time with clocks, calendars and various other frameworks. These artificial frameworks enable us to model time and objectively measure subjective experiences.
Physics has evolved through the observation of nature within various frameworks of time. In this way, time became an essential construct and dimension of our understanding of reality. For example, Newtonian physics [5] evolved assuming that time is absolute and flows consistently from the past to the present and into the future. That enabled the development of mathematical models for explaining patterns in reality with time. However, later observations, such as the perihelion motion of Mercury, allowed humans to understand time as a relativistic measure rather than an absolute one. The modern understanding of the universe is based on the theory of relativity [6; 7], which is completely articulated by space-time principles. Based on relativity, John Wheeler [8] stated, "Space tells matter how to move. Matter tells space how to curve". Relativity enables us to accurately understand and predict the behaviours of black holes, stars, and planets. Further, relativity enabled humans to develop technologies like the atomic clock [9] and the Global Positioning System (GPS) [10] that are useful in everyday life.
The behaviour of particles is completely different from that of larger objects like planets, stars, etc. This led to the evolution of quantum physics [11] as opposed to relativity. Quantum physics exhibits amazing accuracy in predicting results in particle physics. However, it greatly disturbs the notion of time modelled in relativity. For example, regarding the collapse of the wave function in quantum entanglement, Einstein described it as spooky action at a distance [12]. As per relativity, information cannot transfer faster than the speed of light. As per the recent discoveries in quantum entanglement, information can be transferred instantly, faster than the speed of light, making our reality non-local [13]. The non-local reality contradicts relativity and is now applied in quantum teleportation at the subatomic level. On the other hand, at the quantum level, reality is uncertain, as described by Heisenberg's uncertainty principle [14]. As per the uncertainty principle, it is impossible to precisely measure or be aware of both the position and the speed of a particle at a given time. This brings out the limitation of human awareness and perception of time. Therefore, many now believe that consciousness is fundamental and that time and causation are derived from consciousness [15].
## 3 The implication of principles of time for AIS
The inability to consolidate quantum physics and the theory of relativity makes our understanding of reality incomplete. Moreover, the new discoveries supporting the idea of non-local reality shake the status quo of fundamental physics [16]. Therefore, it is still impossible to supervise AI to experience the notion of time in a way that captures reality precisely. On the other hand, humans also understand only about 5% of reality, as most of the universe consists of dark matter and dark energy, which humans do not understand [17]. Under these conditions, AI might be used to explore reality and time in a way we have never imagined. Perhaps incorporating AI to understand reality and causation might help humans become fully aware of reality by overcoming inherent biases from evolution, culture and nature.
A typical Reinforcement Learning (RL) technique can be adapted to automate the learning of AI. The RL process can be mathematically formulated as a Markov Decision Process (MDP) [18]. It is a sequential learning process driven by trial and error. In this process, the learning agent (i.e., the AI) sequentially interacts with the environment through an intelligent decision (i.e., an action), followed by receiving a reward or a penalty based on the imposed policy. There is no direct influence on the AI agent's action; instead, the value of its action is conveyed through feedback in the form of a reward or penalty. This way, the AI agent self-learns about the environment over time. The RL process is illustrated in Figure 2.
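A minimal loop expressing this MDP structure is sketched below as tabular Q-learning; the environment (`step_fn`) and the policy encoded in `reward_fn` are placeholders, and one could, for instance, encode culture- or value-aware penalties in `reward_fn`, as discussed in the following sections.

```python
import random

def q_learning(states, actions, step_fn, reward_fn, episodes=100,
               alpha=0.1, gamma=0.9, eps=0.1, horizon=50):
    """Tabular Q-learning over an MDP: at each step t the agent observes a
    state, acts, and receives a reward/penalty that conveys the policy's
    value judgement. step_fn must return states from `states`."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = random.choice(states)
        for _t in range(horizon):
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda act: Q[(s, act)]))
            s_next = step_fn(s, a)
            r = reward_fn(s, a, s_next)
            best_next = max(Q[(s_next, b)] for b in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next
    return Q
```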
## 4 The implication of human beliefs, values and cultures for the perception of time in AIS
Human beliefs, customs, culture and values are tightly linked with various dynamics and interpretations of time and periodicities based on the movement of the earth, the Moon and other celestial bodies. From the beginning, humans identified that time affects life and nature differently. Therefore, in the era of ancient Greece, in early Western culture, there were at least three gods representing different forms of time: _Chronos_, _Aion_, and _Kairos_ [19]. _Chronos_ represented the linear time flowing from past to present into the future. This is the time that humans feel when life passes. In contrast, _Aion_ represented the cyclical nature of time experienced in natural events such as weather patterns, rebirths, etc. The third god, _Kairos_, represented opportune time, which reflects the appropriate moment to achieve a task. In this way, time, environment and beliefs were tightly linked with life and governed society and values.
On the other hand, in Eastern culture, the horoscope is a good example of a planetary and constellation framework underpinning astrology as a foundation of certain belief systems [20]. These beliefs assume that astrology is associated with time and causality and can predict the future and guide humans.
The human observation of the night sky led to perceiving time from various cyclical patterns going far back in time. For example, Aboriginal Australians [21] observed the night sky and mapped it to the environment and life stages, which evolved into various customs, arts and even religions. Not only the interactions of planets and stars, but also the tilt of the earth's spin axis significantly diversified human cultures based on seasons, particularly when moving away from the equator.
The notion of time and associated beliefs, customs, and values are important to consider when training AIS [22]. That will help promote human cultural values, ethics, and diversity, equity and inclusion (DEI). AI development may need to pay attention to and integrate the time attributes that emerged from nature, values and cultures. Humans may include them in the policies for rewarding self-learning AI algorithms (e.g., in MDP).
Figure 2: Components of the Markov Decision Process (MDP) and its function in the agent-environment interaction. The sequential step of time is represented by \(t\).
## 5 The implication of biological time on AIS
Biological cycles play a fundamental role in human behaviours and the perception of time, for example, mood cycles, circadian rhythms, and the menstrual cycle. Without understanding these biological time-keeping processes, AI cannot seamlessly integrate with human society when creating value in health, culture, art, etc. These insights are essential to realising emotional intelligence, empathy and awareness in AI. The literature shows the effective use of Cyclic Hidden Markov Models (CyHMMs) for detecting and modelling cycles in multidimensional heterogeneous biological time series data [23]. It is important to attribute the relevant features of biological processes when training AIS, which raises more awareness about humans.
Recent discoveries in quantum physics argue that our reality is non-local, where awareness can happen instantly, faster than the speed of light. Physicists and neurologists think brain neurons might be aware of the quantum world through the orchestrated collapse of microtubules in the neurons in the brain [24, 25]. If this hypothesis is true, then there are possibilities that human awareness can be linked with non-local realities to expand our consciousness across the universe instantly. From this perspective, future AI might need to be evolved with the capabilities of biological neurons, which interplay with the quantum realities. The recent development of neurotech realising brain-computer interface (BCI) along with emerging quantum computers might enable such capabilities in the near future [26].
## 6 Conclusion
Consciousness and the perception of time and causation are key to awareness and to understanding reality. The notion of time emerged from causation, a perception relative to the observer as per the relativity principles. In relativity, it is not time but the speed of light that is constant in all frames of reference. In contrast, in quantum entanglement, reality is non-local, and information can be transferred instantly, faster than light. While the principles of time remain contradictory at the foundations of physics, time also influenced the formation of diverse customs, values and cultures based on patterns that emerged from nature, particularly around the regularities in the earth's movement, the environment, astronomy and biology. Therefore, understanding time and related artefacts (i.e., cultures, beliefs, values, customs, physics, health, etc.) is very important for realising deep awareness of reality. From the AIS perspective, it will enhance the understanding of AI in human health, cultures, customs, values and various other diversities. Bringing this awareness to AI will be a challenging and complex yet rewarding milestone in the evolution of ethical and responsible AI.
|
2305.15582 | Balancing Effect of Training Dataset Distribution of Multiple Styles for
Multi-Style Text Transfer | Text style transfer is an exciting task within the field of natural language
generation that is often plagued by the need for high-quality paired datasets.
Furthermore, training a model for multi-attribute text style transfer requires
datasets with sufficient support across all combinations of the considered
stylistic attributes, adding to the challenges of training a style transfer
model. This paper explores the impact of training data input diversity on the
quality of the generated text from the multi-style transfer model. We construct
a pseudo-parallel dataset by devising heuristics to adjust the style
distribution in the training samples. We balance our training dataset using
marginal and joint distributions to train our style transfer models. We observe
that a balanced dataset produces more effective control effects over multiple
styles than an imbalanced or skewed one. Through quantitative analysis, we
explore the impact of multiple style distributions in training data on
style-transferred output. These findings will better inform the design of
style-transfer datasets. | Debarati Das, David Ma, Dongyeop Kang | 2023-05-24T21:36:15Z | http://arxiv.org/abs/2305.15582v1 | # Balancing Effect of Training Dataset Distribution of Multiple Styles for Multi-Style Text Transfer
###### Abstract
Text style transfer is an exciting task within the field of natural language generation that is often plagued by the need for high-quality paired datasets. Furthermore, training a model for multi-attribute text style transfer requires datasets with sufficient support across all combinations of the considered stylistic attributes, adding to the challenges of training a style transfer model. This paper explores the impact of training data input diversity on the quality of the generated text from the multi-style transfer model. We construct a pseudo-parallel dataset by devising heuristics to adjust the style distribution in the training samples. We balance our training dataset using marginal and joint distributions to train our style transfer models. We observe that a balanced dataset produces more effective control effects over multiple styles than an imbalanced or skewed one. Through quantitative analysis, we explore the impact of multiple style distributions in training data on style-transfered output. These findings will better inform the design of style-transfer datasets.
## 1 Introduction
Multi-style text transfer is a challenging task today, with applications such as automatic domain-appropriate, style-conformant writing (Fu et al., 2018) and AI-assisted stylistic language editing. Text style transfer is an intricate task as all language has a specific context, and those contexts influence the attributes of the language (Hovy and Yang, 2021). Text style transfer is challenging because it involves dealing with the aspects of style coupled with the textual content (Hu et al., 2017; Shen et al., 2017; Lample et al., 2018). This domain's other obstacles include the need for a parallel corpus (Jhamtani et al., 2017) and quality training data. As the number of style dimensions increases with multi-style text transfer, not only is the requirement of a jointly annotated corpus across all the stylistic dimensions problematic, but the different styles are also not necessarily independent.
While "style" can also refer to authorial or domain-specific style, in this paper we focus on "micro-styles" as defined by Kang and Hovy (2021), who define a "micro-style" as a complex combination of different factors such as formality markers, emotions, and metaphors. People intentionally tune these styles in writing differently based on their mood, the person they are addressing, the content of the message, or the platform (Troiano et al., 2021). Multiple micro-styles can jointly describe a text; for example, a given text could simultaneously be formal and sad. Micro-styles also more easily lend themselves to being represented as spectra with varying degrees of intensity. These points align with our vision of an application where users can edit micro-style aspects of their writing.
Much research exists on models implementing multi-style text transfer and on the interdependency of micro-styles Kang and Hovy (2019); Goyal et al. (2020); Subramanian et al. (2018).
Figure 1: When an input sentence is passed to the multi-style transfer model, to increase formality and decrease arousal, we hypothesize that when the model is trained on a _balanced_ joint distribution of formality and arousal (all four style combinations have a 25% representation) - the style transfer is more successful as opposed to when the model is trained on a _skewed_ joint distribution (there is no representation of the “informal unaroused” style combination) of styles in the training data.
However, there needs to be more exploration of the joint distribution of inherent micro-styles in the style transfer training dataset and of how these micro-style distributions are related. Therefore, we pose a question - _Can a dataset with minimal variance across multiple micro-style combinations, such that it experiences a "balancing effect", lead to a better style-transferred output?_ Figure 1 illustrates our intuition that a dataset that experiences a "balancing effect" will have more control over the multi-style transferred output than a "skewed" dataset. Suppose the style transfer model sees examples of every style combination that can exist - this could aid in the style generation of even unlikely combinations of styles compared to a skewed distribution of these joint micro-styles.
In this research, we consider a multi-style text style transfer pipeline assuming that the user has no access to parallel data or to the style of the original text that they wish to transfer, as would seem natural for a stylistic language editing application. We introduce changes to the training dataset micro-style joint distributions in such a pipeline and quantitatively explore the impact of this modification on the style-transferred output. We perform a set of empirical analyses to demonstrate the influence of joint distributions on style-transferred output and show how this trend varies as the number of micro-styles considered changes. The 'balancing effect' on a training dataset leads to style-transferred sentences even from the joint style combinations that are typically rare ("informal unbiased and unaroused"). Our study is the first of its kind on the distribution of micro-styles in training datasets for multi-style text style transfer; it is likely to have implications for designing datasets for multi-style transfer model training, and it falls within the context of, and aligns with, recent work on characterizing datasets and factors impacting style transfer (Bender and Friedman, 2018; Schoch et al., 2021; Li et al., 2019; Zhang et al., 2020; Gururangan et al., 2018).
## 2 Multi Style Transfer Pipeline
**Datasets:** We chose four micro-styles from the style hierarchy defined in Troiano et al. (2021): Formality, Arousal, Sentiment, and Bias, for our study and used publicly available NLP datasets built by other researchers (Rao and Tetreault, 2018; Buechel and Hahn, 2022; Go et al., 2009; Pryzant et al., 2020; Kang and Hovy, 2019) to develop and test our models. Appendix A details the datasets and their usage.
**Pipeline Overview:** Our experimental setup for multi-style transfer is inspired by the work of Krishna et al. (2020). Like them, we first generate a "diverse" paraphrase of the input sentence, and then the paraphrased sentence is rewritten in the style of choice. Towards this end, we train a paraphrasing model (separately, on a parallel paraphrase dataset). Then, the trained paraphrase model is used to create "pseudo-gold" parallel data for training the style models.
First, we adopted a pre-trained T5 model (Raffel et al., 2020) to generate paraphrases. This model
Figure 2: The input sentence transitions through every step in our multi-style text style transfer pipeline. The box in red indicates our main contribution to the pipeline, which helps us explore the effects of joint micro-style combinations on style-transferred output.
was trained for the task of paraphrase generation on the ParaNMT-filtered dataset provided by Krishna et al. (2020). Once we had this trained paraphrase model, we used diverse beam search Vijayakumar et al. (2016) to generate diverse fluent paraphrased outputs. An important assumption is that the paraphrase is stripped of its original style and does not leak into the training.
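As a rough illustration of this decoding step, the sketch below shows how diverse beam search can be invoked through the HuggingFace `generate` API. The `t5-base` checkpoint and the decoding parameters (`num_beam_groups`, `diversity_penalty`, `max_new_tokens`) are stand-ins: the actual model was a T5 fine-tuned on ParaNMT-filtered, and the paper does not report its decoding hyper-parameters.

```python
# Minimal sketch of diverse paraphrase generation, assuming a T5-style
# seq2seq checkpoint; "t5-base" is a placeholder for the fine-tuned model.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def diverse_paraphrases(sentence: str, n: int = 6) -> list[str]:
    inputs = tokenizer("paraphrase: " + sentence, return_tensors="pt")
    # Diverse beam search: beams are partitioned into groups, and a
    # diversity penalty discourages groups from emitting the same tokens.
    outputs = model.generate(
        **inputs,
        num_beams=n,
        num_beam_groups=n,       # one beam per group -> maximal diversity
        diversity_penalty=0.7,   # illustrative penalty strength
        num_return_sequences=n,
        max_new_tokens=64,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```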
We address this potential issue by training classifiers (Sanh et al., 2019) to predict style on the original and paraphrased datasets and find that all our micro-style classifiers have a classification accuracy of higher than 80% F1, which is acceptable for pseudo-label creation. After we generate diverse paraphrases, we choose the most diverse paraphrase and then derive micro-style classifications for the paraphrased sentence using our trained micro-style classifiers. Therefore, each sentence is assigned a classification score for each micro-style label, and the scored sentences form a "pseudo-parallel" dataset for training the T5-based joint transfer model. Thus, our approach does not need a parallel dataset.
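A hedged sketch of this pseudo-labeling step follows. The checkpoint paths, style names, and the index of the "high" class are illustrative assumptions rather than the authors' released artifacts; the essential point is that each micro-style gets its own fine-tuned classifier whose score becomes a pseudo-label.

```python
# Sketch: per-micro-style DistilBERT-style classifiers assign pseudo-labels
# to a paraphrase. Checkpoint paths and the positive-class index are assumed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

STYLES = ["formality", "arousal", "bias", "sentiment"]

def pseudo_labels(sentence: str) -> dict:
    scores = {}
    for style in STYLES:
        ckpt = f"./classifiers/{style}"  # hypothetical fine-tuned checkpoint
        tok = AutoTokenizer.from_pretrained(ckpt)
        clf = AutoModelForSequenceClassification.from_pretrained(ckpt)
        with torch.no_grad():
            logits = clf(**tok(sentence, return_tensors="pt")).logits
        # Softmax probability of the "high" class serves as the style score.
        scores[style] = torch.softmax(logits, dim=-1)[0, 1].item()
    return scores
```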
We then converted the classifier predictions into buckets of style (ranging from "very low" to "very high") based on the styles of the original and the paraphrased sentences. The bucketing process is described in Appendix B. After this step, we introduce our contribution of "constructing style distributions" into the pipeline, as illustrated in Figure 2. Following that, we perform multi-style text style transfer. We appended the "bucket" information to the paraphrased sentence to achieve the necessary intensity transfers, as motivated by the original T5 paper Raffel et al. (2020). We train T5-based style transfer models, where the paraphrased sentence and its style buckets are used as input parameters, while the style buckets assigned to the anchor sentence are used as proxy levels of the output style transfer. All model-specific details are provided in Appendix B. For generating sentences from our trained models, we used beam search Vijayakumar et al. (2016) and nucleus sampling Holtzman et al. (2019) and chose the top 3 sentences from the generations. The following is an example of the input to the joint style transfer model and the expected output.
Goal - Highly increase the formality of the sentence, slightly increase the arousal of the sentence
Input - transfer: I'm sad you're going \(|\) input formality: low \(|\) input arousal: low \(|\) output formality: high \(|\) output arousal: mid
Output - I am sorry you are going to go.
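The control-code string itself is simple to construct. The sketch below reproduces the serialization format of the example above; the function name and dict-based interface are our own illustration, not the authors' code.

```python
def build_input(paraphrase: str, in_buckets: dict, out_buckets: dict) -> str:
    """Serialize a paraphrase and its style buckets into the control-code
    format expected by the joint style transfer model."""
    parts = [f"transfer: {paraphrase}"]
    parts += [f"input {s}: {b}" for s, b in in_buckets.items()]
    parts += [f"output {s}: {b}" for s, b in out_buckets.items()]
    return " | ".join(parts)

print(build_input(
    "I'm sad you're going",
    {"formality": "low", "arousal": "low"},
    {"formality": "high", "arousal": "mid"},
))
# transfer: I'm sad you're going | input formality: low | input arousal: low
# | output formality: high | output arousal: mid
```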
Thus, we implemented a multi-style transfer pipeline to test our hypothesis without any finicky modeling paradigms popular in style transfer research, such as variational inference or autoregressive sampling He et al. (2020); Subramanian et al. (2018).
**Constructing Micro-style Distributions** We define a "style combination" as a possible combination of the states that the micro-styles can take together - such as 'informal biased negative.' Since there are three micro-styles, each having binary states, the total possible number of style combinations, in this case, is given by \(N_{c}=2\times 2\times 2=2^{3}\). Therefore to generalize, if \(|m_{i}|\) indicates the cardinality of each micro-style and \(n\) indicates the number of micro-styles considered, the total possible number of style combinations (\(N_{c}\)) possible is given by :
\[N_{c}=\prod_{i=1}^{n}|m_{i}| \tag{1}\]
To create the **balanced** joint distribution of styles, we ensure the standard deviation across the style combinations is close to 0. We do this by down-sampling each style combination, such that the number of samples in each style combination is the same as the least represented style combination. As we increase micro-styles, some micro-style combinations do not occur naturally together, so their representation is close to 0. In such cases, we assume that the least represented style combination is at least 5% of the total dataset. To ensure our comparison across the "balanced" and "skew" settings is fair, we construct a **skewed** dataset with a total sample size that is the same as that of the balanced dataset. Thus, the balanced dataset has a uniform distribution, while the skewed dataset
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Style Combination** & **Balanced** & **Skewed** \\ \hline Formal Aroused & 3395 & 8685 \\ Formal Unaroused & 3395 & 2792 \\ Informal Aroused & 3395 & 1275 \\ Informal Unaroused & 3395 & 828 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Training data statistics (number of samples) for the balanced and skewed settings, when considering the micro-styles of Formality and Arousal.
has a non-uniform distribution. Table 1 shows the number of samples in each style combination of Formality and Arousal, given a "balanced" and "skewed" setting.
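The sketch below illustrates Equation (1) and the down-sampling step on a pandas DataFrame with one column per micro-style label; the column names are illustrative assumptions. Applied to the Formality and Arousal data of Table 1, it would cut all four combinations down to 3395 samples each.

```python
# Sketch of the balancing step, assuming a DataFrame with one categorical
# column per micro-style (e.g., "formality", "arousal").
import pandas as pd

def n_combinations(df: pd.DataFrame, style_cols: list) -> int:
    # Equation (1): product of the per-micro-style cardinalities.
    return int(df[style_cols].nunique().prod())

def balance(df: pd.DataFrame, style_cols: list) -> pd.DataFrame:
    # Down-sample every style combination to the least represented one.
    n = int(df.groupby(style_cols).size().min())
    return df.groupby(style_cols).sample(n=n, random_state=0)

# balanced = balance(train_df, ["formality", "arousal"])  # 4 x 3395 rows
```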
## 3 Experimental Results and Discussion
**Evaluation Metrics:** Style transfer accuracy metrics quantify how well output texts match the desired style. However, this metric alone is not sufficient. Motivated by Jin et al., we evaluate style transfer across the three main properties of text style transfer: style transfer accuracy, content preservation, and fluency. We use our custom joint sequence classification models, implemented using HuggingFace libraries Wolf et al. (2020), to evaluate the style transfer success ratio. We define the Style Transfer Success \(S_{c}\) as the total number of matches between intended and transferred style buckets, divided by the total number of samples. To judge the content preserved in style-transferred text, we use three metrics: BLEU Papineni et al. (2002), embedding-based similarity Wieting et al. (2019) using cosine similarity of two sentence embeddings Reimers and Gurevych (2019), and Word Mover's Distance (WMD) Mir et al. (2019). For fluency, we use measures like perplexity using GPT2 Radford et al. (2019) and an adversarial classifier using the cross-aligned autoencoder model Mir et al. (2019).
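Under one natural reading of this definition (a sample counts as a match only if every intended micro-style bucket equals the corresponding transferred bucket), \(S_{c}\) can be computed as in the sketch below; the dict-based representation is our own illustration.

```python
# Sketch of the style transfer success ratio S_c under an all-buckets-match
# reading of the definition above.
def success_ratio(targets: list, predictions: list) -> float:
    matches = sum(
        all(t[style] == p[style] for style in t)  # every micro-style matches
        for t, p in zip(targets, predictions)
    )
    return matches / len(targets)

targets = [{"formality": "high", "arousal": "mid"},
           {"formality": "low",  "arousal": "high"}]
preds   = [{"formality": "high", "arousal": "mid"},
           {"formality": "low",  "arousal": "low"}]
print(success_ratio(targets, preds))  # 0.5
```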
**Experimental Setup:** In this paper, we illustrate different micro-style combinations in the training data, with each combination in both the "balanced" and "skewed" settings. We therefore consider six cases: 1) Formality and Arousal in a balanced setting (FA balanced); 2) Formality and Arousal in a skewed setting (FA skewed); 3) Formality, Arousal, and Bias in a balanced setting (FAB balanced); 4) Formality, Arousal, and Bias in a skewed setting (FAB skewed); 5) Formality, Arousal, Bias, and Sentiment in a balanced setting (FABS balanced); 6) Formality, Arousal, Bias, and Sentiment in a skewed setting (FABS skewed). We construct the training data with the appropriate settings, pass it through our experimental pipeline (illustrated in Figure 2), and quantitatively evaluate the style transfer results.
**Discussion:** Table 2 shows examples of style-transferred sentences, given a style-transfer goal, from our experimental pipeline for both the balanced and skewed settings. For example, given the objective of decreasing formality while increasing arousal, the sentence "Did you hear about the soldier with 8 limbs? He was army" transforms to "He's an army soldier with 8 legs?". Here, the contraction "He's" indicates a formality decrease, and the replacement of "limbs" with "legs" indicates a decrease in formality as well. The overall arousal of this sentence is higher when it transforms into a question.
Figure 3 illustrates that the _balanced setup always has a higher success percentage of style transfer (\(S_{c}\)) than the skewed setup_. We cannot compare the success percentage across cases because matching the exact target and transferred style buckets becomes difficult as the number of micro-styles increases. We can also observe through Table 2 that the _quality of the balanced transferred text aligns better with the style transfer goal than the skewed transferred text_.
In Figure 4, we compare the difference in representation percentage of specific style combinations in the test sample for a specific case where we consider Formality, Arousal, and Bias micro-styles. We observe that a _balanced joint distribution leads to more representation in the style combinations that are less likely to occur_. This is further accentuated as micro-styles increase, as reported in
Figure 4: Considering the micro-style combinations such that, Formality [formal = f, informal = i], Bias [biased = b, unbiased = u], and Arousal [aroused = e, un-aroused = n], we observe that the micro-style combinations that are rarer (e.g., informal unbiased neutral (iun)) have more representation in the “balanced” setting than the “skewed” setting.
Figure 3: Balancing micro-style distributions leads to a higher multi-style transfer percentage than in the Skewed setting in all the cases.
Appendix C. In Figure 4, we see that rarer style combinations [ibn, fun, iun] show more representation in the balanced case as compared to the skewed case. This supports our intuition that the style transfer model benefits from learning the representation of all possible style combinations that can occur together.
When we consider the Formality, Arousal, and Bias micro-styles together, the most represented category (30% of samples) is "formal unbiased aroused" (fue). The least represented category (as unlikely to occur together) is "informal unbiased unaroused" (iun) with 1%. We observe that the quantitative evaluation metrics are quite indicative when compared across style combinations. For instance, in Table 3, we observe that _perplexity increases in categories that are unlikely to occur together_ (iun). This indicates that the style transfer model is confused by the style distributions present for this style combination.
We do not claim that our method of balancing multiple styles will work even for entangled micro-style combinations, as that is out of the scope of the current paper. However, balancing considerably affects the multi-style transfer output for the range of micro-style combinations we considered, and that has an application in many NLP tasks. This result could hugely influence future studies exploring better ways to balance even the entangled micro-styles.
## 4 Conclusion
Multi-style text style transfer is a challenging problem predominantly plagued by the need for jointly annotated high-quality datasets. There is a clear need for more research about the marginal and joint distribution of inherent micro-styles present in the training dataset used for style transfer. Multi-style text-style transfer typically requires access to large, jointly labeled datasets and many computational resources under typical implementations. More importantly, we would not be able to conveniently tweak the input data distributions in other multi-style text style transfer methods.
In this paper, we implement a multi-style transfer pipeline that subverts the requirement of a jointly annotated dataset of multiple styles by constructing a pseudo-parallel dataset to which we introduce our contribution of constructing style distributions. We then use the modified pseudo-parallel datasets for multi-style transfer. Our modified pipeline effectively allows us to understand the importance of the joint distribution of micro styles in training data and is a substantial contribution.
We quantitatively explore the impact of joint micro-style distributions in the training dataset on the style-transferred output sentences. When the joint micro-style distributions are balanced, there is more control over style-transferred output than with a skewed distribution. These findings will likely inform the design of multi-style transfer datasets and encourage us to explore the micro-style relationships in our datasets.
## Limitations
In this research, though we employed automatic evaluation of our multi-style transferred text, we acknowledge that multi-style transfer is challenging to assess with the existing metrics for style transfer evaluation, and human evaluation should be done as well.
\begin{table}
\begin{tabular}{l|l|l|l} \hline \hline
**Style Transfer Goal** & **Input Text** & **Balanced Transferred Text** & **Skewed Transferred Text** \\ \hline
\(\uparrow\) Formality \(\uparrow\) Arousal & Wouldn't it be great if Trump went 3rd party and sucked away millions of Republican votes & Wouldn't it be nice if Trump went to the third party and sucked away millions of Republican votes & Wouldn't it be nice if Trump went to the third party and sucked away millions of Republican votes? \\ \hline
\(\uparrow\) Formality \(\downarrow\) Arousal & I didn't know what happiness was until I got married. But by then it was too late. & Until I got married, I didn't even know what happiness was. & \\ \hline
\(\downarrow\) Formality \(\uparrow\) Arousal & Did you hear about the soldier with 8 limbs? He was army & He's an army soldier with 8 legs? & \\ \hline
\(\downarrow\) Formality \(\downarrow\) Arousal & Yeah, I don't understand all the hate. & Yeah, I don't understand all the hate. & Yeah, I don't understand all the hate. \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Style-transferred sentences, given an input sentence and the intended style-transfer goal, for both the balanced and the skewed settings.
As this research paper focuses on exploring the impact of style distributions in the training data on style-transferred output rather than on developing a superior multi-style text transfer model, we use quantitative evaluation in this iteration of our paper. We hope that the large sample size and the consistency of the applied metrics make our automated approach a reasonable way of evaluating the style transfer output.
This iteration of our paper aims to achieve multi-style transfer across multiple micro-styles taken into consideration together, as our contribution would aid in constructing a training dataset for multiple micro-style transfers. We did not explore another exciting question of how balancing multiple micro-styles in the training dataset might influence individual style transfer, which could be a promising future direction for our study.
We acknowledge that the classifier's quality sets an upper bound on the best style transfer accuracy that is obtainable. However, the target task is quite complicated without a parallel dataset. Our objective was not to have the most accurate classification of micro styles but to find a means to get acceptable pseudo labels for the micro styles. Individually, all our micro style classifiers had a classification accuracy of 80% F1 and higher, and we deemed this good enough for pseudo-label creation.
We also focused on utilizing the present styles in the training data and classifying them to derive inherent training style distributions instead of dynamically tuning the proportion of styles present in the training dataset. However, tuning these style proportions using techniques such as PPLM (Dathathri et al., 2019) would give us greater control over our experimental pipeline and is an appropriate next step.
## Acknowledgement
We thank Vivek Aithal, Priyam Srivastava and Daniel McAndrew for their initial work on the pipeline for multi-style transfer. This was instrumental to our project and helped us get a kickstart on our research.
|
2309.03035 | Optimized strategies for the quantum-state preparation of single trapped
nitrogen molecular ions | This work examines optimized strategies for the preparation of single
molecular ions in well-defined rotational quantum states in an ion trap with
the example of the molecular nitrogen ion N2+. It advances a two-step approach
consisting of an initial threshold-photoionization stage which produces
molecular ions with a high probability in the target state, followed by a
measurement-based state purification of the sample. For this purpose, a
resonance-enhanced threshold photoionization scheme for producing N2+ in its
rovibrational ground state proposed by Gardner et al. [Sci. Rep. 9, 506 (2019)]
was characterized. The molecular state was measured using a recently developed
quantum-non-demolition state-detection method finding a total fidelity of
38(7)% for producing ground-state N2+ under the present experimental
conditions. By discarding ions from the trap not found to be in the target
state, essentially state-pure samples of single N2+ ions can be generated for
subsequent state-specific experiments. | Aleksandr Shlykov, Mikolaj Roguski, Stefan Willitsch | 2023-09-06T14:26:02Z | http://arxiv.org/abs/2309.03035v1 | # Optimized strategies for the quantum-state preparation of single trapped nitrogen molecular ions
###### Abstract
This work examines optimized strategies for the preparation of single molecular ions in well-defined rotational quantum states in an ion trap with the example of the molecular nitrogen ion N\({}_{2}^{+}\). It advances a two-step approach consisting of an initial threshold-photoionization stage which produces molecular ions with a high probability in the target state, followed by a measurement-based state purification of the sample. For this purpose, a resonance-enhanced threshold photoionization scheme for producing N\({}_{2}^{+}\) in its rovibrational ground state proposed by Gardner et al. [Sci. Rep. **9**, 506 (2019)] was characterized. The molecular state was measured using a recently developed quantum-non-demolition state-detection method finding a total fidelity of 38\(\pm\)7% for producing ground-state N\({}_{2}^{+}\) under the present experimental conditions. By discarding ions from the trap not found to be in the target state, essentially state-pure samples of single N\({}_{2}^{+}\) ions can be generated for subsequent state-specific experiments.
## I Introduction
In recent years, molecular-state detection using quantum-logic techniques opened up new horizons for the coherent manipulation of single molecular ions in traps [1, 2, 3]. Compared to previously employed destructive methods [4, 5, 6, 7], these approaches provide vastly higher experimental duty cycles and, therefore, higher measurement statistics and levels of precision. They unfold novel perspectives for utilizing precision-spectroscopic experiments to probe fundamental physics [8, 9], for precisely determining values of fundamental constants [10, 11] and for testing their possible temporal variation [12, 13], for developing new frequency standards based on molecular rovibrational transitions [14], for employing molecular ions as qubits [15] and for observing and controlling chemical reactions of single particles on the quantum level [16].
A necessary step in any coherent single-molecule experiment is the initialization of the molecule in a well-defined quantum state, often its electronic, vibrational, and rotational (rovibrational) ground state. Over the past decade, various approaches have been implemented towards that purpose. Polar molecular ions can be optically pumped to the rovibrational ground state using dipole-allowed spectroscopic transitions [17, 5, 18]. However, polar species are also exposed to a constant redistribution of level populations by ambient black body radiation and, therefore, require a cryogenic environment to preserve quantum states. Another potential approach is cryogenic-buffer-gas cooling which also requires a cryogenic setup [19]. Moreover, this technique is not entirely state selective because it leaves the ions in a (low-temperature) thermal distribution of level populations. In this case, the state purity of the sample can be increased by selectively discarding ions which are not in the target state [19, 20].
Resonance-enhanced multi-photon ionization (REMPI) is a widely applicable and highly selective method for the generation of molecular ions [21, 22, 23, 24, 25, 6]. Combined with threshold-photoionization techniques, i.e., using photon energies just above the ionization threshold of the neutral molecule, this approach, in principle, provides high state-preparation fidelities for molecular ions [26, 22, 6]. However, it is sensitive to external electric fields which shift the ionization thresholds of different rotational states [26, 27, 28].
In the present paper, we focus on optimized state-preparation strategies for single molecular nitrogen ions in radiofrequency (RF) ion traps. N\({}_{2}^{+}\) has been proposed as an attractive system for precision molecular spectroscopy as it features small systematic shifts, transitions with low sensitivity to magnetic fields and, as an apolar diatomic ion, its rovibrational levels in the electronic ground state possess a natural immunity to blackbody radiation [29, 15]. We have previously achieved the preparation of N\({}_{2}^{+}\) in specific rotational states in an ion trap by employing a \([2+1^{\prime}]\) REMPI scheme [30, 6].
However, the REMPI scheme used in Reference [22] suffers from spurious [2+1] one-color photoionization which, if not carefully suppressed, diminishes the state selectivity of the experiment. To mitigate this problem, an alternative \([2+1^{\prime}]\) REMPI scheme via the \(a^{1}\Pi_{g}(v^{\prime}=6)\) intermediate state of N\({}_{2}\) was put forward by Gardner et al. [31] (Figure 1a). In this approach, two photons at 255 nm are required for the excitation to the intermediate state and another photon at 212 nm for ionization. As the energy of the individual photons in the excitation step is smaller than the one for ionization, parasitic [2+1] ionization is suppressed.
In the present work, we evaluated this alternative REMPI scheme as a basis for the preparation of single N\({}_{2}^{+}\) ions in their rovibrational ground state within a radiofrequency ion trap. Using a recently developed, highly sensitive quantum-non-demolition (QND) state readout scheme [3], we demonstrate a total state-preparation fidelity
of \(38\pm 7\) %, which is limited by RF-field-induced time-varying shifts of the ionization thresholds of N\({}_{2}\) and by secondary ionization processes. Because of the non-destructive nature of the detection scheme, the state of the ion is preserved, and ions not found in the target state can be discarded from the trap. Only ions in the desired state are kept, and these can be utilized for subsequent state-specific experiments. We advocate the presently adopted approach, i.e., threshold photoionization combined with subsequent QND state detection and post-selection of the ions, as a general technique for producing single trapped molecular ions in specific quantum states with essentially unit fidelity.
## II Methods
The present work comprised two sets of experiments. First, REMPI and photoionization spectra of N\({}_{2}\) via the \(a^{1}\Pi_{g}(v^{\prime}=6)\gets X^{1}\Sigma_{g}^{+}(v^{\prime\prime}=0)\) transition [31] were recorded in a dedicated molecular-beam setup as a prerequisite for the implementation of the photoionization scheme in an ion-trap apparatus. Second, photoionization was subsequently carried out inside the trap and combined with a recently developed QND state detection method [3; 32] for both evaluating the state-selectivity of the photoionization and simultaneously performing projective state preparation of the ions.
### REMPI and photoionization spectroscopy of jet-cooled N\({}_{2}\)
The \([2+1^{\prime}]\) REMPI scheme (Figure 1a) was investigated using a molecular beam machine coupled to a Wiley-McLaren [33] time-of-flight mass spectrometer (Figure 1b).
A beam of internally cold nitrogen molecules was produced by supersonic expansion of N\({}_{2}\) gas from a pulsed piezo valve (Amsterdam Piezo Valve, MassSpecpecD BV) at 2 bar backing pressure and a repetition rate of 10 Hz. The beam passed a skimmer with 1 mm diameter (Model 2, BeamDynamics) before entering the ionization chamber at a base pressure of 3\(\times 10^{-9}\) mbar.
For the REMPI experiments, the output of two dye lasers (NarrowScan, Radiant Dyes) pumped by the third harmonic of an Nd:YAG laser (Spitlight 1500, Innolas) was used. One dye laser, producing light at 255 nm after frequency doubling, was employed to excite N\({}_{2}\) to the intermediate state, and the other frequency-doubled laser, producing light at 212 nm, was used for the ionization step. Laser wavelengths were measured using a wavemeter with an internal calibration source (WS-6, High Finesse). The pulse energies of the excitation and ionization lasers were maintained at 1 mJ and 0.75 mJ per pulse, respectively, measured before entering the chamber. The two lasers were focused into the molecular beam, and ions were extracted into the TOF tube 1 \(\mu\)s after ionization to be detected by a multi-channel plate (MCP) detector (APD-APTOF, Photonis).
Spectra were recorded by monitoring the integrated nitrogen ion signal as a function of the relevant laser wavenumber. For measuring spectra of the \(a^{1}\Pi_{g}(v^{\prime}=6)\leftarrow\)\(X^{1}\Sigma_{g}^{+}(v^{\prime\prime}=0)\) two-photon transition in
Figure 1: a) Energy-level diagram of the \([2+1^{\prime}]\) REMPI scheme employed for the rovibrational-ground-state preparation of single N\({}_{2}^{+}\) ions [31]. b) Schematic of the setup used for the REMPI TOF-MS measurements. c) Ion-trap experiment employed in the present study. Colored arrows represent different laser beams used for ionization, cooling, and the coherent control of atomic and molecular ions. Dashed lines indicate atomic (yellow) and molecular (green) beams. See text for details.
N\({}_{2}\), the frequency of the ionization laser was fixed at 47175 cm\({}^{-1}\) and the excitation laser frequency was scanned within a 40 cm\({}^{-1}\) interval. To record photoionization spectra, the frequency of the excitation laser was fixed to the S(0), S(1), and Q(1) rotational components of \(a^{1}\Pi_{g}(v^{\prime}=6)\)\(\leftarrow\)\(X^{1}\Sigma_{g}^{+}(v^{\prime\prime}=0)\) transition, and the frequency of the ionization laser was scanned.
### Quantum-logic-spectroscopic characterization of the photoionization products and projective state preparation of single N\({}_{2}^{+}\) ions in the ion trap
The setup for quantum-logic spectroscopy of single trapped molecular ions consisted of a linear RF ion trap in an ultrahigh-vacuum chamber (base pressure \(2\times 10^{-11}\) mbar), shown schematically in Figure 1c, coupled to a molecular-beam machine through a skimmer with an orifice diameter of 0.5 mm (Model 2, BeamDynamics). The inscribed radius of the RF electrodes of the trap amounted to \(r=1.75\) mm and the distance between the endcap electrodes was \(d=5\) mm. Coulomb crystals of Doppler-laser-cooled Ca\({}^{+}\) ions were loaded into the trap from a skimmed atomic beam of neutral Ca atoms by a resonant photoionization scheme [34], followed by sympathetically cooling a single N\({}_{2}^{+}\) ion produced by the threshold-photoionization scheme discussed above. During Ca\({}^{+}\) loading and N\({}_{2}\) ionization, the RF ion trap was operated at the frequency \(\Omega_{RF}=2\pi\times 16.4\) MHz, an RF amplitude \(V_{RF}\approx 516\) V and a potential of \(V_{DC}=15\) V applied to the endcap electrodes. The axial frequency of an N\({}_{2}^{+}\) ion corresponding to these parameters was \(\omega=243\) kHz and the Mathieu q-parameter [35] was \(q=0.11\). After reducing the crystal to a N\({}_{2}^{+}\)-Ca\({}^{+}\) two-ion string [32], the ions were cooled to the motional ground state of their axial center-of-mass motion in the trap by repeatedly driving a red motional sideband of the (4s) \({}^{2}\)S\({}_{1/2}\rightarrow\)(3d) \({}^{2}\)D\({}_{5/2}\) clock transition at 729 nm in Ca\({}^{+}\)[32].
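As a numerical consistency check on these trap parameters, the sketch below evaluates the Mathieu stability parameter under the common linear-trap convention \(q=2eV_{RF}/(mr^{2}\Omega_{RF}^{2})\). Conventions for defining \(V_{RF}\) vary between setups, so this is an assumed rather than authoritative formula, but it reproduces both values of \(q\) quoted in this work.

```python
# Sketch: Mathieu q-parameter of N2+ under the assumed convention
# q = 2 e V_RF / (m r^2 Omega_RF^2), using the trap values quoted above.
import math

e = 1.602176634e-19           # elementary charge [C]
amu = 1.66053907e-27          # atomic mass unit [kg]
m = 28 * amu                  # mass of N2+ [kg]
r = 1.75e-3                   # inscribed radius of the RF electrodes [m]
Omega = 2 * math.pi * 16.4e6  # RF drive frequency [rad/s]

# Amplitudes used for loading and for the phase-synchronized runs (Sec. III).
for V_rf in (516.0, 237.0):
    q = 2 * e * V_rf / (m * r**2 * Omega**2)
    print(f"V_RF = {V_rf:.0f} V -> q = {q:.3f}")
# V_RF = 516 V -> q = 0.109
# V_RF = 237 V -> q = 0.050
```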
The quantum-non-demolition state-detection scheme employed to verify the rovibrational state of the N\({}_{2}^{+}\) ion and thus the state-preparation fidelity of the photoionization has been described in detail previously [3]. Briefly, an optical dipole force (ODF) was generated by a one-dimensional optical lattice of two counter-propagating laser beams around 787 nm detuned from each other by the frequency of the in-phase axial motional mode of the N\({}_{2}^{+}\)-Ca\({}^{+}\) string in the trap (680.5 kHz in the present experiments). Under these conditions, a strong ODF was generated for N\({}_{2}^{+}\) in the rovibrational ground state exciting the axial center-of-mass motion of the ions. During the application of the optical lattice, the Ca\({}^{+}\) ion was shelved in the (3d) \({}^{2}\)D\({}_{5/2},m=-5/2\) state to prevent spurious motional excitation from an ODF generated on the atom [16]. The coherent motional excitation was detected by Rabi thermometry on the (3d) \({}^{2}\)D\({}_{5/2},m=-5/2\rightarrow\) (4s) \({}^{2}\)S\({}_{1/2},m=-1/2\) blue motional sideband of the Ca\({}^{+}\) 729 nm clock transition, see the red trace in Figure 2 as a representative example. For all other rovibrational states of N\({}_{2}^{+}\), the optical lattice was too far detuned from any spectroscopic transition to generate a strong ODF, thus leaving the two-ion string in the motional ground state from which no Rabi flops on the sideband transition could be observed (green trace in Figure 2). The fidelity of the scheme for detecting the ion in the rotational ground state was demonstrated to be \(>99\%\)[3].
## III Results
A rotationally resolved spectrum of the \(a^{1}\Pi_{g}(v^{\prime}=6)\)\(\leftarrow\)\(X^{1}\Sigma_{g}^{+}(v^{\prime\prime}=0)\) two-photon transition of jet-cooled N\({}_{2}\) recorded by scanning the excitation-laser and fixing the ionization-laser wavelength is shown in the red trace in Figure 3. Eight transitions corresponding to rotational components within the P, Q, R, and S branches were observed in the spectrum. The transition wavenumbers determined from the spectrum agree well with previous results from room-temperature spectroscopy [36; 31]. The black dashed trace in Figure 3 shows a simulation of the spectrum using PGOPHER [37]. Spectroscopic constants for the simulation were taken from the NIST database [38] where the band origin had to be shifted by
Figure 2: Population of the excited state \(P_{D}\) during Rabi flops on the (3d) \({}^{2}\)D\({}_{5/2},m=-5/2\rightarrow\) (4s) \({}^{2}\)S\({}_{1/2},m=-1/2\) blue motional sideband of the Ca\({}^{+}\) clock transition following coherent motional excitation of a Ca\({}^{+}\)-N\({}_{2}^{+}\) two-ion string by a state-dependent optical dipole force. The red and green traces show experiments with N\({}_{2}^{+}\) in its rovibrational ground state (\(N^{+}\)=0) and higher-lying rotational states (\(N^{+}\geq\)2), respectively. The purple trace represents the signal obtained without the application of an ODF. The red-shaded area indicates the interval of 729 nm laser pulse lengths for which maximum state-detection contrast was achieved. Error bars represent the standard deviation of 175, 75, and 150 measurements for the red, green, and purple traces. See text for further details.
0.5 cm\({}^{-1}\) to achieve the agreement with the experimental spectrum shown in the figure. The transition wavenumbers are summarized in Table 1. The best agreement of the simulated with the observed rotational intensities is obtained assuming a rotational temperature of 6 K for N\({}_{2}\) in the molecular beam.
Figure 4 shows photoionization spectra recorded by setting the excitation laser wavenumber to the S(0), S(1), and Q(1) rotational components of the \(a^{1}\Pi_{g}(v^{\prime}=6)\gets X^{1}\Sigma_{g}^{+}(v^{\prime\prime}=0)\) transition and monitoring the total ion yield as a function of the photoionization-laser wavenumber. The dashed vertical lines indicate approximate ionization thresholds for different ionic rotational states in the present experiment. Because of nuclear-spin-symmetry conservation, ionization from even (odd) intermediate rotational levels only populates even (odd) rotational states in N\({}_{2}^{+}\)[40]. The spacing between the rotational thresholds was calculated using spectroscopic constants for N\({}_{2}^{+}\) from [38]. The theoretical values were shifted by 14 cm\({}^{-1}\) compared to the N\({}_{2}^{+}\) ionization energy previously reported in the literature [41] to account for a lowering of the ionization energies caused by a stray electric field estimated to be \(E_{str}\approx 5.3\) V/cm in our chamber [27].
For the generation of N\({}_{2}^{+}\) ions in their rotational ground state in the ion trap, the S(0) component of the \(a^{1}\Pi_{g}(v^{\prime}=6)\gets X^{1}\Sigma_{g}^{+}(v^{\prime\prime}=0)\) band was excited and the ionization laser wavenumber was set to 47130 cm\({}^{-1}\), yielding a total ionization wavenumber of 125672.4 cm\({}^{-1}\), in between the nominal ionization thresholds corresponding to the \(N^{+}=0\) and \(N^{+}=2\) rotational levels of N\({}_{2}^{+}\). The pulse energies of the excitation and ionization lasers were maintained at 1.1 mJ and 0.5 mJ per pulse, respectively, focused to a spot of \(\approx 150\)\(\mu\)m diameter in the center of the trap. With these parameters, a single N\({}_{2}\) molecule from the beam was ionized by a single pair of laser pulses with a probability of 59\(\pm\)6 %. A contribution of 11\(\pm\)2 % from one-color [2+2]-photon ionization [31] was determined under these conditions by blocking the ionization-laser beam and monitoring the ion yield. The resonant nature of the ionization process in the trap was verified by changing the frequency of the excitation laser to 78560 cm\({}^{-1}\), i.e., far detuned from spectroscopic transitions in N\({}_{2}\) (see Figure 3). Under these conditions, the non-resonant ionization probability, attributed mainly to electron impact ionization by spurious
Figure 3: \([2+1^{\prime}]\) REMPI spectrum of the \(a^{1}\Pi_{g}(v^{\prime}=6)\gets X^{1}\Sigma_{g}^{+}(v^{\prime\prime}=0)\) two-photon transition in N\({}_{2}\) recorded in a molecular beam. Rotational components of the transition are indicated with standard spectroscopic notation [39]. The black dashed line shows the simulation of the two-photon transition assuming a rotational temperature 6 K and spectroscopic constants from Reference [38] with the band origin shifted by -0.5 cm\({}^{-1}\).
\begin{table}
\begin{tabular}{l c c c} \hline Transition & Experiment & Literature\({}^{\mathrm{a}}\) & Simulation\({}^{\mathrm{b}}\) \\ \hline P(2) & 78524.00 & 78524.33 & 78523.96 \\ Q(2) & 78530.44 & 78530.18 & 78529.96 \\ Q(1) & 78532.44 & 78532.36 & 78531.92 \\ R(1) & 78538.40 & 78538.31 & 78537.92 \\ R(2) & 78539.36 & 78539.26 & 78538.95 \\ S(0) & 78542.40 & 78542.13 & 78541.90 \\ S(1) & 78547.36 & 78547.20 & 78546.91 \\ S(2) & 78551.40 & 78551.23 & 78550.95 \\ \hline \end{tabular} \({}^{\mathrm{a}}\) Reference [36]
\({}^{\mathrm{b}}\) PGOPHER simulation [37] with constants from [38]
\end{table}
Table 1: Wavenumbers of rotational components of the \(a^{1}\Pi_{g}(v=6)\gets X^{1}\Sigma_{g}^{+}(v=0)\) transition in N\({}_{2}\) compared to literature and simulated values. See text for details.
Figure 4: Photoionization spectra of N\({}_{2}\) recorded via the S(0) (red trace), S(1) (blue trace), and Q(1) (green trace) rotational components of the \(a^{1}\Pi_{g}(v^{\prime}=6)\gets X^{1}\Sigma_{g}^{+}(v^{\prime\prime}=0)\) band. Approximate ionization thresholds for even and odd rotational states, shifted by 14 cm\({}^{-1}\) with respect to the literature values to account for stray electric fields in the apparatus, are indicated by black and grey dashed lines, respectively. See text for details.
electrons ejected from metal surfaces by stray laser light and accelerated by the electric fields in the trap, was measured to be 7\(\pm\)2 %. These results are summarized in Table 2.
The state selectivity of ionization was verified using the QND state-detection scheme outlined in Section II.2. In the present experiments, the state of seventy-two single N\({}_{2}^{+}\) ions was measured after ionization in the RF trap. Twenty-seven ions were detected in the rotational ground state implying a total state-preparation fidelity of \(38\pm 7\) %. These include ions produced by threshold photoionization as well as by the non-state-selective processes quantified in Table 2.
As discussed in detail in References [26; 28], RF and static electric fields inside the ion trap can lead to shifts of the ionization thresholds. If in such a scenario the ionization thresholds leading to excited rotational levels of the ion are shifted below the total photon energy provided by the ionization lasers, multiple ionic rotational levels can be produced compromising the fidelity for the preparation of the rotational ground state. Thus, the state-preparation fidelity critically depends on the instantaneous field strength during ionization. The amplitude of the oscillating RF fields in the trap is typically much larger than the strength of the static fields, thus the state-preparation fidelity is expected to primarily depend on the phase of the RF field at the time of ionization [28].
To characterize the influence of the phase of the RF field on the state-preparation fidelity, the trigger of the ionization laser pulses was synchronized to the phase of the RF source of the trap. Simultaneously, RF amplitude and the potential on the endcap electrodes were reduced to \(V_{RF}\approx 237\) V and \(V_{DC}=5\) V, respectively, to minimize further the influence of the fields on the ionization thresholds. The axial motional frequency for an N\({}_{2}^{+}\) ion corresponding to these parameters was \(\omega_{x}=148\) kHz at a Mathieu parameter [35]\(q=0.05\).
Due to the unknown phase shift between the RF source and the effective RF field inside the ion trap, the laser trigger was scanned with respect to the nominal phase of the RF source in order to identify the optimal working point. The maximum rovibrational-state-preparation fidelity is expected when the ionization laser pulse impinges on the molecular beam at a time around the zero crossing of the RF cycle. State-preparation fidelities at five different synchronization phases across an RF cycle were examined. At every phase, the state of twenty N\({}_{2}^{+}\) ions generated by photoionization nominally above the \(N^{+}=0\) threshold was measured inside the ion trap. As can be seen in Figure 5, none of the five synchronization points yielded an increase of the ground-state preparation fidelity above the value achieved without synchronization within the uncertainty limits. We thus conclude that the state-preparation fidelity is effectively independent of the RF phase under the conditions of the present experiment and the uncertainty limits of the measurement.
## IV Discussion
The weak sensitivity of the ground-state preparation fidelity to the phase of the RF field observed in Figure 5 is striking and requires further analysis. To evaluate the shifts of the ionization thresholds of the rotational levels in the N\({}_{2}^{+}\)\(X^{+}\,{}^{2}\Sigma_{g}^{+}(v^{+}=0)\) ground state (Figure 6a), the electric field at maximum RF amplitude and the static-field contribution produced by the trap endcaps were simulated using the COMSOL Multiphysics software [42], assuming the trapping configuration employed experimentally during phase-synchronized ionization. The shifts of the ionization thresholds \(\Delta\) [cm\({}^{-1}\)] were calculated according to \(\Delta=\alpha\sqrt{E}\), where a worst-case scenario of \(\alpha=6.1\) cm\({}^{-1/2}\)V\({}^{-1/2}\)[43; 27] was assumed and \(E\) [V/cm] is the electric field experienced by the molecule.
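A short numerical sketch of this relation is given below. The field value used for the RF edge shift is obtained by inverting \(\Delta=\alpha\sqrt{E}\) for the \(\approx 39\) cm\({}^{-1}\) shift quoted below and is therefore an inferred, not a measured, number.

```python
# Sketch: field-induced lowering of the ionization thresholds,
# Delta = alpha * sqrt(E), with the worst-case alpha quoted above.
import math

alpha = 6.1  # cm^-1 per (V/cm)^(1/2), worst-case value [43, 27]

def threshold_shift(E):
    """Threshold lowering in cm^-1 for a field E given in V/cm."""
    return alpha * math.sqrt(E)

print(threshold_shift(5.3))   # ~14.0 cm^-1: stray field E_str of Sec. III
print(threshold_shift(41.0))  # ~39.1 cm^-1: inferred field at the RF edge
```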
\begin{table}
\begin{tabular}{l c c c} Method & Ion. rate p.p. [\%] & Std. error [\%] & Pulses \\ \hline
2-color res. ion. & 59 & \(\pm\) 6 & 162 \\
1-color res. ion. & 11 & \(\pm\) 2 & 285 \\
2-color off-res. ion. & 7 & \(\pm\) 2 & 242 \\ \end{tabular}
\end{table}
Table 2: Two- and one-color resonant and off-resonant ionization probabilities per laser pulse for the production of single N\({}_{2}^{+}\) ions in the RF ion trap. The last column indicates the number of single laser pulses used in these measurements.
Figure 5: Dependence of the state-preparation fidelity of photoionization on the relative phase of the ionization-laser trigger with respect to the RF source of the ion trap. The green trace indicates a value of 38% rovibrational-ground-state preparation fidelity measured without synchronization to the RF source within its uncertainty limits (green shaded area). The red data points indicate the ground-state preparation fidelity at five different fixed phases of the RF cycle delivered by the source (blue line). The green shaded area and red error bars are Poissonian standard deviations. See text for details.
Figure 6b,c shows the spatial variation of the shifts of the ionization thresholds inside the trap in the plane along the propagation direction of the molecular beam (\(X\)-axis) and the vertical trap axis (\(Y\)-axis) caused by the RF and static electric fields, respectively. One can see that for the molecular beam with a diameter of \(\approx 500\)\(\mu\)m passing through the center of the trap and the ionization laser, focused to a spot size of \(\approx 150\)\(\mu\)m diameter in the center of the trap perpendicularly to the molecular beam, the shift at the edge of the ionization region reaches \(\approx 39\) cm\({}^{-1}\) and \(\approx 1.7\) cm\({}^{-1}\) from the RF and static fields, respectively. Therefore, we conclude that the main contribution to the shift of the ionization thresholds in the present experiment is indeed produced by the RF field.
The effect of the synchronization of the zero crossing of the RF field with the timing of the ionization event has to be put into the context of the RF period of the trap compared to the duration of the ionization laser pulse. For our trap frequency of \(\Omega_{RF}=2\pi\times 16.4\) MHz, the half-period of the RF-field wave is \(30.5\) ns. Given that the temporal width at half maximum of the ionization laser pulse is \(\tau\approx 5\) ns, one obtains a spread in the phase of \(\tau\times\Omega_{RF}=0.16\times 2\pi\). That means that even if the maximum of the laser pulse was perfectly synchronized with the RF zero crossings, ionization events occurring "on the edges of the pulse" still experienced about \(25\) % of the full RF amplitude. Apparently, such events occurred sufficiently frequently in the experiments to mitigate the effect of the synchronization. This effect has been discussed theoretically in Reference [28] and was confirmed experimentally here.
Generally, as shown here and also discussed elsewhere [26; 28], the fidelity of rotational-state preparation using threshold photoionization in traps strongly depends on the specific field configuration of the trapping environment. In large "soft" traps, state-preparation fidelities in threshold ionization exceeding \(90\%\) have been demonstrated [6; 26]. However, in smaller traps with "stiff" trapping potentials, which are required in experiments involving the cooling of the translational motion of the ions into the quantum regime [1; 2; 32; 3], as in the present study, field effects become more important.
In such cases, complete elimination of field effects could be obtained by temporarily quenching the RF potentials inside the trap in order to ionize the molecules in a completely field-free regime, as proposed in Reference [28]. Alternatively, the ions could be generated outside the trap under field-free conditions and subsequently be inserted into the RF field. The efficiency of such approaches remains to be demonstrated. However, the state-preparation fidelities of about \(40\%\) achieved here are readily high enough to serve as a starting point for an additional measurement-based state-selection step [16]. The state measurements on the single trapped ions using the scheme described in Section II.2 project the molecules into a specific quantum state which can be determined with near-unit (\(>99\%\)) fidelity as shown in Reference [3]. Indeed, any ion measured to be in a specific state is _known_ to be in that state after the measurement and is thus available for subsequent state-selected experiments. Conversely, ions which are found not to be in the target state can be discarded and the experiment re-initialized until an ion in the appropriate state is detected.
In this spirit, threshold photoionization represents a fitting approach to increase the initial state purity of the ions, after which all ions not found to be in the desired state are eliminated by post-selection. This approach can increase the duty cycle of the experiment by orders of magnitude compared to studies which rely on ions prepared with thermal state populations at room temperature
Figure 6: a) Energy diagram of the rotational levels of N\({}_{2}^{+}\) in its \(X^{+}\,{}^{2}\Sigma_{g}^{+}(v^{+}=0)\) vibronic ground state. Wavenumbers are referenced to the lowest ionization threshold (IT=125667.03 cm\({}^{-1}\)[41]). Levels which cannot be accessed in the present ionization scheme for symmetry reasons are indicated in grey. b) and c) Calculated shifts of the ionization thresholds from (b) the RF field at maximum amplitude and (c) the static field produced by the endcap electrodes near the center of the trap (taken as the origin of the coordinate system). White circles and dashed white lines indicate the approximate boundaries of the laser focus and the molecular beam, respectively. See text for details.
[1; 2] and thus crucially enhance the sensitivity of experiments that require state-selected ions.
## V Summary
In the present study, we evaluated strategies for the preparation of single molecular ions in well-defined rotational quantum states in an ion trap. We tested a \([2+1^{\prime}]\) REMPI scheme recently proposed by Gardner et al. [31] for the generation of trapped N\({}_{2}^{+}\) ions in their rotational ground state using a novel non-destructive state-detection scheme to characterize the fidelity of the state preparation on the single-ion level. Under the present experimental conditions, \(38\pm 7\) % of the ions produced were found in the rotational ground state limited by the influence of the inhomogeneous time-varying RF field in the ion trap and non-selective ionization processes. As the state of the ion is generally preserved in the measurements, ions found not to be in the target state can be discarded from the trap, leaving only state-selected ions for subsequent experiments. Thus, threshold photoionization combined with post-selection of the ions after QND state detection can serve as a highly efficient, widely applicable approach for experiments requiring state-selected molecular ions.
## VI Data availability
The data that support the findings of this study are openly available on Zenodo at DOI 10.5281/zenodo.8273494.
###### Acknowledgements.
The authors acknowledge support from the Swiss National Science Foundation (grant nr. 200021_204123) and the University of Basel.
|
2303.01679 | Automated Machine Learning for Deep Learning based Malware Detection | Deep learning (DL) has proven to be effective in detecting sophisticated
malware that is constantly evolving. Even though deep learning has alleviated
the feature engineering problem, finding the most optimal DL model, in terms of
neural architecture search (NAS) and the model's optimal set of
hyper-parameters, remains a challenge that requires domain expertise. In
addition, many of the proposed state-of-the-art models are very complex and may
not be the best fit for different datasets. A promising approach, known as
Automated Machine Learning (AutoML), can reduce the domain expertise required
to implement a custom DL model. AutoML reduces the amount of human
trial-and-error involved in designing DL models, and in more recent
implementations can find new model architectures with relatively low
computational overhead.
This work provides a comprehensive analysis and insights on using AutoML for
static and online malware detection. For static, our analysis is performed on
two widely used malware datasets: SOREL-20M to demonstrate efficacy on large
datasets; and EMBER-2018, a smaller dataset specifically curated to hinder the
performance of machine learning models. In addition, we show the effects of
tuning the NAS process parameters on finding a more optimal malware detection
model on these static analysis datasets. Further, we also demonstrate that
AutoML is performant in online malware detection scenarios using Convolutional
Neural Networks (CNNs) for cloud IaaS. We compare an AutoML technique to six
existing state-of-the-art CNNs using a newly generated online malware dataset
with and without other applications running in the background during malware
execution. In general, our experimental results show that the performance of
AutoML based static and online malware detection models are on par or even
better than state-of-the-art models or hand-designed models presented in
literature. | Austin Brown, Maanak Gupta, Mahmoud Abdelsalam | 2023-03-03T02:46:53Z | http://arxiv.org/abs/2303.01679v2 | # Automated Machine Learning for Deep Learning based Malware Detection
###### Abstract
Deep learning (DL) has proven to be effective in detecting sophisticated malware that is constantly evolving. Even though deep learning has alleviated the feature engineering problem, finding the most optimal DL model, in terms of neural architecture search (NAS) and the model's optimal set of hyper-parameters, remains a challenge that requires domain expertise. In addition, many of the proposed state-of-the-art models are very complex and may not be the best fit for different datasets. A promising approach, known as Automated Machine Learning (AutoML), can reduce the domain expertise required to implement a custom DL model. AutoML reduces the amount of human trial-and-error involved in designing DL models, and in more recent implementations can find new model architectures with relatively low computational overhead.
Research on the feasibility of using AutoML for malware detection is very limited. This work provides a comprehensive analysis and insights on using AutoML for static and online malware detection. For static, our analysis is performed on two widely used malware datasets: SOREL-20M to demonstrate efficacy on large datasets; and EMBER-2018, a smaller dataset specifically curated to hinder the performance of machine learning models. In addition, we show the effects of tuning the NAS process parameters on finding a more optimal malware detection model on these static analysis datasets. Further, we also demonstrate that AutoML is performant in online malware detection scenarios using Convolutional Neural Networks (CNNs) for cloud IaaS. We compare an AutoML technique to six existing state-of-the-art CNNs using a newly generated online malware dataset with and without other applications running in the background during malware execution. We show that the AutoML technique is more performant than the state-of-the-art CNNs with little overhead in finding the architecture. In general, our experimental results show that the performance of AutoML based static and online malware detection models are on par or even better than state-of-the-art models or hand-designed models presented in literature.
Malware Detection; Automated Machine Learning; Deep Learning; Cloud Security; Static Malware Analysis; Online Malware Analysis
## 1 Introduction
### _Overview and Motivation_
Malware is becoming a more profitable domain for malicious actors with the rise of digital connectivity and the growing reliance of critical infrastructure on digital systems. These cyberattacks have cost the industry billions of dollars [1]. The increase and impact of such cyberattacks has called for novel and sophisticated defense mechanisms from those that wish to protect digital assets from malware attacks.
There are several existing approaches for malware analysis, including static [2, 3], dynamic [4, 5, 6], and online analysis [7, 8, 9, 10]. Each of these analysis methods collects different features from the file or system in question, ranging from details of the file header in static analysis, to holistic operating system level performance metrics in the case of online analysis. The choice of analysis approach depends on the use case and the availability of data. For simple file scanning, static analysis is the fastest method, since there is no need to run the executable in question. On the other hand, collecting data from a running executable in dynamic analysis may give more insight into the true behavior and intent of a questionable executable.
Machine learning (ML), especially deep learning (DL), has become a popular technique to develop malware detection solutions, and has shown promising results [11] because of its ability to learn generalized patterns to identify unseen malware. As such, research works [12, 13, 14, 15, 16, 17, 18, 19, 5, 10] have employed different types of ML models to detect malware on a variety of systems and data sources, depending on the use case. These proposed solutions have utilized both traditional ML algorithms and, more recently, deep learning algorithms. Approaches [20, 21] that rely on traditional machine learning models require domain experts for feature engineering, which, in most cases, is burdensome and laborious. On the contrary, deep learning based approaches [22, 23, 24, 25, 26, 27, 28, 29, 30, 31] eliminate the feature engineering step and are gaining more traction. Some works [8, 32] have utilized state-of-the-art DL models (e.g., ResNet [33], DenseNet [34], and VGG16 [35]) that perform well in general and trained them on malware data; however, these models are usually very complex and require a rigorous tuning process to achieve the desired high performance. In addition, such models are usually designed for tasks like image, text, or voice recognition and can be inadequate for malware detection. Consequently, works [24] have focused on manually crafting model architectures that fit the malware detection domain. However, these approaches not only require heavy tuning, but also require high technical skills in both the machine learning and the malware domains.
Automated Machine Learning (AutoML) [36] seeks to automate the process of finding an optimal model architecture for the given data and tuning this model to achieve higher performance. In addition, it can also reduce the work needed to redesign a malware detection model as malware and data sources evolve over time. Even though AutoML pipelines require more computational time to produce a model, they significantly reduce the work hours and expertise needed to find a performant model. The utilization of AutoML for malware detection can be a very promising strategy since it can automate both the process of finding an optimal ML architecture that is specially designed for malware detection and also the process of fine-tuning this optimal model. However, AutoML is still at its nascent state and its application in various domains is yet to be studied. In particular, studies on the use of AutoML for malware detection are still lacking. With the growth in malware sophistication and machine learning complexity, especially in deep learning, finding the most performant deep neural architecture without a significant increase in human hours spent or domain expertise is critical.
This paper aims to study the feasibility of integrating AutoML into the malware detection pipeline to remove the need to hand-design and tune ML models. In particular, we focus on using deep learning, specifically Feed Forward Neural Networks (FFNNs) and Convolutional Neural Networks (CNNs). FFNNs have a high level of expressive power and require much less feature engineering than traditional machine learning approaches. Convolutional Neural Networks can model complex functions with image-shaped data as input with little to no feature engineering, only requiring the data to be placed in a two-dimensional input. Further, we focus on both data that is gathered through static analysis, specifically in regard to portable executable (PE) files, which are the predominant executable format in the Windows operating system, and online data captured from running, internet-connected Linux servers in cloud IaaS with malware executed on them. The _main contributions_ of this work are:
* We study the feasibility of using AutoML for deep learning based static malware detection and demonstrate the effectiveness of the produced AutoML Deep FFNN models by showing that they are comparable to manually crafted models, even without significantly tuning the AutoML pipeline.
* We provide insights and analysis of the automation parameters of the AutoML process on static malware data, and show how these parameters can affect the performance of the found optimal model.
* We show that AutoML-derived Convolutional Neural Networks can perform better than state-of-the-art Convolutional Neural Networks on online malware data, with little overhead in deriving the model architecture.
* We discuss ideas and future directions for improving the efficiency and performance of AutoML models that are designed for malware detection.
The rest of this work is organized as follows. Section 2 discusses necessary background and related works in this domain. Section 3 shows the application of AutoML on two popular static malware datasets, with comparison to other works and discussion of the presented AutoML methodology. Section 6 focuses on one-shot AutoML applied to CNNs to detect malware in an online cloud IaaS, with comparison to detection results of state-of-the-art CNNs on the same dataset. Section 7 presents ideas for future work and improvements, as well as the conclusions of this work.
## 2 Background and Related Works
### _Malware Detection_
#### 2.1.1 Static Analysis
Static analysis involves analyzing features that can be observed in a binary without running the binary executable. Static analysis methods may include observations such as: file entropy; n-gram analysis of byte sequences in a binary; imports and API calls; strings found within the binary; header information. The major benefit of static analysis is its speed and low overhead, since the binary is not executed.
One of the simplest forms of static analysis for malware detection is looking up the signature of a binary, most often the file hash. This method is extremely efficient if the binary's hash is documented, but has no ability to detect modified or new malware. A more popular method of static analysis looks at n-grams of bytes in the binary. Authors in [20] measured the frequency of common n-gram bytes in Windows binaries to determine if the binary is malicious. The frequencies of n-grams across both malicious and benign binaries were used to train a K-nearest-neighbors classifier. While this approach showed good results (at the time published), it is unclear whether it would perform as well in modern malware detection. This approach additionally has proven to be computationally expensive and offers diminishing returns as \(n\) increases [37]. Another work [21] has taken it a step further from n-gram byte analysis to analyzing instruction sequences in questionable binaries.
To reduce the overhead imposed by the essential feature engineering in traditional ML, some researchers have focused on deep learning approaches. Authors in [38] used recurrent neural networks to analyze the first 300 bytes of the header of Windows PE files. Work in [24] utilizes convolutional layers within a neural network to extract information on Windows PE headers to determine binary intent. Authors in [22] implemented what they call _WindowsStatic-Brain-Droid_, which implements multiple architectures in a voting scheme. The features for the architecture are both raw byte information and parsed features from the binary. The raw byte features feed into various architectures based on [24]. The parsed features feed into multiple traditional classifiers and a FFNN. Section 3 will focus on developing an optimized neural architecture similar to [22]'s FFNN.
#### 2.1.2 Dynamic Analysis
Unlike static analysis, dynamic analysis executes a binary to monitor its behavior. This is most often carried out in a sandboxed environment to restrict the binary's access to other resources which a malware could attack. Data collected from the execution behavior of malware allows for greater insight into a questionable binary's intent and nature. Authors in [21, 31] utilized system calls captured during execution to detect malware. Work in [21] utilizes traditional machine learning approaches, while [31] uses neural networks for classification. Authors in [5] look at API calls made in 5-minute intervals to classify binaries as benign or malicious. These calls were passed to a CNN for classification. In [39], the authors use FFNNs to classify binaries based on API calls extracted from dynamic execution.
Compared to static analysis, these methodologies require extra computational overhead and time to detect malware.
However, dynamic analysis will not be able to detect sophisticated malware that can detect the presence of an emulation sandbox or the lack of network connectivity that is often found with isolated emulation environments.
#### 2.1.3 Online Analysis
Where dynamic analysis only analyzes the execution of a single binary, online analysis collects data from an entire system to monitor (in real time) for malware execution. This allows for continuous monitoring of an open system (not in a sandbox), with full access to all resources. Additionally, this allows for collection of execution details that extend beyond that of a single binary. This can include both knowledge of normal execution of a given system as well has effects to adjacent processes from live malware execution.
The authors in [40, 41] utilize performance counters from an entire system to detect the presence of malware. Guan _et al._[42] proposed using system calls to detect malware in online systems with ensembles of Bayesian predictors and decision trees. Others have proposed using memory features [43]. McDole et al. [7] and Abdelsalam et al. [44] show that per-process performance metrics from Ubuntu machines can provide high detection performance when ingested with a CNN. The process data is structured in the shape of an image, with the rows denoting different processes and the columns denoting different performance metrics for each process, collected from the target machine. Abdelsalam et al. achieves 89.5% detection accuracy using shallow CNNs, while McDole et al. achieves 92.9% detection accuracy using state-of-the-art CNNs on the same dataset. Jeffery et al. [9] uses recurrent neural networks (RNNs) on the same dataset as McDole et al. and Abdelsalam et al., but organizes the inputs to the RNN as sequences of unique process features, all from the same time slice. They achieve 99.61% detection accuracy with this technique. Online malware detection can incur high overhead with continuous monitoring of systems, but provides real-time detection performance on evasive and low-lying malware in a live environment without requiring the identification of a suspicious executable.
### _Deep Learning for Malware Detection_
Using deep learning for malware detection has been researched extensively and spans approaches that utilize various types of deep learning algorithms including CNNs [26, 27, 28, 28, 45, 8], RNNs [9, 29, 30], feed forward neural networks (FFNNs) [22, 23, 25], etc. Deep learning approaches presented in these works have the advantage over traditional ML models as they do not require hand-designed features in order to be performant. Although these approaches impose additional performance overhead as compared to some traditional ML algorithms, many have been shown to be more performant under some conditions [46], with high accuracy in malware detection [10].
Despite the fact that deep learning approaches have shown tremendous results for malware detection, most of these works fall short because either (1) they utilize state-of-the-art models that are not tailored specifically for malware detection and may not be optimal for the data available, or (2) they have hand-designed their models specifically for malware detection, without AutoML, which requires extensive domain expert knowledge and hand tuning. Fortunately, AutoML can help overcome these obstacles and attain better results; however, the feasibility of utilizing AutoML for malware detection is hardly explored.
### _Automated ML for Malware Detection_
Automated machine learning has recently been used in a variety of fields. Several AutoML works have been designed for specific domains, such as processes developed for the computer vision domain [47, 48]. However, AutoML is still at a nascent stage and is yet to see wider adoption and application in cybersecurity.
The field of malware detection has barely seen the presence of AutoML, and to the best of our knowledge has only been presented in a few works. Research in [49] tested both AutoGluon-Tabular1 and Microsoft NNI2 on the EMBER-2018 dataset [50], a malware dataset based on static analysis. These frameworks are used to tune hyper-parameters of a LightGBM model to best classify binaries from the dataset. Authors also used a proprietary dataset to evaluate the AutoML frameworks. This approach yielded a 3.2% increase of True Positive Rate above the EMBER-2018 baseline results with the same classifier. AutoGluon-Tabular produced these results vs a 2.2% increase with Microsoft NNI. Their approach largely used traditional machine learning methods as well as a FFNN in the ensemble offered with AutoGluon-Tabular. The authors of [51] use AutoML to detect malware from encrypted network traffic. They used TLS fields as parameters to form their AutoML process. This work used a python package _mljar-supervised3_, utilizing many traditional ML models as well as a deep neural network in an ensemble.
Footnote 1: [https://auto.gluon.ai/stable/index.html](https://auto.gluon.ai/stable/index.html)
Footnote 2: [https://github.com/microsoft/nni](https://github.com/microsoft/nni)
Footnote 3: [https://supervised.mljar.com/](https://supervised.mljar.com/)
### _Methodology and Experimental Setup_
#### 2.4.1 Neural Architecture Search
The performance of a model is highly dependent on the design of its architecture. A neural architecture search aims to find the architecture design that achieves the highest performance on unseen validation data. We consider a change in architecture design to constitute a change to the number or configuration of trainable parameters, or the layers' activation function.
#### 2.4.2 One-Shot Search Methodology
Many recent NAS methodologies focus on the computer vision domain. This field is heavily dependent on convolutional neural networks. Many types of layers within these networks, given the same shape of input, will produce the same shape of output. A network whose layers meet this condition is, intuitively, easily mutable; layers can be swapped out interchangeably, allowing the next layer to accept any chosen layer type's output since they share the same output shape, as shown in Figure 1. Many types of layers can be substituted for _Layer N_ and maintain the same output shape of (1, 16, 16). This property allows an algorithm to test multiple layer choices at each layer
of a network to find the best architecture configuration. The work presented in [48] can create a _super-graph_ which contains multiple _sub-graphs_ representing all permutations of networks given the choices of each layer. A similar work, [47] relaxes the constraints of the categorical layer choice to a softmax choice, such that the categorical choice is now continuous, and a gradient can be used to find the best layer choices through training by backpropagation.
These NAS methodologies are used to learn an entire network architecture or learn the architecture of a cell that is repeated throughout the network. As long as each of the operations (layer) choices produce the same shape of output, the specific operation choices within a cell can be anything. This possibility allows for designing not only convolutional cells, but also recurrent cells. This allows the algorithm to find both CNNs and RNNs, or a combination of both.
These algorithms, known as One-Shot algorithms, find the most performant network configuration in "one-shot", without the need to train the network from scratch multiple times, by leveraging the output shape property.
#### 2.4.3 Multi-Trial Search Methodology
Multi-Trial NAS solutions, as opposed to one-shot, require many trials of different network configurations to find a performant architecture. In the past, before the invention of clever one-shot methodologies, this was the only way to test out different network architectures. Today, some types of networks still rely on multi-trial NAS, such as networks that can't easily swap out layers because of layer output shapes. There has been some work to improve the efficiency of this process, such as [52], through weight sharing. This allows each trial to run for a much shorter amount of time by leveraging the learned parameters from previous trials, and only optimizing for new parameters. However, these algorithms, if not carefully controlled, can become unstable in later trials. For this reason, our work with deep feed forward neural networks in Section 3 utilizes the more primitive multi-trial methodology in searching for the most performant architecture.
#### 2.4.4 NAS Search Space
The NAS search space is the total space containing the values of all valid model configurations. During the NAS process, architecture configurations are drawn from this space and evaluated. The search space is arbitrarily large, so reasonable constraints are placed on it to bound the time required to search and to limit the complexity of the chosen model. For example, a model depth of 1,000 layers is a valid choice for an architecture, but it will produce a very complex model with a considerably high training time. For this reason, upper boundaries are usually provided. For example, in our proposed approach in Section 3, we set the upper bound on the number of layers to 14 to limit the complexity of the model architectures available within the search space. Besides bounding the range of the search space, we also consider the sampling granularity and distribution of the search space. For example, in Section 3 we set the granularity in selecting a layer's width to 128 neurons in order to limit the number of available selections while still maintaining an appropriate level of expression of its effect on model performance. In order to simplify the NAS process, when a parameter value is selected from the search space, we fix this value throughout the model instead of setting it on a per-layer basis.
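As an illustrative sketch, the snippet below encodes such a bounded, quantized space as a plain Python dictionary (covering the core FFNN parameters only; the encoding is an assumption, not the exact NNI search-space format used in the experiments):

```python
# Bounded, quantized encoding of the core FFNN search space from Table I.
# The dict format is illustrative, not the exact configuration file used here.
SEARCH_SPACE = {
    "depth": range(1, 15),              # upper bound of 14 limits model complexity
    "width": range(128, 1921, 128),     # 128-neuron granularity: 15 choices
    "activation": ("relu", "elu"),      # categorical choice
}

# The bounds and granularity make the space finite and enumerable:
total = 1
for choices in SEARCH_SPACE.values():
    total *= len(choices)
print(total)  # 14 * 15 * 2 = 420 candidate architectures
```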
## 3 Automated Machine Learning for Static Malware Detection
### _Deep Feed Forward Neural Networks_
Deep Feed Forward Neural Networks (FFNNs) are an extension of the simple perceptron network, except they often contain one or more hidden layers. FFNNs without convolutional or recurrent layers can also be referred to as Multi-Layer Perceptrons (MLPs).
FFNNs pass input data through each layer in the model sequentially, applying each layer's function to the previous layer's output, forming what can be seen as an acyclic graph from input to output with data flowing only one way. Figure 2 shows an example FFNN. Each node within a layer can apply an activation function to the sum of each of its inputs, shown as the function lines within each node in the figure. Each connection between nodes has a specific weight applied to the output of a specific node into another node. \(W_{1}\) and \(W_{2}\) represent the set of weights between each layer, each weight in each set corresponding to a connection between two unique nodes. The network can have any number of hidden layers.
Deep FFNNs require backpropagation through gradient descent to train the weights of each layer of the network sequentially, backward through the network, from output to input. Through the process of backpropagation, activation functions such as sigmoid and tanh suffer from a problem called vanishing gradients.
Fig. 1: Example Convolutional Layer Output Shapes
Fig. 2: Feed Forward Neural Network
This occurs because repeatedly taking the gradient of these functions, as backpropagation occurs, results in a value that approaches zero. For this reason, idempotent activation functions such as the Rectified Linear Unit (ReLU) are often used in hidden layers of deep networks to solve the vanishing gradient problem. Additionally, functions like ReLU and the Exponential Linear Unit (ELU) are cheaper to compute than sigmoid and tanh, but still allow the network to learn non-linear functions. However, sigmoid-like functions allow an output to be mapped to a probability, and are often used on the output layer of a network to allow each output neuron to produce a binary decision. Figure 2 shows the input and hidden layer activation functions as ReLU, and the output layer's activation function as sigmoid.
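As a minimal sketch of the kind of model searched over in this section, the snippet below builds a deep FFNN in PyTorch with ReLU hidden layers and a sigmoid output; `depth` and `width` correspond to the NAS parameters, while the dropout placement and the 2381-dimensional EMBER feature size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MalwareFFNN(nn.Module):
    """Deep FFNN sketch: ReLU hidden layers, sigmoid output head."""
    def __init__(self, in_features, depth, width, dropout=0.3):
        super().__init__()
        layers, prev = [], in_features
        for _ in range(depth):
            layers += [nn.Linear(prev, width), nn.ReLU(), nn.Dropout(dropout)]
            prev = width
        layers.append(nn.Linear(prev, 1))  # single malicious/benign logit
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # sigmoid maps the logit to a malicious probability in [0, 1]
        return torch.sigmoid(self.net(x))

# depth/width here match the EMBER-2018 column of Table III.
model = MalwareFFNN(in_features=2381, depth=3, width=1664)
print(model(torch.randn(4, 2381)).shape)  # torch.Size([4, 1])
```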
### _Search Methodology_
During the neural architecture search, an architecture selected from the search space is evaluated using an evaluation metric (F1-score in this section) indicating the model's performance, based on which a strategy is employed to select the next architecture choice for subsequent evaluation. The next architecture selection in this work is based on a random selection strategy, where, regardless of the previous result, each new architecture choice is randomly selected without duplication. The NAS selects a number of random architecture configurations from the search space. These are referred to as trials. In each trial, a model is trained based on the selected architecture for a predefined number of epochs, and the model is evaluated at the end of every epoch using the evaluation score on the validation set. The model configuration that achieves the highest score passes to the next phase: hyper-parameter tuning.
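A minimal sketch of this multi-trial random search follows, assuming a `train_and_eval` placeholder that trains one configuration for the per-trial epoch budget and returns its best validation F1-score:

```python
import random

SPACE = {
    "depth": list(range(1, 15)),
    "width": list(range(128, 1921, 128)),
    "activation": ["relu", "elu"],
}

def random_search(space, n_trials, train_and_eval):
    """Sample configurations without duplication; keep the best F1-score."""
    seen, best_config, best_f1 = set(), None, -1.0
    while len(seen) < n_trials:
        config = {k: random.choice(v) for k, v in space.items()}
        key = tuple(sorted(config.items()))
        if key in seen:
            continue  # random selection without duplication
        seen.add(key)
        f1 = train_and_eval(config)  # best validation F1 within the epoch budget
        if f1 > best_f1:
            best_f1, best_config = f1, config
    return best_config, best_f1

# e.g.: best, f1 = random_search(SPACE, n_trials=150, train_and_eval=...)
```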
#### 3.2.1 Hyper-Parameter Tuning
Once an architecture is selected during the NAS phase, the next phase, as shown in Figure 3, searches for the optimal hyper-parameters of the chosen architecture. The hyper-parameters of the model are tunable values that affect the model performance but do not alter the architecture of the model itself. This can include the batch size, optimizer, learning rate, dropout rate, etc. Just as in the NAS phase, the hyper-parameter tuning phase also has a bounded search space with a defined sampling granularity. For the hyper-parameter search space, we also define the sampling distribution. The hyper-parameter search phase uses the Tree-structured Parzen Estimator (TPE) strategy [53] in selecting the next set of hyper-parameters to test.
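The snippet below sketches this phase using Optuna's TPE sampler as an illustrative stand-in for the tuner used here; the ranges roughly mirror the EMBER row of Table II (with the batch-size upper bound rounded to keep the 32-step uniform), and `train_and_eval` is a hypothetical placeholder.

```python
import random
import optuna

def train_and_eval(batch_size, lr, dropout):
    # Stand-in for training the fixed NAS-selected architecture with these
    # hyper-parameters and returning its best validation F1-score.
    return random.random()

def objective(trial):
    batch_size = trial.suggest_int("batch_size", 32, 8192, step=32)
    lr = trial.suggest_float("learning_rate", 1e-4, 1.0, log=True)  # loguniform
    dropout = trial.suggest_float("dropout", 0.0, 0.5, step=0.05)   # quantized uniform
    return train_and_eval(batch_size, lr, dropout)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=100)
print(study.best_params)
```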
#### 3.2.2 Final Model Selection
After the hyper-parameter tuning phase is complete, the results from the NAS phase and hyper-parameter tuning phase are combined to be the final model configuration. The model is then trained and evaluated after every epoch using the evaluation score of the validation set, and the highest performing epoch is saved as the final trained model to be evaluated on the test set.
### _Static Malware Data Sources_
We use two popular static malware datasets, EMBER-2018 [50] and SOREL-20M [54], both extensively used in the literature. We use these in this section with the more primitive multi-trial AutoML approach to explore how this methodology performs on datasets that already have strong published benchmarks.
#### 3.3.1 EMBER-2018 Dataset [50]
EMBER is considered to be the first attempt to create an appropriately large static malware dataset. The dataset contains features extracted from benign and malicious portable executable (PE) files using the 'Library to Instrument Executable Formats' (LIEF) [55]. The samples in the dataset are labeled as either malicious, benign, or unknown. Only the samples labeled malicious or benign are considered in this work. There are approximately 600K samples in the training set and 200K samples in the test set. Since no validation set is provided, we held out the last 20% of the training set (i.e., 120K samples) as the validation set. There are two versions of the EMBER dataset: EMBER-2017 and EMBER-2018. EMBER-2018 was specifically curated so that the training and testing sets would be harder to classify. We used EMBER-2018 in our experiments. However, to fairly compare our results to other works that used EMBER-2017, we test and report our model's (found with EMBER-2018) performance against the EMBER-2017 dataset.
Fig. 3: Automated Machine Learning Process
#### 3.3.2 SOREL-20M Dataset [54]
Sophos Labs4 released the SOREL-20M dataset in 2020 to address some shortcomings of the EMBER dataset. This dataset contains 12,699,013 training samples, 2,495,822 validation samples, and 4,195,042 testing samples. SOREL-20M uses the same features as the EMBER-2018 dataset. The samples in the SOREL-20M dataset contain the same binary malicious label as EMBER-2018, but also contain extra metadata, including the number of anti-virus vendors that flagged a sample as malicious and the tags that anti-virus vendors associated with a sample. Included in these tags are labels such as _dropper_, _adware_, _downloader_, etc. Authors in [25] have shown that the use of this metadata can help improve performance, and our work in this section allows a model to use this auxiliary information in the training process.
Footnote 4: [https://www.sophos.com/en-us/labs](https://www.sophos.com/en-us/labs)
## 4 AutoML Tuning and Training
### _NAS Phase Configuration_
The full architecture search space for both the EMBER-2018 and SOREL-20M experiments is shown in Table I. The minimum, maximum, and granularity values for _Activation_ and _Tag Head Activation_ are not applicable since the choices are categorical: either Rectified Linear Unit (ReLU) or Exponential Linear Unit (ELU). Similarly, for _Use Counts_ and _Use Tags_, the choices are either _True_ or _False_.
As mentioned previously, the SOREL-20M dataset has readily available labels, each containing a binary malicious label, an encoding of the vendor tags, and a numerical count of the vendors flagging the sample as malicious. These additional labels were made available during the architecture search process through the use of additional output heads of the model to predict the count of the vendors flagging the sample as malicious and to predict any tags associated with the sample by anti-virus vendors. These additional heads were made optional through the use of two additional architecture search parameters: _Use Counts_ and _Use Tags_, as shown in Table I. The design of the additional heads, their respective loss functions, and the overall network is inspired by [25]. The architecture search selects 150 random architecture configurations from the search space. The number of trials was chosen to both cover the search space and minimize cost. However, further investigation is required to analyze the effects of the number of trials on the selected models' performance, as explained in subsection 5.2. The SOREL-20M and EMBER-2018 NAS runs used 10 and 25 epochs per trial, respectively.
The highest achieved F1-score of a model during any point of its trial (instead of the F1-score of the final epoch) is chosen as the fitness score so that a model configuration's ability is more accurately represented, as the model's performance may fluctuate during the training process. Even though random search has been shown to give adequate results with a sufficient amount of trials [56, 57], trial count remains a parameter to be investigated in future work.
### _Hyper-Parameter Tuning Phase Configuration_
The full hyper-parameter search space is shown in Table II. The _quniform_ distribution behaves like the sampling granularity in the NAS phase. The _loguniform_ distribution samples values such that the logarithm of the values returned will be uniformly distributed. The learning rate is sampled from this distribution so that smaller values are as likely to be sampled as larger values. We set the batch size minimum, maximum, and sampling granularity larger for the SOREL-20M experiments due to the size of the dataset compared to EMBER-2018. As with the NAS phase configuration, we believe that the minimum, maximum, and sampling granularity values require further investigation.
In [25], using the SOREL-20M dataset, the authors use a loss weight of 0.1 for the vendor count head and vendor tag head, and a 1.0 loss weight for the malicious decision head. These loss weights can be considered a hyper-parameter available for tuning since altering the value does not change the architecture of the model. In our work, the malicious decision head loss weight is fixed to 1.0 while the auxiliary loss head weights are variable between 0.0 and 1.0 each. Note, only the tag head loss weight is included in Table II because the highest achieving model during the SOREL-20M NAS phase did not have a vendor count head, and therefore did not utilize that parameter. The models are trained for 10 and 25 epochs in the case of SOREL-20M and EMBER, respectively. F1-score is again used as the evaluation metric in selecting the highest performing model.
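A minimal sketch of the resulting weighted multi-head loss is given below, assuming binary cross-entropy for both the malicious-decision head and the multi-label tag head (the per-head loss choices follow the spirit of [25] but are assumptions here):

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def combined_loss(mal_logit, mal_target, tag_logits, tag_targets, tag_weight=0.70):
    """Malicious head keeps a fixed weight of 1.0; the auxiliary tag head is
    scaled by the tuned weight (0.70 for the SOREL-20M model in Table III)."""
    loss_malicious = bce(mal_logit, mal_target)   # main binary decision head
    loss_tags = bce(tag_logits, tag_targets)      # multi-label tag head
    return 1.0 * loss_malicious + tag_weight * loss_tags

# Toy shapes: batch of 8 samples, 11 vendor tags.
loss = combined_loss(torch.randn(8, 1), torch.ones(8, 1),
                     torch.randn(8, 11), torch.randint(0, 2, (8, 11)).float())
print(loss.item())
```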
## 5 Results and Discussion
### _Evaluation Metrics_
We use four evaluation metrics along with receiver operating characteristic (ROC) and area under the curve (AUC).
\[Accuracy=\frac{TP+TN}{TP+TN+FP+FN} \tag{1}\]
\[Precision=\frac{TP}{TP+FP} \tag{2}\]
\[Recall=\frac{TP}{TP+FN} \tag{3}\]
TABLE I: Architecture Search Space

| Parameter | Minimum | Maximum | Granularity |
| --- | --- | --- | --- |
| Depth | 1 | 14 | 1 |
| Width | 128 | 1920 | 128 |
| Activation | - | - | - |
| Tag Head Depth* | 1 | 3 | 1 |
| Tag Head Width* | 16 | 112 | 16 |
| Tag Head Activation* | - | - | - |
| Use Counts* | - | - | - |
| Use Tags* | - | - | - |

*SOREL-20M models only
TABLE II: Hyper-Parameter Search Space

| Parameter | Minimum | Maximum | Granularity | Distribution |
| --- | --- | --- | --- | --- |
| Batch Size (SOREL-20M) | 128 | 16384 | 1024 | quniform |
| Batch Size (EMBER) | 32 | 8592 | 32 | quniform |
| Learning Rate | 0.0001 | 1.0 | - | loguniform |
| Dropout | 0.0 | 0.50 | 0.05 | quniform |
| Tag Loss Weight* | 0.0 | 1.0 | 0.05 | quniform |

*SOREL-20M models only
\[F1-score=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{4}\]
Positive refers to a malicious sample, whereas negative refers to a benign sample. _TP_, _FP_, _TN_ and _FN_ are true positives, false positives, true negatives and false negatives, respectively. Precision suffers when benign samples are labeled as malicious (high FP), while recall suffers when malicious samples are labeled as benign (high FN). F1-score is the harmonic mean of precision and recall, so it signifies models that have both high precision and high recall. If a model has high precision but low recall, or vice versa, the F1-score will be low.
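For concreteness, the helper below computes Eqs. (1)-(4) directly from raw counts, with a toy numeric example:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute Eqs. (1)-(4) from raw counts; malicious is the positive class."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Toy example: 95 caught malware, 5 missed, 3 false alarms, 97 true negatives.
print(classification_metrics(tp=95, fp=3, tn=97, fn=5))
# (0.96, 0.9693..., 0.95, 0.9595...)
```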
#### 5.1.1 Results
The architectures and hyper-parameters found using F1-score as the evaluation metric at each phase of the process are shown in Table III.
The detection results are listed in Table IV and Table V for the SOREL-20M and EMBER datasets, respectively. Also included in these tables are the AUC with a maximum false positive rate (FPR) of 0.1%, the accuracy, the F1-score, the true positive rate (TPR) at 0.1% FPR, and the TPR at 1% FPR. The tables also contain results using loss as a performance metric for SOREL-20M and EMBER-2018; this is to show the difference between F1-score and loss as a performance metric in the final stage, which is revisited in the discussion below. Some other works shown in the tables only report a subset of the metrics, but are still shown for comparison.
In particular, for SOREL-20M, Table IV shows our AUC results are on par with the FFNN ensemble from [58] and slightly exceed those of [25], the work that introduced the auxiliary model heads for SOREL-20M. Our model significantly exceeds the AUC under 0.1% FPR of [58], the only other work that reported this metric. The accuracy of our model is similar to, but higher than, that of [58]. We report TPR at 0.1% and 1% FPR for comparison to [25], where it can be seen that our model performed better in both.
With respect to EMBER-2018 in Table V, [60] performs slightly better in its two reported metrics, AUC and accuracy; its remaining metrics are not reported. Our model chosen in the final training phase is similar to other results in AUC and accuracy, surpassing [59] in AUC, and surpassing both [58, 59] in accuracy. The AUC under 0.1% FPR of our model far surpasses the results of [58]. The TPR at 0.1% FPR is the only reported metric of [49], and it is significantly higher than our result. Due to the limited metrics provided by other related works, it is difficult to compare the efficacy of our AutoML method in a holistic sense. The results on EMBER-2017 (with the optimal parameters from EMBER-2018) are reported at the bottom of Table V. The authors in [22] and [24] only reported the accuracy and F1-score of their results. Our model's accuracy and F1-score are slightly higher than their results, but with metrics close to 100%, this is significant.
The results show that models developed with our proposed AutoML pipeline are similar to those found with hand-designed solutions, and sometimes even exceed their performance. This shows the efficacy of integrating AutoML into a malware detection pipeline, eliminating the need to hand-design models, which is difficult, time-consuming, and requires high technical skill.
Figure 4 shows the ROC curves for both the EMBER-2018 and SOREL-20M experiments; note the logarithmic scale on the x-axis denoting the FPR. An ROC curve shows the TPR for each respective FPR. Given the order-of-magnitude increase of training data in SOREL-20M over EMBER-2018, it is no surprise that the TPR of SOREL-20M is higher than that of EMBER-2018 at any FPR. The SOREL-20M TPR also falls off much more slowly than EMBER-2018's, and never drops below 0.8 in the graph.
### _Discussion and Analysis_
#### 5.2.1 Meta-Hyper-Parameter Selection
As mentioned earlier, many of the parameters governing the NAS and Hyper-Parameter tuning phases are selected based on our experience to simplify the process and provide a balance between the cost of training and detection results. We discuss below some of these parameters.
#### 5.2.2 Epochs per Trial
The number of epochs per trial is an important parameter that directly affects the NAS process. This was especially a consideration for the SOREL-20M trials, since the dataset is an order of magnitude larger than the EMBER-2018 dataset and therefore took much longer to train.
TABLE III: Found Optimal Parameters

| Parameter | SOREL-20M | EMBER-2018 |
| --- | --- | --- |
| Depth | 8 | 3 |
| Width | 1920 | 1664 |
| Activation | ReLU | ReLU |
| Dropout | 0.15 | 0.30 |
| Learning Rate | 0.000439 | 0.000269 |
| Batch Size | 3072 | 1440 |
| Use Count Head | False | - |
| Use Tag Head | True | - |
| Tag Head Depth | 1 | - |
| Tag Head Width | 112 | - |
| Tag Head Activation | ELU | - |
| Tag Head Loss Weight | 0.70 | - |
Fig. 4: ROC using F1-Score for Selection
Initially, the number of epochs for the SOREL-20M NAS trials was set to 3. This implies that the model configurations with the highest performance after training for 3 epochs would perform the best overall. To test this, we increased the number of epochs to 10 and 20 to better understand the impact of epochs per trial on the selected models' performance during the NAS. The results of these experiments are shown in Figure 5. The primary consideration here is the performance trend of SOREL-20M, but EMBER-2018 is shown as well.
This graph shows the F1-score average of the top 30 selected models at any given epoch. The F1-score for each model is calculated as the highest F1-score reached up to and including a given epoch. At each epoch, the 30 models with the highest F1-scores, as described above, are averaged together. At any epoch, the set of top 30 models may change if any model in the experiment achieves results that put it in the top 30 for that epoch. It can be seen that there is a correlation between top model performance and the number of epochs, in a seemingly logarithmic relationship. As long as a model is not so complex that it over-fits the training data, a more complex model should, intuitively, perform as well as or better than a less complex model. However, a good choice for the number of epochs is where the curve starts to flatten, so that the model doesn't become too complex and, in turn, require a massive amount of training time. Figure 6 shows that as the number of training epochs per trial increases, so does the average complexity of the top 30 performing models. The model complexity in the figure represents the average product of _width_ and _depth_ of the hidden layers of the top 30 model configurations during the NAS phase, which results
TABLE IV: SOREL-20M Dataset Results

| Work | Perf. Metric | AUC | AUC ≤ 0.1% FPR | Accuracy | F1-Score | TPR: 0.1% FPR | TPR: 1% FPR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ALOHA [25] | - | 0.997 | - | - | - | 0.922 | 0.972 |
| FFNN Ensemble [58] | - | **0.998** | 0.0927 | 0.988 | - | - | - |
| LightGBM Ensemble [58] | - | 0.984 | 0.0446 | 0.861 | - | - | - |
| Our Work | F1-Score | **0.998** | 0.966 | **0.990** | **0.984** | **0.965** | 0.993 |
| Our Work | Loss | **0.998** | **0.969** | **0.990** | **0.984** | 0.963 | **0.995** |
Fig. 5: Top 30 Performing Models Average F1 by Epoch
Fig. 6: Top 30 Performing Models Average Complexity by Epoch
TABLE V: EMBER Dataset Results

EMBER-2018:

| Work | Perf. Metric | AUC | AUC ≤ 0.1% FPR | Accuracy | F1-Score | TPR: 0.1% FPR | TPR: 1% FPR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AutoGluon Ensemble [49] | - | - | - | - | - | 0.900 | - |
| MalConv w/ GCG [59] | - | 0.980 | - | 0.933 | - | - | - |
| LightGBM Ensemble [58] | - | 0.986 | 0.0605 | 0.940 | - | - | - |
| Detection Pipeline [60] | - | **0.995** | - | **0.969** | - | - | - |
| Our Work | F1-Score | 0.984 | **0.614** | 0.958 | **0.958** | 0.417 | **0.969** |
| Our Work | Loss | 0.981 | 0.573 | 0.918 | 0.921 | 0.188 | 0.951 |

EMBER-2017:

| Work | Perf. Metric | AUC | AUC ≤ 0.1% FPR | Accuracy | F1-Score | TPR: 0.1% FPR | TPR: 1% FPR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DeepMalNet [22] | - | - | - | 0.989 | 0.989 | - | - |
| MalConv [24] | - | - | - | 0.988 | 0.988 | - | - |
| Our Work | F1-Score | **0.999** | **0.916** | **0.992** | **0.992** | **0.956** | **0.997** |
in the number of trainable parameters in a given model.
#### 5.2.3 NAS and Tuning-parameters Phases Evaluation Metric
We use F1-score as an evaluation metric to select the models that have both high recall and high precision. After obtaining the final selected model from the NAS and hyper-parameter tuning phases, we train and evaluate the model using both F1-score and binary cross-entropy loss. The ROC curves in Figure 7 show that both metrics reach comparable results. This indicates that, besides F1-score, other metrics could also be explored, including accuracy, AUC, binary cross-entropy loss, etc.; these are left to future work.
#### 5.2.4 Search Space Bounding and Strategy
The search space values are one of the most important factors in the AutoML process. As shown in Table III, the width parameter (i.e., 1664) of the selected model for the EMBER-2018 dataset is less than the maximum value (i.e., 1920). However, the width of the SOREL-20M model was the maximum available value in the bounded search space. This indicates that an even wider model might perform better than the found model had the search space been larger. Selecting optimal search space values is still an open question. The search strategies chosen for the NAS and hyper-parameter tuning phases are random search and TPE, respectively. In this section, these strategies were chosen because of their simplicity. However, a more adequate strategy tailored to malware detection could potentially result in better selected AutoML models.
#### 5.2.5 Cost of Current Implementation
The SOREL-20M experiments took \(\approx\)30 minutes per epoch to run, with 16 experiments running simultaneously. The EMBER-2018 experiments took \(\approx\)5 minutes per epoch, with 24 experiments running simultaneously. Overall, the experiments took \(\approx\)5600 minutes for SOREL-20M and \(\approx\)1560 minutes for EMBER-2018. This is the time to run the NAS and hyper-parameter phases of the process, excluding the final model training. The time to train the final model is not reported, as its computational cost is insignificant compared to the previous two phases.
The current implementation of the proposed methodology uses a _multi-trial NAS_, where each model configuration selected from the NAS search space is trained to the specified epoch limit. Other implementations of multi-trial NAS try to optimize this process through early stopping and weight sharing [61]. Even though these methods may introduce instability into the process, they can reduce the computational cost.
It can be concluded that it is more expensive to use AutoML than to train hand-designed models. This cost trade-off should be taken into account as the proposed methodology becomes more refined. Future implementations may significantly reduce the time to complete the AutoML process. This can be achieved through more sophisticated NAS implementations and intelligent search strategies that can reduce the number of trials required or the number of epochs required per trial.
## 6 Automated Machine Learning for Online Malware Detection
This section will focus on using one-shot AutoML for malware detection in online cloud environments using Convolutional Neural Networks (CNNs).
### _Convolutional Neural Networks_
CNNs are a widely used type of deep learning model designed for image-like data. CNNs work differently than regular deep FFNNs, where the output of every node is passed into every node of the next layer. CNNs receive a three-dimensional input (channels, height, width). CNNs have filters, whose values are learned, that convolve across input channels to detect features such as edges. The primitive edges detected in earlier layers can be combined in later layers to learn more complex shapes. The core CNN layers fall into two categories: normal and reduction convolutional layers. Normal convolutional layers use filters that convolve across the input to produce data with more channels, keeping the same height and width. Reduction layers reduce the width and height of their input data to lower the number of trainable parameters in the next layer or cell. The output of the convolutional layers is passed through a pooling layer to reduce the spatial dimensions to 1 so that dense layers can produce the network output (prediction).
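The shape behavior of the two layer categories can be seen directly in a short PyTorch example; padding keeps a normal convolution's spatial size unchanged (the same interchangeability property exploited by one-shot NAS in Section 2.4.2), while a stride-2 reduction halves it:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 8, 16, 16)  # (batch, channels, height, width)

# Normal convolution: padding preserves the 16x16 spatial size, so different
# layer choices remain interchangeable at this position in the network.
normal = nn.Conv2d(8, 16, kernel_size=3, padding=1)
print(normal(x).shape)      # torch.Size([1, 16, 16, 16])

# Reduction convolution: stride 2 halves the height and width, shrinking the
# trainable-parameter cost of subsequent layers.
reduction = nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1)
print(reduction(x).shape)   # torch.Size([1, 16, 8, 8])
```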
CNNs are used in this section because process performance metric data can be grouped together in the form of an image, with rows denoting unique processes and columns denoting performance features of these processes.
### _Online Cloud Testbed_
Figure 8 illustrates the testbed utilized to generate the online malware dataset in an OpenStack5 instance hosted by the University of Texas at San Antonio. All virtual machines used to create this dataset had open and unrestricted internet access, as well as a public IP address.
Fig. 7: ROC using Loss vs F1 for Selection
Each virtual machine runs a fully up-to-date Ubuntu 18.04 instance. The experiments are controlled and the data gathered by a controller node within the OpenStack testbed. Each VM contains programs to collect data from its respective sources, which are collected by the controller node at the end of the experiment. Before each experiment, each target VM is reset to a clean state. Each virtual machine has 2 CPU cores, 4 gigabytes of RAM, and 40 gigabytes of disk space.
### _Application and Baseline Sets_
To best understand the behavior of malware on a full, online system, it may help to include malware data both when the machine is idle and when it is fully operational. For the purposes of this dataset, the fully operational server is an Apache web server hosting a WordPress application, with a MySQL database on the backend. To model real-world end users of the server, an ON/OFF Pareto distribution following NS2 parameters6 is utilized to mimic the distribution of client requests to the webserver. All malware was run with only user-level privileges.
Footnote 6: [http://www.isi.edu/nsnam/ns/doc/node509.html](http://www.isi.edu/nsnam/ns/doc/node509.html)
#### 6.3.1 Malware Source and Selection
The malware selected for this dataset came from a variety of sources, including VirusTotal7, MalShare8, VirusShare9, Linux-Malware-Samples10, and MalwareBazaar11. The gathered samples were tested for their ability to execute on the target hardware, since the mutable header fields of a sample may have been altered, in which case the malware may not run on the target hardware. Also, samples that led to corruption of the collected data during the experimentation process were removed from consideration after the fact. In total, 4077 malware samples were considered.
Footnote 7: [https://www.virustotal.com/](https://www.virustotal.com/)
Footnote 8: [https://www.malshare.com/](https://www.malshare.com/)
Footnote 9: [https://virusshare.com/](https://virusshare.com/)
#### 6.3.2 Data Collection
The experiment length for this dataset is 10 minutes - meaning data is collected for the entirety of 10 minutes. Halfway through a given experiment, the malware being tested is executed. Therefore, every experiment contains an equal amount of benign and malicious activity. This can be seen in Figure 9. There are multiple random benign SSH connections made to each target box throughout the experiment to mask the SSH connection used to spawn the malware execution.
Data is collected both continuously and at discrete intervals, depending on the source. Network data is collected continuously throughout the experiment, and collection starts 10 seconds early to allow for a delta to be taken, since the collection is a running total of network activity per process. Per-process data is collected at an interval of every 10 seconds, taking the instantaneous value of the monitored metrics. The collection over time for each data source is shown in Figure 10. Specifics of each type of data collected are discussed in the following subsections.
#### 6.3.3 Per-Processes Performance Data
Performance metrics are collected on a per-process basis. This data is collected every 10 seconds for the duration of the experiment. The python library psutil is used to collect this data.
Fig. 8: Cloud Testbed Setup
Fig. 10: Data Collection Phases
Fig. 9: Experiment Phases
Process IDs (PIDs) would, at first, seem like an easy way to identify a unique process throughout the experiment, but this doesn't hold true. A Linux kernel by default has a maximum PID of 32768, at which point PIDs begin getting re-used. Therefore, it is feasible in a highly active system that creates many new processes and closes old ones that a single PID may identify more than one process during the experiment run-time. Instead, a tuple of the entire command line (including arguments) of the process and a hash of the executable (if applicable) is collected. This is much less likely to collide with the identifier of another process.
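A sketch of one 10-second collection pass using psutil is shown below; the metric list is a small illustrative subset of the 26 features actually collected, and the (command line, executable hash) tuple implements the unique process key just described.

```python
import hashlib
import psutil

METRICS = ("cpu_percent", "memory_percent", "num_threads")  # illustrative subset

def collect_process_snapshot():
    """One collection pass: per-process metrics keyed by (cmdline, exe hash)."""
    snapshot = {}
    for proc in psutil.process_iter(["cmdline", "exe", *METRICS]):
        info = proc.info
        exe_hash = ""
        if info.get("exe"):
            try:
                with open(info["exe"], "rb") as f:
                    exe_hash = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # executable unreadable or already gone
        key = (" ".join(info.get("cmdline") or []), exe_hash)
        snapshot[key] = {m: info.get(m) for m in METRICS}
    return snapshot

print(len(collect_process_snapshot()), "processes sampled")
```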
#### 6.3.4 Per-Process Network Data
Many data collection tools do not allow for the collection of network traffic statistics on a per-process basis. However, the tool Nethogs12 allows for the grouping of bandwidth by process, and is used to collect network bandwidth data in the experiments. A python wrapper is used to interact with the Nethogs library for data collection.
Footnote 12: [https://github.com/raboof/nethogs](https://github.com/raboof/nethogs)
The network bandwidth data (bytes in/out) per process is recorded as a running total, therefore network data collection is started 10 seconds early, and the delta between each record is used in post-processing. In order to match network data to process data, the PID at a given timestamp in the network data can be compared to the records in the process data, which ultimately holds the primary key to denote a unique process.
#### 6.3.5 Combined Data and Representation
In order to include network data with per-process performance metrics, the data is combined. First, any record of the data collection agents is removed from the per-process performance data. The data that is left in the process data will be the basis by which network usage is searched in the network data. The discrete process data is grouped by collection time (every 10 seconds), and any matching network data between collection times is added to the latter process data collection record. That is, for a given unique process record \(p\) taken at collection time \(N\), any matching network data records for \(p\) between the previous collection time \(N-1\) and the current collection time \(N\) will be added to the process record of \(p\) at collection time \(N\). A sample feature table for a unique process is shown in Table VI.
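The merge just described might look like the pandas sketch below; the column names and numeric-seconds timestamps are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def merge_network_into_process(proc_df, net_df):
    """Attach per-interval Nethogs deltas to the matching process records.

    Assumes numeric `timestamp` seconds and columns `pid`, `bytes_in`,
    `bytes_out` in `net_df`, and `pid`, `collection_time` in `proc_df`.
    """
    net_df = net_df.sort_values("timestamp").copy()
    # cumulative running totals -> per-interval deltas for each PID
    net_df[["bytes_in", "bytes_out"]] = (
        net_df.groupby("pid")[["bytes_in", "bytes_out"]].diff().fillna(0.0)
    )
    # a record between collection times N-1 and N belongs to time N
    net_df["collection_time"] = np.ceil(net_df["timestamp"] / 10) * 10
    traffic = (net_df.groupby(["pid", "collection_time"])
                     [["bytes_in", "bytes_out"]].sum().reset_index())
    merged = proc_df.merge(traffic, on=["pid", "collection_time"], how="left")
    return merged.fillna({"bytes_in": 0.0, "bytes_out": 0.0})
```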
To feed the data to models, the data is represented as a single channel (grayscale) image. The columns of this image are the collected performance metrics and the rows are unique processes. The image dimensions are represented as (channels, rows, columns) and are selected to be (1, 64, 64). The first 26 columns and second 26 columns each contain performance metrics for the rows of processes. There are 12 blank columns on the right side of the image that are used as padding so the image can maintain a square shape. The image shape is selected to be square and a power of 2 to ensure there are no dimensionality problems when feeding the data into a variety of CNN models. A total of 128 unique processes can be included in an image, and the top 32 processes that occur very frequently throughout the data will always be placed in the same row and column throughout all samples. Refer to Figure 11.
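A sketch of packing per-process feature vectors into this (1, 64, 64) input follows; the two 26-column blocks and 12 zero-padding columns follow the text, while the row-assignment details are simplified and the pinning of the 32 most frequent processes is omitted for brevity.

```python
import numpy as np

def build_sample_image(process_rows, feature_count=26):
    """Pack up to 128 per-process feature vectors into a (1, 64, 64) image."""
    image = np.zeros((1, 64, 64), dtype=np.float32)
    for i, features in enumerate(process_rows[:128]):
        row, block = i % 64, i // 64          # processes 64..127 -> second block
        col = block * feature_count           # columns 0..25 or 26..51; 52..63 pad
        image[0, row, col:col + feature_count] = \
            np.asarray(features, dtype=np.float32)[:feature_count]
    return image

sample = build_sample_image([np.random.rand(26) for _ in range(100)])
print(sample.shape)  # (1, 64, 64)
```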
### _Methodology_
We used one-shot learning to find a performant CNN to detect malware from the performance metric data. The Darts [47] AutoML methodology is applied to search for an optimal CNN architecture from the training data. The code for this is adapted from the Microsoft NNI implementation of Darts. Darts finds normal and reduction convolutional cells by learning the layer connections between nodes in the repeated cells. The found architecture consists of a normal and a reduction convolutional cell used in a CNN with a specified number of layers (cells), nodes per cell, and channels per node. Increasing the number of nodes, and even more so increasing the number of channels per node, can create large memory overhead in the neural architecture search. Darts finds the connections between nodes in a cell by posing the probability of a connection being the best as a softmax, so the best connections can be found using gradient descent.
The choices for connections between nodes in a cell are _skip connect_ (identity for normal cells and factorized reduction for reduction cells), _dilated convolution_ (5x5 or 3x3), _separable convolution_ (3x3 or 5x5), _average pooling_ (3x3), or _max pooling_ (3x3). These were the choices in the original Darts paper and are also used here. A Stochastic Gradient Descent (SGD) optimizer and a learning rate scheduler are both used, with the same parameters as described in [47].
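A minimal sketch of this continuous relaxation, assuming PyTorch; the candidate operation modules are passed in, and only the edge-mixing logic is shown (the surrounding cell and network wiring follow the Darts paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One edge of a Darts cell: a softmax-weighted mixture of candidate ops.

    After the search, the op with the largest alpha is kept and the
    rest are discarded, yielding the discrete cell structure.
    """
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)  # e.g. skip, sep/dil conv, avg/max pool
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)  # connection probabilities
        return sum(wi * op(x) for wi, op in zip(w, self.ops))
```

Because the mixture weights are differentiable, the architecture parameters alpha can be optimized by gradient descent alongside (or alternated with) the ordinary weights.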
The general architecture for the entire network is shown in Figure 12. The CNN part of the model is either found with the Darts methodology or is a state-of-the-art CNN used for comparison.
Fig. 11: Input Data Shape
Fig. 12: General Architecture
### _Training and Results_
#### 6.5.1 Data Splits
Given 4077 total malware experiments per set (baseline/application), running for 10 minutes each, with data points every 10 seconds, 246,620 total samples are available in the baseline and application datasets. 80% of the experiments are used for training, 10% for validation, and 10% for testing. No experiment (malware sample) is contained in more than one set (training/validation/testing). Also, the baseline training set consists of the same malware as the application training set, and the same is true for the validation and test sets. A mean and standard deviation are calculated using the training set in the baseline and application sets, and are used to normalize each of the respective datasets.
#### 6.5.2 Neural Architecture Search
The Darts network for the baseline data is found with the meta network parameters set at 5 layers, 5 nodes per cell, and 5 channels per node. Due to a performance decrease when the same Darts parameters are applied to the application dataset, the Darts model for the application set is fixed at 7 layers, 5 nodes per cell, and 9 channels per node. These choices are somewhat arbitrary, but have a direct impact on memory usage during the NAS and on the complexity and predictive performance of the found architecture. The selections were made to allow the model to fit on a single GPU while achieving good predictive performance. The impact of these choices is discussed in [47].
A dropout rate of 0.30 is used in the neural architecture search, the same as used in all the rest of the model training. The Darts architecture search is run for 30 epochs (approximately 13 hours), using the training data. A batch size of 96 is used, the same as the original Darts paper. The found architecture is then trained using the same hyper parameters as the models it is compared to, described next.
#### 6.5.3 Training Parameters
In order to compare the performance of Darts to state-of-the-art CNNs, the following models are trained in the same way as the found Darts models: _Resnet18, Resnet50, Resnet101, Densenet121, Densenet169,_ and _Densenet201_. All the considered models share the same hyper parameters. The models are each trained for 100 epochs, use the Adam optimizer with a learning rate of 0.0005, learn on a batch size of 512, and have a dropout rate of 0.30. For each model, the epoch with the lowest validation loss is used on the test set to produce the final results for that model.
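A sketch of this shared training recipe, assuming PyTorch; `model`, `train_loader`, and `val_loader` are supplied by the caller:

```python
import copy
import torch

def train_and_select(model, train_loader, val_loader, device="cpu"):
    """Train for 100 epochs (Adam, lr=5e-4) and return the weights from
    the epoch with the lowest validation loss, as described above."""
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=5e-4)
    loss_fn = torch.nn.CrossEntropyLoss()
    best_loss, best_state = float("inf"), None
    for epoch in range(100):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x.to(device)), y.to(device)).item()
                      for x, y in val_loader) / len(val_loader)
        if val < best_loss:
            best_loss, best_state = val, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)  # best-validation epoch, used on the test set
    return model
```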
#### 6.5.4 Results
The best found normal and reduction convolutional cell structures in the baseline Darts model are shown in Figures 13 and 14, respectively. The two input nodes in each cell are the outputs of the previous two cells, or, in the case of the first cell, the duplicated output of the first layer of Darts. All the node outputs are concatenated to form the cell output.
The predictive results of the test set are shown in Table VII. This table shows the accuracy, precision, recall, F1-score, and Area Under the Curve (AUC) for each model. Additionally, to model a real-world scenario, a threshold is calculated from the validation set such that the validation false positive rate is 1.00%. This models a scenario where many false positives can become overwhelming for analysts to deal with, so an effort is made to minimize them by increasing the detection threshold of the malware detection model. When the threshold is increased, this can create a delay in a positive malware detection, in real time, through false negatives at the beginning of malware execution. This is shown in the table as _Delay @ Low FPR_, and is the average number of seconds elapsed before a successful detection after the malware injection point. Also shown in this section of the table are the True Positive Rate (_TPR_) and False Positive Rate (_FPR_) at the high detection threshold (low FPR) on the test set.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Metric** & **Value** & **Metric** & **Value** & **Metric** & **Value** \\ \hline num\_fds & 78 & cpu\_percent & 0.0 & cpu\_time\_user & 0.15 \\ cpu\_time\_system & 1.7 & cpu\_time\_children\_user & 7.64 & cpu\_time\_children\_system & 3.1 \\ context\_switches\_voluntary & 1390 & context\_switches\_ involuntary & 430 & num\_finds & 1 \\ memory\_info\_rs & 9113600 & memory\_info\_vms & 163598336 & memory\_info\_shared & 69759264 \\ memory\_info\_next & 1376256 & memory\_info\_lib & 0 & memory\_info\_data & 18956288 \\ memory\_info\_dirty & 0 & memory\_info\_pss & 2922496 & memory\_info\_swap & 0 \\ io\_read\_count & 53242 & io\_write\_count & 18782 & io\_read\_bytes & 320275456 \\ io\_write\_bytes & 113713152 & io\_read\_chars & 248760749 & io\_write\_chars & 152977520 \\ sent\_bytes & 0.0 & recv\_bytes & 0.0 & & \\ \hline \end{tabular}
\end{table} TABLE VI: Features Sample
Fig. 13: Found Normal Cell
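The low-FPR calibration and delay measurement described above can be sketched as follows (variable names are illustrative):

```python
import numpy as np

def threshold_at_fpr(benign_scores: np.ndarray, fpr: float = 0.01) -> float:
    """Pick the score threshold that benign validation samples exceed
    `fpr` of the time (here, 1.00%)."""
    return float(np.quantile(benign_scores, 1.0 - fpr))

def detection_delay(times: np.ndarray, scores: np.ndarray,
                    t_inject: float, thresh: float) -> float:
    """Seconds from malware injection to the first positive prediction;
    np.inf if the experiment is never flagged."""
    after = times >= t_inject
    hits = after & (scores >= thresh)
    return float(times[hits].min() - t_inject) if hits.any() else np.inf
```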
Both of the Darts models that were tried are shown in the Application section of Table VII. The first model has 5 layers, 5 nodes per cell, and 7 channels per node. The second Darts model has 7 layers, 5 nodes per cell, and 9 channels per node. The Darts models in both the baseline and application datasets perform better in almost every area than the state-of-the-art models. In the baseline set, Resnet18 and Resnet50 show better precision and recall than the Darts model, respectively. It can, however, be seen that the Darts model has a higher F1-score, signifying that the Darts model balances precision and recall on the test set better than either of the other models. The Darts model also has the lowest delay, and is under 10 seconds, meaning that most of the malware in each execution experiment was detected in the first time slice after injection. Additionally, many of the state-of-the-art models are shown to impose a significant delay in the detection of the malware, with some averaging over 2 time slices, or over 20 seconds, for a successful detection. The Darts models do not always have the lowest FPR at the high detection threshold, but all results in this column are shown to be close to the 1% target, validating the delay and TPR results.
_Accuracy, Precision, Recall,_ and _F1-Score_ are shown for each model in both sets in Figures 15 and 16. The average malicious prediction delay is also shown in Figures 17 and 18. The larger performance difference between the Darts models and state-of-the-art models on the application set versus the baseline set suggests the need for AutoML-derived models as data becomes more complex. Data from a server during real-world use is noisier and allows malware execution to better hide within this noise. The neural architectures specifically derived from this more complex data for this use case are more performant at identifying malware execution than generic architectures.
## 7 Future Work and Conclusion
### _Future Work_
This work describes the usefulness of AutoML for malware detection. Future work can expand on the ideas presented here with different search algorithms and malware data sources, as well as create tools that further automate the process, making these methodologies easier for non-experts to use.
#### 7.1.1 Recurrent Neural Networks
Recurrent Neural Networks have shown near perfect results with online per-process performance metric data [9]. The Darts methodology can also be used to derive recurrent cells, and this methodology should be examined on the dataset from Section 6 in the future.
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multicolumn{8}{c|}{**Baseline**} \\ & _Accuracy_ & _Precision_ & _Recall_ & _F1-Score_ & _AUC_ & _Delay @ Low FPR_ & _TPR @ Low FPR_ & _FPR @ Low FPR_ \\ \hline \hline Resnet18 & 0.97463 & **0.99387** & 0.95511 & 0.97411 & 0.99877 & 10.56373 s & 0.96321 & 0.00735 \\ \hline Resnet50 & 0.97913 & 0.96266 & **0.99689** & 0.97948 & 0.99892 & 9.60784 s & 0.96681 & 0.00759 \\ \hline Resnet101 & 0.98897 & 0.98586 & 0.98937 & 0.98897 & 0.99927 & 3.60249 s & 0.98814 & 0.01045 \\ \hline Densenet121 & 0.93588 & 0.99079 & 0.97621 & 0.98344 & 0.99896 & 7.81863 s & 0.97367 & 0.00816 \\ \hline Densenet169 & 0.98346 & 0.97972 & 0.98733 & 0.98351 & 0.99838 & 6.66667 s & 0.97490 & 0.01086 \\ \hline Densenet201 & 0.98570 & 0.99148 & 0.97981 & 0.98561 & 0.99907 & 5.90686 s & 0.98050 & **0.00898** \\ \hline Darts AutoML & **0.98917** & 0.98674 & 0.99166 & **0.98919** & **0.99954** & 3.08922 s & 0.98986 & 0.01094 \\ \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{8}{c|}{**Application**} \\ & _Accuracy_ & _Precision_ & _Recall_ & _F1-Score_ & _AUC_ & _Delay @ Low FPR_ & _TPR @ Low FPR_ & _FPR @ Low FPR_ \\ \hline \hline Resnet18 & 0.96246 & 0.94401 & 0.98709 & 0.96507 & 0.98417 & 21.33995 s & 0.92564 & 0.01945 \\ \hline Resnet50 & 0.96667 & 0.96196 & 0.97512 & 0.96850 & 0.99480 & 14.54094 s & 0.94084 & 0.01541 \\ \hline Resnet101 & 0.97953 & 0.97627 & 0.98500 & 0.98061 & 0.99728 & 10.19851 s & 0.96470 & 0.01446 \\ \hline Densenet121 & 0.97239 & 0.97116 & 0.97644 & 0.97379 & 0.99414 & 13.44913 s & 0.95592 & 0.01937 \\ \hline Densenet169 & 0.96164 & 0.94939 & 0.98534 & 0.96429 & 0.99248 & 28.31266 s & 0.90368 & 0.01386 \\ \hline Densenet201 & 0.96078 & 0.94631 & 0.98103 & 0.96336 & 0.95726 & 27.87808 s & 0.90671 & 0.01558 \\ \hline Darts AutoML (5 Layer) & 0.97672 & 0.97755 & 0.97815 & 0.97785 & 0.99659 & 13.15136 s & 0.95232 & **0.01171** \\ \hline Darts AutoML (7 Layer) & **0.98611** & **0.98520** & **0.98842** & **0.98681** & **0.99907** & **4.01985 s** & **0.98694** & 0.01532 \\ \hline \end{tabular}
\end{table} TABLE VII: Online Detection Results
Fig. 14: Found Reduction Cell
#### 7.1.2 Per-Layer Granularity
In our work in Section 3, once the width of the hidden layer is selected from the search space, it is fixed throughout the hidden layers of a model, leading to a rectangular shape of the hidden layers in the model. However, an equivalent or a more optimal model may contain variable-size layers with potentially fewer trainable parameters. A NAS process that allowed this level of granularity without an explosion of the NAS search space would prove valuable.
#### 7.1.3 Refinement of Meta-Hyper-Parameters
The set values of the meta-hyper-parameters have a significant effect on the efficacy of the AutoML process. Works such as [62] have developed methods to optimize a search strategy within the given confines of the meta-hyper-parameters in a data driven way. Finding the appropriate bounds of these parameters, specifically tailored to the malware detection domain, is yet to be explored.
The addition of auxiliary output heads to the NAS search space can also be considered a meta-hyper-parameter. One of the potential labels that can be given to this data may not be of use in a strictly detection setting, but may help derive a more performant model for the required objective through an auxiliary loss, as discussed in [25]. Automatic inclusion of these in the search space based on label data would be valuable in automatic model searching. If hyper-parameter tuning is also performed as part of the AutoML process, the tuning algorithm can also be considered a meta-hyper-parameter. Depending on the evaluation metric, or rather the intended performance (low false positive rate, high accuracy, etc.), the found optimal parameters may differ. Algorithms such as differential evolution can also allow for optimization over multiple objectives (evaluation metrics).
#### 7.1.4 Deep Learning Types and Ensemble Learning
In Section 3, we only used FFNNs for the SOREL-20M and EMBER-2018 datasets. An analysis of using various deep learning models can be very useful. Further, malware data can be extracted in many forms and types of data (e.g., time series and image data). Training a machine learning model on combined dynamic time series data and statically extracted tabular data can enhance the model's detection ability. However, designing such a model can be very difficult and, as such, AutoML is the perfect candidate for this task. An AutoML system that can intelligently conform to other sources of heterogeneous data is an area for future work.
In addition, AutoML can be utilized for ensemble learning. For instance, an AutoML system that can train multiple sub-models of different types and ensemble the predictions of the sub-models would allow for more robust application in practice. Works such as [63] ensemble many types of machine learning models, including FFNNs, to achieve better results. Extending this to other deep learning model types could prove beneficial for malware detection.
#### 7.1.5 User Friendly AutoML
Designing AutoML models can be easier than designing a deep learning model from scratch, but an even more automated deep learning approach would be helpful for those with knowledge of their own data, but not necessarily deep learning.
Fig. 16: Application Results
Fig. 17: Baseline Delay
Fig. 18: Application Delay
An AutoML system that could be instantiated with only training data inputs, type of data (vector, image, time-series), and primary and auxiliary labels would allow even broader access to malware detection solutions using deep learning. This framework would be able to automatically select a deep learning model type and use AutoML techniques to find a performant architecture to suit the data, making maximal use of any provided auxiliary information. Ideally, this would combine the methodologies and discussions from both Sections 3 and 6. It would perform all phases of the AutoML process efficiently, and be able to set applicable meta-hyper-parameters from details of the provided training data.
### _Conclusion_
In conclusion, we conjecture that Automated Machine Learning is a performant solution for detecting malware in both static settings and an online cloud IaaS. We found that AutoML-generated models can perform as well as or better than state-of-the-art models or models handcrafted by experts with domain knowledge in machine learning and malware. We explored the performance of AutoML on two popular static malware datasets in Section 3: SOREL-20M, used to demonstrate efficacy on large datasets, and EMBER-2018, a dataset that was specifically curated to hinder the performance of machine learning models; with results in Tables 4 and 5. Our work on static malware datasets showed the feasibility of using AutoML as a tool for malware detection while reducing the external complexity and expertise required to train DL models.
We further explored one-shot AutoML on a new online cloud IaaS malware dataset using CNNs. Our results show that AutoML approaches can be utilized by cloud service providers and malware detection vendors to find custom deep learning models for malware detection utilizing any of a variety of data sources. The online approach we have shown can derive a custom CNN that is more capable than state-of-the-art models and contains cells that are more complex than what can feasibly be derived by hand. Importantly, we demonstrated that the difference in detection ability between AutoML models and state-of-the-art models becomes greater as the noise of the input data becomes greater, and closer to the noise of real-world application. We also elaborate on future directions to mature AutoML research toward cybersecurity domains.
## Acknowledgements
This work is partially funded by the National Science Foundation grants 2025682, 2043324 at Tennessee Tech University, and 2150297 at North Carolina A&T State University.
|
2308.02002 | Outflowing helium from a mature mini-Neptune | We announce the detection of escaping helium from TOI 2134b, a mini-Neptune a
few Gyr old. The average in-transit absorption spectrum shows a peak of 0.37 +-
0.05% and an equivalent width of $W_{\rm avg}=3.3 \pm 0.3$ m$\r{A}$. Among all
planets with helium detections, TOI 2134b is the only mature mini-Neptune, has
the smallest helium signal, and experiences the lowest XUV flux. Putting TOI
2134b in the context of all other helium detections, we report the detection of
a strong (p=3.0e-5) and theoretically expected correlation between $F_{\rm
XUV}/\rho_{\rm XUV}$ (proportional to the energy-limited mass loss rate) and
$R_* W_{\rm avg}$ (roughly proportional to the observationally inferred mass
loss rate). Here, $W_{\rm avg}$ is the equivalent width of the helium
absorption and $\rho_{\rm XUV}$ is the density of the planet within the XUV
photosphere, but the correlation is similarly strong if we use the optical
photosphere. TOI 2134b anchors the relation, having the lowest value on both
axes. We encourage further observations to fill in missing regions of this
parameter space and improve estimates of $F_{\rm XUV}$. | Michael Zhang, Fei Dai, Jacob L. Bean, Heather A. Knutson, Federica Rescigno | 2023-08-03T19:34:28Z | http://arxiv.org/abs/2308.02002v1 | # Outflowing helium from a mature mini-Neptune
###### Abstract
We announce the detection of escaping helium from TOI 2134b, a mini-Neptune a few Gyr old. The average in-transit absorption spectrum shows a peak of \(0.37\pm 0.05\%\) and an equivalent width of \(W_{\rm avg}=3.3\pm 0.3\) mA. Among all planets with helium detections, TOI 2134b is the only mature mini-Neptune, has the smallest helium signal, and experiences the lowest XUV flux. Putting TOI 2134b in the context of all other helium detections, we report the detection of a strong (p=3.0\(\times 10^{-5}\)) and theoretically expected correlation between \(F_{\rm XUV}/\rho_{\rm XUV}\) (proportional to the energy-limited mass loss rate) and \(R_{*}W_{\rm avg}\) (roughly proportional to the observationally inferred mass loss rate). Here, \(W_{\rm avg}\) is the equivalent width of the helium absorption and \(\rho_{\rm XUV}\) is the density of the planet within the XUV photosphere, but the correlation is similarly strong if we use the optical photosphere. TOI 2134b anchors the relation, having the lowest value on both axes. We encourage further observations to fill in missing regions of this parameter space and improve estimates of \(F_{\rm XUV}\).
Mini Neptunes (1063), Exoplanet atmospheres (487), Exoplanet atmospheric evolution (2308)
Michael Zhang, Fei Dai, Jacob L. Bean, Heather A. Knutson, Federica Rescigno
## 1 Introduction
Atmospheric escape fundamentally shapes the properties of exoplanets. It likely carves the radius gap that separates the small, dense super-Earths from the larger and puffier mini-Neptunes (Fulton et al., 2017; Fulton and Petigura, 2018), either through photoevaporation by stellar XUV (Lopez and Fortney, 2013; Owen and Wu, 2017; Mills and Mazeh, 2017), or through core-powered mass loss driven by the planet's own cooling luminosity (Ginzburg et al., 2018; Gupta and Schlichting, 2019). Among terrestrial planets, atmospheric escape has momentous implications for habitability: planets that have lost their atmospheres are unlikely to have liquid water on their surfaces.
The first escaping atmosphere was detected in Ly\(\alpha\) absorption 20 years ago (Vidal-Madjar et al., 2003), but only a handful of other Ly\(\alpha\) detections have followed. The year 2018 saw the first successful use of an alternate mass loss probe: the 1083 nm transition between the metastable triplet ground state and a triplet excited state (Spake et al., 2018). Only a few helium atoms per million are in the triplet ground state in the best of circumstances, and not all stellar types are equally effective at populating this state (Oklopcic, 2019). Nevertheless, the accessibility of the line from the ground, the copious stellar photons at this wavelength, and the lack of interstellar extinction more than make up for these downsides, and more than a dozen outflows have been definitively detected in this line (Dos Santos, 2022).
Recently, we detected the first helium outflow from a young mini-Neptune (Zhang et al., 2022), followed by detections from three other young mini-Neptunes (Zhang et al., 2023). The widths of the helium absorption signals suggest a photoevaporative outflow while disfavoring the core-powered mass loss scenario, and
the equivalent widths imply mass loss rates sufficient to strip a substantial fraction of the atmosphere on Gyr timescales. These observations are important for testing mass loss models, which suffer from large theoretical uncertainties. For example, mini-Neptunes may have very high metallicity atmospheres (Kempton et al., 2023), which have lower hydrogen/helium abundance, slower outflows (because of the higher mean molecular weight), and higher temperatures, suppressing the outflow (Zhang et al., 2022). Planetary magnetic fields can affect mass loss in complex ways (e.g. Schreyer et al., 2023; Ramstad and Barabash, 2021). Even with strong mass loss, it is possible for the atmosphere to be replenished by outgassing, for example from hydrogen and water dissolved in magma (Chachan and Stevenson, 2018; Kite et al., 2020). The large uncertainties in all of these processes make atmospheric escape difficult to model. It is therefore important to catch mini-Neptunes of different ages and irradiation levels in the process of losing their envelopes, in order to have observational data to nail down theoretical models.
In this paper, we present the first detection of escaping helium from TOI 2134b, a warm mini-Neptune orbiting a nearby (23 pc) X-ray-quiet K dwarf (Rescigno et al., 2023). Although we initially targeted it as part of our program to observe young mini-Neptunes (Zhang et al., 2023), additional data have shown that it is a mature planet (\(\sim\)2 Gyr). For convenience, Table 1 presents relevant stellar and planetary properties. We describe the observations and reduction in Section 2 and the helium outflow's properties in Section 3, before comparing TOI 2134b to other helium detections in Section 4 and concluding in Section 5.
## 2 Observations and Data Reduction
Over the past two years, we have been carrying out a survey of escaping helium from young (\(\lesssim\)1 Gyr) mini-Neptunes orbiting nearby K dwarfs. The survey, described in Zhang et al. (2023), uses Keck's high-resolution NIRSPEC spectrograph to detect 1083 nm helium absorption, the TESS light curve to measure the star's rotation period, and XMM-Newton data to measure the star's X-ray spectrum. The survey has detected helium from all of its first four targets. TOI 2134 was the fifth target to be observed as part of this survey.
We observed TOI 2134b with Keck/NIRSPEC from 2022-06-18 09:44 UTC to 14:35 UTC, consisting of 1.3 h of pre-ingress baseline, the 3.0 h transit, and 0.5 h of post-egress baseline. As usual, we used the 12 x 0.432\({}^{\prime\prime}\) slit, giving us a spectral resolution of 32,000. Also as usual, we took 60 s exposures in an ABBA nod pattern. TOI 2134 is brighter than the four targets in Zhang et al. (2023), giving us a typical SNR of 250 per spectral pixel in the middle of the helium line. From 1.8 to 1.5 h before mid-transit, the SNR plummeted to 50-100, before recovering shortly before ingress. Due to human error, we took no data for the 15 minutes centered on 1.06 h before mid-transit, and for the 4 minutes centered on 0.66 h after mid-transit. These gaps add up to only 11% of the transit duration, and do not significantly affect the results.
The Keck/NIRSPEC data were reduced using the pipeline and methodology described in Zhang et al. (2022) and Zhang et al. (2023). Briefly, we generate a median dark and a median flat; produce A-B difference images and divide them by the flat; use optimal extraction to extract the spectra; use a combined stellar and telluric model to obtain the wavelength solution and continuum for each spectrum; and use molecfit(Smette et al., 2015) to correct for telluric absorption. For our observations, there is no significant telluric absorption that overlaps with the helium line. Telluric absorption only begins to pick up redward of 10834.5A in the stellar rest frame, corresponding to a star-relative redshift of 33 km/s.
In addition to the Keck/NIRSPEC data, we obtained an 18.2 ks XMM-Newton observation of the star on 2022-09-12 (observation ID 0903000301, PI: Michael Zhang), about three months after the helium observations. We analyzed the EPIC X-ray data using SAS and fit a single-temperature thin plasma model (APEC) using XSPEC, following the same methodology as Zhang et al. (2022). Simultaneously with the X-ray observations, XMM-Newton's Optical Monitor measures the mid-ultraviolet (MUV) flux in two bandpasses: UVW2 (212 nm, width 50 nm) and UVM2 (231 nm, width 48 nm). MUV ionizes metastable helium, making this flux an important input to models of helium absorption.
\begin{table}
\begin{tabular}{l r} \hline Property & Value \\ \hline \(R_{*}(R_{\odot})\) & \(0.709\pm 0.017\) \\ \(M_{*}(M_{\odot})\) & \(0.744\pm 0.027\) \\ \(T_{\rm eff}(K)\) & \(4580\pm 54\) \\ \(\log(g)\) & \(4.8\pm 0.3\) \\ [Fe/H] & \(0.12\pm 0.02\) \\ \(P(d)\) & \(9.2292004\pm 6.3\times 10^{-6}\) \\ \(R_{p}/R_{*}\) & \(0.03475\pm 0.00034\) \\ \(R_{p}(R_{\earth})\) & \(2.69\pm 0.16\) \\ \(a/R_{*}\) & \(23.66\pm 0.52\) \\ \(a\)(AU) & \(0.078\pm 0.0009\) \\ \(b\) & \(0.20\pm 0.12\) \\ \(e\) & \(0.06^{+0.04}_{-0.04}\) \\ \(T_{eq}\) & \(666\pm 8\) \\ \(M_{p}(M_{\earth})\) & \(9.13^{+0.78}_{-0.76}\) \\ \(D(pc)\) & \(22.655\pm 0.007\) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of system properties from Rescigno et al. (2023)
Finally, we examined publicly available photometry from the Transiting Exoplanet Survey Satellite (TESS). TOI 2134 was observed in 5 sectors: 26, 40, 52, 53, and 54. By coincidence, TESS was observing the star simultaneously with our Keck/NIRSPEC helium observations. We see no flares or other evidence of stellar variability in the TESS data during the transit or within several hours of it, with the possible exception of a \(\sim\)300 ppm drop in flux 3.5 h after mid-transit. However, this drop would have happened well after the end of our NIRSPEC observations. Any flare 0.1% or bigger would have been easily visible, so we rule these out with high confidence.
## 3 Results
### Stellar age
We originally included TOI 2134b in our sample because its rotational period and X-ray luminosity suggest an age \(\lesssim\)1 Gyr. However, isochrone fitting by Rescigno et al. (2023) reveals a much older age of 3.8\({}^{+5.5}_{-2.7}\) Gyr.
The discrepancy arises because the Lomb-Scargle periodogram of the TESS light curve for sectors 26 and 40 show a broad peak from 10-14 days, while the Second ROSAT All-Sky Survey (2RXS) reports an X-ray flux of 1\(\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) (power law fit) or 7\(\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) (blackbody fit), based on 21.2 \(\pm\) 7.5 background-corrected counts. Using the relations found by Mamajek & Hillenbrand (2008), the rotation period implies an age of 350-640 Myr, while the X-ray flux implies an age of 1.1-1.4 Gyr. Since we began the survey, TESS sectors 52, 53, and 54 have become available. Unfortunately, adding these data weakens the Lomb-Scargle peak, and even though visual inspection of the light curve reveals an unmistakable variability with a RMS of 0.15%, it does not reveal any obvious rotation period. Rescigno et al. (2023) likewise did not securely detect the rotation signal in WASP data spanning \(\sim\)850 days.
The XMM-Newton observations we obtained three months after our helium observations reveal an unexpectedly low X-ray flux of 2.3\(\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) in the 5-100 A bandpass, or 0.5 erg s\({}^{-1}\) cm\({}^{-2}\) at 1 AU. This is 3-4 times lower than the flux measured by ROSAT in August 1990, suggesting that the star has become significantly quieter in the intervening 32 years, and/or that ROSAT detected an upward statistical fluctuation. Combining this flux with the Mamajek & Hillenbrand (2008) relation between age and X-ray flux, we obtain a significantly older age of 2.8 Gyr.
Finally, we estimated the age via the star's \(\log R^{\prime}_{\rm HK}\) of \(-4.83\pm 0.45\) (Rescigno et al., 2023), which translates to an age of 3.5 Gyr using the age-\(\log R^{\prime}_{\rm HK}\) relation in Mamajek & Hillenbrand (2008). Taking the geometric mean of all estimates, we arrive at an age of \(2.3\pm 1.2\) Gyr. The advanced age is confirmed by the low \(v\sin i\) of \(0.78\pm 0.09\) km/s, and by the isochrone-derived age (Rescigno et al., 2023).
### Helium absorption
The excess absorption as a function of wavelength and time is shown in Figure 1. In previous papers, we masked strong stellar and telluric lines in plots to avoid confusing the reader, but here we leave all wavelengths unmasked in order to show the variability in these lines. The variability around 10836 A is due to the one and only strong telluric absorption region within this bandpass. molecfit mostly succeeds in correcting the 8% absorption, but leaves behind residuals of \(\sim\)0.5%. There are two strong photospheric lines: Si I 10830 A and Na I 10838 A. Both stellar lines show an abrupt increase in flux around 0.65 h before mid-transit, but the helium line appears unaffected. This increase is uncorrelated with the planet: it occurs well after ingress, when helium absorption is already evident, and unlike the helium absorption, it does not reverse at egress. This percent-level variability in equivalent width was previously seen for TOI 1430b, but not for 560b or 2076b (1683.01's SNR was too low for a proper comparison); however, variability in the Si line shape was also seen for 560b and 2076b, and variability in the Na line shape was visually evident for 560b. We suspect the variability may be related to changes in the instrumental line spread profile, but cannot rule out stellar variability. Although we have not seen increased systematics in the helium line that correlate with variability in the Si and Na lines, further work will be necessary to explain the cause of the variability and estimate its potential impact on the helium results.
Figure 2 (left) shows the average in-transit excess absorption spectrum. We integrate the part of the excess absorption spectrum between 10831 and 10835 A to obtain the equivalent width, \(W_{\rm avg}=3.3\pm 0.3\) mA. In the absorption spectrum, the peak value is \(0.37\pm 0.05\)% and occurs at a redshift of \(7\pm 3\) km/s. This redshift can also be seen in the 2D excess absorption plot (Figure 1). Radial velocity data (Rescigno et al., 2023) shows that the planet's radial velocity at mid-transit is -\(Ke\cos\omega=1.3^{+2.5}_{-1.3}\) km/s, consistent with the measured redshift to within 1.5\(\sigma\). The absorption spectrum also shows a secondary peak (from the third line of the helium triplet) with a \(10\pm 3\) km/s redshift and a peak absorption of \(0.13\pm 0.05\%\). We consider the secondary peak detection to be likely but not conclusive because there are \(\sim 0.05\%\) peaks and valleys in the spectrum due to correlated noise, including a \(0.09\%\) bump redward of the main helium peak. If the secondary peak is real, the ratio between the two peaks would be \(0.35\pm 0.13\), in between the optically thin limit of \(0.125\) and the perfectly optically thick limit of \(1\). For comparison, of the four planets in Zhang et al. (2023), only TOI 2076b had a peak ratio inconsistent with an optically thin outflow. If the outflow is not entirely optically thin, the secondary peak would trace gas closer to the planet, which could explain part of the (statistically insignificant) difference in redshift between the two peaks.
Figure 2 (right) shows the light curve of a \(1.5\) A region centered on the helium line. The pre-transit flux appears to fluctuate more than the theoretical error bars would suggest, probably due to a combination of systematics, weather, and stellar activity, but shows no trend. The flux drops after ingress, but does not reach its minimum until almost \(1\) h after mid-transit. After egress, the flux quickly recovers, though not quite to its pre-transit value, which could be due to either stellar activity or a long tail of outflowing gas. The light curve asymmetry is also reflected in the 2D excess absorption plot. For example, the region between ingress and the data gap shows less absorption than the equivalent region right before egress.
Stellar variability in the helium line is poorly understood. To help assess the extent to which stellar variability might have affected our observations, we collected \(1.7\) h of out-of-transit monitoring data on July 4, 2023. This data shows limited stellar variability, with the band-integrated light curve exhibiting a standard deviation of \(0.063\%\) and no secular trend (Appendix C).
To estimate the mass loss rate implied by our observations, we use the same two methods as Zhang et al. (2023): an order-of-magnitude (OOM) method, and a Parker wind method that uses the 1D spherically symmetric isothermal model of Oklopcic (2019). We do not expect either method to be accurate to more than a factor of a few. The order-of-magnitude method assumes the outflow is optically thin, that the outflow speed is always the sound speed \(c_{s}\), and that \(f=10^{-6}\) of the helium atoms are in the metastable ground state. Under these assumptions, the mass loss rate can be derived from the equivalent width of the helium absorption.
Figure 1: Percent excess absorption from TOI 2134b as a function of time and wavelength (stellar rest frame). The dashed cyan line indicates the beginning of the white light transit, while the solid cyan line indicates the end. The red lines show the wavelengths of planetary helium absorption. The white bars indicate gaps in the data. Note the variability in the stellar Si I line at 10830 Å and Na I line at 10838 Å, as well as the uncorrected telluric variability around 10836 Å. There is no stellar photospheric line or telluric line near the 10833 Å helium absorption.
(If the outflow is not optically thin, the mass loss rate would be underestimated, potentially by a factor of a few.) To recap the derivation in Zhang et al. (2023), the equivalent width gives the number of metastable helium atoms currently in front of the star; dividing by the replacement timescale (roughly, the stellar radius divided by the outflow speed) gives the mass loss rate of metastable helium, while further dividing by the mass fraction of metastable helium gives the total mass loss rate of the whole outflow. The result is:
\[\dot{m}_{\rm obs}=\frac{R_{*}m_{e}m_{H}c_{s}c^{2}W_{\rm avg}}{0.25fe^{2} \lambda_{0}^{2}\sum g_{l}f_{l}}, \tag{1}\]
where \(W_{\rm avg}\) is the equivalent width, we assume \(c_{s}\)=10 km/s, and the 0.25 comes from the assumption that 25% of the mass of the outflow is in helium atoms or ions. \(\sum g_{l}f_{l}\), the sum of the product of the degeneracy and oscillator strength, is either 1.44 (if summed over the two inseparable lines) or 1.62 (if summed over all three lines). We adopt 1.62, but the 12% difference is negligible compared to the uncertainties in the other quantities.
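As a concrete illustration, Equation 1 can be transcribed directly into code. This is a sketch in CGS units, with \(f=10^{-6}\) and \(c_{s}=10\) km/s as assumed above; any order-unity ambiguity in the degeneracy convention is subsumed by the order-of-magnitude nature of the estimate:

```python
# Physical constants in CGS
m_e, m_H = 9.109e-28, 1.673e-24   # electron, hydrogen mass (g)
e = 4.803e-10                      # electron charge (statC)
c = 2.998e10                       # speed of light (cm/s)
R_sun = 6.957e10                   # solar radius (cm)

def mdot_oom(W_avg_mA: float, R_star_Rsun: float,
             f: float = 1e-6, c_s: float = 1e6) -> float:
    """Order-of-magnitude mass loss rate (g/s) from the helium
    equivalent width, per Equation 1."""
    W = W_avg_mA * 1e-11           # mA -> cm
    lam0 = 10833e-8                # line wavelength (cm)
    sum_gf = 1.62                  # sum of g_l * f_l over all three lines
    return (R_star_Rsun * R_sun * m_e * m_H * c_s * c**2 * W) / \
           (0.25 * f * e**2 * lam0**2 * sum_gf)
```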
The Parker wind method requires a stellar spectrum. We construct one using the same methodology as in Zhang et al. (2023), obtaining a spectrum (Figure 5) with \(F_{X}=0.45\), \(F_{\rm EUV_{He}}=2.4\) (100-504 A), \(F_{\rm EUV}=3.5\) (100-912 A), and \(F_{\rm MUV}=7.8\) (1230-2588 A), all reported at 1 AU in units of erg s\({}^{-1}\) cm\({}^{-2}\). The combined X-ray and EUV flux at the planet, 650 erg s\({}^{-1}\) cm\({}^{-2}\), is several times lower than the 5000-12,000 experienced by the four young mini-Neptunes in Zhang et al. (2023). To our knowledge, it is the lowest XUV flux of any planet with an escaping helium detection (see catalog in Appendix), the second lowest being HD 209458b's 1000 erg s\({}^{-1}\) cm\({}^{-2}\) (Alonso-Floriano et al., 2019). It is important to note that all EUV fluxes are reconstructed because no current telescope can observe even the EUV from the nearest stars, and the EUV reconstruction is uncertain by an order of magnitude (France et al., 2022). After obtaining the stellar spectrum, we run the Parker wind model of Oklopcic (2019) in a nested sampling framework with the following parameters: the log of the mass loss rate, the temperature, and a blueshift. Table 2 shows the inferred parameters. As with the planets reported in Zhang et al. (2023), the width of the line implies an outflow with a temperature of several thousand K, consistent with photoevaporation but not with core-powered mass loss. This conclusion assumes that no mechanism significantly broadens the absorption beyond the width predicted by a Parker wind.
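A schematic of this retrieval framework is sketched below. Here `wavelengths`, `data`, and `err` stand for the measured excess absorption spectrum, `parker_absorption` is a hypothetical wrapper around the Oklopcic (2019) forward model, and the prior bounds are illustrative rather than the exact ones used:

```python
import numpy as np
import dynesty

def prior_transform(u):
    log_mdot = 8.0 + 3.0 * u[0]   # log10 mass loss rate: 10^8..10^11 g/s (assumed)
    T = 2000.0 + 8000.0 * u[1]    # outflow temperature: 2000..10000 K (assumed)
    shift = -20.0 + 40.0 * u[2]   # line-of-sight velocity shift: +/-20 km/s
    return np.array([log_mdot, T, shift])

def log_likelihood(theta):
    model = parker_absorption(wavelengths, *theta)  # hypothetical forward model
    return -0.5 * np.sum(((data - model) / err) ** 2)

sampler = dynesty.NestedSampler(log_likelihood, prior_transform, ndim=3)
sampler.run_nested()
results = sampler.results  # posterior samples and evidence
```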
\begin{table}
\begin{tabular}{c c} \hline Peak absorption & \(0.38\pm 0.05\)\% \\ Redshift* & \(6.7\pm 2.7\) km/s \\ Peak ratio & \(0.34\pm 0.13\) \\ W\({}_{\rm avg}\) & \(3.3\pm 0.3\) mÅ \\ \(\dot{m}_{\rm OOM}\) & 5.5\(\times 10^{9}\) g/s \\ \(\dot{m}_{\rm Parker}\) & \(5.5\pm 0.9\)\(\times 10^{9}\) g/s \\ \(T_{\rm Parker}\) & \(4640\pm 230\) K \\ Redshift\({}_{\rm Parker}\)* & \(4.7\pm 0.5\) km/s \\ \hline \end{tabular}
* 1.3\({}^{+2.4}_{-1.3}\) km/s of this redshift is due to the planet’s eccentricity
\end{table}
Table 2: Outflow properties
Figure 2: Left: average in-transit excess absorption in the planet rest frame. The deep silicon line at 10830 Å poses difficulties for spectral extraction and may be inherently variable, causing anomalies in the 0.6 Å surrounding that wavelength (gray region). Right: light curve of the helium line, defined as the region within 0.75 Å of 10833.3 Å in the stellar rest frame. This bandpass only captures the stronger peak. The two vertical lines mark the beginning and end of white light transit.
Largely coincidentally, the OOM method and the Parker wind method both estimate a mass loss rate of 5.5\(\times 10^{9}\) g/s. Assuming an envelope fraction of 1% and no change in the mass loss rate, the envelope lifetime would be 3.1 Gyr. The decrease in stellar high-energy flux with age and the shrinking of the planet as its envelope is stripped should both decrease the mass loss rate in the future, while increasing it in the past. Nevertheless, it is reassuring that as with the other four planets, the lifetimes we infer are comparable to the age of the planet.
The mass loss rate can be compared with the energy-limited mass loss rate, a theoretical maximum which assumes all the energy from the incoming XUV flux goes into lifting gas out of the gravity well:
\[\dot{m}_{\rm theory} =\pi\frac{R_{\rm XUV}^{3}F_{\rm XUV}}{GM_{p}} \tag{2}\] \[=\frac{3F_{\rm XUV}}{4G\rho_{\rm XUV}} \tag{3}\]
\(R_{\rm XUV}\) is the XUV photosphere radius. To roughly estimate this radius, we loosely follow Wang and Dai (2018) in assuming that the EUV photosphere is at \(\rho=10^{-13}\) g cm\({}^{-3}\) and that the atmosphere is isothermal between the optical photosphere and the XUV photosphere. With these assumptions, we calculate \(R_{\rm EUV}=1.25R_{p}\) and \(\dot{m}=3.9\times 10^{9}\) g/s. This theoretical maximum is \(\sim\)2x smaller than the order-of-magnitude "observed" mass loss rate. Given the very large error bars on all quantities, we consider the observations consistent with an efficient outflow. A highly efficient outflow is expected for a planet of such low gravitational potential and low XUV flux. For example, Caldiroli et al. (2022) ran a suite of 1D hydrodynamic simulations using their ATES code and found an efficiency of 96% for a hypothetical Neptune-like planet with similar XUV irradiation and gravitational potential to TOI 2134b (their Figure 2).
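A sketch of this estimate under the stated isothermal assumption; the photospheric density `rho_phot` and mean molecular weight `mu` are illustrative placeholders, not values quoted in the text:

```python
import numpy as np

G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24  # CGS

def r_xuv(R_p, M_p, T_eq, rho_phot, mu=2.3, rho_xuv=1e-13):
    """XUV photosphere radius from an isothermal, hydrostatic profile:
    ln(rho/rho_phot) = (G M mu m_H / k T) * (1/r - 1/R_p)."""
    scale = G * M_p * mu * m_H / (k_B * T_eq)  # has units of length
    return 1.0 / (1.0 / R_p + np.log(rho_xuv / rho_phot) / scale)

def mdot_energy_limited(R_xuv, F_xuv, M_p):
    """Equation 2: the theoretical maximum (100% efficiency) rate."""
    return np.pi * R_xuv**3 * F_xuv / (G * M_p)
```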
## 4 Discussion
We have two equations for the mass loss rate: the "order-of-magnitude" expression proportional to equivalent width (Equation 1) and the energy-limited formula (Equation 2). If mass loss is close to energy-limited, the two should be comparable. If we drop the constants and allow for some dependence of the mass loss efficiency \(\eta\) on \(F_{\rm XUV}\) and \(\rho_{\rm XUV}\), we can make the weaker prediction that \(R_{*}W_{\rm avg}\) must be positively correlated with \(F_{\rm XUV}/\rho_{\rm XUV}\) among planets with helium detections. We note that Vissapragada et al. (2022) plotted the mass loss rate inferred from the Parker wind model against \(F_{\rm XUV}/\rho\) for their 7 helium survey targets, but the large error bars and limited sample size prevented the detection of any trend.
To test our prediction, we gathered \(F_{\rm XUV}/\rho_{\rm XUV}\) and \(R_{*}W_{\rm avg}\) for all published helium detections. As we describe in the Appendix, this was challenging to do in an accurate and consistent way, but we made an effort to maximize consistency without reanalyzing each detection. Figure 3 plots the data we collected, and shows a strong positive correlation between \(\log(\dot{m}_{\rm theory})\) and \(\log(\dot{m}_{\rm obs})\). We fit a power law to the data (\(\dot{m}_{\rm obs}=\eta_{0}(\dot{m}_{\rm theory})^{P}\)) by taking the log of both sides and using scipy's Orthogonal Distance Regression (ODR), which takes into account errors in both the independent and dependent variables. ODR results do not change if all errors are inflated or deflated by the same factor. We assume equal errors in log space for all data points and both variables-a concession to the fact that the XUV flux, mass loss efficiency, and conversion factor between equivalent width and mass loss rate all have large uncertainties of at least a factor of a few, and that these uncertainties swamp the observational error. With these approximations, we obtain \(\eta_{0}=0.30\pm 0.06\) and \(P=0.50\pm 0.08\); applying Student's t-test, we find that a zero slope is ruled out with p=2.3\(\times 10^{-5}\). Using the Spearman test, which only tests for monotonicity and does not assume a linear relationship, the trend remains significant (p=1.7\(\times 10^{-4}\)). The trend also remains significant even after removing the two extreme points, TOI 2134b and HAT-P-32b (ODR p=0.0036, Spearman p=0.005).
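A sketch of this fit using scipy's ODR; `mdot_theory` and `mdot_obs` are assumed to be arrays of the two rates for the detected planets:

```python
import numpy as np
from scipy import odr
from scipy.stats import t as t_dist

def fit_power_law(mdot_theory, mdot_obs):
    """Fit log10(mdot_obs) = log10(eta0) + P * log10(mdot_theory) with
    orthogonal distance regression, assuming equal errors on both axes,
    and test the null hypothesis of zero slope with Student's t-test."""
    x, y = np.log10(mdot_theory), np.log10(mdot_obs)
    model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
    data = odr.RealData(x, y, sx=np.ones_like(x), sy=np.ones_like(y))
    out = odr.ODR(data, model, beta0=[0.0, 1.0]).run()
    (b0, P), (sb0, sP) = out.beta, out.sd_beta
    p_val = 2 * t_dist.sf(abs(P / sP), df=len(x) - 2)
    return 10.0 ** b0, P, sP, p_val  # eta0, power, its error, p-value
```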
Another notable fact about Figure 3 is that the efficiency appears to decrease with increasing \(F_{\rm XUV}/\rho_{\rm XUV}\). This would not be surprising, as many previous works have found that efficiency is lower at high irradiation levels (e.g. Caldiroli et al., 2022; Zhang et al., 2022), a consequence of most of the energy being radiated away by recombination in this "recombination-limited" regime (Lampon et al., 2021). However, the sub-linearity of the relation (\(P<1\)) could also be an artifact of our assumption that \(10^{-6}\) of all helium atoms are in the triplet state. HAT-P-32b, a planet orbiting a relatively hot star (\(6207\pm 88\) K; Bonomo et al., 2017), should have a much lower triplet helium fraction than the K dwarf planet TOI 2134b. We attempt a crude correction for the dependence of the triplet helium fraction on stellar temperature and semimajor axis, finding that even though it does result in a linear relationship between \(R_{*}W_{avg}\) and \(F_{XUV}/\rho_{XUV}\), it also weakens the correlation. In the Appendix, we describe this variant of the correlation in more detail, in addition to describing attempts to account for the star's gravitational potential, differing mass loss efficiencies, and different estimates of planet density. We find that the relation is statistically significant regardless of these choices.
Finally, it is informative to examine the non-detections in Figure 3. We excluded detections that were claimed to be tentative (notably the tentative GJ 1214b detection of Orell-Miquel et al., 2022), and included only the most sensitive non-detection where multiple exist. For spectroscopic non-detections where only an upper limit on the percent excess absorption is reported, we multiply by an effective width of 1 A (a typical value for the spectroscopic detections) to obtain an upper limit on the equivalent width. Non-detections of planets smaller than 2 \(R_{\oplus}\) (55 Cnc e, TRAPPIST-1 b/e/f) are unsurprising as these planets likely have no H/He atmosphere. Most non-detections are unconstraining, as they fall above the trend-line (e.g. the mini Neptunes Kepler-68b and HD 63433c). The non-detections of HD 63433b and HD 97658b are weakly constraining, and multiple papers have been written on the non-detection of WASP-80b (Fossati et al., 2022; Vissapragada et al., 2022; Fossati et al., 2023). The high XUV flux calculated by Fossati et al. (2022) would make the non-detection surprising, while the low XUV flux calculated by Fossati et al. (2023) would not. However, some non-detections seem highly constraining, including the Kasper et al. (2020) non-detection of GJ 1214b (also see the non-detection of Spake et al., 2022), which falls 1.1 dex below the trend line, and that of GJ 9827d, which falls 0.5 dex below even the \(\eta=0.1\) line. Recent JWST/MIRI observations of GJ 1214b suggest its atmosphere is of high mean molecular weight (Kempton et al., 2023); however, Orell-Miquel et al. (2022) report a tentative detection of escaping helium. GJ 9827d is a small planet (2.0 \(R_{\oplus}\)) orbiting a several Gyr old star (Rice et al., 2019), and Carleo et al. (2021) suggests that it may have lost any H/He envelope, although its low density of 2.5 g cm\({}^{-3}\) would be puzzling if that were the case. We encourage further helium observations of mini-Neptunes around mature stars to determine which ones have helium outflows and which ones do not.
Figure 3: Relationship between the “observed” and “energy-limited” mass loss rates, with mini-Neptunes highlighted in red (TOI 2134b) or blue (all others). Stripping off the constants, this is a relationship between \(R_{*}W_{\rm avg}\) and \(F_{\rm XUV}/\rho_{\rm XUV}\). For consistency with previous literature, we define XUV to be 5–504 Å. The dashed lines indicate what the relationship should be for 100% efficiency and 10% efficiency. Omitted for clarity is the very large (potentially order-of-magnitude) uncertainty on the XUV flux, and therefore on the x axis values, for all data points.
## 5 Conclusion
In this paper, we presented the fifth detection of escaping helium from a mini-Neptune, and the first definitive detection from a mature mini-Neptune. Among all helium detections, it has the lowest equivalent width, and comes from the planet receiving the lowest XUV flux. The width of the helium signal implies a photoevaporative origin, while the equivalent width implies a mass loss timescale in the Gyr range.
Putting TOI 2134b in the context of other helium detections, we observe the theoretically expected positive correlation between \(F_{\rm XUV}/\rho_{\rm XUV}\) and \(R_{*}W_{\rm avg}\) to high statistical significance. TOI 2134b, which has the lowest value along both axes, anchors the lower left side of the relation. This relation demonstrates that currently published helium measurements are sufficient to detect statistical patterns, and may be sufficient to test mass loss simulations at the population level. We encourage further observations to fill in the 1.4 dex gap in \(F_{\rm XUV}/\rho_{\rm XUV}\) between WASP-52b and HAT-P-32b at the high end, and if possible, to fill in the smaller gap between TOI 2134b and HAT-P-11b at the low end. We also encourage researchers reporting new helium detections to report the equivalent width in addition to the peak excess absorption, because the EW is both more directly correlated with the mass loss rate and less sensitive to differing instrumental resolutions. Finally, as is well known, the stellar XUV flux is highly uncertain. We encourage further efforts to characterize the high-energy output of these stars, whether by measuring the X-ray spectrum (e.g. Foster et al., 2022), measuring Ly\(\alpha\) and metal lines in the FUV (e.g. Bourrier et al., 2018), or launching a space telescope that can obtain direct measurements of the EUV flux (France et al., 2022).
numpy (van der Walt et al., 2011), scipy (Virtanen et al., 2020), matplotlib (Hunter, 2007), dynesty (Speagle, 2020), SAS (Gabriel et al., 2004), XSPEC (Arnaud, 1996), molecfit (Smette et al., 2015)
## 6 Acknowledgments
We thank Dakotah Tyler for his help in collecting the observations. We thank Jaume Orell-Miquel and colleagues for helpful discussions about 1D modelling.
The helium data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
We used observations obtained with XMM-Newton (observation ID 0903000301), an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. We acknowledge funding from XMM-Newton grant 80NSSC22K0742.
Funding for the TESS mission is provided by NASA's Science Mission Directorate. We acknowledge the use of public TESS data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center.
MZ acknowledges support from the 51 Pegasi b Fellowship funded by the Heising-Simons Foundation.
FR is funded by the University of Exeter's College of Engineering, Maths and Physical Sciences, UK.
## Appendix A Telluric Correction
Figure 4 shows the impact of telluric correction by molecfit. As can be seen, there are no strong telluric absorption lines near the helium line. There is a weak telluric emission line overlapping the helium line with an amplitude of \(\sim\)20 electrons/pixel, compared to \(\sim\)11,000 electrons/pixel on the trace. Emission lines are not a concern because they are subtracted out very effectively by the ABBA nod pattern.
## Appendix B High-Energy Spectrum
Figure 5 shows the reconstructed stellar spectrum. The X-ray spectrum is derived from XMM-Newton observations, and the MUV flux is consistent with XMM-Newton photometric observations to within 13%. Ly\(\alpha\) and EUV are reconstructed from the X-ray luminosity.
As in Zhang et al. (2023), we used XSPEC to fit a thin plasma model (APEC) to the X-ray observations, obtaining \(kT=0.219\pm 0.013\) keV, EM=\(0.44\pm 0.05\times 10^{50}\) cm\({}^{-3}\), and \(F_{X}=2.1\pm 0.3\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) (0.124-2.48 keV). Note that EM is roughly inversely proportional to metallicity, and because we do not fit the metallicity, the EM value should only be trusted if the coronal metallicity is solar.
There are a few differences between our analysis in this paper and in Zhang et al. (2023). First, for unknown reasons, the pn detector spectrum that SAS generates with default settings is nearly zero. We therefore reran SAS
after manually defining source and background regions for all three detectors: the source region is a circle of radius 20 arcsec centered on the star's expected coordinates (computed with Gaia DR3 data), while the background region is an annulus centered on the same point, with an inner radius of 30 arcsec and an outer radius of 60 arcsec. Second, we fix the coronal metallicity to solar instead of fitting it because there were not enough photons to give a good constraint. Third, we take into account interstellar photoelectric absorption, with a fixed hydrogen column density of \(10^{18.25}\) cm\({}^{-2}\) (near the upper end for stars at similar distances; Wood et al., 2005). Including interstellar absorption increased the inferred flux by less than a few percent.
## Appendix C Out of Transit Activity Monitoring
On July 4, 2023, when TOI 2134b was days away from the nearest transit, we observed TOI 2134 for 1.7 h using Keck/NIRSPEC to monitor the out-of-transit variability of the stellar helium line. We used exactly the same settings, and analyzed the data in exactly the same way, as for the science observations. In the middle of the observations, a telescope fault occurred, causing a 12 minute gap in the data.
Figure 4: The stellar spectrum in the terrestrial frame, before and after correction for telluric absorption.
Figure 5: The reconstructed stellar spectrum.
Figure 6 shows the excess absorption (relative to the median) as a function of time and wavelength, while Figure 7 shows the light curve of the helium line. The helium line shows no signs of variability in the 2D excess absorption plot, although the Si I 10830 A line is again variable at the 1% level. The light curve exhibits variability of 0.063%, slightly higher than the typical photon error of 0.054%, but far lower than the 0.3% helium absorption observed during the science observations. The monitoring observations are consistent with intrinsic stellar variability of 0-0.08%.
Figure 6: Percent excess absorption (relative to median) as a function of time and wavelength, while the planet was days away from transit. This plot is of the same format, and has the same wavelength range and colorbar scale, as Figure 1.
Figure 7: Light curve of the helium line (within a half-width of 0.75 Å) while the planet was far from transit. This plot is the analogue of Figure 2 (right), and the y axis has the same scale.
## Appendix D Parker Wind Fit
In Figure 8, we show the best fit from the Parker wind model; in Figure 9, we show the posterior distribution.
In Zhang et al. (2023), we imposed a cutoff of one Hill radius on the outflow, because the spherically symmetric model may break down beyond that point. This meant that we ignored all helium absorption originating from outside the cutoff. However, gas that flows beyond the cutoff does not disappear, and may not even become significantly aspherical (see e.g. the 3D simulations of Khodachenko et al., 2019; Zhang et al., 2022). For this paper, we therefore choose a cutoff radius of 1 \(R_{*}\), much larger than the Hill sphere of 0.54 \(R_{*}\). Had we chosen the Hill sphere as the cutoff radius, the inferred mass loss rate would have been \(1.1\times 10^{10}\) g/s, 2.2x our fiducial value.
## Appendix E The Trend
Table 3 shows the helium detections and non-detections plotted in Figure 3.
### The difficulties of collating helium detections
Creating Table 3 in a consistent and accurate way was not easy. Although X-ray measurements exist for many of the stars, EUV measurements do not because no EUV observatory exists, resulting in order-of-magnitude uncertainties in EUV flux and large discrepancies between different EUV estimation techniques (cf. France et al., 2022). In addition, some authors define EUV as 5-912 A (with the hydrogen ionization energy as the upper limit), but most have chosen to define EUV as 5-504 A (with the helium first ionization energy as the upper limit). Fortunately, most papers that have followed the former convention are ours, so we succeeded in calculating the 5-504 A flux for all stars and adopted it as the XUV flux. The equivalent widths were even more difficult to obtain. For the very few papers that report the equivalent width directly (e.g. HAT-P-32b; Czesla et al., 2022), we use the reported value. For photometric measurements (e.g. HAT-P-26b and HAT-P-18b; Vissapragada et al., 2022), we obtain the equivalent width by multiplying the excess absorption by the FWHM bandwidth. For spectroscopic measurements where the authors report no equivalent width, we integrate the excess absorption spectrum over wavelength, using the fitted model as the spectrum if one is provided (e.g. WASP-69b; Nortmann et al., 2018), or using the data points directly if no model is provided. For the detections we published (HD 189733b, TOI 560b/1430b/1683.01/2076b/2134b), we integrate the data points in the excess absorption spectrum from 10831 to 10835 A. Note that this is not how we calculated the equivalent width in Zhang et al. (2022), in which we used the bottom of a 1.5A bandpass, but we adopt this method for consistency (no other authors report their light curves in the exact same bandpass). When multiple measurements are reported for the same target, we adopt the one with higher SNR if they are consistent, or take an average if they are inconsistent.
Figure 8: The best fit from our Parker wind model, compared to the data and the residuals.
### Other variants of the correlation
To test the robustness of the correlation in Figure 3, we experimented with other variants. For example, as mentioned in the main text, we attempted to relax the assumption that \(10^{-6}\) of the helium atoms are in the metastable state. Oklopcic (2019) simulated the triplet helium fraction \(f\) for a HAT-P-11b-like planet orbiting host stars of 6 different spectral types (their Figure 2), as well as at 6 different semimajor axes from a K1 dwarf (their Figure 5). For each planet, we first linearly interpolate in \(T_{\rm eff}-\log(f)\) space to obtain an estimate of \(f\), and then multiply the estimate by a correction factor obtained by interpolating in \(a-\log(f)\) space and dividing by the \(f\) at HAT-P-11b's semimajor axis. For both corrections, we somewhat arbitrarily use the \(f\) that Oklopcic (2019) calculates at 3 planetary radii. Except at a=0.01 AU, \(f\) changes very little from 2.5 to 5 \(R_{p}\), and our conclusions do not change when we use 4 \(R_{p}\) as the reference distance.
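The following sketch illustrates the structure of this interpolation scheme. The grid values standing in for Oklopcic (2019)'s Figures 2 and 5 are placeholders (the actual digitized numbers are not reproduced here); only the shape of the calculation is meant to be accurate.

```python
import numpy as np

# Placeholder grids standing in for values read off Oklopcic (2019),
# Figures 2 and 5; not the actual digitized numbers.
teff_grid = np.array([2800.0, 3500.0, 4400.0, 5100.0, 5800.0, 6600.0])  # K
logf_teff = np.array([-6.8, -6.5, -6.2, -6.0, -6.1, -6.4])              # at 3 Rp

a_grid = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.5])                    # AU, K1 dwarf
logf_a = np.array([-5.5, -5.8, -6.1, -6.3, -6.6, -7.0])
logf_at_hatp11 = np.interp(0.053, a_grid, logf_a)  # HAT-P-11b's semimajor axis

def metastable_fraction(teff, a):
    """Interpolate in Teff-log(f), then apply the semimajor-axis correction."""
    logf = np.interp(teff, teff_grid, logf_teff)
    logf += np.interp(a, a_grid, logf_a) - logf_at_hatp11
    return 10.0 ** logf
```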
Figure 9: 2D posteriors from nested sampling for the Parker wind model.
Figure 10 shows the relation between \(\log(\dot{m}_{\rm theory})\) and \(\log(\dot{m}_{\rm obs})\) after these corrections to \(f\). Both TOI 2134b and HAT-P-32b now have similar mass loss efficiencies. The power \(p=0.96\pm 0.18\) is consistent with 1, and the implied typical mass loss efficiency is \(\eta=0.20^{+0.09}_{-0.06}\). On the other hand, the correlation as a whole becomes substantially weaker (ODR p=1.5\(\times 10^{-4}\), Spearman p=0.0054). Eliminating TOI 2134b and HAT-P-32b leaves the correlation only marginally significant (ODR p=0.043, Spearman p=0.078). We conclude that, with the possible exception of the two extremes, our crude corrections likely do more harm than good, and that the triplet fraction is unlikely to be a separable function of \(T_{\rm eff}\) and \(a\). Either planet-specific simulations or grid simulations that map out the multidimensional parameter space are likely necessary to obtain accurate estimates of \(f\).
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Planet & \(F_{\rm XUV}(5-504\rm\AA)\) & \(R_{p}\) & \(\rho_{\rm XUV}\) & \(W_{\rm avg}\) & \(\sigma_{W}\) & Detected & \(R_{*}\) & Ref \\ & (\(10^{3}\) erg s\({}^{-1}\)cm\({}^{-2}\)) & (\(R_{\oplus}\)) & (g/cm\({}^{3}\)) & (mÅ) & (mÅ) & & (\(R_{\odot}\)) & \\ \hline WASP-69b & 4.17 & 11.85 & 0.21 & 28.5 & 1.5 & True & 0.86 & Nortmann et al. (2018) \\ HD 189733b & 19.2 & 12.54 & 0.88 & 11.0 & 2.8 & True & 0.75 & Zhang et al. (2022a) \\ HD 209458b & 1.004 & 15.23 & 0.27 & 3.65 & 0.4 & True & 1.19 & Alonso-Floriano et al. (2019) \\ HAT-P-11b & 2.109 & 4.36 & 1.17 & 12.0 & 0.56 & True & 0.68 & Allart et al. (2018) \\ WASP-107b & 2.664 & 10.4 & 0.06 & 100.0 & 3.3 & True & 0.67 & Kirk et al. (2020) \\ GJ 3470b & 1.435 & 4.57 & 0.37 & 20.1 & 4.1 & True & 0.55 & Palle et al. (2020) \\ GJ 1214b & 0.64 & 2.742 & 1.10 & 1.3 & 0.79 & False & 0.21 & Kasper et al. (2020) \\ HAT-P-32b & 162.0 & 22.19 & 0.06 & 118.4 & 7.1 & True & 1.37 & Czesla et al. (2022) \\ WASP-52b & 25.0 & 13.71 & 0.21 & 42.0 & 9.0 & True & 0.79 & Kirk et al. (2022) \\ GJ 436b & 0.197 & 4.191 & 1.08 & 4.1 & 2.05 & False & 0.46 & Nortmann et al. (2018) \\ KELT-9b & 0.15 & 19.99 & 0.49 & 3.3 & 1.65 & False & 2.36 & Nortmann et al. (2018) \\ WASP-127b & 0.058 & 14.69 & 0.02 & 8.7 & 3.6 & False & 1.33 & dos Santos et al. (2020) \\ GJ 9827d & 2.45 & 2.022 & 0.52 & 0.67 & 0.41 & False & 0.602 & Kasper et al. (2020) \\ HD 97658b & 1.1 & 2.4 & 1.58 & 2.1 & 1.3 & False & 0.73 & Kasper et al. (2020) \\
55 Cnc e & 5.8 & 1.9 & 1.13 & 0.27 & 0.16 & False & 0.94 & Zhang et al. (2021) \\ HAT-P-18b & 0.7 & 11.1 & 0.16 & 44.0 & 10.0 & True & 0.749 & Vissapragada et al. (2022) \\ HAT-P-26b & 2.4 & 6.3 & 0.13 & 19.7 & 6.4 & True & 0.788 & Vissapragada et al. (2022) \\ HD 63433b & 10.3 & 2.08 & 0.84 & 10.0 & 2.0 & False & 0.897 & Zhang et al. (2022b) \\ HD 63433c & 2.5 & 2.57 & 1.01 & 10.0 & 2.0 & False & 0.897 & Zhang et al. (2022b) \\ TRAPPIST-1b & 9.6 & 1.116 & 1.59 & 3.467 & 1.7335 & False & 0.1192 & Krishnamurthy et al. (2021) \\ TRAPPIST-1e & 1.5 & 0.92 & 1.32 & 10.458 & 5.229 & False & 0.1192 & Krishnamurthy et al. (2021) \\ TRAPPIST-1f & 0.87 & 1.045 & 2.24 & 4.143 & 2.0715 & False & 0.1192 & Krishnamurthy et al. (2021) \\ WASP-80b & 1.721 & 11.1 & 0.58 & 7.0 & 3.5 & False & 0.61 & Fossati et al. (2022) \\ HAT-P-3b & 7.968 & 10.5 & 0.74 & 19.0 & 6.3 & False & 0.87 & Guilluy et al. (2023) \\ HAT-P-33b & 6.195 & 20.7 & 0.08 & 14.0 & 4.7 & False & 1.91 & Ibid \\ HAT-P-49b & 14.51 & 17.8 & 0.44 & 6.0 & 2.0 & False & 1.833 & Ibid \\ HD89345b & 0.244 & 7.4 & 0.23 & 7.0 & 2.3 & False & 1.657 & Ibid \\ K2-105b & 14.69 & 3.59 & 2.48 & 23.3 & 7.8 & False & 0.97 & Ibid \\ Kepler-25c & 1.019 & 5.217 & 0.16 & 18.6 & 6.2 & False & 1.34 & Ibid \\ Kepler-68b & 1.176 & 2.357 & 0.97 & 7.2 & 2.4 & False & 1.243 & Ibid \\ WASP-47d & 0.577 & 3.567 & 0.75 & 32.9 & 11.0 & False & 1.137 & Ibid \\ TOI 560b & 3.1 & 2.79 & 1.34 & 7.6 & 0.44 & True & 0.65 & Zhang et al. (2022a) \\ TOI 1430b & 4.3 & 2.2 & 1.41 & 7.3 & 0.4 & True & 0.784 & Ibid, Orell-Miquel et al. (2023) \\ TOI 1683.01 & 7.4 & 2.6 & 0.77 & 9.22 & 0.98 & True & 0.636 & Ibid \\ TOI 2076b & 6.7 & 2.52 & 1.34 & 8.73 & 0.3 & True & 0.762 & Ibid \\ TOI 2134b & 0.46 & 2.69 & 1.33 & 3.32 & 0.29 & True & 0.709 & This work \\ \hline \end{tabular}
\end{table}
Table 3: Helium detections and non-detections. Tentative detections are excluded, as are targets with no reported XUV flux, and giant planets without mass measurements. All \(F_{\rm XUV}\) estimates have uncertainties of at least a factor of a few.
Aside from attempting to estimate the metastable fraction, we tried other variants of the correlation. For example, we tried using the white light radius instead of the estimated XUV radius to estimate the planet density, finding that it weakens the correlation only slightly (ODR p=3.5\(\times 10^{-5}\), Spearman p=2.1\(\times 10^{-3}\)). Dividing the energy-limited mass loss formula by the K-factor, which accounts for the gravitational potential of the star (Erkaev et al., 2007), has a negligible effect on the strength of the correlation (ODR p=2.7\(\times 10^{-5}\), Spearman p=3.8\(\times 10^{-4}\)). Next, we multiplied the energy-limited mass loss rate by an estimate of the efficiency in addition to dividing it by K. Caldiroli et al. (2022) ran a suite of 1D hydrodynamic simulations using their ATES code and derived an analytic approximation to the mass loss efficiency as a function of \(F_{\rm XUV}/\rho\) and the modified gravitational potential \(KGM/R\). After taking this efficiency into account, the correlation slightly strengthens according to the linear regression test (p=9\(\times 10^{-6}\)), but slightly weakens according to Spearman's test (p=5.2\(\times 10^{-4}\)).
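For readers wishing to reproduce this style of test, p-values like those quoted above could be obtained along the following lines. This is a sketch assuming log-space inputs with symmetric uncertainties; the exact fitting choices of this work may differ.

```python
import numpy as np
from scipy import odr, stats

def trend_tests(x, y, sx, sy):
    """Fit y = p*x + c with orthogonal distance regression (x, y in log space)
    and return the slope with a two-sided p-value for slope != 0,
    plus Spearman's rank correlation test."""
    out = odr.ODR(odr.RealData(x, y, sx=sx, sy=sy),
                  odr.Model(lambda b, t: b[0] * t + b[1]),
                  beta0=[1.0, 0.0]).run()
    slope, slope_err = out.beta[0], out.sd_beta[0]
    p_odr = 2 * stats.t.sf(abs(slope / slope_err), len(y) - 2)
    rho, p_spearman = stats.spearmanr(x, y)
    return slope, slope_err, p_odr, rho, p_spearman
```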
|
2310.07330 | Functional Generalized Canonical Correlation Analysis for studying
multiple longitudinal variables | In this paper, we introduce Functional Generalized Canonical Correlation
Analysis (FGCCA), a new framework for exploring associations between multiple
random processes observed jointly. The framework is based on the multiblock
Regularized Generalized Canonical Correlation Analysis (RGCCA) framework. It is
robust to sparsely and irregularly observed data, making it applicable in many
settings. We establish the monotonic property of the solving procedure and
introduce a Bayesian approach for estimating canonical components. We propose
an extension of the framework that allows the integration of a univariate or
multivariate response into the analysis, paving the way for predictive
applications. We evaluate the method's efficiency in simulation studies and
present a use case on a longitudinal dataset. | Lucas Sort, Laurent Le Brusquet, Arthur Tenenhaus | 2023-10-11T09:21:31Z | http://arxiv.org/abs/2310.07330v1 | # Functional Generalized Canonical Correlation Analysis for studying multiple longitudinal variables
###### Abstract
In this paper, we introduce Functional Generalized Canonical Correlation Analysis (FGCCA), a new framework for exploring associations between multiple random processes observed jointly. The framework is based on the multiblock Regularized Generalized Canonical Correlation Analysis (RGCCA) framework. It is robust to sparsely and irregularly observed data, making it applicable in many settings. We establish the monotonic property of the solving procedure and introduce a Bayesian approach for estimating canonical components. We propose an extension of the framework that allows the integration of a univariate or multivariate response into the analysis, paving the way for predictive applications. We evaluate the method's efficiency in simulation studies and present a use case on a longitudinal dataset.
_Keywords: Longitudinal data, Functional Data, Generalized Canonical Correlation Analysis_
## 1 Introduction
Measuring multiple biomarkers jointly over time is common in observational studies and clinical trials. Since they characterize various biological processes which are often interdependent, these biomarkers are usually correlated. Hence, analyzing these longitudinal variables separately may hide parts of the biological mechanisms at stake and yield redundant information. Furthermore, as subjects often miss one or more visits, the biomarkers may be observed sparsely and irregularly. Therefore, along with the complex time-dependent continuous structure of the data, the statistical analysis of multiple longitudinal variables requires specific methodologies for efficiently harvesting information and integrating the interactions between variables.
In the multivariate setting, the analysis of data coming from multiple sources, usually represented by multiple sets of variables, is often referred to as "multiblock" or "multi-set" analysis. Canonical Correlation Analysis (CCA) (Hotelling (1936)) is one of the most well-known approaches, but it is limited to exploring associations between only two sets of variables. Therefore, various methods were proposed to generalize the CCA problem to more than two sets of variables (Horst (1961), Carroll (1968), Kettenring (1971)). More recently, Tenenhaus and Tenenhaus (2011) introduced Regularized Generalized Canonical Correlation Analysis (RGCCA), a regularized and more flexible framework for studying multiple sets of variables, giving birth to a new generation of methods (Tenenhaus et al. (2014, 2015), Singh et al. (2019)). Various extensions of RGCCA were proposed to handle emerging data types, such as tensor data (Girka et al. (2023)). However, most are still limited to finite-dimensional Euclidean spaces and are not designed to handle large amounts of missing observations.
In the longitudinal literature, approaches based on linear mixed-effects models have been widely employed over the past decades for studying longitudinal biomarkers (Rizopoulos (2011), German et al. (2021)). However, for adapting multivariate data analysis methods to the longitudinal setting, functional approaches have been preferred as functional spaces are easy to handle and allow to describe the underlying smooth structure of the time-dependent variables. In this context, adaptations of Principal Component Analysis (PCA) to the longitudinal setting have flourished (Rice and Silverman (1991)). Most notably, Yao et al. (2005) proposed an adaptation using a covariance-based procedure and a Bayesian approach to estimate the principal components. The method is thus robust to sparse and irregular data, making it applicable to numerous problems.
Multiple extensions of CCA were proposed to explore associations between two longitudinal variables (Leurgans et al. (1993); He et al. (2003); Zhou et al. (2008); Shin and Lee (2015)). Regularization is crucial in this infinite-dimensional context, as CCA requires inverting covariance matrices. Inspired by the approach of Yao et al. (2005), Yang et al. (2011) introduced the Functional Singular Value Decomposition (FSVD), which moves the CCA criterion to a covariance criterion and also uses a Bayesian approach to estimate the canonical components. However, as in the multivariate setting, those adaptations are limited to a pair of longitudinal variables. Few methods can go beyond this limitation. To our knowledge, Hwang et al. (2011) proposed the first approach to find associations between any number of longitudinal variables, using a homogeneous components criterion. More recently, Gorecki et al. (2020) proposed to adapt Horst (1961)'s approach to functional spaces using basis decomposition. Although it is not designed to explore associations among several longitudinal variables, it is worth mentioning Multivariate Functional Principal Component Analysis (MFPCA) (Happ and Greven (2018)), designed to retrieve the principal modes of variation in multivariate longitudinal data. The method can also handle sparse and irregular data.
In this context, we propose Functional Generalized Canonical Correlation Analysis (FGCCA), a framework based on RGCCA that allows exploring and studying associations between several longitudinal markers in a flexible way. The proposed method is robust to sparse and irregular longitudinal data. Furthermore, it is designed so it can integrate a multivariate block in the analysis to perform, for instance, supervised learning. Like RGCCA, the framework provided by FGCCA is so vast that it encompasses many existing methods, notably Yao et al. (2005)'s FPCA, FSVD, and Functional Partial Least Squares (FPLS) as presented by Preda et al. (2007).
The paper is organized as follows. First, in Section 2.1, we recall the Regularized Generalized Canonical Correlation Analysis (RGCCA) framework. Then, in Section 2.2, we introduce the Functional Generalized Canonical Correlation Analysis (FGCCA) context and optimization problem. We present the solving procedure of FGCCA in Section 2.3 and introduce a scheme for retrieving higher-order functions and estimating components in Sections 2.4 and 2.5, respectively. We validate our method on simulation studies in Section 3.1 and propose an application to a real dataset in Section 3.2. Finally, our approach's limitations and possible extensions are discussed in Section 4.
Proofs of propositions are given in Supplementary Materials. The code used to run the experiments and the R implementation of FGCCA are freely available on github: [https://github.com/Sort-L/FGCCA-Code](https://github.com/Sort-L/FGCCA-Code).
## 2 Method
### 2.1 Regularized Generalized Canonical Correlation Analysis (RGCCA)
Regularized Generalized Canonical Correlation Analysis (RGCCA) (Tenenhaus et al. (2017)) is an optimization and statistical framework for studying associations between multiple sets of random variables. Denoting \(\mathrm{x}_{1},\ldots,\mathrm{x}_{J}\) the sets of \(p_{1},\ldots,p_{J}\) random variables, respectively, and \(\boldsymbol{\Sigma}_{jj^{\prime}}\) the \(p_{j}\times p_{j^{\prime}}\) matrix of (cross-)covariance between \(\mathrm{x}_{j}\) and \(\mathrm{x}_{j^{\prime}}\), the RGCCA optimization problem can be expressed as:
\[\underset{\mathrm{a}_{1},\ldots,\mathrm{a}_{J}\in\Omega_{1}\times\cdots \times\Omega_{J}}{\mathrm{argmax}}\sum_{j\neq j^{\prime}}c_{j,j^{\prime}}g( \mathrm{a}_{j}^{\top}\,\boldsymbol{\Sigma}_{jj^{\prime}}\,\mathrm{a}_{j^{ \prime}}) \tag{1}\]
where \(\Omega_{j}\) is defined as \(\Omega_{j}=\big{\{}\mathrm{a}_{j}\in\mathbb{R}^{p_{j}}\mid\mathrm{a}_{j}^{ \top}\,\boldsymbol{M}_{j}\,\mathrm{a}_{j}=1\big{\}}\), with \(\boldsymbol{M}_{j}\) being a symmetric positive-definite matrix, \(g\) is a convex differentiable function, and the matrix \(C=(c_{j,j^{\prime}})\) is a \(J\times J\) symmetric matrix with nonnegative elements specifying the desired connection design used to study associations between the blocks. Classically, we set \(c_{j,j^{\prime}}=1\) if we want to consider the interaction between blocks \(j\) and \(j^{\prime}\) and \(c_{j,j^{\prime}}=0\) otherwise. Additionally, we often consider \(\boldsymbol{M}_{j}=\tau_{j}\boldsymbol{I}_{p_{j}}+(1-\tau_{j})\boldsymbol{ \Sigma}_{jj}\) with \(\tau_{j}\in[0,1]\) to smoothly interpolate the criterion between a correlation criterion, when \(\tau_{j}=0\), and a covariance criterion, when \(\tau_{j}=1\). In this context, the goal of RGCCA is to retrieve block weight vectors \(\mathrm{a}_{j}\) giving block components \(\mathrm{y}_{j}=\mathrm{x}_{j}^{\top}\,\mathrm{a}_{j}\), which are a compromise between the information from each set of variables and the information shared with the other sets of variables.
Finally, two strategies are often used to compute higher-level weight vectors for each set of variables. The first strategy leads to new weight vectors associated with components uncorrelated to the previous ones. The second strategy yields new weight vectors orthogonal to the previous ones. Both strategies require transforming the original sets of variables \(\mathrm{x}_{j}\) into new sets of variables \(\mathrm{x}_{j}^{\prime}\) called "deflated" vectors. The first transformation, more often used, consists in regressing out from each set \(\mathrm{x}_{j}\) its associated component \(\mathrm{y}_{j}=\mathrm{x}_{j}^{\top}\,a_{j}\): the transformation can be written \(\mathrm{x}_{j}^{\prime}=\mathrm{x}_{j}-(a_{j}^{\top}\boldsymbol{\Sigma}_{jj}a _{j})^{-1}\boldsymbol{\Sigma}_{jj}a_{j}a_{j}^{\top}\,\mathrm{x}_{j}\). The second transformation consists in projecting each set \(\mathrm{x}_{j}\) onto the orthogonal of the space spanned by the previous weight vectors: it is defined as \(\mathrm{x}_{j}^{\prime}=\mathrm{x}_{j}-\mathrm{a}_{j}\,\mathrm{a}_{j}^{\top} \,\mathrm{x}_{j}\). To retrieve new weight vectors and new components with the desired properties, the solving procedure is rerun, replacing original vectors \(\mathrm{x}_{j}\) with deflated vectors \(\mathrm{x}_{j}^{\prime}\). The transformation equations, often called "deflation" equations, can be used repeatedly to retrieve multiple weight vectors and components.
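As a minimal illustration, the two deflation equations above can be written in a few lines of numpy, assuming a centered data matrix X with rows as samples (the function and variable names here are ours, not those of any RGCCA package):

```python
import numpy as np

def deflate_uncorrelated(X, a, Sigma):
    """x' = x - (a' S a)^{-1} S a a' x: later components are uncorrelated
    with y = X a. X is (n, p) and centered; Sigma is the (p, p) covariance."""
    return X - np.outer(X @ a, Sigma @ a) / (a @ Sigma @ a)

def deflate_orthogonal(X, a):
    """x' = x - a a' x: later weight vectors are orthogonal to a (unit norm)."""
    return X - np.outer(X @ a, a)
```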
As demonstrated in Tenenhaus et al. (2017), the framework of RGCCA is very general and subsumes many well-known data analysis methods such as Principal Component Analysis (PCA), Canonical Correlation Analysis (CCA) (Hotelling (1936)), Partial Least Squares (PLS) regression (Wold et al. (2001)), and Generalized Canonical Correlation Analysis (GCCA) (Carroll (1968); Horst (1961); Kettenring (1971)), to name a few. Many extensions and adaptations have been proposed to tackle a wide variety of problems but, to our knowledge, none exists that integrates the time-continuous structure of longitudinal data and, especially, handles highly sparse and irregular observations.
### 2.2 Functional Generalized Canonical Correlation Analysis (FGCCA)
We now consider multiple time-dependent variables, such as longitudinal biomarkers. We propose to adapt the previous framework to functional spaces, and more precisely square-integrable random processes, since random processes can represent the time continuous structure of the data. Our approach, named Functional Generalized Canonical Correlation Analysis (FGCCA) is now introduced.
#### 2.2.1 Notations and definitions
From now on, we consider \(\mathrm{X}_{1},\ldots,\mathrm{X}_{J}\), \(J\) square-integrable random processes defined on compact intervals of \(\mathbb{R}\), \(I_{1},\ldots,I_{J}\) respectively. Note that the random objects are thus part of infinite-dimensional Hilbert spaces \(L^{2}(I_{1}),\ldots,L^{2}(I_{J})\). In this context, we define for the process \(j\) the mean function \(\mu_{j}\) as \(\mu_{j}(t)=\mathbb{E}(\mathrm{X}_{j}(t))\) for \(t\in I_{j}\). Additionally, the (cross-)covariance function (or "surface") \(\Sigma_{jj^{\prime}}\) between the processes \(j\) and \(j^{\prime}\) is defined as \(\Sigma_{jj^{\prime}}(s,t)=\mathbb{E}((\mathrm{X}_{j}(s)-\mu_{j}(s))(\mathrm{ X}_{j^{\prime}}(t)-\mu_{j^{\prime}}(t)))\) with \(s\in I_{j}\) and \(t\in I_{j^{\prime}}\). Finally, the (cross-)covariance operator between the processes \(j\) and \(j^{\prime}\), \(\mathbf{\Sigma}_{jj^{\prime}}\), is defined as :
\[\mathbf{\Sigma}_{jj^{\prime}}:L^{2}(I_{j^{\prime}})\to L^{2}(I_{j}),\;f \mapsto g,\;g(s)=\int_{I_{j^{\prime}}}\Sigma_{jj^{\prime}}(s,t)f(t)\mathrm{dt}\]
#### 2.2.2 Model
Following optimization problem (1), which defines RGCCA, we define the Functional Generalized Canonical Correlation Analysis (FGCCA) optimization problem by moving from the multivariate setting of \(\mathbb{R}^{p_{j}}\) spaces to the functional setting of \(L^{2}(I_{j})\) spaces, replacing the sets of variables \(\mathrm{x}_{j}\) by the random processes \(\mathrm{X}_{j}\) and, therefore, the Euclidean dot product \(a^{\top}b\) by the functional scalar product
defined by \(\langle f,g\rangle_{L^{2}}=\int fg\). The FGCCA optimization problem can therefore be written:
\[\operatorname*{argmax}_{f_{1},\ldots,f_{J}\in\Omega_{1}\times\cdots\times\Omega _{J}}\sum_{j\neq j^{\prime}}c_{j,j^{\prime}}g(\langle f_{j},\mathbf{\Sigma}_{jj^{ \prime}}f_{j^{\prime}}\rangle_{L^{2}}) \tag{2}\]
where \(\Omega_{j}\) is defined as \(\Omega_{j}=\left\{f_{j}\in L^{2}(I_{j})\mid\langle f_{j},\mathbf{M}_{j}f_{j} \rangle_{L^{2}}=1\right\}\) with \(\mathbf{M}_{j}\) being a symmetric positive-definite operator, and where \(g\) and \(\mathbf{C}=(c_{j,j^{\prime}})\) are defined as before. Similarly to RGCCA, we suggest setting \(\mathbf{M}_{j}=\tau_{j}\mathbf{I}_{I_{j}}+(1-\tau_{j})\mathbf{\Sigma}_{jj}\). However, to ensure the positive definiteness of the operator \(\mathbf{M}_{j}\), the regularization parameters \(\tau_{j}\) must be strictly greater than 0 (and thus lie in \(]0,1]\)), as covariance operators \(\mathbf{\Sigma}_{jj}\) are not necessarily definite in the infinite-dimensional setting. Functions \(f_{j}\) and components \(\mathrm{y}_{j}=\langle\mathrm{X}_{j},f_{j}\rangle_{L^{2}}\) allow capturing information for each process which, depending on the model parameters, is a summary of both the information from each process and the information shared with the others.
#### 2.2.3 (Cross-)Covariance estimation
In the multivariate setting, the most straightforward estimation for (cross-)covariance matrices \(\mathbf{\Sigma}_{jj^{\prime}}\) is the sample covariance matrix, which is fast and easy to compute. However, in the functional setting, it is usually preferable to use alternative strategies that integrate the data's time-continuous structure.
Various methods have been proposed over the past decades to integrate this structure. In the functional data analysis literature, (cross-)covariance operators \(\mathbf{\Sigma}_{jj^{\prime}}\) are often discretized and estimated on dense and regular grids using kernel smoothing methods (Yao et al. (2005); Yang et al. (2011)) or Generalized Additive Models (GAMs), which are easy to implement. In this context, estimation methods often have several hyperparameters that must be set. Usually, those hyperparameters are specified manually prior to the analysis, although cross-validation procedures, such as leave-one-out cross-validation, or criterion-based procedures can be used to select them (Leurgans et al. (1993)). Additionally, due to approximation errors, estimated operators are rarely positive in practice, making the choice of the regularization parameters \(\tau_{j}\) in FGCCA crucial, as the interval on which they are defined may not be clearly identified. Consequently, we advise setting the regularization parameters to 1, as this always prevents the optimization problem from being ill-posed.
Finally, we recommend normalizing the different processes before estimating (cross-)covariance operators. Like in the multivariate setting, considerable differences in the variance of the processes
may lead to biased results. For this purpose, we suggest using the normalization presented in a similar context by Happ and Greven (2018), enforcing the integrated variance to be the same for each process. To achieve this, each process \(j\) is multiplied by the following normalization quantity:
\[w_{j}=\left(\int_{I_{j}}\text{var}(\text{X}_{j}(t))\text{dt}\right)^{-1/2}\]
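Numerically, this weight is a one-line quadrature once a pointwise variance estimate is available, as in the following sketch (the variance curve used here is hypothetical):

```python
import numpy as np

def normalization_weight(grid, pointwise_var):
    """w_j = (integral over I_j of var(X_j(t)) dt)^(-1/2), trapezoidal rule."""
    return np.trapz(pointwise_var, grid) ** -0.5

# Example with a hypothetical variance curve estimated on [0, 1].
t = np.linspace(0.0, 1.0, 101)
w = normalization_weight(t, 2.0 + np.sin(2 * np.pi * t))
```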
### 2.3 Resolution
We now introduce a procedure to retrieve solutions to the FGCCA optimization problem. Convergence properties are given, ensuring the stability of the solving procedure.
#### 2.3.1 Procedure
Let \(\Psi\) be the objective function:
\[\Psi(f_{1},\ldots,f_{J})=\Psi(\mathbf{f})=\sum_{j\neq j^{\prime}}c_{j,j^{\prime}}g( \langle f_{j},\mathbf{\Sigma}_{jj^{\prime}}f_{j^{\prime}}\rangle_{L^{2}})\]
As \(\Psi\) is a function of multiple arguments, we suggest using a block coordinate ascent (BCA) strategy (de Leeuw (1994)) for finding solutions to the maximization problem. This strategy consists in maximizing \(\Psi\) argument by argument until convergence is reached. The properties of \(g\) imply that the objective function is differentiable and multi-convex, meaning that it is convex with respect to each argument \(f_{j}\) when all the others are fixed. Note also that we consider here "functional differentiability", since \(\Psi\) has functional arguments. A proof of the definition of the gradient is given in the Supplementary Materials. From these properties, we can derive the following inequality for \(\tilde{f}_{j}\in\Omega_{j}\):
\[\Psi(f_{1},\ldots,\tilde{f}_{j},\ldots,f_{J})\geq\Psi(\mathbf{f})+\langle\nabla_{j }\Psi(\mathbf{f}),\tilde{f}_{j}-f_{j}\rangle=m_{j}(\mathbf{f},\tilde{f}_{j}) \tag{3}\]
where \(\nabla_{j}\Psi(\mathbf{f})\) is the functional partial derivative of \(\Psi\) with respect to the \(j\)th function. With this expression, we notice that maximizing \(\Psi\) for the \(j\)th argument can be achieved by maximizing the minorizing function \(m_{j}\). In this minorizing function, only the term \(\langle\nabla_{j}\Psi(\mathbf{f}),\tilde{f}_{j}\rangle\) is relevant since all the others are fixed. Therefore, the maximum of \(m_{j}\) under the constraint that \(\tilde{f}_{j}\in\Omega_{j}\) is reached
for:
\[\hat{f}_{j}=\operatorname*{argmax}_{\tilde{f}_{j}\in\Omega_{j}}\langle\nabla_{j} \Psi(\mathbf{f}),\tilde{f}_{j}\rangle=\frac{\mathbf{M}_{j}^{-1}\nabla_{j}\Psi(\mathbf{f})}{ ||\mathbf{M}_{j}^{-1/2}\nabla_{j}\Psi(\mathbf{f})||}:=r_{j}(\mathbf{f}) \tag{4}\]
where the partial derivative can be expressed as:
\[\nabla_{j}\Psi(\mathbf{f})=2\sum_{\begin{subarray}{c}j^{\prime}=1\\ j^{\prime}\neq j\end{subarray}}^{J}c_{j,j^{\prime}}g^{\prime}(\langle f_{j}, \mathbf{\Sigma}_{jj^{\prime}}f_{j^{\prime}}\rangle)\mathbf{\Sigma}_{jj^{\prime}}f_{j^ {\prime}} \tag{5}\]
From this, we propose Algorithm (1) for retrieving solutions to optimization problem (2).
```
Data: \((\mathbf{\Sigma}_{jj^{\prime}})_{1\leq j,j^{\prime}\leq J}\), \(g\), \(C\), \(\epsilon\), \(\mathbf{f}^{0}\)
Result: \(f_{1},\ldots,f_{J}\)
s = 0
repeat
    for j = 1 to J do
        \(f_{j}^{s+1}=\dfrac{\mathbf{M}_{j}^{-1}\nabla_{j}\Psi(f_{1}^{s+1},\ldots,f_{j-1}^{s+1},f_{j}^{s},\ldots,f_{J}^{s})}{||\mathbf{M}_{j}^{-1/2}\nabla_{j}\Psi(f_{1}^{s+1},\ldots,f_{j-1}^{s+1},f_{j}^{s},\ldots,f_{J}^{s})||}\)
    end for
    s = s + 1
until \(\Psi(f_{1}^{s},\ldots,f_{J}^{s})-\Psi(f_{1}^{s-1},\ldots,f_{J}^{s-1})<\epsilon\)
```
**Algorithm 1** FGCCA algorithm
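To make the updates concrete, the following is a minimal numpy sketch of Algorithm 1 for processes discretized on a common grid, using the Horst scheme \(g(x)=x\) and trapezoidal quadrature weights. It is an illustration under these simplifying assumptions, not the authors' R implementation.

```python
import numpy as np

def fgcca_horst(S, C, w, tau=1.0, n_iter=200, tol=1e-8, seed=0):
    """Discretized FGCCA sketch (Horst scheme, g(x) = x).

    S[j][k]: cross-covariance surface of processes j and k, sampled on a
    common grid of size n; w: trapezoidal quadrature weights, shape (n,).
    Returns one canonical function per process, normalized so that
    <f, M_j f> = 1 with M_j = tau*I + (1 - tau)*Sigma_jj.
    """
    rng = np.random.default_rng(seed)
    J, n, W = len(S), len(w), np.diag(w)
    # Matrix form of M_j: the operator Sigma_jj applied via quadrature is S W.
    M = [tau * np.eye(n) + (1 - tau) * S[j][j] @ W for j in range(J)]
    f = [rng.standard_normal(n) for _ in range(J)]
    crit = -np.inf
    for _ in range(n_iter):
        for j in range(J):
            # Gradient up to a constant factor, absorbed by the normalization.
            grad = sum(C[j, k] * S[j][k] @ (w * f[k]) for k in range(J) if k != j)
            u = np.linalg.solve(M[j], grad)        # M_j^{-1} grad
            f[j] = u / np.sqrt(u @ W @ M[j] @ u)   # enforce <f, M_j f> = 1
        new = sum(C[j, k] * (f[j] @ W @ S[j][k] @ (w * f[k]))
                  for j in range(J) for k in range(J) if k != j)
        if abs(new - crit) < tol:
            break
        crit = new
    return f
```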
#### 2.3.2 Monotone convergence
Denoting \(\Omega=\Omega_{1}\times\cdots\times\Omega_{J}\), we define \(c_{j}:\Omega\to\Omega\) as the operator \(c_{j}(\mathbf{f})=(f_{1},\ldots,f_{j-1},r_{j}(\mathbf{f}),f_{j+1},\ldots,f_{J})\), with \(r_{j}(\mathbf{f})\) being the update function for the \(j\)th function of the solving procedure of Section 2.3.1. We also define \(c:\Omega\to\Omega\) as the operator \(c=c_{J}\circ\cdots\circ c_{1}\).
We consider the sequence \(\{\mathbf{f}^{s}=(f_{1}^{s},\ldots,f_{J}^{s})\}\) generated by \(\mathbf{f}^{s+1}=c(\mathbf{f}^{s})\). The following proposition states the monotone convergence of the generated sequence \(\{\mathbf{f}^{s}\}_{s=0}^{\infty}\) and holds as long as the update \(r_{j}(\mathbf{f})\) exists, is unique and \(\Omega\) is bounded:
**Proposition 1**.: _Considering any sequence \(\{\mathbf{f}^{s}\}_{s=0}^{\infty}\) generated recursively by the relation \(\mathbf{f}^{s+1}=c(\mathbf{f}^{s})\) with \(\mathbf{f}^{0}\in\Omega\). The sequence \(\{\Psi(\mathbf{f}^{s})\}\) is monotonically increasing and therefore convergent as \(\Psi\) is bounded on \(\Omega\), implying the convergence of the FGCCA algorithm._
Using this proposition, since \(\Omega\) is bounded and \(r_{j}(\mathbf{f})\) properly and uniquely defined for all \(j\) with the functional gradient, we can conclude that the Algorithm (1) is monotone and convergent.
### 2.4 Retrieving higher-order orthogonal functions
Solving the optimization problem as described previously only yields one function per block. However, it is often preferable to retrieve multiple functions leading to multiple components. For this purpose, we suggest, as in the RGCCA framework, using a deflation strategy.
First, we propose considering a deflation strategy for retrieving orthogonal vectors. As detailed in Section 2.1, the deflation equation associated with this strategy is \(\mathrm{x}^{\prime}_{j}=\mathrm{x}_{j}-\mathrm{a}_{j}\,\mathrm{a}^{\top}_{j}\, \mathrm{x}_{j}\). This expression may be unusable in the functional setting, with sparse and irregular observations, as processes \(\mathrm{X}_{j}\) are not fully observed. Moreover, only the (cross-)covariance operators are involved in the solving procedure, making the deflation of the blocks appear as an unnecessary step in the procedure. Therefore, we establish the following proposition, allowing us to deflate the (cross-)covariance operators directly without involving the possibly ill-defined blocks:
**Proposition 2**.: _Denoting \(\mathbf{\Sigma}^{\prime}_{jj^{\prime}}\) the deflated (cross-)covariance operator of \(\mathbf{\Sigma}_{jj^{\prime}}\), obtained after projecting the processes onto the orthogonal of the space spanned by their associated vectors. The following equality holds:_
\[\mathbf{\Sigma}^{\prime}_{jj^{\prime}}=(\mathbf{I}_{I_{j}}-\mathbf{\Phi}_{j})\mathbf{\Sigma}_{ jj^{\prime}}(\mathbf{I}_{I_{j^{\prime}}}-\mathbf{\Phi}_{j^{\prime}}) \tag{6}\]
_where \(\mathbf{\Phi}_{j}:L^{2}(I_{j})\to L^{2}(I_{j})\) is the operator defined by :_
\[(\mathbf{\Phi}_{j})(f)=(f_{j}\otimes f_{j})(f)=\langle f_{j},f\rangle f_{j}\]
To retrieve new orthogonal functions, the deflated operators are plugged into the optimization problem and Algorithm 1 is run again. The deflation procedure can be repeated multiple times, allowing the retrieval of a set of canonical functions \(\{f^{m}_{j}\}_{1\leq m\leq M}\) for each random process. The number of canonical functions to retrieve is often set manually, but it can also be chosen using cross-validation or criterion-based approaches. Finally, in this context, the set of components \(\{\mathrm{y}^{m}_{j}\}_{1\leq m\leq M}\) obtained for each process can be estimated directly from the original processes \(\mathrm{X}_{j}\) without using the deflated processes \(\mathrm{X}^{m}_{j}\), a desirable property in our setting as it may be difficult to compute and manipulate the deflated processes. Indeed, one can easily show that \(\langle\mathrm{X}_{j},f^{m}_{j}\rangle_{L^{2}}=\langle\mathrm{X}^{m}_{j},f^{m}_{j}\rangle_{L^{2}}=\mathrm{y}^{m}_{j}\), where \(\mathrm{X}^{m}_{j}\) stands for the \(m\)th deflation of process \(j\).
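In a discretized implementation, Proposition 2 reduces to matrix projections weighted by the quadrature rule; a possible sketch, using the same grid conventions as the algorithm sketch above:

```python
import numpy as np

def deflate_operator(S_jk, f_j, f_k, w):
    """Discretized Equation (6): project out the spans of f_j and f_k.
    S_jk is the sampled cross-covariance surface, w the quadrature weights."""
    P_j = np.eye(len(w)) - np.outer(f_j, w * f_j)  # matrix form of I - Phi_j
    P_k = np.eye(len(w)) - np.outer(f_k, w * f_k)
    return P_j @ S_jk @ P_k.T
```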
### 2.5 Estimating components
Computing the components \(\mathrm{y}_{j}^{m}=\langle\mathrm{X}_{j},f_{j}^{m}\rangle_{L^{2}}\) may be difficult in the sparse and irregular setting, as the numerical estimation of the \(L^{2}\) scalar product can be unstable and intractable, particularly when the number of observations is small. In this context, inspired by Yao et al. (2005) and Yang et al. (2011), we propose to estimate the components using a Bayesian approach.
#### 2.5.1 Notations
In the following, subscripts \(i\), \(j\), \(k\) denote respectively the subject number, the process number, and the observation number. We denote \(n_{ij}\) the number of observations, \(\mathbf{U}_{ij}=(U_{ij1},\ldots,U_{ijn_{ij}})^{\top}\in\mathbb{R}^{n_{ij} \times 1}\) the observations, and \(t_{ij}=(t_{ij1},\ldots,t_{ijn_{ij}})\) the observation time points. Finally the observations are modeled as:
\[U_{ijk}=X_{ij}(t_{ijk})+\varepsilon_{ijk} \tag{7}\]
where \(X_{ij}\) is the realization for subject \(i\) of the random process \(\mathrm{X}_{j}\) and \(\varepsilon_{ijk}\) is a measurement error. The measurement errors are assumed i.i.d. and normally distributed, \(\mathcal{N}(0,\sigma_{j}^{2})\).
#### 2.5.2 Process modeling
As previously stated, considering the set of orthonormal canonical functions \(\{f_{j}^{m}\}_{1\leq m\leq M}\), each process \(j\) from any subject \(i\) can be decomposed as:
\[X_{ij}(t)=\mu_{j}(t)+\sum_{m=1}^{M}\xi_{ij}^{m}f_{j}^{m}(t) \tag{8}\]
where the coefficients \(\xi_{ij}^{m}\) are the basis coefficients associated with the basis \(\{f_{j}^{m}\}_{1\leq m\leq M}\). Therefore, at the sample level we have:
\[U_{ijk}=\mu_{j}(t_{ijk})+\sum_{m=1}^{M}\xi_{ij}^{m}f_{j}^{m}(t_{ijk})+\varepsilon _{ijk} \tag{9}\]
This formulation allows viewing the basis decomposition as a linear mixed-effects model, with the fixed-effects part being the mean term and the random-effects part being the decomposition term. Moreover, since \(\xi_{ij}^{m}=\langle X_{ij},f_{j}^{m}\rangle_{L^{2}}\) and since the deflation strategy introduced in Section 2.4 leads, as previously stated, to \(\langle\mathrm{X}_{j},f_{j}^{m}\rangle_{L^{2}}=\langle\mathrm{X}_{j}^{m},f_{j}^{m}\rangle_{L^{2}}=\mathrm{y}_{j}^{m}\), we have \(\xi_{ij}^{m}=\mathrm{y}_{ij}^{m}\). Therefore, estimating
basis coefficients is equivalent to estimating components.
#### 2.5.3 Estimation
To simplify expressions, denoting \(N_{i}=\sum_{j}n_{ij}\), we write the vector of observations \(\mathbf{U}_{i}=(\mathbf{U}_{i1}^{\top},\ldots,\mathbf{U}_{iJ}^{\top})^{\top}\in \mathbb{R}^{N_{i}\times 1}\), and the mean function vector (at the observation time points) \(\boldsymbol{\mu}_{i}=(\boldsymbol{\mu}_{i,1}^{\top},\ldots,\boldsymbol{\mu}_{ i,J}^{\top})^{\top}\in\mathbb{R}^{N_{i}\times 1}\) with \(\boldsymbol{\mu}_{i,j}=(\mu_{j}(t_{ij1}),\ldots,\mu_{j}(t_{ijn_{ij}}))^{\top} \in\mathbb{R}^{n_{ij}\times 1}\). We also write \(\mathbf{F}_{ij}^{m}=(f_{j}^{m}(t_{ij1}),\ldots,f_{j}^{m}(t_{ijn_{ij}}))^{\top} \in\mathbb{R}^{n_{ij}\times 1}\) and \(\mathbf{F}_{ij}=(\mathbf{F}_{ij}^{1},\ldots,\mathbf{F}_{ij}^{M})\in\mathbb{R}^{n_{ij}\times M}\), the matrix of the \(M\) canonical functions at the observation time points for subject \(i\) and process \(j\), and finally \(\boldsymbol{\xi}_{j}=(\xi_{j}^{1},\ldots,\xi_{j}^{M})^{\top}\in\mathbb{R}^{M \times 1}\), \(\boldsymbol{\xi}=(\boldsymbol{\xi}_{1}^{\top},\ldots,\boldsymbol{\xi}_{J}^{ \top})^{\top}\in\mathbb{R}^{MJ\times 1}\), the vector of basis coefficients. Considering that the basis coefficients and the measurement errors are centered and jointly Gaussian, we establish the following proposition, allowing us to estimate the coefficients:
**Proposition 3**.: _Denoting \(\mathbf{F}_{i}=\mathrm{diag}(\mathbf{F}_{i1},\ldots,\mathbf{F}_{iJ})\), \(\boldsymbol{\Sigma}=\mathbb{E}[\boldsymbol{\xi}\boldsymbol{\xi}^{\top}]\) and \(\boldsymbol{\sigma}_{i}=\mathrm{diag}(\sigma_{1}^{2}\mathbf{I}_{n_{i1}}, \ldots,\sigma_{J}^{2}\mathbf{I}_{n_{iJ}})\), the best linear unbiased predictor (BLUP) for \(\boldsymbol{\xi}_{i}\) is given by_
\[\mathbb{E}(\boldsymbol{\xi}_{i}|\boldsymbol{U}_{i})=\boldsymbol{\Sigma} \mathbf{F}_{i}^{\top}(\mathbf{F}_{i}\boldsymbol{\Sigma}\mathbf{F}_{i}^{\top}+ \boldsymbol{\sigma}_{i})^{-1}(\boldsymbol{U}_{i}-\boldsymbol{\mu}_{i}) \tag{10}\]
In this expression, all the terms can be estimated. Notably, the canonical functions can be interpolated at the observation time points of each subject, allowing estimation of the matrices \(\mathbf{F}_{i}\). The noise standard deviation \(\sigma_{j}\) can be approximated for each process using the estimated covariance surface of each process (a procedure is presented in Yao et al. (2005)). Finally, the mean functions \(\boldsymbol{\mu}_{i}\) are usually estimated when estimating the (cross-)covariance surfaces, often using smoothing techniques or GAMs.
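Once these quantities are estimated, Equation (10) is a single linear solve per subject, as the following sketch illustrates (the shapes follow the notation above):

```python
import numpy as np

def blup_scores(U_i, mu_i, F_i, Sigma_xi, sigma2_obs):
    """Equation (10): E(xi_i | U_i) = Sigma F' (F Sigma F' + sigma_i)^{-1} (U - mu).

    U_i, mu_i : observations and mean values, shape (N_i,);
    F_i       : block-diagonal canonical-function matrix, shape (N_i, M*J);
    Sigma_xi  : covariance of the stacked scores, shape (M*J, M*J);
    sigma2_obs: per-observation noise variances, shape (N_i,).
    """
    V = F_i @ Sigma_xi @ F_i.T + np.diag(sigma2_obs)
    return Sigma_xi @ F_i.T @ np.linalg.solve(V, U_i - mu_i)
```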
### 2.6 Retrieving higher-order uncorrelated components
The deflation strategy introduced in Section 2.4 allows recovering multiple orthogonal functions for each process. However, as discussed in Section 2.1, retrieving uncorrelated components is often preferable. In the multivariate setting, this deflation strategy is carried out using the deflation equation \(\mathrm{x}_{j}^{\prime}=\mathrm{x}_{j}-(a_{j}^{\top}\boldsymbol{\Sigma}_{jj} a_{j})^{-1}\boldsymbol{\Sigma}_{jj}a_{j}a_{j}^{\top}\,\mathrm{x}_{j}\). As before, we adapt this strategy to the functional setting to deflate the (cross-)covariance operators directly. For this purpose, we establish the following proposition:
**Proposition 4**.: _Denoting \(\mathbf{\Sigma}^{\prime}_{jj^{\prime}}\) the deflated (cross-)covariance operator of \(\mathbf{\Sigma}_{jj^{\prime}}\), obtained from regressing out the components from their associated block. The following equality holds :_
\[\mathbf{\Sigma}^{\prime}_{jj^{\prime}}=(\mathbf{I}_{I_{j}}-d_{j}\mathbf{\Sigma}_{jj} \mathbf{\Phi}_{j})\mathbf{\Sigma}_{jj^{\prime}}(\mathbf{I}_{I_{j^{\prime}}}-d_{j^{ \prime}}\mathbf{\Phi}_{j^{\prime}}\mathbf{\Sigma}_{j^{\prime}j^{\prime}}) \tag{11}\]
_where \(d_{j}=(\mathrm{y}_{j}^{\top}\,\mathrm{y}_{j})^{-1}\) and, as previously, \(\mathbf{\Phi}_{j}:L^{2}(I_{j})\to L^{2}(I_{j})\) is the operator defined by:_
\[(\mathbf{\Phi}_{j})(f)=(f_{j}\otimes f_{j})(f)=\langle f_{j},f\rangle f_{j}\]
As before, new functions associated with uncorrelated components can be obtained by replacing the (cross-)covariance operators in the optimization problem by their deflated versions and running the solving procedure. However, to obtain uncorrelated estimates of \(\mathrm{y}_{j}^{m}\), additional steps are required. Indeed, the equality \(\langle\mathrm{X}_{j},f_{j}^{m}\rangle_{L^{2}}=\langle\mathrm{X}_{j}^{m},f_{j }^{m}\rangle_{L^{2}}=\mathrm{y}_{j}^{m}\), which was previously used to estimate the components in the mixed-effects model, no longer holds in this setting. Nevertheless, the orthogonality of the retrieved functions still holds by construction. Therefore, the canonical functions \(f_{j}^{m}\) can still be used as a decomposition basis in the mixed-effects framework presented previously. Furthermore, using the deflation equations, basis coefficients and components can be linked with the following recursive equation:
\[\mathrm{y}_{j}^{m+1}=\xi_{j}^{m+1}-\sum_{k=1}^{m}P_{j}^{k}\xi_{j}^{m+1} \tag{12}\]
where \(P_{j}^{k}=(\mathrm{y}_{j}^{k}{}^{\top}\,\mathrm{y}_{j}^{k})^{-1}\,\mathrm{y}_ {j}^{k}\,\mathrm{y}_{j}^{k}{}^{\top}\) is the projection matrix from the regression of \(\xi_{j}^{k}\) on \(\mathrm{y}_{j}^{k}\), starting with \(\mathrm{y}_{j}^{1}=\xi_{j}^{1}\). This equation can be seen as a decorrelation procedure: for each process, each new basis coefficient estimate is decorrelated from the previous components.
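A direct transcription of Equation (12), for one process, might look as follows (Xi holds the estimated basis coefficients, one row per subject):

```python
import numpy as np

def decorrelate_components(Xi):
    """Equation (12) for one process: Xi is (n_subjects, M), column m holding
    the estimated coefficients of order m+1."""
    Y = Xi.astype(float).copy()
    for m in range(1, Xi.shape[1]):
        for k in range(m):
            yk = Y[:, k]
            # Subtract the projection P^k applied to the m-th coefficients.
            Y[:, m] -= yk * (yk @ Xi[:, m]) / (yk @ yk)
    return Y
```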
Finally, the choice of deflation type depends on the intended use of the components and canonical functions. For reconstructing trajectories, the orthogonal-function deflation may be more appropriate, as the orthonormal functions retrieved are preferable for decomposition purposes. For clustering or dimension reduction prior to further analysis, such as regression, using uncorrelated components seems more suitable.
### 2.7 Integrating a multivariate response
Inspired by the PLS framework, we propose to modify the FGCCA optimization problem slightly to include a multivariate response \(\mathrm{Y}\in\mathbb{R}^{p}\), with \(p\in\mathbb{N}^{*}\), expanding the possibilities offered by the framework:
\[\underset{f_{1},\ldots,f_{J},\in\Omega_{1}\times\cdots\times\Omega_{J}}{\mathrm{ argmax}}\sum_{j\neq j^{\prime}}c_{j,j^{\prime}}g(\left\langle f_{j},\mathbf{\Sigma}_{jj^{ \prime}}f_{j^{\prime}}\right\rangle_{L^{2}})+2\sum_{j}g(\left\langle f_{j},\mathbf{ \Sigma}_{j\,\mathrm{Y}}a\right\rangle_{L^{2}}) \tag{13}\]
where \(\mathbf{\Sigma}_{j\,\mathrm{Y}}\) is the cross-covariance operator between the process \(j\) and the response \(\mathrm{Y}\), defined as:
\[\mathbf{\Sigma}_{j\,\mathrm{Y}}:\mathbb{R}^{p}\to L^{2}(I_{j}),\;a\mapsto g,\;g(s )=\sum_{i=1}^{p}\mathbb{E}[\mathrm{X}_{j}(s)\,\mathrm{Y}_{i}]a_{i}\]
As before, this operator can be estimated with kernel smoothing methods or GAMs. This additional interaction term allows recovering, for each process, canonical functions that best explain the process, its interactions with the other processes (depending on the design matrix \(C\)), and its association with the response. This design is particularly relevant in a predictive framework, as the retrieved components can be used to predict the response \(\mathrm{Y}\).
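With densely smoothed trajectories, a raw estimate of this operator is simply a sample cross-covariance between grid values and response coordinates; a sketch, leaving the smoothing step aside:

```python
import numpy as np

def cross_cov_process_response(X, Y):
    """Raw estimate of Sigma_{jY}: X is (n_subjects, n_grid), holding process j
    sampled on a dense grid (e.g. after smoothing); Y is (n_subjects, p)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    return Xc.T @ Yc / (len(X) - 1)    # shape (n_grid, p)
```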
Finally, note that the procedures presented previously are not significantly affected by this change and can easily be rewritten to integrate the multivariate vector using the multivariate-functional cross-covariance operator presented above.
## 3 Results
### 3.1 Simulation studies
#### 3.1.1 Simulation 1: validating the Bayesian approach.
For \(J=3\) processes, we compare the components retrieved using the Bayesian approach described in Section 2.5 to the components obtained by computing the scalar product directly, as is usually done. For this purpose, we generate data according to Equation (8) so that the true component values are known. We choose the first \(M=6\) Fourier basis functions on the \([0,1]\) interval as our orthonormal basis. The components \(\xi_{ij}^{m}\) for each subject are generated jointly from a centered Gaussian distribution with covariance \(\Sigma\) having a decreasing variance structure. For each subject and each process, the time points are generated by sparsifying a grid of size 50 on the \([0,1]\) interval.
Various sparsity levels are compared: Dense (100% of observations retained), Low Sparsity (100% to 80% retained), Medium Sparsity (80% to 40% retained), and High Sparsity (40% to 10% retained). We compare the two approaches by computing the mean squared error of the canonical components over 100 simulations. Results are reported in Figure 1.
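For concreteness, a data-generating sketch along these lines is shown below for a single process; the joint covariance \(\Sigma\) of the paper is replaced by a simple independent, decreasing-variance structure, so it mimics the design rather than reproducing it exactly.

```python
import numpy as np

def simulate_process(n_subj=100, n_comp=6, grid_size=50, keep=(0.4, 0.8),
                     sigma2=1.0, rng=None):
    """One process: Fourier-basis scores with decreasing variances, observed
    with Gaussian noise on a randomly sparsified grid."""
    rng = rng or np.random.default_rng(1)
    t = np.linspace(0.0, 1.0, grid_size)
    basis = np.array([np.sqrt(2) * (np.sin if m % 2 else np.cos)(2 * np.pi * (m // 2 + 1) * t)
                      for m in range(n_comp)])
    variances = np.linspace(n_comp, 1.0, n_comp)
    xi = rng.normal(size=(n_subj, n_comp)) * np.sqrt(variances)
    X = xi @ basis + rng.normal(scale=np.sqrt(sigma2), size=(n_subj, grid_size))
    n_keep = rng.integers(int(keep[0] * grid_size), int(keep[1] * grid_size),
                          size=n_subj)
    obs = [np.sort(rng.choice(grid_size, size=k, replace=False)) for k in n_keep]
    return t, X, xi, obs
```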
We observe a clear advantage of the Bayesian approach in the Medium and High sparsity settings. The gap between the two approaches increases for higher-order components. Furthermore, we notice a slight advantage of the Bayesian approach in the Dense and Low sparsity cases, again especially for the higher-order components. Therefore, we can conclude that the estimation error of the Bayesian approach is smaller than the error due to the numerical approximation of the integral in the standard scalar product approach. Thus, we advise using the Bayesian approach in all cases, as it is also computationally inexpensive.
#### 3.1.2 Simulation 2: comparing results
To validate FGCCA further, we compare it to some of the methods it subsumes. We consider two other approaches for \(J=2\) processes: PACE-based Functional Principal Component Analysis (FPCA) (Yao et al. (2005)) and Functional Singular Value Decomposition (FSVD) (Yang et al. (2011)). Both approaches can handle sparse and irregular data. The former gives results similar to FGCCA for specific component covariance designs. The latter is based on an optimization problem equivalent to FGCCA in the 2-process setting when the regularization parameters \(\tau_{j}\) are set to 1. The simulation setting described previously is used again to generate data. The covariance
Figure 1: Mean squared error (MSE) boxplots of the components \(\mathrm{y}_{j}^{m}\) estimated with the Bayesian approach or the scalar product (integration), for \(m=1,2,3,4,5,6\), obtained from 100 simulations with \(N=100\), averaged over the \(J=3\) processes, with \(\sigma^{2}=1\) and various sparsity settings.
matrix \(\Sigma\) is designed so that FPCA, FSVD and FGCCA recover components and functions in the same order. Results are reported in Figure 2.
Since FGCCA and FSVD are expected to retrieve similar canonical functions, we observe similar distributions of the mean squared errors for the functions. Component estimation relies on a slightly different formula for FSVD. Indeed, the singular values retrieved from the analysis are used, which should, in theory, increase estimation accuracy. However, in our case, the estimates obtained from FSVD are significantly worse than with FGCCA, especially in the high and medium sparsity settings and for high-order functions. On the other hand, FPCA gives slightly better estimates of both functions and components, especially in the sparse setting and for the first functions and components. For high-order components, FGCCA sometimes outperforms FPCA. Additional results, along with simulation details, are presented in the Supplementary Materials.
Figure 2: Mean squared errors (MSE) of functions \(f^{m}\) (top) and components \(\xi^{m}\) (bottom) for \(m=1,2,3,4,5,6\) obtained from 100 simulations with \(N=100\), \(\sigma^{2}=1\) and various sparsity settings. Comparison between FPCA, FSVD and FGCCA.
#### 3.1.3 Simulation 3: comparing reconstruction
Alternatively, we compare the quality of the reconstructed trajectories between FGCCA, with a fully connected design and orthogonal deflation, and Multivariate Functional Principal Component Analysis (MFPCA) (Happ and Greven (2018)). The reconstructed trajectories are obtained for FGCCA using the decomposition equation (8) with the estimated canonical functions and components, and for MFPCA using the multi-dimensional Karhunen-Loève decomposition presented in the aforementioned paper. This time, using the package funData (Happ-Kurz (2020)), 3 processes are generated based on the first \(M=6\) Fourier basis functions and with a linearly decreasing variance over the components.
The mean squared relative error is compared over 100 simulations in various configurations. Results are presented in Figure 3. We can see that our approach slightly improves the reconstruction of the processes, especially when the number of subjects \(N\) is not too small. The gap between the two methods appears stable across all sparsity settings. Further results, available in the Supplementary Materials, show that the difference grows as the noise \(\sigma^{2}\) becomes smaller.
### 3.2 Application to the _Primary Biliary Cirrhosis_ dataset
The Primary Biliary Cirrhosis dataset (Murtaugh et al. (1994)) is a dataset from the Mayo Clinic containing the follow-up of various biomarkers extracted from blood analyses of 312 patients diagnosed with primary biliary cirrhosis of the liver, a rare autoimmune disease. We use this multi-biomarker dataset to illustrate various usages of FGCCA. For this purpose, three
biomarkers were considered: albumin, bilirubin, which is log-transformed, and prothrombin time, observed up to 10 years after the first visit. These biomarkers were chosen because they have been shown to be good predictors of patient outcomes. Figure 4 presents an aggregated view of the data.

Figure 3: Mean relative squared errors (MRSE) of reconstructed trajectories, using estimated canonical functions and components. Comparison of FGCCA and MFPCA with \(M=6\), \(\sigma^{2}=1\), for various numbers of subjects \(N\) and various sparsity settings. Statistical significance displayed: (***) \(p<0.001\), (****) \(p<0.0001\)
#### 3.2.1 Exploratory analysis
We first visualize the canonical functions and components obtained with FGCCA using a fully connected design and the deflation leading to uncorrelated components. We compare the results to the principal functions and components obtained using PACE-based FPCA (Yao et al. (2005)). For both methods, the bandwidths are manually set to 1 for interpretability. The results are displayed in Figure 5.
The first principal and canonical functions have a similar flat shape, implying that the difference between subjects' trajectories, for all biomarkers, comes primarily from an overall shift around the mean. The second canonical and principal functions are either decreasing or increasing over the 10-year interval, indicating that the next source of variation between subjects comes from the monotonicity of the trajectories: for the different biomarkers, subjects have either an increasing or a decreasing trend. Additionally, we notice a larger difference between principal and canonical functions for prothrombin, suggesting that this biomarker is particularly correlated with the others. The differences are, however, difficult to interpret. Note also that the functions retrieved with FGCCA are slightly smoother, notably at the end of the interval. We can explain this by border effects when estimating the (cross-)covariance operators. Indeed, for FGCCA, the functions are estimated using information from multiple processes rather than just one (as in FPCA), leading to more stable and reliable results.
Figure 4: Longitudinal trajectories for albumin, bilirubin and prothrombin time in the pbc2 dataset for all individuals. As usually done, the bilirubin marker is log transformed.
Component plots allow us to see the differences between the two approaches more clearly. First, we notice that the components are spread more evenly for FGCCA, especially for prothrombin, thanks to the normalization. For the three biomarkers, the components given by FGCCA seem to better separate the two outcomes. This property is confirmed in the predictive analysis.
#### 3.2.2 Prediction
Inspired by Singh et al. (2019), we use a multiblock functional PLS framework to predict each patient's outcome and demonstrate the ability of FGCCA to integrate a multivariate or univariate response. The multiblock functional PLS design is defined as an FGCCA design where only the associations with the response are considered in the problem. It allows recovering biomarker information correlated with the response, which is particularly useful for predictive purposes. A simple logistic regression model is fitted per biomarker to predict the response using the first component retrieved with FGCCA. For a new subject, the components are predicted using the observed trajectories. The final prediction is obtained by computing a weighted average of the predicted outcomes, where each biomarker's prediction is weighted by the correlation of its component with the response. The predictive performance is compared to a similar model where the principal components from FPCA are used instead.
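A sketch of this prediction scheme, using scikit-learn for the per-biomarker logistic models; the exact weighting and preprocessing used in the paper may differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ensemble_predict(comp_train, y_train, comp_test):
    """One logistic model per biomarker on its first FGCCA component; the
    final probability averages the models, weighted by |corr(component, y)|."""
    probs, weights = [], []
    for c_tr, c_te in zip(comp_train, comp_test):  # one 1-D array per biomarker
        model = LogisticRegression().fit(c_tr.reshape(-1, 1), y_train)
        probs.append(model.predict_proba(c_te.reshape(-1, 1))[:, 1])
        weights.append(abs(np.corrcoef(c_tr, y_train)[0, 1]))
    return np.average(probs, axis=0, weights=weights)
```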
The results, summarized in Figure 6, show that the FGCCA-based components provide significantly better outcome estimation. The apparent difference between the canonical and principal functions implies that FGCCA has indeed retrieved components highlighting the association with the outcome for each biomarker. As the canonical functions have a monotonic trend, we can argue that survival is mainly associated with a decreasing or increasing behavior of the various biomarkers. More precisely, mortality seems to be associated with an increase in bilirubin and prothrombin and a decrease in albumin. This is consistent with the fact that decreasing albumin or increasing bilirubin and prothrombin are usually associated with a bad prognosis for the liver.
#### 3.2.3 Reconstruction
Finally, we evaluate the ability of FGCCA to reconstruct trajectories and predict biomarker values at unobserved times. To this end, we divide the data into training and test datasets. The training dataset is used to estimate the (cross-)covariance operators and to run FGCCA. The test dataset contains trajectories on which we seek to predict the last observation, which has been removed.
Each subject's prediction is made from reconstructed trajectories computed from the canonical functions obtained on the training dataset and from components estimated using the data before the last observation. We compare the results to an FPCA-based approach, where the components and functions are estimated with FPCA. Furthermore, to evaluate the robustness of the two approaches, we sparsify the test trajectories at various levels. The results are reported in Figure 7.

Figure 5: (top) First 3 functional modes retrieved by FGCCA (canonical functions) and FPCA (principal functions), for each biomarker. Functions were flipped to minimize the differences between the two methods. (bottom) Biplots, for each biomarker, of the first 2 components obtained with FGCCA and FPCA, coloured by final status. Ellipses represent the estimated Gaussian distributions of the components for each outcome.
We observe a significant advantage of the FGCCA-based approach over the FPCA-based approach, except for bilirubin in the (1M) scenario, indicating that FGCCA has integrated additional or more stable knowledge. We note that the gap widens as the sparsity increases. These results pave the way to joint modeling, as the components could be used both to predict the future trajectories of biomarkers, as done here, and the survival of subjects. This promising application is currently under investigation.
## 4 Discussion
We introduced Functional Generalized Canonical Correlation Analysis (FGCCA), a flexible framework for exploring associations among multiple longitudinal variables by finding their main joint modes of variation. The method relies on a monotone, globally convergent algorithm which only requires the (cross-)covariance operators. We proposed a Bayesian approach for estimating the components. Consequently, the method is robust to irregular and sparse data, making it applicable in numerous settings. In addition, we allow integrating a multivariate response into the analysis by slightly modifying the optimization problem, paving the way to mixed-data applications. Simulation studies assess the validity of our approach and its underlying design, and the application illustrates a wide variety of uses.
Figure 6: (left) First principal/canonical function retrieved with FPCA/FGCCA. For FGCCA, a multiblock-FPLS design is used, integrating only the interaction between each biomarker and the response. Functions are flipped to ensure that the first component is positively correlated with the outcome, to improve interpretability. (right) Boxplot of the balanced accuracy computed on the test set for 100 Monte Carlo runs. The p-value and significance level of the difference between the two distributions (t-test) are given.

Figure 7: (left) Reconstructions obtained with FGCCA and FPCA in 3 scenarios: (1M) last observation removed, (2M) two last observations removed, (3M) three last observations removed. Crosses correspond to observations used to estimate the components; circles correspond to observations we aim to predict. (right) Boxplots of the last-observation prediction mean squared error obtained over 100 runs. Statistical significance displayed: (*) \(p<0.05\), (****) \(p<0.0001\)
As previously mentioned, the method relies significantly on the estimation of the (cross-)covariance operators. Therefore, studying these estimation procedures in more depth could be interesting, as new methods have been proposed recently (Xiao et al. (2018)). Computing confidence bands for the estimated scores, as is done in sparse and irregular FPCA (Yao et al. (2005)), is under investigation; however, difficulties arise since the FGCCA algorithm does not have a closed-form solution. Finally, an implementation allowing the user to change the regularization parameters was developed. In this context, analyzing the impact of these parameters on the algorithm and the results obtained could be further investigated.
In numerous studies, longitudinal variables can be grouped into blocks representing different modalities. For instance, in imaging genetics, multiple longitudinal variables representing the evolution of several neuroimaging features can be observed alongside multiple genetic features. In this context, considering a block for each variable, as done with FGCCA, may be inefficient, as it would require a complex design and intensive computational resources. Another approach, currently being investigated, would be to integrate the multiple longitudinal variables into blocks. This approach was used by Happ and Greven (2018) and can be referred to as multivariate functional data modeling. Inspired by the multi-way/tensor literature, a reduced-rank model could also significantly help reduce the problem's complexity. In this context, numerous works have been proposed, notably for tensor regression (Zhou et al. (2008)) and, as evoked in the introduction, for RGCCA (Girka et al. (2023)).
|
2308.12211 | Recording of 50 Business Assignments | One of the main use cases of process mining is to discover and analyze how
users follow business assignments, providing valuable insights into process
efficiency and optimization. In this paper, we present a comprehensive dataset
consisting of 50 real business processes. The dataset holds significant
potential for research in various applications, including task mining and
process automation, making it a valuable resource for researchers and
practitioners. | Michal Sroka, Mohammadreza Fani Sani | 2023-08-22T08:58:13Z | http://arxiv.org/abs/2308.12211v1 | # Recording of 50 Business Assignments
###### Abstract
One of the main use cases of process mining is to discover and analyze how users follow business assignments, providing valuable insights into process efficiency and optimization. In this paper, we present a comprehensive dataset consisting of 50 real business processes. The dataset holds significant potential for research in various applications, including task mining and process automation, making it a valuable resource for researchers and practitioners.
In this paper, we provide information about a dataset gathered from recordings of 50 different processes. The dataset can be accessed via the GitHub repository [4].
In the following, we first explain how we gathered the recordings. Thereafter, we discuss some characteristics of the dataset, and finally we provide a conclusion and some possible use cases.
## 2 Dataset
In this section, we provide an overview of the dataset, including the process of creating it and its key characteristics.
### How the dataset was created
The aim of this dataset is to cover a broad range of business assignments common for office workers across many industries. Such coverage can help in developing new algorithms for Robotic Process Automation (RPA) and task mining [2]. Many of the common tasks can be automated using Power Platform connectors. There are hundreds of Microsoft Power Platform Connectors that can be used for this purpose. The connectors selected for this dataset are among the most commonly used.
Each process represents a task that could be partly automated using one or more of these connectors. The connectors were selected a priori, before the recording.
The method for creating a recording is as follows:
1. Select two or more connectors
2. Select a business scenario that utilizes the selected connectors
3. Create a task description
4. Perform the task while recording (i.e., recording all the steps that are performed by the user).
To explain the recording creation process, consider the following example.
1. Considering SharePoint and OneNote as the selected connectors
2. Defining a business scenario to use the selected connectors: "during the field survey, you collected feedback points. To discuss them with team members, you are going to share those points on a collaborative platform."
3. Describing the task for the user to run the defined business scenario: 1. Go to the personal note-taking app 2. Locate the desired file and copy the points 3. Go to SharePoint 4. Create a document to discuss with team members
4. Recording the user as they perform each small step, e.g., clicking or typing. To see the detailed steps of this example, please refer to \(Recording\_91\) of \(Process\_33\) in the GitHub repository [4].
After all recordings were created, a sequential ProcessId was assigned to each specific process.
### Dataset Characteristics
Here, we provide characteristics of the gathered dataset. The dataset contains \(165\) cases (or recordings) that are related to \(50\) unique processes. There are \(5718\) events (or steps). Each process has at least \(3\) cases. Each case contains, on average, \(35\) events. The information available for each event/step is summarized in Table 1.
The activities are labeled by human judges; since the judges used different terminology for similar activities, there are \(752\) unique activities in this dataset.
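As a quick illustration (ours, not part of the original paper), the headline statistics above can be recomputed from the released CSV. The sketch below assumes pandas and the column names documented in Table 1; the exact file layout should be checked against the repository's data_format.md.

```python
import pandas as pd

df = pd.read_csv("Record_Business_Tasks.csv")  # file linked in Section 4

print("events:", len(df))                                     # 5718 per the text
print("cases:", df["RecordingId"].nunique())                  # 165
print("processes:", df["ProcessId"].nunique())                # 50
print("unique activities:", df["label_EventName"].nunique())  # 752

print("avg events per case:", df.groupby("RecordingId").size().mean())  # ~35
print("min cases per process:",
      df.groupby("ProcessId")["RecordingId"].nunique().min())           # 3
```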
#### 2.2.1 Data model and schema of the resource
Here, we explain each column of the dataset.
**ProcessId:**: ID of the process. The process is defined on a high level by up to 3 tasks which need to be achieved. These tasks use various business applications, detailed in the ApplicationParentWindowName column described below.
**RecordingId:**: ID of the recording within the process. Every process is typically recorded 3 times. In this context, the Recording ID can be considered a Case ID.
**StepId:**: ID of the step within the recording of the process. A step refers to a discrete action or operation performed as part of a broader task. It represents a specific unit of work that contributes to achieving the overall goal, and is the most fine-grained operation considered, such as a click of a mouse or a stroke of a key.
**StepName:**: Step Name is a standardized name of the action taken by the user. It is one of the 19 values, with the distribution shown in Table 2.
**TimeStamp:**: Time when the specific step was recorded, automatically captured by the system. The granularity of the timestamp is in seconds.
**StepDescription:**: Extended description of each step automatically generated by the software. E.g. _Check Box 'All day event' in Window 'Untitled - Appointment'_
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline Column Name & Details \\ \hline StepId & Description: The number of the step within the recording. \\ & Example: 1 \\ \hline RecordingId & Description: ID of the recording. \\ & Example: Recording\_4 \\ \hline ProcessId & Description: ID of the process, for which the recording was done. \\ & Example: Process\_3 \\ \hline TimeStamp & Description: Timestamp of the step within the recording. \\ & Example: 2022-02-21T08:19:12+00:00 \\ \hline StepName & Description: Name of the step. It is one of 19 different options. \\ & Example: Press button in window \\ \hline StepDescription & Description: More detailed description of the step. \\ & Example: Button ’New Tab’ in Window ’Process... - Microsoft Edge’ \\ \hline ApplicationProcessName & Description: Process Name, taken from the opened application. \\ & Example: msedge \\ \hline ApplicationParentWindowName & Description: Parent Window Name, taken from the opened application. \\ & Example: Process advisor | Power Automate and 1 more page \\ & – Personal - Microsoft Edge \\ \hline AutomationCode & Description: A script which could be used to automate that step. \\ & Example: [”UIAutomation.PressButton Button: \\ & appmask[’Window ’Process... - Microsoft Edge”] \\ & [’Button ’New Tab”] n”] \\ \hline label\_EventName & Description: Event name given by a person making the recording for the grouped steps. \\ & Example: Check weather condition. \\ \hline label\_EventId & Description: ID of the group of steps, called an Event. \\ & Example: 2 \\ \hline \end{tabular}
\end{table}
Table 1: The different columns that describe an event in the dataset.
**ApplicationProcessName:**: Name of the process associated with the application. A statistical analysis of this column can be found in Table 3.
**ApplicationParentWindowName:**: The name of the window as shown by the parent application. For example, opening accuweather in the Microsoft Edge browser while having 2 other tabs open is shown as _accuweather - Search and 2 more pages - Personal - Microsoft Edge_
**AutomationCode:**: Code which can be used for automating the step.
**NextStepId:**: ID of the next step. This ID corresponds to the StepId field above. It can be used for chaining steps in the event that a step has been deleted.
**label_EventName:**: Human label for the Event Name, which groups certain steps together into an event.
**label_EventId:**: Automatically assigned ID for the Event Name given by the human judge.
## 3 Conclusion and Use cases
In this paper, we describe a dataset that aims to provide comprehensive coverage of business assignments commonly encountered by office workers across various industries. The dataset can be used to facilitate the development of new algorithms for Robotic Process Automation (RPA), with a focus on utilizing Microsoft Power Platform Connectors. We describe the process of creating the dataset, its characteristics, and its main fields. Overall, this dataset serves as a valuable resource for advancing research in the fields of task mining and automation.
The current event log can be used for various use cases. In the following, we explain two possible use cases.
\begin{table}
\begin{tabular}{|l|l|} \hline StepName & Count \\ \hline Click UI element in window & 2504 \\ Press button in window & 1406 \\ Populate text field in window & 718 \\ Select menu option in window & 404 \\ Send keys & 387 \\ Drag and drop UI element in window & 110 \\ Select tab in window & 75 \\ Set checkbox state in window & 44 \\ Set drop-down list value in window & 20 \\ Select radio button in window & 16 \\ MouseAndKeyboard.SendKeys.FocusAndSendKeys & 15 \\ Expand/collapse tree node in window & 9 \\ Comment & 2 \\ Move window & 2 \\ Prepare a form for employees feedback & 0 \\ Close window & 1 \\ Resize window & 1 \\ Locate the Notification and review it in Inbox of mailing app & 1 \\ Get details of a UI element in window & 1 \\ \hline \end{tabular}
\end{table}
Table 2: Step names and their frequency
### Process Mining for Automation
One of the important scenarios in which process mining can help industries reduce time and cost is process automation. In this regard, using process mining, we are able to detect common frequent patterns in the processes and recommend them for possible automation.
As explained above, this dataset contains at least \(3\) recordings, made by different humans, for each task. Using this dataset, we are able to assess how well different automation detection methods detect frequent patterns, and also what the possible reduction in the required time would be if we automated those tasks. A sketch of such a pattern search is given below.
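The following is a minimal sketch (ours, not a method from the paper) of that search: it treats each recording's ordered StepName sequence as a trace and counts the n-grams shared by multiple recordings of the same process. It assumes the dataframe `df` from the snippet in Section 2.2; the process ID "Process_33" is taken from the example above.

```python
from collections import Counter

def ngrams(seq, n):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def frequent_patterns(df, process_id, n=3, min_support=2):
    proc = df[df["ProcessId"] == process_id].sort_values(["RecordingId", "StepId"])
    counts = Counter()
    for _, rec in proc.groupby("RecordingId"):
        # Count each pattern at most once per recording (support semantics).
        counts.update(set(ngrams(list(rec["StepName"]), n)))
    return [(p, c) for p, c in counts.most_common() if c >= min_support]

for pattern, support in frequent_patterns(df, "Process_33")[:5]:
    print(support, "recordings share:", " -> ".join(pattern))
```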
### Bottleneck analysis with task mining
Bottleneck analysis is a crucial aspect of process improvement, aiming to identify and address bottlenecks that hinder workflow efficiency. Task mining, a technique that leverages process execution data from digital traces, offers a powerful approach to conduct bottleneck analysis. By capturing and analyzing user interactions with digital systems, task mining provides insights into the actual execution of tasks and reveals potential bottlenecks in the process flow.
By utilizing the provided dataset and its recordings, task mining algorithms can identify patterns and dependencies to uncover potential bottlenecks in the process flow for specific tasks. The dataset's rich information, including step details and contextual data, enables a holistic analysis of bottlenecks.
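For instance (a sketch of ours, assuming the dataframe `df` from Section 2.2), step durations can be approximated from consecutive timestamps within a recording, and the step names with the largest typical durations flagged as bottleneck candidates. Note the timestamps have second granularity, so very short steps will show a duration of 0.

```python
df["TimeStamp"] = pd.to_datetime(df["TimeStamp"])
df = df.sort_values(["RecordingId", "StepId"])

# Duration of a step ~ gap to the next step's timestamp in the same recording.
df["duration_s"] = (
    df.groupby("RecordingId")["TimeStamp"].shift(-1) - df["TimeStamp"]
).dt.total_seconds()

# Step names with the largest median duration are bottleneck candidates.
print(df.groupby("StepName")["duration_s"].median()
        .sort_values(ascending=False).head())
```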
## 4 Additional Information
Link to the Github: [https://github.com/microsoft/50BusinessAssignmentsLog](https://github.com/microsoft/50BusinessAssignmentsLog)
Link to the Readme file within this repository, containing information about the dataset: [https://github.com/microsoft/50BusinessAssignmentsLog/blob/main/data/data_format.md](https://github.com/microsoft/50BusinessAssignmentsLog/blob/main/data/data_format.md)
Link to the document including instruction on how to download it: [https://github.com/microsoft/50BusinessAssignmentsLog/blob/main/README.md](https://github.com/microsoft/50BusinessAssignmentsLog/blob/main/README.md)
Link to the dataset: [https://github.com/microsoft/50BusinessAssignmentsLog/blob/main/data/Record_Business_Tasks.csv](https://github.com/microsoft/50BusinessAssignmentsLog/blob/main/data/Record_Business_Tasks.csv)
Link to the license: [https://github.com/microsoft/50BusinessAssignmentsLog/blob/main/LICENSE](https://github.com/microsoft/50BusinessAssignmentsLog/blob/main/LICENSE)
\begin{table}
\begin{tabular}{|l|l|} \hline ApplicationProcessName & Count of steps \\ \hline chrome & 1661 \\ msedge & 1128 \\ firefox & 884 \\ Teams & 570 \\ PBIDesktop & 125 \\ OUTLOOK & 119 \\ ApplicationFrameHost & 107 \\ Sams & 46 \\ CoollePDFConverter & 42 \\ SearchApp & 38 \\ OneDrive & 27 \\ ONENOTE & 26 \\ explorer & 21 \\ Skype & 19 \\ EXCEL & 16 \\ ShellExperienceHost & 13 \\ cmd & 3 \\ \hline \end{tabular}
\end{table}
Table 3: Frequencies of ApplicationProcessName |
2305.05949 | Scalable and Precise Application-Centered Call Graph Construction for
Python | Call graph construction is the foundation of inter-procedural static
analysis. PyCG is the state-of-the-art approach for constructing call graphs
for Python programs. Unfortunately, PyCG does not scale to large programs when
adapted to whole-program analysis where application and dependent libraries are
both analyzed. Moreover, PyCG is flow-insensitive and does not fully support
Python's features, hindering its accuracy. To overcome these drawbacks, we
propose a scalable and precise approach for constructing application-centered
call graphs for Python programs, and implement it as a prototype tool JARVIS.
JARVIS maintains a type graph (i.e., type relations of program identifiers) for
each function in a program to allow type inference. Taking one function as an
input, JARVIS generates the call graph on-the-fly, where flow-sensitive
intra-procedural analysis and inter-procedural analysis are conducted in turn
and strong updates are conducted. Our evaluation on a micro-benchmark of 135
small Python programs and a macro-benchmark of 6 real-world Python applications
has demonstrated that JARVIS can significantly improve PyCG by at least 67%
faster in time, 84% higher in precision, and at least 20% higher in recall. | Kaifeng Huang, Yixuan Yan, Bihuan Chen, Zixin Tao, Xin Peng | 2023-05-10T07:40:05Z | http://arxiv.org/abs/2305.05949v5 | # Scalable Demand-Driven Call Graph Generation for Python
###### Abstract.
Call graph generation is the foundation of inter-procedural static analysis. PyCG is the state-of-the-art approach for generating call graphs for Python programs. Unfortunately, PyCG does not scale to large programs when adapted to whole-program analysis where dependent libraries are also analyzed. Further, PyCG does not support demand-driven analysis where only the reachable functions from given entry functions are analyzed. Moreover, PyCG is flow-insensitive and does not fully support Python's features, hindering its accuracy.
To overcome these drawbacks, we propose a scalable demand-driven approach for generating call graphs for Python programs, and implement it as a prototype tool Jarvis. Jarvis maintains an assignment graph (i.e., points-to relations between program identifiers) for each function in a program to allow reuse and improve scalability. Given a set of entry functions as the demands, Jarvis generates the call graph on-the-fly, where flow-sensitive intra-procedural analysis and inter-procedural analysis are conducted in turn. Our evaluation on a micro-benchmark of 135 small Python programs and a macro-benchmark of 6 real-world Python applications has demonstrated that Jarvis can significantly improve over PyCG: at least 67% faster in time, 84% higher in precision, and at least 10% higher in recall.
Footnote †: K. Huang is the corresponding author.
## 1. Introduction
Python has become one of the most popular programming languages in recent years (Pydron, 2017). The prevalent adoption of Python in a variety of application domains calls for great needs of static analysis to ensure software quality. Call graph generation is the foundation of inter-procedural static analysis. It embraces a wide scope of static analysis tasks, e.g., security analysis (Kang et al., 2018; Kang et al., 2019; Wang et al., 2020; Wang et al., 2020), dependency management (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) and software debloating (Kang et al., 2019; Wang et al., 2020).
It is challenging to generate a precise and sound call graph for Python. Python has dynamic language features, as any interpreted language does, which makes the analysis more complicated compared with compiled languages (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020). For example, dynamically typed variables demand that the analysis undertake a precise inter-procedural points-to analysis to resolve the types of variables. Several approaches, i.e., Pyan (Wang et al., 2020), Depends(Zhou et al., 2020) and PyCG (Wang et al., 2020), have been recently proposed to generate call graphs for Python programs. Specifically, PyCG is the state-of-the-art, which achieves the best precision, recall, and time and memory performance (Wang et al., 2020). It conducts a flow-insensitive inter-procedural points-to analysis using a fixed-point algorithm. This algorithm takes an unfixed number of iterations to update an _assignment graph_ (i.e., points-to relations between program identifiers) until it is unchanged. After the assignment graph for the whole program is constructed, the call graph is generated.
However, PyCG still suffers several drawbacks. First, PyCG conducts an exhaustive analysis on the application program only, without analyzing any dependent library (i.e., as shown by the left part of Fig. 1). As a result, the generated call graph is infeasible for static analysis tasks such as dependency management (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) and software debloating (Kang et al., 2019; Wang et al., 2020) where the calls into directly and transitively dependent libraries are needed. We adapt PyCG to whole-program analysis by further analyzing all dependent libraries (as shown by the middle part of Fig. 1), but find that it does not scale to large programs that usually have many dependent libraries.
Second, due to the intrinsic design of its exhaustive analysis, PyCG does not support demand-driven analysis, which only analyzes the reachable functions from a set of given entry functions (i.e., the demands) rather than analyzing all functions (i.e., as illustrated by the right part of Fig. 1). However, demand-driven analysis is needed for static analysis tasks such as dependency management (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) and software debloating (Kang et al., 2019; Wang et al., 2020). A workaround to achieve demand-driven analysis is to prune the call graph generated by PyCG according to the given entry functions, but it wastes computation time on analyzing unreachable functions and suffers from the scalability issue.
Third, PyCG conducts a flow-insensitive analysis, which ignores control flows and over-approximately computes points-to relations. This over-approximation introduces false positives. In addition, PyCG introduces false negatives because it does not fully support Python's language features (e.g., built-in types and functions).
To overcome these drawbacks, we propose a scalable demand-driven call graph generation approach for Python, and implement it as a prototype tool Jarvis. Jarvis has four key characteristics which are different from PyCG. First, Jarvis is scalable to large programs as it maintains an assignment graph for each function in a program. Such a design allows Jarvis to reuse the assignment graph of a function without reevaluating the function at each call site. Second, Jarvis generates call graphs on-the-fly, i.e., it uses the intermediate assignment graph before the call site to infer the callee function. Therefore, Jarvis inherently supports customized analysis scope (e.g., application-program, whole-program, or any intermediate scope). Third, Jarvis is demand-driven. Given a set of entry functions as the demands, Jarvis
Figure 1. Analysis Scope of Call Graph Generation
only analyzes the reachable functions from the entry functions. Therefore, unnecessary computation for functions not demanded is skipped. Fourth, Jarvis is flow-sensitive to improve its precision. It creates a copy of the assignment graph when control flow diverges in a function, and merges assignment graphs when control flow converges. Besides, Jarvis supports Python's language features more comprehensively, which helps to improve its recall.
We evaluate the efficiency and effectiveness of Jarvis on a micro-benchmark of 135 small Python programs (covering a wide range of language features) and a macro-benchmark of 6 real-world Python applications. For efficiency, Jarvis is at least 67% faster than PyCG in the scenario of exhaustive whole-program analysis; and Jarvis takes, on average, 3.84 seconds for 2.8k lines of application code in the scenario of demand-driven whole-program analysis. For effectiveness, Jarvis improves PyCG by 84% in precision and at least 10% in recall in demand-driven whole-program analysis.
In summary, this work makes the following contributions.
* We proposed Jarvis to scalably generate call graphs for Python programs on demand via flow-sensitive on-the-fly analysis.
* We conducted experiments on two benchmarks to demonstrate the improved efficiency and effectiveness over the state-of-the-art.
## 2. Background
We first introduce the assignment graph in PyCG and then discuss the drawbacks of PyCG using a motivating example.
### Assignment Graph in PyCG
As Python has higher-order functions, module imports and object-oriented programming features, the assignment graph in PyCG maintains the assignment relations between program identifiers, i.e., variables, functions, classes and modules. It has a broader scope than a traditional points-to graph by further including functions, classes and modules. PyCG uses a fixed-point algorithm to iteratively update the assignment graph by resolving unknown identifiers until the assignment graph is fixed. After it constructs the assignment graph for a Python program, it utilizes the assignment graph to generate the call graph by resolving all calls to potentially invoked functions.
### Drawbacks of PyCG
To illustrate the drawbacks of PyCG, we use a motivating example, as shown in Fig. 2, which consists of six modules (i.e., files). PyCG takes four iterations to construct the assignment graph, as shown in Fig. 3. Specifically, the first iteration parses imports, classes, functions, returns, assignments, etc. to update the assignment graph (corresponding to the black ovals and arrows in Fig. 3). For the import at Line 1 in f.py, PyCG adds a points-to relation f.foo1 \(\rightarrow\) a.foo1, meaning that function foo1 in module \(\mathtt{f}\) is function foo1 in module a. For the class creation and assignment at Line 5 in f.py, PyCG adds a points-to relation f.m \(\rightarrow\) f.Cls, denoting that variable m in module f is an instance of class f.Cls. Similarly, for each function in class Cls in e.py, a variable self is created and points to class e.Cls (e.g., e.Cls.apply.self \(\rightarrow\) e.Cls). For the return at Line 3 in a.py, a virtual variable <ret> is added and points to a.baz (i.e., a.foo1.<ret> \(\rightarrow\) a.baz), meaning that the return of function foo1 in module a points to the function baz in module a. At the first iteration, PyCG does not utilize the points-to relations in the assignment graph. Thus, the invoking variable (e.g., f.m) and the function definition for a function call (e.g., f.m.change) are still unknown.
At the following iterations, PyCG computes the transitive closure of the assignment graph, applies simplifications (Zhou et al., 2017) and updates the assignment graph. At the second iteration (corresponding to the blue ovals and arrows in Fig. 3), PyCG resolves the function calls at Lines 7 and 9 in f.py to e.Cls.change using the points-to relation f.m \(\rightarrow\) e.Cls, and thus two points-to relations e.Cls.change.f \(\rightarrow\) f.foo2 and e.Cls.change.f \(\rightarrow\) f.foo3 are added. Similarly, the function call at Line 10 in f.py is resolved to e.Cls.apply, and hence variable f.r is added and points to the virtual variable of the return of e.Cls.apply (i.e., f.r \(\rightarrow\) e.Cls.apply.<ret>).
At the third iteration (corresponding to the green ovals and arrows in Fig. 3), the pointed functions of e.Cls.func are collapsed into one set {a.foo1, b.foo2, c.foo3} for simplification and performance. Then, for the return at Line 7 in e.py, the returns of these pointed functions are pointed to by the return of e.Cls.apply (e.g., e.Cls.apply.<ret> \(\rightarrow\) a.foo1.<ret>). At the fourth iteration, PyCG makes no changes to the assignment graph, and then starts to generate the call graph on the basis of the assignment graph.
As illustrated by the previous example, there are several drawbacks in PyCG's call graph generation. PyCG undergoes several iterations to obtain a fixed assignment graph, and each module and function is analyzed in each iteration with no discrimination, causing unnecessary computation. Therefore, PyCG suffers from the scalability issue and does not support demand-driven analysis. Moreover, PyCG conducts a flow-insensitive inter-procedural analysis, which introduces false positives. PyCG generates three calls from e.Cls.apply to a.foo1, b.foo2 and c.foo3. However, the call to a.foo1 is a false positive because PyCG disregards the flow of Lines 5, 7 and 9 in f.py. Due to this false positive, PyCG introduces another false positive, i.e., the call from f.main to d.baz (here, a code fragment located at module level, e.g., Lines 5-11 in f.py, is named main). A runnable reconstruction of the example is sketched below.
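For concreteness, the following is a single-file reconstruction (ours, inferred from the line numbers cited above; the real Fig. 2 splits this code across modules a, b, c, d, e and f) of the behavior the example describes. At run time, self.func can only be foo2 or foo3 when apply is called, which is exactly the flow a flow-insensitive analysis misses.

```python
import random

# a.py
def baz(): return "a.baz"
def foo1(): return baz            # Line 3 in a.py: foo1's return points to baz

# b.py / c.py
def foo2(): return "b.foo2"
def foo3(): return "c.foo3"

# e.py
class Cls:
    def __init__(self, f):
        self.func = f
    def change(self, f):
        self.func = f
    def apply(self):              # Line 7 in e.py
        return self.func()

# f.py's module-level code ("f.main"), line numbers as cited above
m = Cls(foo1)                     # f:5
if random.random() < 0.5:         # the branch implied by the CFG in Fig. 5
    m.change(foo2)                # f:7
else:
    m.change(foo3)                # f:9
r = m.apply()                     # f:10 -- func is foo2 or foo3 here, never
                                  # foo1, so e.Cls.apply -> a.foo1 is a FP
```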
Figure 3. Assignment Graph for PyCG
Figure 2. A Motivating Example
## 3. Approach
We first give the notions and domains of our analysis. Then, we describe the overview of Jarvis. Finally, we elaborate on Jarvis in detail.
### Notions and Domains
Our analysis works on the AST representation of Python programs using the built-in _ast_ library in Python. Identifiers are important elements in Python programs, and expressions are considered the basic analysis blocks, as they are atomic and can be composed into complicated program elements. Fig. 4 shows the basic notions used in our analysis. Specifically, each identifier definition \(d\) is defined as a tuple \((\tau,\phi,n)\), where \(\tau\) denotes the identifier type, \(\phi\) denotes the identifier namespace, and \(n\) denotes the identifier name. \(\tau\) can be one of five types, i.e., module in the application (**mod**), module in external dependent libraries (**ext_mod**), class (**cls**), function (**func**), and variable (**var**). Each expression \(e\) can be of various types, and a full list of expressions can be found at the official Python website (Zarvis, 2019).
Then, we introduce the six domains that are maintained in our analysis, i.e., _function assignment graph_, _control flow graph_, _function summary_, _class summary_, _import summary_, and _call graph_.
_Function Assignment Graph_ (\(\mathcal{FAG}\)). A function assignment graph (FAG) maintains points-to relations of a function, which is different from the program-level assignment graph in PyCG. It is designed at the function level to allow reuse and improve scalability (see Sec. 3.3.2). Formally, a function assignment graph is denoted as a 3-tuple \((Def,Expr,Pts)\), where \(Def\) denotes identifier definitions in a function, \(Expr\) denotes expressions in a function, and \(Pts\) denotes points-to relations between identifiers. Each points-to relation \(pts\in Pts\) is denoted as a 3-tuple \(\langle d_{1},d_{2},e\rangle\) (or \(d_{1}\overset{e}{\rightarrow}d_{2}\)), where \(d_{1}\), \(d_{2}\in Def\), and \(e\in Expr\) denotes the evaluated expression that results in \(pts\). Here \(e\) facilitates flow-sensitive analysis (see Sec. 3.3.4).
Hereafter, we use \(\mathcal{FAG}_{in}\) to denote the initial FAG before the evaluation of the first expression in a function, which contains points-to relations about parameter variables that are passed from its caller. We use \(\mathcal{FAG}_{e}\) to denote the intermediate FAG after the expression \(e\) is evaluated. We use \(\mathcal{FAG}_{R}\) to denote the final FAG after all expressions are evaluated. We use \(\mathcal{FAG}_{out}\) to denote the output FAG which contains the final points-to relations about parameter variables that will be passed back to its caller.
_Control Flow Graph_ (\(\mathcal{CFG}\)). The control flow graph of a function maintains the control dependencies between expressions. It is denoted as a 4-tuple \(\langle Expr,Ctrl,e_{en},E_{r}\rangle\), where \(Expr\) denotes expressions in a function, \(Ctrl\) denotes control flows in a function, \(e_{en}\) denotes the entry expression, and \(E_{r}\) denotes the return expressions (either explicit returns or implicit returns). Each control flow \(ctrl\in Ctrl\) is denoted as a 2-tuple \(\langle e_{1},e_{2}\rangle\), representing the control flow from expression \(e_{1}\) to expression \(e_{2}\). Notice that we add a virtual dummy expression \(e_{dum}\) that all return expressions flow to.
_Example 3.1_.: The control flow graph of function f.main (i.e., Lines 5-11 in f.py) in Fig. 2 is shown in Fig. 5. \(Expr\) consists of six expressions, i.e., \(e_{1}\), \(e_{2}\), \(e_{3}\), \(e_{4}\), \(e_{5}\) and \(e_{dum}\). \(e_{1}\) is the entry expression (i.e., \(e_{en}=e_{1}\)). \(e_{5}\) is the only return expression (i.e., \(E_{r}=\{e_{5}\}\)). \(Ctrl\) contains \(\langle e_{1},e_{2}\rangle\), \(\langle e_{1},e_{3}\rangle\), \(\langle e_{2},e_{4}\rangle\), \(\langle e_{3},e_{4}\rangle\), \(\langle e_{4},e_{5}\rangle\), and \(\langle e_{5},e_{dum}\rangle\).
_Function Summary_ (\(\mathcal{F}\)). The function summary contains a set of functions that are visited in our analysis. Each function \(f\in\mathcal{F}\) is denoted as a 2-tuple \(\langle d,P\rangle\), where \(d\in Def\) denotes the definition of \(f\), and \(P\) denotes the set of parameter names of \(f\). Notice that code fragment located in a module, e.g., Line 5-11 in f.py in Fig. 2, is regarded as a virtual function, which is named main.
_Class Summary_ (\(\mathcal{C}\)). The class summary is denoted as a 2-tuple \(\langle Hier,Incl\rangle\), where \(Hier\) denotes class hierarchy (i.e., inheritance relations between classes), and \(Incl\) denotes the inclusion relations from class to its included function definitions. Each entry in \(Hier\) is denoted as a 2-tuple \(\langle d_{clsb},d_{clss}\rangle\), where \(d_{clsb}\) and \(d_{clss}\in Def\) denote the base class and sub class, respectively. Each entry in \(Incl\) is denoted as a 2-tuple \(\langle d_{c},d_{f}\rangle\), where \(d_{c}\in Def\) denotes a class, and \(d_{f}\in Def\) denotes a function definition in the class.
_Import Summary_ (\(\mathcal{I}\)). The import summary contains points-to relations from importing definition to imported definition. Each entry in \(\mathcal{I}\) is denoted as a 3-tuple \(\langle d_{s},d_{t},e\rangle\), where \(d_{s}\in Def\) denotes the importing definition, \(d_{t}\in Def\) denotes the imported definition, and \(e\) denotes the import expression. For example, for the import expression at Line 1 in f.py in Fig. 2, a.foo1 is the imported definition, and f.foo1 is the importing definition.
_Call Graph_ (\(\mathcal{CG}\)). The call graph contains call relations in a program. It is denoted as a 2-tuple \(\langle V,E\rangle\), where each entry in \(E\) is denoted as a 2-tuple \(\langle f_{er},f_{ee}\rangle\), representing a call relation from the caller function \(f_{er}\in V\subseteq Def\) to the callee function \(f_{ee}\in V\).
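To make these domains concrete, the following Python sketch (our reading of the definitions above, with names of our choosing; not JARVIS's actual implementation) encodes an identifier definition, a FAG whose edges remember the producing expression, and the CFG of f.main from Example 3.1.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Def:
    tau: str  # identifier type: "mod" | "ext_mod" | "cls" | "func" | "var"
    phi: str  # namespace, e.g. "f" or "e.Cls"
    n: str    # name, e.g. "foo1"

@dataclass
class FAG:
    # pts maps a definition to a set of (target, producing-expression) pairs;
    # keeping the expression on each edge enables the flow-sensitive ordering
    # used in Sec. 3.3.
    pts: dict = field(default_factory=dict)

    def add(self, d1: Def, d2: Def, e: str) -> None:
        self.pts.setdefault(d1, set()).add((d2, e))

@dataclass
class CFG:
    ctrl: set   # control-flow edges (e1, e2) between expressions
    e_en: str   # entry expression
    e_r: set    # return expressions (e_dum is the shared sink)

# The CFG of f.main from Example 3.1.
cfg_fmain = CFG(
    ctrl={("e1", "e2"), ("e1", "e3"), ("e2", "e4"),
          ("e3", "e4"), ("e4", "e5"), ("e5", "e_dum")},
    e_en="e1",
    e_r={"e5"},
)
```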
Figure 4. Notions of Our Analysis
Figure 5. Control Flow Graph of f.main
Figure 6. Approach Overview of Jarvis
### Overview of Jarvis
The overview of Jarvis is shown in Fig. 6. Given an entry function, it first runs intra-procedural analysis on this function, and then runs inter-procedural analysis and intra-procedural analysis in turn.
**Intra-Procedural Analysis.** When running intra-procedural analysis on a function \(f\), Jarvis takes as input \(\theta_{call}\), which denotes the points-to relations about argument variables passed from the call expression _call_ that invokes \(f\). For an entry function, its \(\theta_{call}\) is empty by default. Jarvis takes four steps to complete intra-procedural analysis. First, Jarvis initializes \(\mathcal{FAG}_{in}\) before evaluating the function's expressions. Then, Jarvis reuses \(\mathcal{FAG}_{out}\) if the function has been visited before and the points-to relations about argument variables are the same as before. Otherwise, Jarvis updates the FAG. It applies a transfer function to evaluate each expression \(e\) in the function, and adds the evaluation result \(\Delta_{e}\) to obtain \(\mathcal{FAG}_{e}\). Specifically, if \(e\) is a call expression \(call^{\prime}\), Jarvis runs inter-procedural analysis to obtain its evaluation result \(\Delta_{call^{\prime}}\). Finally, after all expressions are evaluated, Jarvis computes \(\mathcal{FAG}_{out}\), and passes \(\mathcal{FAG}_{in}\) and \(\mathcal{FAG}_{out}\) back to the inter-procedural analysis on \(call\).
**Inter-Procedural Analysis.** Jarvis starts inter-procedural analysis when a call expression \(call^{\prime}\) in \(f\) is resolved into a callee function \(f^{\prime}\). Specifically, Jarvis first creates a call relation \(\langle f,f^{\prime}\rangle\) and updates it into the call graph. Then, Jarvis computes the points-to relations about argument variables in \(call^{\prime}\) (i.e., \(\theta_{call^{\prime}}\)). Next, Jarvis runs intra-procedural analysis on \(f^{\prime}\) using \(\theta_{call^{\prime}}\), and passes \(\mathcal{FAG}^{\prime}_{in}\) and \(\mathcal{FAG}^{\prime}_{out}\) back. Finally, Jarvis computes the changed points-to relations about argument variables (i.e., \(\Delta_{call^{\prime}}\)) from \(\mathcal{FAG}^{\prime}_{in}\) to \(\mathcal{FAG}^{\prime}_{out}\), and passes \(\Delta_{call^{\prime}}\) back to the intra-procedural analysis on \(f\) in order to reflect the evaluation results from \(call^{\prime}\).
After our intra-procedural analysis and inter-procedural analysis finish, we can directly obtain the demanded call graph because it is constructed on-the-fly in our inter-procedural analysis.
### Intra-Procedural Analysis
The algorithm of our intra-procedural analysis is presented in Alg. 1. It mainly consists of four steps, i.e., _Initialize FAG, Reuse FAG, Update FAG_, and _Compute Output FAG_.
#### 3.3.1. Initialize FAG
Given the points-to relations \(\theta_{call}\) about argument variables in the call expression \(call\) that invokes function \(f\), this step initializes \(\mathcal{FAG}^{f}_{in}\) (Lines 2-6 in Alg. 1), i.e., the initial FAG of \(f\) before evaluating the first expression in \(f\). First, it computes the points-to relations about parameter variables (i.e., \(f.P\)) according to \(\theta_{call}\), based on the mapping between parameters and passed arguments, and puts the result into a temporary FAG \(g_{in}\) (Line 2). Then, if the module \(m_{f}\) where \(f\) is located (which can be derived from \(f.d.\phi\)) has not been visited before, it updates \(\mathcal{F}\), \(\mathcal{C}\) and \(\mathcal{I}\) by parsing the code of \(m_{f}\) (Lines 3-5). Finally, it adds the global points-to relations resulting from import expressions to \(g_{in}\) (Line 6). Now \(g_{in}\) is the initial FAG of \(f\), and will be assigned to \(\mathcal{FAG}^{f}_{in}\) (Line 10).
```
Input: \(f\), \(\theta_{call}\)
Output: \(\mathcal{FAG}^{f}_{in}\), \(\mathcal{FAG}^{f}_{out}\)
1:  function intra_analysis
2:    \(g_{in}\) = computeParamPts(\(f.P\), \(\theta_{call}\))
3:    if not isVisited(\(m_{f}\)) then
4:      \(\mathcal{F}, \mathcal{C}, \mathcal{I}\) ← previsitModule(\(m_{f}\))
5:    end if
6:    \(g_{in}\) ← \(g_{in} \cup \mathcal{I}_{m_{f}}\)
7:    if isVisited(\(f\)) and isEqual(\(g_{in}\), \(\mathcal{FAG}^{f}_{in}\)) then
8:      return \(\mathcal{FAG}^{f}_{in}\), \(\mathcal{FAG}^{f}_{out}\)
9:    end if
10:   \(\mathcal{FAG}^{f}_{in}\) = \(g_{in}\)
11:   for each \(e \in\) preOrder(AST of \(f\)) do
12:     for each \(e.p \in\) parents of \(e\) with outDegree(\(e.p\)) > 1 do
13:       \(\mathcal{FAG}^{f}_{e.p}\) = copy(\(\mathcal{FAG}^{f}_{e.p}\))
14:     end for
15:     if \(e\) has one parent then
16:       \(\Delta_{e}\) = applyTransferRule(\(f\), \(e\), \(\mathcal{FAG}^{f}_{e.p}\))
17:       \(\mathcal{FAG}^{f}_{e}\) = updateFAG(\(\mathcal{FAG}^{f}_{e.p}\), \(\Delta_{e}\), \(e\))
18:     end if
19:     if \(e\) has more than one parent then
20:       \(g_{merge} = \bigcup_{e.p \in parents\ of\ e} \mathcal{FAG}^{f}_{e.p}\)
21:       \(\Delta_{e}\) = applyTransferRule(\(f\), \(e\), \(g_{merge}\))
22:       \(\mathcal{FAG}^{f}_{e}\) = updateFAG(\(g_{merge}\), \(\Delta_{e}\), \(e\))
23:     end if
24:   end for
25:   \(\mathcal{FAG}^{f}_{R} = \bigcup_{e \in \mathcal{CFG}_{f}.E_{r}} \mathcal{FAG}^{f}_{e}\)
26:   setVisited(\(f\))
27:   \(\mathcal{FAG}^{f}_{out}\) = computeOutputFAG(\(\mathcal{FAG}^{f}_{in}\), \(\mathcal{FAG}^{f}_{R}\))
28:   return \(\mathcal{FAG}^{f}_{in}\), \(\mathcal{FAG}^{f}_{out}\)
29: end function
```
**Algorithm 1** Intra-Procedural Analysis
#### 3.3.2. Reuse FAG
If the function has been visited before, the FAG of \(f\) has already been constructed, which provides the opportunity to reuse the FAG and improve scalability. To this end, this step first determines whether \(f\) has been visited before (Line 7). If yes, it further determines whether \(g_{in}\) equals \(\mathcal{FAG}^{f}_{in}\) constructed in the previous visit (Line 7). If yes (meaning that the FAG can be reused), it directly returns the previously constructed \(\mathcal{FAG}^{f}_{in}\) and \(\mathcal{FAG}^{f}_{out}\) (i.e., the final points-to relations about parameter variables) (Line 8). Otherwise, it goes to the next step to build the FAG. A memoization sketch of this check follows.
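A minimal sketch (names are ours; it reuses the FAG class from the sketch in Sec. 3.1) of the reuse check in Lines 7-9 of Alg. 1: the output FAG is replayed only when the function was visited before with identical initial points-to facts about its parameters and imports.

```python
# Cache: function -> (FAG_in, FAG_out) recorded at its previous visit.
_fag_cache = {}

def reuse_or_none(f, g_in):
    cached = _fag_cache.get(f)
    # Reuse only if f was visited with an identical initial FAG;
    # otherwise fall through to the Update-FAG step.
    if cached is not None and cached[0].pts == g_in.pts:
        return cached
    return None
```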
#### 3.3.3. Update FAG
Given \(\mathcal{FAG}^{f}_{in}\), this step iterates over each expression by a preorder traversal on the AST of \(f\) (Lines 11-24 in Alg. 1). The control flow graph (i.e., \(\mathcal{CFG}_{f}\)) and the FAG are updated using each expression's evaluated result in each iteration. \(\mathcal{CFG}_{f}\) is used to enable flow-sensitive analysis, but we omit its construction details, which are straightforward. In each iteration, there are three steps depending on the position of the evaluated expression \(e\) in \(\mathcal{CFG}_{f}\).
First, for each of \(e\)'s parent expressions (denoted as \(e.p\)), if the out-degree of \(e.p\) in \(\mathcal{C}\mathcal{F}_{f}\) is larger than 1 (Line 12), it means that the control flow diverges with \(e\) as the first expression on the diverged flow. To enable flow-sensitive analysis, it creates a copy of \(\mathcal{FAG}^{f}_{e,p}\), i.e., the FAG after \(e.p\) is evaluated, for further update (Line 13).
Second, if \(e\) has one parent in \(\mathcal{CFG}_{f}\), the resulting points-to relations \(\Delta_{e}\) from evaluating \(e\) are updated to \(\mathcal{FAG}^{f}_{e.p}\) for producing \(\mathcal{FAG}^{f}_{e}\) (Lines 15-18). The evaluation applies a transfer function,
which is comprised of a list of transfer rules with respect to different expressions. Part of the transfer rules are reported in Fig. 7, and a full list is available at our website. Each transfer rule generates new points-to relations \(\Delta_{e}\). Notice that if \(e\) is a call expression, it runs inter-procedural analysis to compute \(\Delta_{e}\) (see Sec. 3.4.3). After \(\Delta_{e}\) is computed, \(\mathcal{FAG}^{f}_{e}\) is computed by adding \(\Delta_{e}\) to \(\mathcal{FAG}^{f}_{e.p}\).
Third, if \(e\) has more than one parent, for all of \(e\)'s parent expressions, it merges their \(\mathcal{FAG}^{f}_{e.p}\) into a new FAG \(g_{merge}\) (Line 20), representing that the control flow converges. Then, it evaluates \(e\) and adds the newly generated points-to relations \(\Delta_{e}\) to \(g_{merge}\) (Lines 21-22).
After all iterations finish, for each of the return expressions, i.e., \(e\in\mathcal{CFG}_{f}.E_{r}\), there exists \(\mathcal{FAG}^{f}_{e}\). Our analysis proceeds to over-approximately merge them into \(\mathcal{FAG}^{f}_{R}\), i.e., the final FAG after all expressions in \(f\) have been evaluated (Line 25). A sketch of this flow-sensitive update loop follows.
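The sketch below (ours, reusing the FAG and CFG classes from Sec. 3.1; the transfer function is passed in as a callback) mirrors Lines 11-25 of Alg. 1: copy the predecessor's FAG when control flow diverges, union the predecessors' FAGs when it converges, and finally merge the FAGs at the return expressions into FAG_R.

```python
import copy

def parents(cfg, e):
    return [a for (a, b) in cfg.ctrl if b == e]

def out_degree(cfg, e):
    return sum(1 for (a, _) in cfg.ctrl if a == e)

def merge(fags):
    merged = FAG()
    for g in fags:
        for d, targets in g.pts.items():
            merged.pts.setdefault(d, set()).update(targets)
    return merged

def update_fags(f, cfg, fag_in, preorder, transfer):
    """Alg. 1, Lines 11-25; transfer(f, e, g) yields Delta_e as (d1, d2) pairs."""
    fag_at = {}
    for e in preorder:
        ps = parents(cfg, e)
        if not ps:                        # entry expression
            g = copy.deepcopy(fag_in)
        elif len(ps) == 1:
            p = ps[0]
            # Copy when flow diverges at p, so each branch evolves alone;
            # on a straight line the predecessor's FAG is extended in place.
            g = copy.deepcopy(fag_at[p]) if out_degree(cfg, p) > 1 else fag_at[p]
        else:                             # convergence: union the branch FAGs
            g = merge(fag_at[p] for p in ps)
        for d1, d2 in transfer(f, e, g):
            g.add(d1, d2, e)
        fag_at[e] = g
    # FAG_R: over-approximate merge over the return expressions (Line 25).
    return merge(fag_at[e] for e in cfg.e_r if e in fag_at)
```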
Example 3.2 ().: Fig. 8(a) shows the \(\mathcal{FAG}^{f.main}_{in}\) before the function f.main in f.py in Fig. 2 is evaluated. As f.main does not have any parameter, \(\mathcal{FAG}^{f.main}_{in}\) only contains points-to relations resulting from import expressions. For ease of presentation, we use line numbers to denote the evaluated expression in points-to relations; e.g., \(f\):1 denotes the import expression at the first line of f.py.
After \(e_{1}\) (\(f\):5) in the control flow graph in Fig. 5 is evaluated, two new points-to relations are produced (as highlighted by the red part in Fig. 8(b)), and added to \(\mathcal{FAG}^{f.main}_{in}\) to produce \(\mathcal{FAG}^{f.main}_{e_{1}}\).
When our analysis proceeds to \(e_{2}\) (\(f\):7), the control flow diverges. Hence, our analysis creates a copy of \(\mathcal{FAG}^{f.main}_{e_{1}}\), and adds the newly-created points-to relation e.Cls.func \(\xrightarrow{f:7}\) f.foo2 to it for producing \(\mathcal{FAG}^{f.main}_{e_{2}}\) (as shown in Fig. 8(c)). Similarly, when \(e_{3}\) (\(f\):9) is evaluated, it first creates a copy of \(\mathcal{FAG}^{f.main}_{e_{1}}\), and then adds the newly-created points-to relation e.Cls.func \(\xrightarrow{f:9}\) f.foo3 for generating \(\mathcal{FAG}^{f.main}_{e_{3}}\) (as shown in Fig. 8(d)).
Next, when \(e_{4}\) (\(f\):10) is evaluated, the control flow converges. Thus, our analysis merges \(\mathcal{FAG}^{f.main}_{e_{2}}\) and \(\mathcal{FAG}^{f.main}_{e_{3}}\) by adding the points-to relations that are only contained in \(\mathcal{FAG}^{f.main}_{e_{3}}\) to \(\mathcal{FAG}^{f.main}_{e_{2}}\) and then applying simplification (Zhou et al., 2017). The merged points-to relation is e.Cls.func \(\xrightarrow{f:10}\) {b.foo2, c.foo3}, as shown in Fig. 8(e). Notice that the collapsed functions (i.e., b.foo2 and c.foo3) are caused by simplification. Then, the points-to relations resulting from evaluating \(e_{4}\) are added to produce \(\mathcal{FAG}^{f.main}_{e_{4}}\).
Finally, the evaluation of \(e_{5}\) does not produce any new points-to relation. Thus, \(\mathcal{FAG}^{f.main}_{e_{5}}\) is the same as \(\mathcal{FAG}^{f.main}_{e_{4}}\). As there is only one return statement, \(\mathcal{FAG}^{f.main}_{R}\) is the same as \(\mathcal{FAG}^{f.main}_{e_{5}}\).
#### 3.3.4. Compute Output FAG
Given \(\mathcal{FAG}^{f}_{in}\) and \(\mathcal{FAG}^{f}_{R}\), this step computes \(\mathcal{FAG}^{f}_{out}\) (Line 27). Different from \(\mathcal{FAG}^{f}_{R}\), which records possible points-to relations across all control flows, \(\mathcal{FAG}^{f}_{out}\) is a subset of \(\mathcal{FAG}^{f}_{R}\) where points-to relations about temporary variables are discarded and the remaining points-to relations are still in effect after the call to \(f\) returns. In particular, for each points-to relation \(d_{1}\xrightarrow{e}d_{2}\in\mathcal{FAG}^{f}_{in}\), it first obtains \(d_{1}\) from \(\mathcal{FAG}^{f}_{in}\), and then computes the definitions that \(d_{1}\) finally points to in \(\mathcal{FAG}^{f}_{R}\). Notice that it also computes the definitions that the fields of \(d_{1}\) finally point to. Since \(d_{1}\) may point to multiple definitions, it compares the order of the evaluated expressions for these points-to relations using \(\mathcal{CFG}_{f}\) in order to select the latest result for \(\mathcal{FAG}^{f}_{out}\).
Example 3.3 ().: In \(\mathcal{FAG}^{f.main}_{R}\) in Fig. 8, there are three points-to relations from e.Cls.func, i.e., e.Cls.func \(\xrightarrow{f:5}\) a.foo1, e.Cls.func \(\xrightarrow{f:7}\) f.foo2, and e.Cls.func \(\xrightarrow{f:10}\) {b.foo2, c.foo3}. We can learn that the above three points-to relations result from evaluating \(e_{1}\) (\(f\):5), \(e_{2}\) (\(f\):7) and \(e_{4}\) (\(f\):10), respectively. According to \(\mathcal{CFG}_{f}\), we can learn that \(e_{4}\) is the successor of \(e_{1}\) and \(e_{2}\), and hence the points-to relations resulting from \(e_{1}\) and \(e_{2}\) are discarded.
### Inter-Procedural Analysis
The algorithm of our inter-procedural analysis is shown in Alg. 2. It has three steps, i.e., _Update CG_, _Compute \(\theta_{call^{\prime}}\)_, and _Compute \(\Delta_{call^{\prime}}\)_.
Figure 7. Transfer Rules of Our Analysis
#### 3.4.1. Update CG
Jarvis generates call relations during our inter-procedural analysis (Line 2 in Alg. 2). Given as inputs the call expression \(call^{\prime}\) in the caller function \(f\) as well as the FAG after evaluating \(call^{\prime}\)'s parent expression, \(\mathcal{FAG}^{f}_{call^{\prime}.p}\), this step resolves the callee function \(f^{\prime}\), and adds the call relation \(\langle f,f^{\prime}\rangle\) to the call graph.
Specifically, \(call^{\prime}\) can be in the form of a.b(\(\dots\)) or b(\(\dots\)). For the form of a.b(\(\dots\)), it first searches \(\mathcal{FAG}^{f}_{call^{\prime}.p}\) for the class definitions that are pointed to by the invoking variable (e.g., a) of \(call^{\prime}\). Then, for each searched class definition \(d_{cls}\), it checks whether there exists a function definition \(d_{b}\) whose name \(d_{b}.n\) equals the call name (e.g., b) of \(call^{\prime}\) according to \(\mathcal{C}.Incl\). If yes, the callee function \(f^{\prime}\) is resolved; otherwise, it continues this procedure on the super class of \(d_{cls}\) according to \(\mathcal{C}.Hier\). In other words, the ancestor classes of \(d_{cls}\) are searched along the inheritance hierarchy. If \(f^{\prime}\) is still not resolved, it searches \(\mathcal{FAG}^{f}_{call^{\prime}.p}\) for the module definition \(d_{m}\) that is pointed to by the invoking variable (e.g., a) of \(call^{\prime}\). Then, it searches \(\mathcal{F}\) for the function definition \(d_{b}\) that satisfies \(d_{b}.n=\) b and \(d_{b}.\phi=d_{m}.\phi.(d_{m}.n,\mathbf{mod})\) (i.e., the function definition is imported through a module import expression import...). If found, the callee function \(f^{\prime}\) is resolved.
For the form of b(\(\dots\)), it obtains its call name (e.g., b). Then, it searches \(\mathcal{F}\) for the function definition \(d_{b}\) that satisfies \(d_{b}.n=\) b and \(d_{b}.\phi=f.\phi\) (i.e., the function definition is in the same module as \(f\)). If such a \(d_{b}\) is found, the callee function \(f^{\prime}\) is resolved; otherwise, it continues to search \(\mathcal{I}\) for the points-to relation \(\langle d_{s},d_{t},e\rangle\) that satisfies \(d_{s}.n=\) b (i.e., the function definition is imported by a function import expression from...import...). If found, \(d_{t}\) is the callee function's definition, and \(f^{\prime}\) is resolved. A sketch of the method-call resolution follows.
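A simplified sketch (ours; the summary encodings and names are hypothetical, reusing the Def/FAG classes from Sec. 3.1) of resolving a.b(\(\dots\)): walk the invoking variable's classes up the inheritance hierarchy recorded in the class summary.

```python
def resolve_method(fag, incl, hier, invoking_var, call_name):
    """Candidate callees for a.b(...): `incl` maps class -> contained function
    Defs, `hier` maps sub class -> base class (from the class summary)."""
    callees = []
    for target, _expr in fag.pts.get(invoking_var, ()):
        cls = target if target.tau == "cls" else None
        while cls is not None:               # climb the inheritance hierarchy
            found = [d for d in incl.get(cls, ()) if d.n == call_name]
            if found:
                callees.extend(found)
                break
            cls = hier.get(cls)              # base class, if any
    return callees
```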
_Example 3.4_.: When evaluating \(e_{4}\) in Fig. 5, our analysis creates a call relation \(\langle\texttt{f.main},\texttt{e.Cls.apply}\rangle\) as f.m points to e.Cls and function e.Cls.apply is a function definition inside class e.Cls.
#### 3.4.2. Compute \(\theta_{call^{\prime}}\)
This step selects relevant points-to relations \(\theta_{call^{\prime}}\) from the caller's FAG, and passes \(\theta_{call^{\prime}}\) to the construction of the callee's FAG (Line 3 in Alg. 2). In other words, \(\theta_{call^{\prime}}\) is passed from the caller's inter-procedural analysis into the callee's intra-procedural analysis (Line 4). Specifically, \(\theta_{call^{\prime}}\) contains points-to relations about the invoking variable and argument variables of \(call^{\prime}\), which can be directly selected from \(\mathcal{FAG}^{f}_{call^{\prime}.p}\).
_Example 3.5_.: When \(e_{1}\) in Fig. 5 is evaluated, our analysis prepares \(\theta_{call^{\prime}}\) to jump into e.Cls.__init__ for the next intra-procedural analysis. As \(e_{1}\) has one argument variable but no invoking variable, our analysis only selects one points-to relation f.foo1 \(\xrightarrow{f:1}\) a.foo1 from \(\mathcal{FAG}^{f.main}_{in}\) in Fig. 8(1a), and puts it into \(\theta_{call^{\prime}}\). After \(\theta_{call^{\prime}}\) is passed to our intra-procedural analysis on e.Cls.__init__ (see Sec. 3.3.1), the pointed function definition a.foo1 is pointed to by the corresponding parameter variable e.Cls.__init__.f, i.e., e.Cls.__init__.f \(\xrightarrow{e:2}\) a.foo1, as shown in Fig. 8(2).
#### 3.4.3. Compute \(\Delta_{call^{\prime}}\)
\(\Delta_{call^{\prime}}\) is computed as the result of our inter-procedural analysis, which is passed back to the previous intra-procedural analysis (Line 5 in Alg. 2). Given as inputs \(\mathcal{FAG}^{f^{\prime}}_{in}\), the FAG before intra-procedural analysis, and \(\mathcal{FAG}^{f^{\prime}}_{out}\), the FAG after intra-procedural analysis, this step computes the changed points-to relations from \(\mathcal{FAG}^{f^{\prime}}_{in}\) to \(\mathcal{FAG}^{f^{\prime}}_{out}\), and puts them into \(\Delta_{call^{\prime}}\). In particular, for each points-to relation \(d_{1}\xrightarrow{e_{1}}d_{2}\) in \(\mathcal{FAG}^{f^{\prime}}_{out}\), if \(d_{1}\) exists in a points-to relation \(d_{1}\xrightarrow{e_{2}}d_{3}\) in \(\mathcal{FAG}^{f^{\prime}}_{in}\), it searches
Figure 8. Function Assignment Graph for Jarvis
\(\mathcal{FAG}^{f^{\prime}}_{out}\) for the final pointed definition \(d_{n}\) and adds a new points-to relation \(d_{1}\xrightarrow{e_{3}}d_{n}\) to \(\Delta_{call^{\prime}}\); and similarly, if a points-to relation about a field of \(d_{1}\) exists in \(\mathcal{FAG}^{f^{\prime}}_{out}\) but does not exist in \(\mathcal{FAG}^{f^{\prime}}_{in}\), this points-to relation is also added to \(\Delta_{call^{\prime}}\).
Example 3.6 ().: Following Example 3.4, our analysis proceeds to conduct intra-procedural analysis on e.Cls.apply in Fig. 2. The \(\theta_{call^{\prime}}\) for this intra-procedural analysis has two points-to relations, i.e., f.m \(\xrightarrow{f:5}\) e.Cls and e.Cls.func \(\xrightarrow{f:10}\) {b.foo2, c.foo3}; and our intra-procedural analysis initializes two points-to relations, i.e., self \(\xrightarrow{e:6}\) e.Cls and self.func \(\xrightarrow{e:6}\) {b.foo2, c.foo3}, as shown in Fig. 8(d). Using the two points-to relations, our analysis knows that the return of self.func points to the returns of b.foo2 and c.foo3, therefore creating e.Cls.apply.<ret> \(\xrightarrow{e:7}\) {b.foo2.<ret>, c.foo3.<ret>}.
we do not compare them in E.W. because of the potentially huge effort in constructing the ground truth. Precision is calculated by the proportion of correctly generated call relations (i.e., \(\mathcal{CG}_{Gen}\_E\cap C\mathcal{G}_{GT}.E\)) in the generated call relations (i.e., \(\mathcal{CG}_{Gen}.E\)), and recall is calculated by the proportion of correctly generated call relations in the ground truth (i.e., \(\mathcal{CG}_{GT}.E\)), as formulated in Eq. 1.
\[Pre.=\frac{|\ \mathcal{CG}_{Gen}.E\cap\mathcal{CG}_{GT}.E\ |}{|\ \mathcal{CG}_{Gen}.E\ |},\quad Rec.=\frac{|\ \mathcal{CG}_{Gen}.E\cap\mathcal{CG}_{GT}.E\ |}{|\ \mathcal{CG}_{GT}.E\ |} \tag{1}\]
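Eq. 1 amounts to set operations over call-graph edges; below is a direct transcription (ours), with an invented toy example based on the modules of Fig. 2.

```python
def precision_recall(gen_edges, gt_edges):
    tp = len(gen_edges & gt_edges)  # correctly generated call relations
    precision = tp / len(gen_edges) if gen_edges else 0.0
    recall = tp / len(gt_edges) if gt_edges else 0.0
    return precision, recall

gen = {("f.main", "e.Cls.apply"), ("e.Cls.apply", "b.foo2"), ("f.main", "d.baz")}
gt = {("f.main", "e.Cls.apply"), ("e.Cls.apply", "b.foo2"), ("e.Cls.apply", "c.foo3")}
print(precision_recall(gen, gt))  # (0.666..., 0.666...)
```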
As PyCG does not support D.A. and D.W., we use PyCG to generate call graphs in E.A. and E.W., and prune the call graphs by keeping only the functions that are reachable from the entry functions.
**Ground Truth Construction.** We reuse the ground truth for the micro-benchmark from PyCG (Zhu et al., 2019). For our newly-added programs in the micro-benchmark, we build the ground truth manually, which is easy because they are simple programs.
For the macro-benchmark, our construction is two-fold. On the one hand, we execute the test cases and collect call traces for each application using the built-in Python trace module (_python -m trace --listfuncs <python_file>_). The call traces span the whole program. Then, the call traces are transformed into the same format as in the micro-benchmark. The transformed call graph contains implicit call relations that are invisible and inherently invoked by the Python interpreter; e.g., for the import keyword, the Python interpreter invokes functions from _frozen_importlib, which is in fact not part of the whole program. We filter them out from the ground truth.
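For illustration, the CLI invocation above can be scripted as follows (a sketch of ours; app_entry.py is a hypothetical entry module, and the parsing relies on the "filename: ..., modulename: ..., funcname: ..." lines that trace --listfuncs prints).

```python
import subprocess
import sys

# Run the standard-library tracer on a (hypothetical) entry module and
# capture the list of functions it reports as called.
out = subprocess.run(
    [sys.executable, "-m", "trace", "--listfuncs", "app_entry.py"],
    capture_output=True, text=True,
).stdout

called = {
    line.strip()
    for line in out.splitlines()
    if line.startswith("filename:")  # one line per called function
}
# Drop interpreter-internal import machinery, as done for the ground truth.
called = {l for l in called if "_frozen_importlib" not in l}
print(len(called), "functions observed")
```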
On the other hand, we enlarge the collection of call relations generated by test case execution by manually inspecting application functions (i.e., functions in application modules, exclusive of library functions). Specifically, we go through the code of each application function, and add missing call relations. This step improves the imperfect call graph constructed from test cases, i.e., incomplete test cases might miss call relations. Three of the authors are involved in this procedure, which takes 4 person-months.
In summary, we construct a ground truth of call graphs with a total of 5,653 functions and 20,085 call relations.
### Scalability Evaluation (RQ1)
The scalability results in terms of time and memory performance on our macro-benchmark are reported in Table 2. Time (\(T\)) is measured in seconds and memory (\(M\)) in MB, if not specified otherwise.
In terms of time, both Jarvis and PyCG generate the call graph in E.A. within a second. PyCG takes more time for more iterations in E.A. (i.e., from 0.36 seconds in E.A. (1) to 0.55 seconds in E.A. (m)). The gap becomes larger in E.W., where PyCG takes on average 502.82 seconds in E.W. (1), and Jarvis takes on average 301.26 seconds. As the number of iterations increases, PyCG crashes on two projects due to out-of-memory (OOM) and recursion errors (RE). Specifically, PyCG runs out of memory in E.W. (2) and suffers a recursion error in E.W. (m) on P3, while PyCG runs out of memory in E.W. (2) and E.W. (m) on P5. We record the consumed time immediately before PyCG crashes. Thereby, the average time for PyCG in E.W. (2) and E.W. (m) is more than 774.21 seconds and more than 16.7 hours, respectively. Therefore, Jarvis runs 67% faster than PyCG in E.W. (1), and at least 157% faster than PyCG in E.W. (2). Furthermore, we also measure the time for Jarvis's demand-driven call graph generation. Jarvis takes 0.33 seconds for D.A. and 3.84 seconds for D.W., which is significantly less time than for exhaustive call graph generation.
In terms of memory, Jarvis consumes 4 MB less memory than PyCG in E.A. (1), and 7 MB less memory than PyCG in E.A. (m). When the analysis scope is expanded to the whole program, PyCG consumes 266 MB less memory than Jarvis in E.W. (1). However, when the iteration number increases, PyCG consumes significantly more memory than Jarvis, and also suffers out-of-memory and recursion errors in E.W. (2) and E.W. (m). Moreover, Jarvis takes 311 MB in D.A. and 194 MB in D.W., which is significantly less memory than for exhaustive call graph generation.
_Summary_ Jarvis runs 67% faster than PyCG in E.W. (1) and at least 157% faster than PyCG in E.W. (2). It only takes Jarvis, on average, 3.84 seconds to generate whole-program call graphs on demand (i.e., in D.W.). Jarvis is memory-efficient in both E.W. and D.W. Therefore, Jarvis is scalable in whole-program call graph generation.
### Accuracy Evaluation (RQ2)
We present the accuracy results of Jarvis and PyCG on our micro-benchmark and macro-benchmark.
**Micro-Benchmark.** The accuracy results on our micro-benchmark are presented in Table 3. In terms of completeness, PyCG generates call graphs that are complete for 107 programs. 23 of the incomplete cases come from our newly-added categories (superscripted with "*") and the remaining 5 incomplete cases come from the original categories. Those incomplete cases yield call graphs with 30 false positives (FP). The majority (24) of the 30 false positives for PyCG are located in the five newly-added categories. In contrast, Jarvis generates complete call graphs for 134 cases. Jarvis only generates one incomplete case, with one false positive, in decorators.
In terms of soundness, PyCG generates call graphs that are sound for 126 programs. The remaining 9 unsound cases span categories such as built-ins, control flow, assignment, dicts and lists. The unsound cases cause 18 false negatives. There are 7 false negatives in built-ins, ranking top among the categories. Meanwhile, Jarvis generates 113 sound call graphs. The unsound cases of Jarvis span categories such as dicts, lists and built-ins. Jarvis generates 35 false negatives, and the majority (26) of the false negatives come from dicts and lists.
**Macro-Benchmark.** The accuracy results on our macro-benchmark are presented in Table 4. In terms of precision, Jarvis achieves similar precision to PyCG in E.A. and D.A. Besides, PyCG achieves similar precision in E.A. (1) and E.A. (m), and in D.A. (1) and D.A. (m), due to the relatively small analysis scope. In D.W., PyCG's precision drastically drops to 0.19, while Jarvis's precision also drops greatly to 0.35, which is nevertheless 84% higher than PyCG. The reason for the drop from E.A. to D.W. is two-fold. First, our ground truth can be incomplete, especially for the call relations in dependent libraries. Second, there is a difference in how accuracy is measured between exhaustive and demand-driven analysis. To be more specific, in exhaustive analysis, accuracy is computed by comparing sets of call relations, whereas in demand-driven analysis, accuracy is computed by comparing chains of call relations from entry functions. Therefore, for demand-driven analysis, the imprecision of the call graph is magnified. In other words, if there exists one false positive call relation, the subsequent call relations along the generated call chain are all considered as false positives (see the sketch below). The same reason also explains the precision drop from D.A. to D.W.; i.e., in demand-driven analysis, the larger the analysis scope, the longer the call chains, and the higher the chance of introducing more false positives.
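To make the difference between the two accuracy computations concrete, the toy sketch below contrasts them on a small example; the helper functions and the example edges are hypothetical illustrations, not code from Jarvis or PyCG.

```python
# Toy sketch of the two accuracy computations described above. The edges
# and helper names are hypothetical, not taken from Jarvis or PyCG.

def set_precision(pred_edges, truth):
    """Exhaustive analysis: compare call relations as plain sets."""
    return len(pred_edges & truth) / len(pred_edges)

def chain_precision(chains, truth):
    """Demand-driven analysis: once one edge on a chain is a false
    positive, every later edge on that chain also counts as one."""
    tp = fp = 0
    for chain in chains:                  # chain = ordered (caller, callee) edges
        broken = False
        for edge in chain:
            broken = broken or edge not in truth
            if broken:
                fp += 1
            else:
                tp += 1
    return tp / (tp + fp)

truth = {("main", "f"), ("f", "g"), ("g", "h")}
chain = [("main", "f"), ("f", "x"), ("x", "g"), ("g", "h")]  # one wrong hop

print(set_precision(set(chain), truth))   # 0.5: two of four edges are correct
print(chain_precision([chain], truth))    # 0.25: the wrong hop poisons the rest
```

Even though the edge ("g", "h") exists in the ground truth, the chain-based metric discounts it because it follows a false positive, which is exactly the magnification effect discussed above.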
Moreover, we inspect the precision loss of PyCG against Jarvis. The major reason is that PyCG reports call relations disregarding control flow, whereas Jarvis is flow-sensitive. In addition, PyCG also reports false positives due to insufficient treatment of class inheritance. For example, for function calls invoked from base classes, it reports function calls in subclasses, even though they do not actually exist in the subclasses. We also inspect the imprecision of Jarvis. The major reason is correlated with false negatives, whose causes are discussed in the next paragraphs. Jarvis conducts on-the-fly analysis assuming that it can retrieve complete points-to relations after evaluating each expression. If the points-to relations are not updated in time and Jarvis does not capture them because of unreachable function definitions, it reports false call relations because it uses the outdated points-to relations.
In terms of recall, Jarvis achieves a recall of 0.82 and 0.66 in E.A. and D.A., improving over PyCG by 8% in both E.A. and D.A. In D.W. (1), Jarvis improves over PyCG by 15%, while in D.W. (2), Jarvis improves over PyCG by 10% on the four projects where PyCG does not crash. The recall of PyCG and Jarvis drops from E.A. to D.A., which is also mainly caused by the difference in accuracy computation. In demand-driven analysis, if there exists one false negative call relation, the subsequent call relations along the call chain in the ground truth are all considered as false negatives. The same reason also explains the recall drop from D.A. to D.W.; i.e., in demand-driven analysis, the larger the analysis scope, the longer the call chains, and the higher the chance of introducing more false negatives.
Furthermore, we inspect the recall gain of Jarvis against PyCG. The major reason is that Jarvis supports built-ins and control flow more comprehensively than PyCG. For example, regarding support for built-ins, PyCG misses the call to builtins.split for the call expression _'tel-num'.split('-')_; and regarding control flow, PyCG does not handle with statements, and thus ignores the relevant control flows. We also inspect the false negatives of Jarvis. The root causes for false negatives are two-fold. First, functions stored in lists, tuples and dicts are missed, as our FAG does not maintain points-to relations for these data structures. Second, functions invoked from dynamic linked libraries (e.g., _math.cpython-*.so_) are missed due to reflective invocations and Jarvis's inability to perform inter-procedural analysis of dynamic linked libraries.
_Summary._ Jarvis achieves similar precision to PyCG in E.A. and D.A., but improves over PyCG in recall by 8% in E.A. and D.A. Further, Jarvis obtains a precision of 0.35 and a recall of 0.55 in D.W., which significantly improves over PyCG by 84% in precision and at least 10% in recall.
### Threats
The primary threats to the validity of our experiments are our benchmarks and the construction of the ground truth. For the micro-benchmark, we add 23 programs written by two authors with at least two years of Python programming experience in order to achieve a more complete coverage of language features. Although not meant to be exhaustive, we believe it is representative. Besides, the ground truth of these 23 programs is easy to build because they are all simple programs. For the macro-benchmark, we carefully select 6 real-world Python applications. We believe they are representative because they are popular in the community, well-maintained, feasible to run and large-scale at the whole-program level. However, the construction of their ground truth is a challenging task. Therefore, we construct
\begin{table}
\begin{tabular}{l cc cc cc cc cc cc cc cc cc}
\hline \hline
\multirow{3}{*}{Id.} & \multicolumn{10}{c}{PyCG} & \multicolumn{8}{c}{Jarvis} \\
\cline{2-11} \cline{12-19}
 & \multicolumn{2}{c}{E.A. (1)} & \multicolumn{2}{c}{E.A. (m)} & \multicolumn{2}{c}{E.W. (1)} & \multicolumn{2}{c}{E.W. (2)} & \multicolumn{2}{c}{E.W. (m)} & \multicolumn{2}{c}{E.A.} & \multicolumn{2}{c}{E.W.} & \multicolumn{2}{c}{D.A.} & \multicolumn{2}{c}{D.W.} \\
 & T. & M. & T. & M. & T. & M. & T. & M. & T. & M. & T. & M. & T. & M. & T. & M. & T. & M. \\
\hline
P1 & 0.78 & 79 & 1.26 & 90 & 78.49 & 766 & 113.50 & 799 & 24h+ & 2.3G & 1.22 & 61 & 48.43 & 1003 & 0.88 & 53 & 3.21 & 162 \\
P2 & 0.55 & 36 & 1.01 & 40 & 48.83 & 702 & 76.29 & 855 & 24h+ & 5.7G & 0.46 & 33 & 33.79 & 764 & 0.31 & 30 & 1.27 & 75 \\
P3 & 0.16 & 24 & 0.19 & 24 & 176.07 & 1630 & 1947.39 & OOM & 4h+ & RE & 0.19 & 24 & 1465.26 & 2061 & 0.18 & 24 & 6.57 & 314 \\
P4 & 0.25 & 30 & 0.35 & 30 & 47.14 & 569 & 72.53 & 636 & 24h+ & 5.1G & 0.30 & 32 & 34.35 & 775 & 0.22 & 29 & 0.86 & 57 \\
P5 & 0.20 & 26 & 0.25 & 28 & 988.36 & 1431 & 2190.40 & OOM & 0.5h+ & OOM & 0.26 & 26 & 163.01 & 1523 & 0.21 & 25 & 7.82 & 381 \\
P6 & 0.19 & 28 & 0.24 & 29 & 149.02 & 684 & 245.13 & 741 & 24h+ & 4.6G & 0.25 & 26 & 62.69 & 1250 & 0.19 & 26 & 3.29 & 175 \\
\hline
Avg. & 0.36 & 37 & 0.55 & 40 & 502.82 & 963 & 774.21 & 757 & 16.7h+ & 4.4G+ & 0.45 & 33 & 301.26 & 1229 & 0.33 & 31 & 3.84 & 194 \\
\hline \hline
\end{tabular}
\end{table}
Table 2. Scalability Results on Our Macro-Benchmark (T. = time in seconds unless marked in hours; M. = memory in MB unless marked in GB; OOM = out-of-memory; RE = recursion error)
\begin{table}
\begin{tabular}{l ccccc ccccc}
\hline \hline
\multirow{2}{*}{Category} & \multicolumn{5}{c}{PyCG} & \multicolumn{5}{c}{Jarvis} \\
\cline{2-6} \cline{7-11}
 & C. & S. & TP & FP & FN & C. & S. & TP & FP & FN \\
\hline
arguments & 6/6 & 6/6 & 14 & 0 & 0 & 6/6 & 6/6 & 14 & 0 & 0 \\
assignments & 4/4 & 3/4 & 13 & 0 & 2 & 4/4 & 3/4 & 13 & 0 & 2 \\
built-ins & 3/3 & 1/3 & 3 & 0 & 7 & 3/3 & 2/3 & 6 & 0 & 4 \\
classes & 22/22 & 21/22 & 65 & 0 & 1 & 22/22 & 22/22 & 66 & 0 & 0 \\
decorators & 6/7 & 7/7 & 22 & 1 & 0 & 6/7 & 6/7 & 21 & 1 & 1 \\
dicts & 12/12 & 11/12 & 21 & 0 & 2 & 12/12 & 0/12 & 6 & 0 & 17 \\
direct calls & 4/4 & 4/4 & 10 & 0 & 0 & 4/4 & 4/4 & 10 & 0 & 0 \\
exceptions & 3/3 & 3/3 & 3 & 0 & 0 & 3/3 & 3/3 & 3 & 0 & 0 \\
functions & 4/4 & 4/4 & 4 & 0 & 0 & 4/4 & 4/4 & 4 & 0 & 0 \\
generators & 6/6 & 6/6 & 18 & 0 & 0 & 6/6 & 4/6 & 16 & 0 & 2 \\
imports & 13/14 & 14/14 & 20 & 2 & 0 & 14/14 & 14/14 & 20 & 0 & 0 \\
kwargs & 2/3 & 3/3 & 9 & 1 & 0 & 3/3 & 3/3 & 9 & 0 & 0 \\
lambdas & 5/5 & 5/5 & 14 & 0 & 0 & 5/5 & 5/5 & 14 & 0 & 0 \\
lists & 7/8 & 7/8 & 15 & 1 & 2 & 8/8 & 3/8 & 8 & 0 & 9 \\
mro & 6/7 & 6/7 & 19 & 1 & 1 & 7/7 & 7/7 & 20 & 0 & 0 \\
returns & 4/4 & 4/4 & 12 & 0 & 0 & 4/4 & 4/4 & 12 & 0 & 0 \\
arguments* & 0/4 & 4/4 & 8 & 4 & 0 & 4/4 & 4/4 & 8 & 0 & 0 \\
assignments* & 0/4 & 4/4 & 4 & 4 & 0 & 4/4 & 4/4 & 4 & 0 & 0 \\
direct calls* & 0/5 & 5/5 & 18 & 5 & 0 & 5/5 & 5/5 & 18 & 0 & 0 \\
imports* & 0/5 & 5/5 & 7 & 4 & 0 & 5/5 & 5/5 & 7 & 0 & 0 \\
control flow* & 0/5 & 3/5 & 17 & 7 & 3 & 5/5 & 5/5 & 20 & 0 & 0 \\
\hline
Total & 107/135 & 126/135 & 316 & 30 & 18 & 134/135 & 113/135 & 299 & 1 & 35 \\
\hline \hline
\end{tabular}
\end{table}
Table 3. Accuracy Results on Our Micro-Benchmark (C. = complete, S. = sound; TP, FP and FN = true positives, false positives and false negatives)
it in two ways, i.e., automated test case execution and manual investigation. The manual investigation is conducted by two of the authors independently, and another author is involved to resolve disagreements. We do not use the macro-benchmark (containing 5 projects) of PyCG, as it only contains call relations in application code, and its ground truth is manually constructed. To the best of our knowledge, our macro-benchmark is the largest one.
## 5. Related Work
We review the literature related to points-to analysis, call graph generation and evaluation of call graph generation.
### Points-to Analysis
Many research works have proposed to improve the precision of points-to analysis from different perspectives, e.g., combining call-site sensitivity with object sensitivity (S
Besides, Sui et al. (2006) also evaluate the soundness of Doop with real-world Java programs. Reif et al. (2010); Reif et al. (2011) construct an extensive test suite covering language and API features in Java, and compare the soundness and time overhead of Soot, WALA, OPAL (Reif et al., 2012) and Doop. In contrast, Tip and Palsberg (2011) explore the precision and scalability of different approaches for resolving virtual function calls (e.g., CHA (Reif et al., 2011), RTA (Reif et al., 2011) and \(k\)-CFA (Bartos et al., 2012; Bartos et al., 2012)). To the best of our knowledge, none of these studies address the scalability problem for the Python language, and our work fills this gap.
## 6. Conclusions
In this paper, we have proposed Jarvis, a scalable demand-driven call graph generation approach for Python. Jarvis is demand-driven, flow-sensitive, scalable to large programs and generates call graphs on-the-fly. Our evaluation has demonstrated that Jarvis is efficient and effective in both demand-driven and exhaustive whole-program call graph generation, and improves over the state-of-the-art approach. In the future, we plan to apply Jarvis to foster downstream applications in security analysis and in debloating Python dependencies.
## 7. Data Availability
All the source code of Jarvis and data used in evaluation are available at our replication site [https://pythonjarvis.github.io/](https://pythonjarvis.github.io/).
|
2310.01999 | Convective mixing in porous media: A review of Darcy, pore-scale and
Hele-Shaw studies | Convection-driven porous media flows are common in industrial processes and
in nature. The multiscale and multiphase character of these systems and the
inherent non-linear flow dynamics make convection in porous media a complex
phenomenon. As a result, a combination of different complementary approaches,
namely theory, simulations and experiments, have been deployed to elucidate the
intricate physics of convection in porous media. In this work, we review recent
findings on mixing in fluid-saturated porous media convection. We focus on the
dissolution of a heavy fluid layer into a lighter one, and we consider
different flow configurations. We present Darcy, pore-scale and Hele-Shaw
investigations inspired by geophysical processes. While the results obtained
for Darcy flows match the dissolution behaviour predicted theoretically,
Hele-Shaw and pore-scale investigations reveal a different and tangled scenario
in which finite-size effects play a key role. Finally, we present recent
numerical and experimental developments and we highlight possible future
research directions. The findings reviewed in this work will be crucial to make
reliable predictions about the long-term behaviour of dissolution and mixing in
engineering and natural processes, which are required to tackle societal
challenges such as climate change mitigation and energy transition. | Marco De Paoli | 2023-10-03T12:16:59Z | http://arxiv.org/abs/2310.01999v3 | # Convective mixing in porous media: A review of Darcy, pore-scale and Hele-Shaw studies
###### Abstract
Convection-driven porous media flows are common in industrial processes and in nature. The multiscale and multiphase character of these systems and the inherent non-linear flow dynamics make convection in porous media a complex phenomenon. As a result, a combination of different complementary approaches, namely theory, simulations and experiments, have been deployed to elucidate the intricate physics of convection in porous media. In this work, we review recent findings on mixing in fluid-saturated porous media convection. We focus on the dissolution of a heavy fluid layer into a lighter one, and we consider different flow configurations. We present Darcy, pore-scale and Hele-Shaw investigations inspired by geophysical processes. While the results obtained for Darcy flows match the dissolution behaviour predicted theoretically, Hele-Shaw and pore-scale investigations reveal a different and tangled scenario in which finite-size effects play a key role. Finally, we present recent numerical and experimental developments and we highlight possible future research directions. The findings reviewed in this work will be crucial to make reliable predictions about the long-term behaviour of dissolution and mixing in engineering and natural processes, which are required to tackle societal challenges such as climate change mitigation and energy transition.
**Keywords:** convection, porous media, Darcy, pore-scale, dispersion, Hele-Shaw
###### Contents
* 1 Introduction
* 2 Modelling of convection
* 2.1 Pore-scale modelling
* 2.2 Darcy model and dispersion
* 2.2.1 Dispersion
* 2.3 Flow configurations and quantification of mixing
* 3 Rayleigh-Benard convection
* 3.1 Darcy flow
* 3.2 Pore-scale flow
* 4 One-sided convection
* 4.1 Darcy flows
* 4.2 Pore-scale and Hele-Shaw flows
* 5 Finite-size effects
* 5.1 Effect of confinement
* 5.2 Hele-Shaw flows
* 5.3 Dispersion in bead packs
* 6 Summary and future perspectives
* 6.1 Recent developments in experimental techniques
* 6.2 Additional effects influencing mixing
## 1 Introduction
A porous medium is a material consisting of a solid matrix with an interconnected void, which allows fluids to flow through it. When a fluid-saturated medium subject to the action of gravity experiences an unstable density profile, i.e. a heavy fluid parcel sitting above a less dense one, the denser fluid will eventually move and replace the lighter fluid, and vice versa. The density-driven physical mechanism inducing this motion is defined as convection, and it represents the driving force of many problems of practical interest, particularly in geophysical processes. The regular polygonally patterned crusts of salt shown in Fig. 1(a), approximately a meter in diameter, are the surface signature of the vertical transport of salt, a fundamental process in arid regions. These ridges form as a result of solutal convection in the porous soil beneath the surface [1, 2]. Similarly, in supercritical geothermal systems heat supplied by a magmatic heat source produces a buoyancy-induced flow circulation due to convection [3]. Formation of sea ice (Fig. 1b) or solidification of multicomponent alloys may originate mushy layers, which consist of a porous medium filled with interstitial fluid [4]. This fluid (brine, a mixture of water and sea salt) experiences density gradients produced by differences of temperature and solute concentration, which induce convective motions within the porous layer and control the subsequent solidification dynamics [5, 6]. The above convective processes in porous media are associated with grand societal challenges, including energy transition and climate change mitigation. Understanding the underlying fluid mechanics is crucial for making reliable predictions on the evolution of the natural environment [7]. Within the many applications of importance in this context, convection in porous media has received renewed attention due to the implications it bears for geological sequestration of carbon dioxide (CO\({}_{2}\)) [8].
Geological CO\({}_{2}\) storage consists of injecting large volumes of carbon dioxide in underground geological formations with the aim of permanent (or long-term) storage (Fig. 1c). These formations are typically saline aquifers and consist of a porous material confined by horizontal low permeability layers (grey regions in Fig. 1c). The aquifers are located 1-3 km beneath the Earth surface, where the pressure is sufficient to keep the CO\({}_{2}\) supercritical [12, 13]. Here, a rich flow dynamics emerges. Injected CO\({}_{2}\) (black) is initially lighter than the fluid (brine, yellow) naturally filling the subsurface aquifer, and therefore carbon dioxide migrates towards the upper region of the formation, driven by convection, to form a CO\({}_{2}\) layer that spreads horizontally. The low permeability layer prevents CO\({}_{2}\) from escaping and migrating to the uppermost parts of the aquifer, from where it could eventually return to the atmosphere. At the interface between the currents of carbon dioxide and brine, the dissolution of CO\({}_{2}\) into the underlying brine layer takes place, originating a new mixture heavier than both starting fluids (red-to-green fluid in Fig. 1c). The dissolution process, illustrated in the squared inset of Fig. 1(c), makes the interfacial layer heavier and thicker, and eventually finger-like instabilities form. The CO\({}_{2}\)-rich solution then sinks and remains permanently stored in the formation. The presence of these finger-like structures makes the convective dissolution process more efficient compared to purely diffusive dissolution. Such a behaviour is highly desired for CO\({}_{2}\) storage because injected carbon dioxide dissolves faster, preventing leakages in case of faults at the top low-permeability confining layer. In turn, the presence of non-linear structures makes the system complex to study, and long-term predictions of the dynamics of injected carbon dioxide require huge computational efforts. An element further increasing the complexity of this scenario is represented by finite-size pore-scale effects. At the level of the rock grains, schematically reported in the circle of Fig. 1(c), the fluid moves in the interstitial space following sinuous paths, further spreading the transported solute and making predictions of the long-term behaviour even more challenging. Motivated by the CO\({}_{2}\) storage process, convection in porous media has recently been investigated in great detail [14]. In this work, we will review the current modelling approaches and numerical and laboratory measurements, and in particular we will focus on the role of finite-size effects such as confinement and pore-scale dispersion.
In a convective porous medium flow, the dynamics is controlled by the relative importance of driving and dissipative mechanisms, which is quantified by the Rayleigh-Darcy number _Ra_. Convection is the driving process, and it is determined by the combination of fluid properties (density contrast), medium properties (permeability and porosity) and domain properties (gravity and domain size). Dissipative forces act against convection either as a drag force between the fluid and the solid (due to viscosity) or by reducing local gradients of density (due to molecular diffusion). As a result of solute redistribution due to the tortuous fluid path in the interstitial matrix, the solute concentration field is made more uniform. This effect, labelled as dispersion, also contributes to dissipating the potential mixing energy of the system, since the concentration gradients within the domain reduce. A key challenge in studying convective geophysical flows consists of making reliable predictions of their evolution by determining how global transport quantities, e.g. the solute flux or the mixing rate, vary as a function of _Ra_ and time. Simplified mathematical models solved numerically and theoretically have provided a clear picture of the flow behaviour at the Darcy scale [15, 16, 17], i.e. when a sufficiently large representative elementary volume including many pores is considered [18]. However, these results disagree with the experimental measurements [19, 20], possibly suggesting that physical effects present in laboratory setups are not captured by the classical Darcy formulation [21].
An intuitive way to experimentally mimic a porous medium consists of filling a confined volume with solid materials, and when spherical objects are used the medium is defined as a bead pack [22]. These experiments may be challenging, since the medium is typically hardly accessible due to its opacity, and only in recent years have non-invasive and non-intrusive measurements such as X-ray tomography and magnetic resonance imaging become accessible [23, 24]. As a result, most of the experiments on convective flows in porous media have been performed in Hele-Shaw cells, which consist of two transparent plates separated by a narrow gap where the fluid flows [25]. The Hele-Shaw apparatus is particularly relevant because it provides optical access, and in some conditions the flow follows a Darcy-like behaviour. In general, neither bead packs nor Hele-Shaw cells faithfully reproduce the dynamics of a Darcy flow, in which the flow structures within a porous medium are much larger than the average pore size. Differences emerged among the transport properties measured in bead-pack experiments, Hele-Shaw experiments and numerical simulations [21]. The solute redistribution effects (dispersion) produced either by the presence of the solid obstacles in the porous matrix
Figure 1: Examples of convection in porous media in geophysical applications. (a) Salt polygons at the Hoz-e Sultan (Iran) [image courtesy of 9]. These superficial formations are the result of salt-induced convective subsurface flows. (b) Formation of sea ice [adapted with permission from 10]. When sea ice grows, the intermediate layer between the ice exposed to the atmosphere and the ocean forms a porous solid matrix (ice) filled in the interstitial space by brine (water and salt). Salt-rich (yellow) plumes of brine drain from this mushy layer into the underlying ocean (blue). (c) Migration of carbon dioxide (CO\({}_{2}\)) in a post-injection scenario [adapted with permission from 11]. Brine and CO\({}_{2}\) saturate the porous medium and are vertically confined by two low-permeability layers. Due to symmetry, only the right half of the reservoir is shown. (square) Dissolution of CO\({}_{2}\) in brine occurs at the interface between the currents of these fluids. (circle) Liquid phase filling the interstitial space within the pores of the rocks.
or by the walls in a Hele-Shaw flow have been identified as the main causes of these discrepancies [26], and are labelled here as _finite-size_ effects. In recent years, the advancement of theoretical, experimental and numerical techniques has allowed a more precise characterisation of the flow, with accurate measurements of pore-scale dissolution rates, and a clearer picture of the influence of finite-size effects on dissolution and mixing is now available.
In this work, we review recent theoretical, numerical and experimental findings in the field of convection in porous media. This review is meant to be complementary to other works [12, 13, 14], since we focus on dissolution and mixing with emphasis on finite-size effects. The paper is organised as follows. In Sec. 2 we describe the mathematical models and the idealized configurations used to investigate convection in porous media, and we derive a unified formulation to evaluate and relate mixing in different flow configurations. In Secs. 3 and 4, we review the results obtained in Rayleigh-Benard and one-sided configurations, respectively. Finite-size effects possibly leading to the discrepancy observed between experiments and simulations are discussed in Sec. 5. Finally, in Sec. 6 we summarise the results discussed and present recent experimental developments, together with an overview of additional effects not present in the configurations discussed in Secs. 3 and 4.
## 2 Modelling of convection
### Pore-scale modelling
Convective flows are produced by the presence of unstable density gradients within an accelerated fluid domain. These density differences drive the flow towards a more stable configuration, decreasing the gravitational potential energy within the system [27]. We consider problems in which convection is induced by the presence of a scalar quantity (e.g., solute concentration or temperature) that modifies the density field of the flow. For simplicity, in this review we will define the parameters in the case of solutal convection, but the findings extend to the case of thermally-driven convection unless explicitly mentioned.
The maximum density difference within the domain, \(\Delta\rho\), determines the strength of the convective flow. On the other hand, (molecular or thermal) diffusion reduces the local scalar gradients diminishing the driving force of the flow, and viscosity is responsible for energy dissipation due to friction. In a free fluid (i.e., in absence of a porous medium) the relative importance of these two contributions is quantified by the Rayleigh number \(\mathit{Ra}_{\mathrm{T}}\) defined on the characteristic length scale of the flow \(H\)
\[\mathit{Ra}_{\mathrm{T}}=\frac{g\Delta\rho H^{3}}{\mu D}\, \tag{1}\]
where \(g\) is the acceleration due to gravity, \(D\) is the molecular diffusivity and \(\mu\) is the fluid dynamic viscosity. The ratio of kinematic viscosity \(\mu/\rho\) to solute diffusivity \(D\) (or molecular diffusivity) determines the Schmidt number
\[Sc=\frac{\mu}{\rho_{r}D}\, \tag{2}\]
with \(\rho_{r}\) the average (or reference) fluid density within the domain. Similarly, for thermally-driven flows one can define the Prandtl number (\(\mathit{Pr}\)), in which the molecular diffusion is replaced by its thermal counterpart.
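As a rough numerical illustration of Eqs. (1)-(2), the snippet below evaluates both groups; the property values are assumed, order-of-magnitude figures for a brine-like liquid, not data from a specific study.

```python
# Back-of-the-envelope evaluation of Eq. (1) and Eq. (2). All property
# values are assumed, order-of-magnitude figures for a brine-like liquid.
g     = 9.81       # gravity [m/s^2]
drho  = 10.0       # maximum density difference [kg/m^3]
H     = 1.0        # domain reference length [m]
mu    = 1.0e-3     # dynamic viscosity [Pa s]
D     = 1.0e-9     # molecular diffusivity [m^2/s]
rho_r = 1.0e3      # reference density [kg/m^3]

Ra_T = g * drho * H**3 / (mu * D)   # Rayleigh number, Eq. (1)
Sc   = mu / (rho_r * D)             # Schmidt number, Eq. (2)
print(f"Ra_T = {Ra_T:.2e}, Sc = {Sc:.0f}")   # Ra_T ~ 1e14, Sc ~ 1000
```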
Modelling heat or mass transport at the pore-scale requires resolving the flow within the interstitial space. Momentum transport is governed by the continuity and Navier-Stokes equations, respectively:
\[\nabla\cdot\mathbf{\tilde{u}}=0\, \tag{3}\] \[\tilde{\rho}\left[\frac{\partial\mathbf{\tilde{u}}}{\partial t}+ \left(\mathbf{\tilde{u}}\cdot\nabla\right)\mathbf{\tilde{u}}\right]=-\nabla \tilde{p}+\mu\nabla^{2}\mathbf{\tilde{u}}+\tilde{\rho}\mathbf{g}\, \tag{4}\]
where \(\mathbf{\tilde{u}}\), \(\tilde{\rho}\) and \(\tilde{p}\) are the velocity, density and pressure fields, respectively, and \(\mathbf{g}\) indicates acceleration due to gravity. We assumed that the Boussinesq approximation applies, which is reasonable for geophysical processes such as carbon sequestration [28] (additional details on this assumption are provided in Sec. 6.2). The fluid density \(\tilde{\rho}\) is typically defined by an equation of state (EOS) that depends on both solute concentration and temperature (other scalars present in the system may be similarly treated). When linearized, the EOS may be rewritten to obtain the density \(\tilde{\rho}\) as a function of temperature (\(\tilde{T}\)) and concentration (\(\tilde{C}\)) (possible limitations of this approach are discussed in
Sec. 6.2). With respect to the value \(\tilde{\rho}_{r}\) defined at the reference state \((\tilde{C}_{r},\tilde{T}_{r})\), it reads
\[\tilde{\rho}=\tilde{\rho}_{r}+\alpha_{c}(\tilde{C}-\tilde{C}_{r})+\alpha_{t}( \tilde{T}-\tilde{T}_{r}), \tag{5}\]
where \(\alpha_{c},\alpha_{t}\) are the expansion coefficients relating the density to the variations of concentration and temperature, respectively, being typically \(\alpha_{c}>0\) and \(\alpha_{t}<0\). In this case, assuming the presence of solute scalars only, Eq. (5) reduces to the form \(\tilde{\rho}=\tilde{\rho}(\tilde{C})=\tilde{\rho}_{r}+\alpha_{c}(\tilde{C}- \tilde{C}_{r})\).
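A minimal sketch of the linearized EOS of Eq. (5) is given below; the reference state and the expansion coefficients are illustrative assumptions only.

```python
# Minimal sketch of the linearized EOS of Eq. (5); all coefficient values
# here are illustrative assumptions, not fitted to any specific fluid.
def density(C, T, rho_r=1000.0, C_r=0.0, T_r=293.15,
            alpha_c=0.7, alpha_t=-0.2):
    """Density [kg/m^3] for concentration C [kg/m^3] and temperature T [K]."""
    return rho_r + alpha_c * (C - C_r) + alpha_t * (T - T_r)

# Isothermal solutal convection: Eq. (5) reduces to rho_r + alpha_c * (C - C_r)
print(density(C=50.0, T=293.15))   # 1035.0 kg/m^3
```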
Solute conservation is accounted for by the advection-diffusion equation:
\[\frac{\partial\tilde{C}}{\partial t}+\mathbf{\tilde{u}}\cdot\nabla\tilde{C}=D \nabla^{2}\tilde{C}. \tag{6}\]
Eqs. (3)-(6) are solved for the fluid domain to determine the evolution of the flow at the pore-scale [29]. When heat transport is considered, a diffusive heat flux in the solid matrix may also be accounted for [30, 31] (additional details will be provided in Sec. 3.2). The presence of additional phases is not discussed in this review, and we refer to [32] for pore-scale modelling approaches of multiphase flows.
Notwithstanding recent significant improvements of numerical schemes and computational infrastructures, resolving real-scale convective flows from the pore level to the reservoir scale requires a computational effort that is beyond present capabilities. To overcome this issue, a possible approach consists of modelling the flow at an intermediate scale between the pore length scale and the domain height, i.e., the Darcy scale. Despite missing a precise description of the flow dynamics at the pore level, Darcy models have proven to be a reliable framework to determine the overall long-time transport behaviour of species in convective porous media flows [2, 10, 12, 33]. In the following, we will describe under which assumptions a convective flow in porous media can be modelled as a continuum via the Darcy flow approximation.
### Darcy model and dispersion
A possible strategy to model flows in porous media consists of taking the average of relevant quantities (velocity, concentration and pressure fields) over a representative volume that contains several pores [18]. An illustrative example is sketched in Fig. 2. The size of the volume (indicated as representative elementary volume, REV) over which the average is computed is large compared to the pore length scale \(d\), but still smaller than the domain reference length \(H\). The Darcy model is based on empirical observations initially reported more than 150 years ago [34], and was later derived analytically by [35]. We refer to [18, 36] and references therein for additional details. The key assumption of the Darcy equation is that the average flow velocity over the representative volume is proportional to the pressure gradient applied to the volume via the fluid viscosity and a property of the medium defined as permeability. These conditions are achieved when the flow inertia is negligible compared to viscous forces [18]. In the following, we will characterise the medium properties and the governing flow parameters, and finally we will discuss a model for the Darcy flow.
The characteristic geometrical properties of the solid matrix and its intimate interaction with the interstitial fluid determine the flow behaviour. The main macroscopic parameters used to characterise a porous medium are: (i) porosity \(\phi\), defined as the ratio of volume of fluid to the total volume (fluid and solid), and (ii) permeability \(k\), a measure of the resistance opposed by the medium to the flow. For a given porous medium, the Darcy
Figure 2: Model of the flow at different scales. (a) At the Darcy level, all flow quantities are obtained as averaged over the REV. Solid boundaries (in this example at the bottom of the domain) are impermeable to fluid, i.e. the velocity component perpendicular to the wall is zero (\(\mathbf{n}\) is the unit vector normal to the wall). However, slip along this boundary is possible. (b) At the pore-level, the fluid phase flows within the interstitial space of the solid matrix, which is made of impermeable solid objects. Over the surface of each of these objects, no-slip boundary condition applies.
number quantifies the permeability relative to the square of a reference cross-sectional length. With respect to the domain reference length scale \(H\), the Darcy number reads:
\[\mathit{Da}=\frac{k}{H^{2}}. \tag{7}\]
A convective flow is driven by density differences, and therefore a possible velocity scale is the buoyancy velocity \(U\), i.e. the free fall velocity of a parcel of immiscible fluid surrounded by fluid having a density contrast \(\Delta\rho\), which is defined as
\[U=\frac{g\Delta\rho k}{\mu}. \tag{8}\]
We observe that \(U\) is independent of any length scale, and it relates to the fluid (\(\Delta\rho,\mu\)), medium (\(k\)) and domain (\(g\)) properties. In addition to the domain length scale (\(H\)), one can consider as a reference length scale the distance \(\ell\) over which advection and diffusion balance [17]
\[\ell=\frac{\phi D}{U} \tag{9}\]
(in thermal convection, \(\ell=D/U\) with \(D\) representing the thermal diffusivity). The evolution of the fluid layer is controlled by buoyancy, which tends to drive the flow towards a stable configuration, and diffusion, acting to reduce local concentration gradients and increasing the mixing of solute in the domain. The relative importance of the strength of these contributions is evaluated by the Rayleigh-Darcy number \(\mathit{Ra}\)
\[\mathit{Ra}=\frac{H}{\ell}, \tag{10}\]
obtained by combining the Rayleigh number \(\mathit{Ra}_{\mathrm{T}}\) (1) and the Darcy number \(\mathit{Da}\) (7). In particular, \(\mathit{Ra}=\mathit{Ra}_{\mathrm{T}}\mathit{Da}\,/\phi\) in the instance of solutal convection, with the solid being impermeable to solute fluxes, and \(\mathit{Ra}=\mathit{Ra}_{\mathrm{T}}\mathit{Da}\) for thermally-driven cases in conductive media. While in the thermal case an equilibrium between the solid and the fluid phases may be achieved, in solutal convection the solid phase is always solute-free. Notwithstanding this difference, when thermal equilibrium locally occurs between the solid and the fluid phases, results for thermal convection can be equally interpreted as results for solutal convection, and vice-versa, provided that the Rayleigh-Darcy number is matched [14]. The Rayleigh-Darcy number includes all the macroscopic properties of the system: domain (\(g,H\)), medium (\(k,\phi\)) and fluid (\(D,\mu,\Delta\rho\)) properties. In addition, when the spatial coordinates are made dimensionless with respect to \(\ell\) (9), \(\mathit{Ra}\) can be interpreted as the dimensionless domain height [17].
A Darcy-type flow occurs when the size of the flow structures is much greater than the reference length of the REV [14]. The reference length scale is in this case the pore-scale, which is proportional to \(\sqrt{k}\). In quantitative terms, the criterion above is fulfilled when: (i) the pore-scale Reynolds number is small, i.e., viscous dissipation (\(\mu U\)) dominates over inertia (\(\rho_{r}U^{2}\sqrt{k}\)), and (ii) the smallest length scale of the flow (\(\ell\)) is large compared to the pore size (\(\sqrt{k}\)). These conditions are matched if:
\[\frac{\rho_{r}U^{2}\sqrt{k}}{\mu U}\ll 1\Rightarrow\mathit{Re}=\frac{\rho_{r}U \sqrt{k}}{\mu}=\frac{\mathit{Ra}\,\mathit{Da}^{1/2}}{Sc}\ll 1 \tag{11}\]
\[\frac{\ell}{\sqrt{k}}\gg 1\Rightarrow\mathit{Pe}=\frac{\sqrt{k}}{\ell}= \mathit{Ra}\,\mathit{Da}^{1/2}\ll 1, \tag{12}\]
i.e. when Reynolds (\(\mathit{Re}\)) and Peclet (\(\mathit{Pe}\)) numbers are much less than unity. Note that in these definitions the pore length scale (\(\sqrt{k}\)) and the buoyancy velocity (\(U\)) are used as length and velocity scales, respectively.
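As a rough numerical illustration of Eqs. (7)-(12), the sketch below assembles these dimensionless groups and checks the Darcy-regime conditions; all property values are assumed, representative figures for a sandstone-like aquifer, not data from any cited study.

```python
import math

# Rough numerical illustration of Eqs. (7)-(12); all inputs are assumed,
# representative values (SI units) and not data from any cited study.
g, drho, mu, D = 9.81, 10.0, 5.0e-4, 2.0e-9      # gravity, density contrast, viscosity, diffusivity
rho_r, phi, k, H = 1.0e3, 0.3, 1.0e-12, 10.0     # density, porosity, permeability, height

U   = g * drho * k / mu                # buoyancy velocity, Eq. (8)
ell = phi * D / U                      # advection-diffusion length, Eq. (9)
Da  = k / H**2                         # Darcy number, Eq. (7)
Ra  = H / ell                          # Rayleigh-Darcy number, Eq. (10)
Re  = rho_r * U * math.sqrt(k) / mu    # pore-scale Reynolds number, Eq. (11)
Pe  = math.sqrt(k) / ell               # pore-scale Peclet number, Eq. (12)

print(f"U = {U:.2e} m/s, ell = {ell:.2e} m, Da = {Da:.1e}, Ra = {Ra:.0f}")
print(f"Darcy-type flow plausible? Re = {Re:.1e}, Pe = {Pe:.1e} (both must be << 1)")
```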
We consider a fluid-saturated homogeneous and isotropic porous medium with porosity \(\phi\) and permeability \(k\) (Fig. 2a) fulfilling the conditions (11)-(12). The flow field is fully described by the continuity and Darcy equations, respectively:
\[\nabla\cdot\mathbf{u}=0 \tag{13}\]
\[\mathbf{u}=-\frac{k}{\mu}\left(\nabla p+\rho\mathbf{g}\right). \tag{14}\]
Note that in this case \(\mathbf{u}\) is the seepage or Darcy velocity, and it represents the value of fluid velocity averaged over the REV (Fig. 2b). It is related to the fluid velocity averaged over the fluid phase
of the \(\mathrm{REV}\) (\(\mathbf{\tilde{u}}\)) via the Dupuit-Forchheimer relationship \(\mathbf{u}=\phi\mathbf{\tilde{u}}\) [18]. The same applies to pressure \(p\) and density \(\rho\).
The evolution of the concentration field is controlled by the advection-diffusion equation
\[\phi\frac{\partial C}{\partial t}+\nabla\cdot(\mathbf{u}C-\phi D\nabla C)=0\, \tag{15}\]
where \(t\) is time, \(C\) is the concentration averaged over the \(\mathrm{REV}\) and \(D\) is the solute diffusivity, which is assumed constant and independent of the flow. In a more general formulation, discussed in Sec. 2.2.1, this coefficient may be replaced by a dispersion tensor \(\mathbf{D}\) that depends on the local flow conditions (\(\mathbf{u}\)) or the fluid properties (\(Sc\)). While the solid is commonly impermeable to solute, in the thermal case a diffusive heat flux may occur within the solid matrix. In case of thermal equilibrium between the solid and the liquid phases, Eqs. (13)-(15) remain valid.
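To make Eqs. (13)-(15) concrete, the following minimal sketch advances a two-dimensional Darcy-Boussinesq flow in a doubly periodic box using a streamfunction formulation (here \(\nabla^{2}\psi=\partial C/\partial x\), with \(u=\partial\psi/\partial z\), \(w=-\partial\psi/\partial x\) and the heavy fluid initially on top, i.e. a Rayleigh-Taylor-like setup). The grid size, time step, sign conventions and \(\mathit{Ra}\) are illustrative assumptions; the periodic box replaces the wall boundary conditions of the configurations discussed below, and no dealiasing is applied.

```python
import numpy as np

N, L, Ra = 128, 2.0 * np.pi, 2000.0   # grid, box size, Rayleigh-Darcy number (assumed)
dt, nsteps = 2.0e-3, 2000             # illustrative explicit time step and step count

x = np.arange(N) * L / N
X, Z = np.meshgrid(x, x, indexing="ij")
kv = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KZ = np.meshgrid(kv, kv, indexing="ij")
K2 = KX**2 + KZ**2
K2inv = np.where(K2 == 0.0, 0.0, 1.0 / np.where(K2 == 0.0, 1.0, K2))

# heavy fluid (C = 1) on top of light fluid (C = 0), plus a tiny perturbation
rng = np.random.default_rng(0)
C = 0.5 * (1.0 + np.tanh(8.0 * (Z - L / 2))) + 1e-3 * rng.standard_normal((N, N))

def ifft(fh):
    return np.real(np.fft.ifft2(fh))

for step in range(nsteps):
    Ch = np.fft.fft2(C)
    psih = -(1j * KX * Ch) * K2inv     # solve lap(psi) = dC/dx in Fourier space
    u = ifft(1j * KZ * psih)           # u = dpsi/dz (incompressible by construction)
    w = ifft(-1j * KX * psih)          # w = -dpsi/dx
    adv = u * ifft(1j * KX * Ch) + w * ifft(1j * KZ * Ch)   # u . grad C
    lap = ifft(-K2 * Ch)                                    # lap C
    C = C + dt * (-adv + lap / Ra)     # explicit Euler step of Eq. (15)

print("C range after mixing onset:", float(C.min()), float(C.max()))
```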
#### 2.2.1 Dispersion
Solute redistribution induced by the fluid carrying the solute and flowing through the porous medium is defined as dispersion [22]. This mechanism, which has the effect of homogenizing the solute concentration field, adds to the contribution of molecular diffusion. For this reason, these two contributions, although originating from very different physical mechanisms, are often grouped within a unique formulation. In porous media, dispersion may arise for several reasons: pore-scale changes of flow direction (mechanical dispersion), heterogeneous permeability fields (large-scale dispersion) or other mechanisms, such as no-slip at the boundary of the pores or dead-end pores (anomalous dispersion). These effects are the result of the pore-scale dynamics, and can be considerably more effective (up to a few orders of magnitude) than the solute spreading due to molecular diffusion. Therefore, it may be necessary to account for the presence of dispersion when modelling the flow at the Darcy scale. Here we consider the contribution of mechanical dispersion and molecular diffusion, usually grouped in a term defined as hydrodynamic dispersion. For simplicity, in the following we will indicate this mechanism as dispersion, and we refer to [37] for a general theoretical discussion on dispersion-induced mixing.
A classical approach to account for the effects of dispersion consists of replacing the molecular diffusion coefficient, \(D\) in Eq. (15), with a dispersion tensor \(\mathbf{D}\) which depends on the local flow conditions. Typically, the dispersion tensor is anisotropic and aligned with the flow, meaning that it can be decomposed into two components in the directions parallel (\(D_{L}\), longitudinal dispersion) and perpendicular (\(D_{T}\), transverse dispersion) to the local flow velocity \(\mathbf{u}\). This model is labelled as Fickian dispersion model [38]. With these assumptions, the dispersion tensor takes the form:
\[\mathbf{D}=\left(D+\alpha_{T}|\mathbf{u}|\right)\mathbf{I}+(\alpha_{L}-\alpha_{T})\frac{\mathbf{u}\mathbf{u}}{|\mathbf{u}|}, \tag{16}\]
where \(\mathbf{I}\) is the identity tensor, and the coefficients \(\alpha_{L}=D_{L}/U\) and \(\alpha_{T}=D_{T}/U\) correspond to the dispersivities of the medium in the longitudinal and transverse directions, respectively. For solute transport and \(\mathit{Pe}\gg 1\), dispersion in the cross-flow direction is typically one order of magnitude smaller than in the streamwise direction [39]. The ratio of these two contributions is quantified by the dispersivity ratio
\[r=\frac{D_{L}}{D_{T}}. \tag{17}\]
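A minimal sketch of Eq. (16) at a single point is given below; the molecular diffusivity and the dispersivities are assumed values, chosen only to show that the construction yields longitudinal and transverse coefficients \(D+\alpha_{L}|\mathbf{u}|\) and \(D+\alpha_{T}|\mathbf{u}|\) along and across the local velocity.

```python
import numpy as np

# Sketch of the Fickian dispersion tensor of Eq. (16) at a single point;
# the diffusivity and dispersivities below are assumed illustrative values.
def dispersion_tensor(u, D_m=1.0e-9, alpha_L=1.0e-3, alpha_T=1.0e-4):
    u = np.asarray(u, dtype=float)
    speed = np.linalg.norm(u)
    I = np.eye(u.size)
    if speed == 0.0:                        # no flow: molecular diffusion only
        return D_m * I
    return (D_m + alpha_T * speed) * I \
        + (alpha_L - alpha_T) * np.outer(u, u) / speed

D = dispersion_tensor([1.0e-5, 0.0])        # local flow aligned with x
print(D[0, 0], D[1, 1], D[0, 0] / D[1, 1])  # D_L, D_T and their ratio r
```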
The magnitude of \(D_{L}\) and \(D_{T}\) is estimated with the aid of correlations based on experiments and simulations [see 22, and references therein]. Longitudinal and transverse dispersion coefficients depend on many parameters, namely Schmidt number [40], Reynolds number [41], tortuosity of the medium [42], Peclet number [43], fluid phases [42]. We refer to [44] for a review of numerical, experimental and theoretical works in this field.
We consider here an example of a medium composed of monodispersed beads, typical of numerical and experimental setups commonly employed in pore-scale investigations, and we show that the dispersivity ratio \(r\) may considerably vary as a function of the Peclet number of the flow (a similar procedure applies for different media - porosity, tortuosity - and flow - Peclet number - properties). We consider the case of solutal convection in a monodispersed bead pack at \(\mathit{Sc}\leq 550\). For a monodispersed close random packing the porosity is \(\phi=0.37\)[45, 46] and the tortuosity (the ratio of actual flow path length to the straight distance
between the ends of the flow path [36]) is \(\tau=0.68\) [see 47, and references therein]. We use the empirical correlations proposed by [42], obtained for laboratory experiments, i.e., in architecture-controlled media, whereas we refer to [48] for a review of dispersion relations in field-scale data. The results proposed by [42] are valid for liquids and at \(\mathit{Sc}\leq 550\), and we report the dispersivity ratio in Fig. 3 (for \(\mathit{Sc}>550\), similar correlations are provided). Four main flow regimes have been identified, for increasing \(Pe\): (i) diffusion regime, with molecular diffusion being the dominant mechanism; (ii) diffusion and mechanical dispersion, when the two contributions are comparable; (iii) pure mechanical dispersion, when the influence of molecular diffusion is negligible; (iv) non-Darcy, when the effects of inertia and turbulence cannot be neglected. Note that these correlations are obtained from experimental measurements performed over a wide parameter space, and a sharp separation between these regimes is hard to identify. A theoretical prediction is available for low (\(r=1\), [49]) and high (\(r=6\), [42, and references therein]) Peclet numbers. A similar regime classification has also been proposed by [50].
With this example we have shown that in general \(r\) varies with \(Pe\) and \(\mathit{Sc}\), among other parameters, and also across the scales [51]. To simplify the picture, a possible approach used in numerical simulations consists of fixing the values of \(D_{L}\) and \(D_{T}\) or \(r\) [26], which is a reasonable approximation if a narrow range of \(Pe\) is considered. Results on the effect of dispersion on convective flows are presented in Sec. 5.3.
### Flow configurations and quantification of mixing
Convective processes of practical interest are characterised by the mixing of one or more scalar quantities (e.g., the concentration of a dispersed solute phase) in the ambient fluid, and predicting the time required to achieve a certain degree of mixing may be necessary. In the instance of geological carbon sequestration, for example, it is desired to find the time required to dissolve a considerable fraction of the CO\({}_{2}\) injected, to assess the reliability of a given sequestration site. These estimates can be obtained via experiments and simulations in representative flow configurations, which are well controlled and designed to reproduce the main features observed in environmental and industrial cases. In this Section, we will first introduce the main flow configurations investigated in literature, with clear indication of the initial and boundary conditions. Then we will define a general framework and identify relevant observables required to quantify the mixing and analyze the evolution of the system.
Three archetypal flow configurations are generally employed to investigate the dynamics of convection in porous media. They consist of analogue systems that help us understand specific scenarios occurring in nature. A sketch illustrating the possible boundary conditions applied is shown in Fig. 4(a). At the top (label 1) and bottom (label 0) boundaries, both the flux \(F\) and the concentration \(C\) may be prescribed (the flux will be defined more precisely later in this section). All boundaries are considered impermeable to fluid, i.e. a no-penetration condition applies (\(\mathbf{u}\cdot\mathbf{n}=0\), being \(\mathbf{u}\) the fluid velocity and \(\mathbf{n}\) the vector perpendicular to the boundary). However, periodic conditions on the side boundaries may also be considered for convenience in numerical studies, with no difference in the modelling described in the following. In all cases considered here, the domain boundaries are assumed impermeable to fluid, and the fluid is assumed to be initially at rest [\(\mathbf{u}(t=0)=0\)]. The boundary conditions for the solute (fixed concentration or no flux) will determine the nature of the system considered
Figure 3: Dispersivity ratio \(r=D_{L}/D_{T}\) shown for porosity \(\phi=0.37\), tortuosity \(\tau=0.68\) and different Schmidt numbers, namely \(\mathit{Sc}=50\), \(150\), \(250\), \(350\), \(450\) and \(550\). The correlations proposed by [42] have been employed. The advective flow is divided in several regimes, discussed in the text.
(steady or transient), whereas the initial condition for the solute (uniform concentration, or two fluid layers with different concentration) will control the flow evolution. The flow configurations considered are:
1. Rayleigh-Benard (Fig. 4b-i): the solute concentration is fixed at the horizontal boundaries, so that the density of the fluid at the bottom wall (\(C=C_{0}\)) is lighter than the density of the fluid at the top wall (\(C=C_{1}\)) [53, 54]. This unstable flow attains a statistically steady state, which is rigorously steady for sufficiently low Rayleigh-Darcy numbers [55]. A scalar flux is possible through the upper (\(F=F_{1}\)) and the lower (\(F=F_{0}\)) boundaries.
2. One-sided (Fig. 4c-i): the concentration is imposed at the upper wall, where a solute flux is also possible (\(C=C_{1},F=F_{1}\)), and the domain is impermeable to solute at the lower wall (\(\partial C/\partial z=0\), corresponding to \(F_{0}=0\)). This configuration originates a time-dependent flow, and the domain, initially filled with uniform solute concentration \(C=C_{0}\), is gradually filled with the solute coming from the upper boundary [15, 17, 56].
3. Rayleigh-Taylor (Fig. 4d-i): both walls are impermeable to the scalar (\(F_{0}=F_{1}=0\)). The domain initially consists of two uniform layers of different density (\(C=C_{1}\) for the upper portion, and \(C=C_{0}\) for the lower portion), so that the flow configuration is unstable [57, 58]. Solute mixing evolves controlled by the dynamics of the flow structures.
The flow configurations considered differ in terms of boundary conditions and evolution, and suitable flow observables are required to estimate the mixing state of each system. For instance, the Sherwood number _Sh_, defined as the ratio of the convective to the diffusive mass transport, is suitable in solute-permeable domains (e.g., the Rayleigh-Benard case), but it does not provide any indication in closed domains (e.g., the Rayleigh-Taylor case). Therefore, in each flow configuration different quantities are used, which are related through exact mathematical relations that are derived here. Following [21], we take the advection-diffusion equation (15) multiplied by \(C\), and we integrate over the entire domain. We use the hypothesis of incompressibility of the
Figure 4: Flow configurations [adapted with permission from 52]. (a) Sketch of boundary conditions applied at the top (label 1) and bottom (label 0) boundaries in terms of flux \(F\) and concentration \(C\). All boundaries are impermeable to fluid (\(\mathbf{u}\cdot\mathbf{n}=0\)), and side boundaries may be also considered as periodic. The reference frame (\(x,z\)) and gravity (\(\mathbf{g}\)) are also indicated. Three flow configurations are shown: (b) Rayleigh-Benard, (c) one-sided and (d) Rayleigh-Taylor. An exemplar field obtained for two-dimensional simulations at \(\textit{Ra}=7244\) is reported. The field is taken at the time indicated by the green arrows in panels (b-ii), (c-ii) and (d-ii), where the evolution of the parameters \(\widehat{\chi}\), \(\widehat{F}\) and \(d_{t}(C^{2})/2\) is reported (the operator \(d_{t}\) stands for the time-derivative). Quantities are computed as in Eq. (25) and made dimensionless with respect to the length-scale \(\mathcal{L}=\ell\). The time-averaged value of \(\widehat{F}\) is also shown (dashed lines) in panels (b-ii) and (c-ii).
flow (13) together with the impermeability of the boundaries to the fluid (note that the same result is achieved assuming periodicity in the horizontal direction). After some algebraic manipulations, we obtain the following exact global relation:
\[\frac{\phi}{2}\frac{d\langle C^{2}\rangle}{dt}=\frac{\phi}{H}\left(C_{1}F_{1}+C_ {0}F_{0}\right)-\phi\chi\, \tag{18}\]
where \(\langle\cdot\rangle\) indicates the volume average. Eq. (18) relates the mean squared concentration, the solute flux through the walls \(F\) and the mean scalar dissipation within the domain \(\chi\), respectively defined as
\[F_{i}=\frac{D}{L}\int_{0}^{L}\frac{\partial C}{\partial z}\bigg{|}_{z=z_{i}} \,\mathrm{d}x\quad\text{with }i=\{0,1\}\, \tag{19}\]
with \(L\) domain width, and
\[\chi=D\langle|\nabla C|^{2}\rangle. \tag{20}\]
When \(C\) is defined as a mass concentration, \(F\) may be interpreted as the average mass of solute that enters (or leaves) the domain per unit of surface area and time. Eq. (18) can be interpreted as follows. The rate of change of mean squared concentration within the domain is the result of external contributions (\(F_{0},F_{1}\), either positive or negative) and dissipation of mixing energy (\(\chi\) is always positive, therefore it contributes to a reduction of scalar variance \(\langle C^{2}\rangle\)).
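As an illustration, the sketch below evaluates discrete analogues of Eqs. (19)-(20) on a gridded concentration field; the linear test profile \(C=z/H\) is an assumed toy case, for which both quantities reduce to \(D/H\) and \(D/H^{2}\), i.e. unity for the unit parameters chosen.

```python
import numpy as np

# Discrete analogues of Eqs. (19)-(20) on a uniform grid; the linear test
# profile C = z/H is an assumed toy case used only to verify the result.
def flux_and_dissipation(C, dx, dz, D=1.0):
    dCdx = np.gradient(C, dx, axis=0)
    dCdz = np.gradient(C, dz, axis=1)
    F_top = D * np.mean(dCdz[:, -1])        # flux through the top wall, Eq. (19)
    chi = D * np.mean(dCdx**2 + dCdz**2)    # mean scalar dissipation, Eq. (20)
    return F_top, chi

nx, nz, Lx, H = 64, 64, 2.0, 1.0
z = np.linspace(0.0, H, nz)
C = np.tile(z / H, (nx, 1))                 # purely diffusive linear profile
print(flux_and_dissipation(C, Lx / (nx - 1), H / (nz - 1)))   # (1.0, 1.0)
```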
To enable comparisons among different systems, a possible set of dimensionless variables consists of \(\mathcal{L}\) for lengths, \(\phi\mathcal{L}/U\) for time, and \(U\) for velocities. The concentration \(C\) is made dimensionless as \(\widehat{C}=(C-C_{0})/\Delta C\), where \(\widehat{\cdot}\) indicates dimensionless quantities and \(\Delta C=C_{1}-C_{0}\). Making Eq. (15) dimensionless with these variables and proceeding as above, we obtain a dimensionless form of Eq. (18) that reads:
\[\frac{1}{2}\frac{d\langle\widehat{C}^{2}\rangle}{d\widehat{t}}=\frac{\phi D} {U\mathcal{L}}\left(\widehat{F}-\widehat{\chi}\right) \tag{21}\]
with \(\widehat{F}=F_{1}\mathcal{L}/(D\Delta C)\) the dimensionless flux and \(\widehat{\chi}=\chi\mathcal{L}^{2}/[D(\Delta C)^{2}]\) the dimensionless mean scalar dissipation. Note that in this expression the contribution of the flux at the bottom boundary vanishes, due to the set of dimensionless variables considered. The reference length scale \(\mathcal{L}\) has not been defined yet and it can be conveniently set in each configuration. With respect to the systems previously introduced, the following scenarios appear:
1. Rayleigh-Benard (Fig. 4b-ii): after an initial transient phase, the system attains a statistically steady state [54, 59]. The time-average of Eq. (21) returns \[\overline{\widehat{F}}=\overline{\widehat{\chi}},\] (22) where \(\overline{\cdot}\) indicates the time-averaging operator. We observe in Fig. 4(b-ii) that while a non-zero instantaneous contribution \(d\langle\widehat{C}^{2}\rangle/d\widehat{t}\) is present, \(\overline{\widehat{F}}\) and \(\overline{\widehat{\chi}}\) fluctuate around their time-averaged value (black dashed line). Note that the reference length scale normally used in this configuration is \(\mathcal{L}=H\), which gives in (21) the prefactor \(\phi D/(U\mathcal{L})=1/\mathit{Ra}\). The quantity used to evaluate the mass transfer in this configuration is the Sherwood number \[\mathit{Sh}=\frac{H}{\Delta CL}\overline{\int_{0}^{L}\frac{\partial C}{\partial z}\bigg{|}_{z=z_{1}}\,\mathrm{d}x}\,\] (23) defined as the relative contribution of convective and diffusive to diffusive mass transport. Using the definition of \(\widehat{F}\) and Eq. (22), \(\mathit{Sh}\) can be related to the flux and the mean scalar dissipation [53]: \[\mathit{Sh}=\mathit{Ra}\,\overline{\widehat{F}}=\mathit{Ra}\,\overline{\widehat{\chi}}.\] (24)
2. One-sided (Fig. 4c-ii): the domain is impermeable to solute at the lower wall (\(F_{0}=0\)). By setting \(\mathcal{L}=\ell\) as defined in (9), Eq. (21) is independent of \(\mathit{Ra}\) and reads \[\frac{1}{2}\frac{d\langle\widehat{C}^{2}\rangle}{d\widehat{t}}=\widehat{F}- \widehat{\chi},\] (25) where \[\widehat{F}=\frac{\phi D}{U\Delta C}\frac{1}{L}\int_{0}^{L}\frac{\partial C}{ \partial z}\bigg{|}_{z=z_{1}}\,\mathrm{d}x.\] (26) This choice for \(\mathcal{L}\) is convenient to compare systems having different \(\mathit{Ra}\) because the value of \(\widehat{F}\) appears to be universal, as will be later
discussed in Sec. 4. The time-dependent flow originating from this configuration exhibits three main flow regimes [17, 60, 61]. Initially (\(\widehat{t}<10^{3}\)) diffusion dominates and a high-concentration high-density unstable fluid layer thickens. At a later stage (\(\widehat{t}<16\mbox{\it{Ra}}\)), convection takes place and plumes formed at the top boundary layer grow and invade the domain. In this phase \(\widehat{F}\) is statistically steady and characterized by a value (black dashed line) that is independent of the Rayleigh-Darcy number considered. A similar behaviour holds for \(\chi\), but a closer inspection reveals that after the fingers reach the bottom (\(\widehat{t}>10\mbox{\it{Ra}}\)) an increase of \(d\langle\widehat{C}^{2}\rangle/2\,d\widehat{t}\) is observed. A corresponding decreasing behaviour is reflected in \(\chi\), but with half the amplitude. After the upper layer of the domain is also saturated (\(\widehat{t}>16\mbox{\it{Ra}}\)), the dissolution flux \(\widehat{F}\) drops, and the system enters the shutdown regime.
3. Rayleigh-Taylor (Fig. 4d-ii): the domain is impermeable to the solute (\(\widehat{F}=0\)) and Eq. (21) reads \[\frac{1}{2}\frac{d\langle\widehat{C}^{2}\rangle}{d\widehat{t}}=-\frac{\phi D}{U\mathcal{L}}\widehat{\chi}.\] (27) The flow is initialised considering two fluid layers of different density in an unstable configuration. Eq. (27) suggests that all the potential energy initially stored by keeping the two phases segregated is dissipated as time evolves. Both \(\ell\) and \(H\) can be considered as reference length scales, depending on which part of the flow evolution is considered. However, [52] have shown that \(\mathcal{L}=\ell\) provides a universal picture for the evolution of \(\widehat{\chi}\), and results are presented in Fig. 4(d-ii) using this length-scale. Similarly to what is observed in the one-sided configuration, the flow is initially controlled by diffusion (\(\widehat{t}<10^{3}\)). Afterwards (\(10^{3}<\widehat{t}<\mbox{\it{Ra}}\,/2\)) the formation of fingers is observed, which merge and grow, accelerating mixing. In this phase, occurring at \(\mbox{\it{Ra}}<\widehat{t}<3\mbox{\it{Ra}}\) in the simulations considered, \(\widehat{\chi}\) is observed to increase in Darcy simulations, as shown in Fig. 4(d-ii), whereas it decreases in pore-scale simulations, due to finite-size effects [29]. The limits at which these regimes set in are indicative, as the flow evolution is strongly influenced by the initial perturbation. When the domain is nearly saturated, a stable density profile is achieved and local concentration gradients are not sufficient to sustain convection, which is in turn overcome by diffusion. Correspondingly, scalar dissipation is observed to reduce, asymptotically attaining a zero value corresponding to a uniformly mixed domain.
Many recent studies have focused on determining correlations of the mixing parameters (\(\widehat{F}\), _Sh_ or \(\widehat{\chi}\)) with the flow parameter (_Ra_). These results will be reviewed in Secs. 3 and 4 for the Rayleigh-Benard and one-sided configurations, respectively.
## 3 Rayleigh-Benard convection
Rayleigh-Benard convection produces the statistically-steady flow discussed in Sec. 2.3, with mass transfer properties quantified by the Sherwood number, a time-averaged ratio of total (convective and diffusive) to diffusive mass transport at the boundaries of the domain, defined in Eq. (23). Alternatively, the Nusselt number is used in the case of thermal convection. In this section we will review the results relative to Darcy and pore-scale flows in this configuration.
### Darcy flow
In the Darcy case [Eqs. (13)-(15)], the system is uniquely controlled by the Rayleigh-Darcy number \(\mbox{\it{Ra}}\), defined in Eq. (10), which sets the flow structure. The behaviour of _Sh_ with _Ra_ is reported in Fig. 5 for Darcy studies available in literature for two- [31, 54, 62, 63] and three-dimensional [59, 64] simulations. We briefly recall here the main features of the flow, and we refer to [14] for a detailed review of the flow structure. For \(\mbox{\it{Ra}}<4\pi^{2}\), the mass transport is purely diffusive [65, 66] and no convective motion arises (_Sh_ = 1). The flow is maintained quiescent by the dissipative (diffusive) effects that dominate over convection. For increasing _Ra_, instabilities appear in the form of steady convective rolls [55] with corresponding increase of the convective mass transfer. When \(\mbox{\it{Ra}}\approx 400\), unsteady boundary layer instabilities take place and become progressively dominant. When the driving force is sufficiently large, namely at \(\mbox{\it{Ra}}\approx 1300\) and \(\mbox{\it{Ra}}\approx 1700\) for two- and three-dimensional systems [53, 64], respectively, these instabilities turn into a dynamic formation of small plumes at the
boundary layer, which eventually grow and merge into larger plumes spanning the entire domain height. In this stage, the flow enters the _high-Ra_ regime [54]. The dynamics described above is similar in two- and three-dimensional domains. However, in the three-dimensional case the flow pattern obtained at low _Ra_ may be affected by the initial condition, i.e., different flow structures are obtained starting from different initial concentration distributions. In addition, hysteresis effects have been observed in the two-dimensional case [53]: when the flow is initialised using a solution obtained at higher _Ra_, the flow structure (number of rolls) and the transport properties (_Sh_) differ from those obtained starting from, e.g., a linear temperature distribution or from a solution obtained at lower _Ra_. As a starting point, \(\textit{Ra}=1255\) is used by [53], and _Ra_ is progressively decreased. The system evolves following two distinct branches (Fig. 5a), both differing from the solution obtained for increasing _Ra_.
Determining the scaling of _Sh_ with _Ra_ in the high-_Ra_ regime has been the object of active investigation in recent years, also owing to improvements in computational capabilities. In the frame of free fluids (i.e., no porous medium), [67] and [68] proposed that at sufficiently high Rayleigh numbers the interior of the domain is well mixed, and the temperature gradients are localised at the wall boundary layers. The Sherwood number is then obtained as a result of the diffusive heat flux across these layers, which is inversely proportional to their thickness; for porous media it is predicted to scale linearly with _Ra_, \(\textit{Sh}\sim\textit{Ra}\). An accurate phenomenological description of the flow and scaling arguments is provided by [14]. The linear scaling best fitting the two-dimensional numerical results [54] is
\[\textit{Sh}=0.0069\textit{Ra}+2.75, \tag{28}\]
and it also agrees with the best known theoretical upper bound, \(\textit{Sh}\leq 0.0297\textit{Ra}\) [53]. The asymptotic scaling proposed by [54] [solid black line in Fig. 5(a)] fits the numerical results well, and it is in agreement with the above-mentioned linear predictions: the compensated Sherwood number [Fig. 5(b)] approaches in this case the asymptotic value (0.0069).
In three-dimensional domains the situation differs, as the compensated Sherwood number has not yet reached the asymptotic linear scaling [Fig. 5(b)]. The best fit is in this case provided by [59]
\[\textit{Sh}=0.0081\textit{Ra}+0.067\textit{Ra}^{0.61}, \tag{29}\]
which consists of a linear relation with sublinear corrections. The discrepancy between the scaling obtained in three-dimensional porous media and the linear asymptotic prediction for \(\textit{Ra}\rightarrow\infty\) is due to the different flow structure produced by the additional degree of freedom provided by the third spatial dimension, i.e., the flow has not yet reached the asymptotic state. It was estimated [59] that in three-dimensional domains the asymptotic regime sets in at \(\textit{Ra}\approx 5\times 10^{5}\), i.e. more than one order of magnitude beyond the threshold identified in two-dimensional flows, and further investigations at \(\textit{Ra}\approx 10^{6}\) are required to confirm this finding.
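As a quick numerical illustration, the short Python sketch below evaluates the best-fit correlations (28) and (29) in compensated form; the sample Rayleigh-Darcy numbers are arbitrary choices.

```python
def sh_2d(Ra):
    """Two-dimensional best fit, Eq. (28) [54]."""
    return 0.0069 * Ra + 2.75

def sh_3d(Ra):
    """Three-dimensional best fit, Eq. (29) [59]."""
    return 0.0081 * Ra + 0.067 * Ra**0.61

for Ra in (1e4, 1e5, 1e6):
    # In compensated form Sh/Ra, the 2D fit approaches its asymptotic
    # prefactor 0.0069, while the sublinear correction keeps the 3D curve
    # above its linear prefactor 0.0081 over this range.
    print(f"Ra={Ra:.0e}  2D: {sh_2d(Ra)/Ra:.4f}  3D: {sh_3d(Ra)/Ra:.4f}")
```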
Resolving the flow equations at the Darcy scale at large _Ra_ may require extensive computational resources [58, 59]. An interesting approach proposed to overcome this obstacle consists of a new modelling strategy labelled large-mode simulation (LMS) [69].
Figure 5: Darcy simulations. (panel a) Sherwood number (_Sh_) as a function of Rayleigh-Darcy number (_Ra_), and (panel b) in compensated form (_Sh_ / _Ra_). Results reported are obtained in two- [53, 54, 62, 63] and three-dimensional [59, 64] simulations of homogeneous and isotropic porous media. Best-fitting laws at high Rayleigh-Darcy numbers for two- (black solid line) and three-dimensional flows (red solid line), respectively Eq. (28) and Eq. (29), are also reported. In panel (a), a subset of the two-dimensional data of [53] (blue stars) is also shown to mark the presence of hysteresis effects. The solution obtained at \(\textit{Ra}=1255\) was used as initial condition for these simulations. As \(\textit{Ra}\) is decreased, the system evolves on two branches (blue lines), both differing from the solution obtained for increasing \(\textit{Ra}\).
With the aid of a scale analysis, [69] observed that: (i) large-scale structures are responsible for the bulk of the production of concentration variance, (ii) variance dissipation is dominated by the small diffusive scales, and (iii) both production and dissipation rates are independent of the Rayleigh-Darcy number. On this ground, they propose an LMS model in which closure is achieved by replacing the actual diffusivity with an effective one, in analogy with large eddy simulations of turbulent flows. LMS is based on resolving the low-wavenumber dynamics only, whereas the effect of the unresolved scales on the large ones is modelled. Results obtained with this new strategy are promising for enabling long-term predictions of convective porous media flows in practical settings.
### Pore-scale flow
Recent developments in computational methods have allowed the numerical solution of pore-resolved convective flow models, defined by Eqs. (3)-(6). Unlike the Darcy case, in pore-scale problems the flow properties cannot be lumped into a single governing parameter, and the contribution of several flow features has to be considered. With respect to the medium, obstacle shape and arrangement determine the permeability, the medium conductivity influences heat transport through the solid phase, and the volume fraction of solid sets the porosity. Concerning the fluid and the scalar transported, kinematic viscosity and diffusivity set the relative thickness of the thermal and kinematic boundary layers [measured by the Schmidt number \(Sc\), defined in (2)], while the density difference produced by the scalar determines the driving force [measured by the Rayleigh number \(\mathit{Ra}_{\mathrm{T}}\), defined in (1)]. A key quantity to consider is the size of the pore space relative to the flow structures, which determines the penetration into the domain of the buoyant plumes responsible for convective mixing. The influence of these flow parameters on the convective transport efficiency, measured by \(\mathit{Sh}\), is discussed here.
We initially consider a solid phase impermeable to the scalar, e.g. the case of solute convection. Accurate two-dimensional pore-scale simulations of Rayleigh-Benard solutal convection are presented by [31], where the porous medium is modelled as a matrix of aligned squares. They explored different values of porosity (\(0.36\leq\phi\leq 0.56\)) and Schmidt number (\(\mathit{Sc}=1\) and \(\mathit{Sc}=250\)). The results, reported in Fig. 6 (green symbols) as measurements of the Sherwood number, indicate fair agreement with two-dimensional Darcy simulations (black solid line, [54]). However, the pore-induced dispersion, which may be as strong as buoyancy, affects the flow structure and consequently \(\mathit{Sh}\), and the scaling \(\mathit{Sh}(\mathit{Ra})\) appears sublinear when the porosity is increased (\(\phi=0.56\)). At low Schmidt number (\(\mathit{Sc}=1\)), pore-scale effects on the flow structure, e.g. wavenumber or width of the plumes, are qualitatively similar to those at high Schmidt number (\(\mathit{Sc}=250\)). In a complementary study, [70] investigated large-Schmidt-number (\(\mathit{Sc}=250\)) convection, focusing on the role of the medium properties. The results of [70] are reported in Fig. 6 (red symbols), and indicate that the dissolution coefficient depends on \(\mathit{Ra}\) as
\[\mathit{Sh}=1+a\mathit{Ra}^{1-0.2\phi^{2}}, \tag{30}\]
Figure 6: Sherwood number (\(\mathit{Sh}\)) as a function of Rayleigh-Darcy number (\(\mathit{Ra}\)) for two-dimensional pore-scale simulations. Results refer to solutal convection [31, 70] (i.e., solid impermeable to solute, green and red symbols) and thermal convection [30] (i.e., conductive medium, blue symbols). Results of two-dimensional Darcy simulations [54] (black solid line) and high-\(\mathit{Ra}\) scaling [Eq. (28), grey solid line] are also shown.
where \(a=0.011\pm 0.002\) is a pore-scale geometric parameter depending on the shape and arrangement of the obstacles. The difference with respect to the Darcy case [simulations by [54] - black line, asymptotic best fit - Eq. (28)] is apparent: within this range of parameters, systems with the same _Ra_ (achieved with different values of porosity) exhibit very different convective transport properties.
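The porosity dependence of the exponent in Eq. (30) is easy to appreciate numerically; a minimal sketch follows, using the reported prefactor \(a=0.011\) (the sample values of \(\mathit{Ra}\) and \(\phi\) are illustrative).

```python
def sh_pore(Ra, phi, a=0.011):
    """Pore-scale correlation for solutal convection, Eq. (30) [70]."""
    return 1.0 + a * Ra**(1.0 - 0.2 * phi**2)

for phi in (0.36, 0.46, 0.56):
    # The effective exponent 1 - 0.2*phi^2 drops below one as porosity
    # grows, reproducing the sublinear Sh(Ra) behaviour noted above.
    print(phi, round(sh_pore(1e4, phi), 2))
```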
An additional degree of freedom is introduced by allowing a flux of scalar through the solid matrix, which may be the case for thermal convection. The flow structure and the heat transfer coefficient are determined by the relative size of the thermal length scale (boundary layer thickness) and the porous length scale (average pore space). These properties control the penetration of the plumes into the boundary layer region, which in turn determines the heat or mass transfer rate. This physical mechanism has been described by three-dimensional pore-scale simulations of a few pore spaces [71]. Later, in a complementary experimental study, [72] observed that while at low Rayleigh numbers the transport mechanism is less efficient than in free-fluid Rayleigh-Benard convection, at larger Rayleigh numbers the classical scaling derived for free fluids [73, 74] is recovered. The nature of this transition has been investigated in detail by [30] (blue symbols in Fig. 6).
Within the frame of conductive media, two-dimensional direct numerical simulations have been used to investigate the microscale flow field at \(\textit{Sc}=4.3\). The obstacles consist of circles arranged in a regular manner. When the arrangement is not regular (not shown in Fig. 6), a slight decrease of _Sh_ is observed. In Fig. 6 it appears that the convective heat transport is less efficient compared to the configuration discussed before, in which the matrix was impermeable to solute [for a detailed discussion on the importance of the (im)permeability condition of the solid matrix, see [75]]. In addition to the effect of the thermal conductivity of the solid, the measurements of [30] refer to relatively high values of porosity. As predicted by Eq. (30), the larger the porosity, the lower the _Sh_. The transition from porous convection to unconfined convection is controlled by two physical mechanisms, which are set by the properties of the porous matrix [30]. On the one hand, the presence of obstacles makes the flow more coherent, with the correlation between temperature fluctuation and vertical velocity enhanced and the counter-gradient convective heat transfer suppressed, leading to heat transfer enhancement. On the other hand, the convection strength is reduced due to the impedance of the obstacle array, leading to heat transfer reduction. The variation of _Sh_ with \(\textit{Ra}_{\text{T}}\) (not _Ra_) is reported in Fig. 7(c), where the presence of these two distinct regimes is apparent. For sufficiently large \(\textit{Ra}_{\text{T}}\) or high porosity, the classical free-fluid scaling [73, 74] is recovered. When the Rayleigh number is lowered, however, the role of the porous structure in confining the flow is critical.
Figure 7: Pore-scale two-dimensional simulations [30]. (a) Exemplar dimensionless temperature field (\(\vartheta\)) [adapted with permission from 30], with 0 and 1 the temperature values at the top and bottom boundaries, respectively. (b) Detail with explicit indication of the boundary layer thickness (\(\delta=H/(2\,\textit{Sh})\)) and the average pore scale (\(l_{\text{s}}\)). The medium consists of aligned circular and conductive obstacles at Schmidt number \(\textit{Sc}=4.3\). (c) Sherwood number (_Sh_) as a function of the Rayleigh number (\(\textit{Ra}_{\text{T}}\)) for different values of porosity, \(\phi\). Note that results for unconfined fluids (\(\phi=1\)) are also shown. (d) Compensated Sherwood number (\(\textit{Sh}\,\textit{Ra}_{\text{T}}^{-0.3}\)) as a function of \(\delta/l_{\text{s}}\).
In this regime, the following correlation for the Sherwood number _Sh_ is proposed:
\[\textit{Sh}\approx 1+c\phi\left(\frac{H}{\ell_{s}}\right)^{4}\textit{Sc}^{2} \,\textit{Re}^{2}(\textit{Ra}_{\text{T}})^{-1}, \tag{31}\]
where the Reynolds number _Re_ is computed based on the velocity fluctuations and \(c=8\) is a fitting parameter. This scaling is well approximated by \(\textit{Sh}\sim\textit{Ra}_{\text{T}}^{0.65}\) [30]. The transition between these regimes appears clearly in Fig. 7(d), where the compensated Sherwood number (\(\textit{Sh}\,\textit{Ra}_{\text{T}}^{-0.3}\)) is shown as a function of the boundary layer thickness [\(\delta=H/(2\,\textit{Sh})\)] divided by the average pore scale (\(l_{s}\)). The situation is schematically illustrated in Fig. 7(a,b). When the thickness of the thermal boundary layer is comparable to the average pore length scale (\(\delta/l_{s}=1\)), the transition from one regime to the other occurs. In addition to the porous structure and the Rayleigh number, in the case of thermal convection the boundary layer thickness and the heat transfer coefficient are also determined by the thermal conductivities of the solid and liquid phases [76, 77].
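A compact way to summarise the two regimes is to evaluate the correlation (31) together with the transition criterion \(\delta/l_{s}=1\); the sketch below does so with illustrative parameter values (only \(c=8\) and \(\textit{Sc}=4.3\) come from the study discussed above).

```python
def sh_confined(Ra_T, Re, Sc, phi, H_over_ls, c=8.0):
    """Confined-regime correlation, Eq. (31), with c = 8 from [30]."""
    return 1.0 + c * phi * H_over_ls**4 * Sc**2 * Re**2 / Ra_T

def regime(Sh, H_over_ls):
    # Boundary layer thickness delta = H/(2 Sh), here divided by l_s.
    delta_over_ls = H_over_ls / (2.0 * Sh)
    return "porous (confined)" if delta_over_ls > 1.0 else "unconfined-like"

# Illustrative inputs; Re, phi and H/l_s are assumptions, not measured values.
Sh = sh_confined(Ra_T=1e7, Re=0.5, Sc=4.3, phi=0.6, H_over_ls=20.0)
print(round(Sh, 2), regime(Sh, H_over_ls=20.0))
```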
## 4 One-sided convection
The one-sided configuration introduced in Sec. 2.3 is representative of natural instances like geological CO\({}_{2}\) sequestration [12] and mixing in groundwater flows [78]. In these cases a fluid-saturated porous domain [sketched in Fig. 4(c-i)] is allowed to exchange solute through the top boundary. The system is initially driven by diffusion [17, 60, 79]. The fluid layer below the upper boundary becomes progressively rich in solute, increasing the density of the liquid phase. When sufficiently thick, this high-density layer eventually becomes unstable and finger-like structures form [80, 81] and evolve (i.e., grow and merge); if the Rayleigh-Darcy number is sufficiently large [\(\textit{Ra}>O(10^{3})\)], the system may reach a quasi-steady regime. In this phase the dimensionless solute flux \(\widehat{F}\), computed as in Eq. (26) and indicating the mass of solute dissolved through the top boundary per unit of surface area and time, is nearly constant over time. For simplicity, hereinafter we will refer to \(\widehat{F}\) as the time-averaged value of the flux in this constant-flux phase. The role of the fingers in promoting solute mixing is crucial, as initially proposed by [82, 83], since the contribution of convection considerably accelerates dissolution compared to the purely-diffusive case. The domain progressively saturates with incoming solute, up to the point at which the local concentration difference between the upper fluid layer and the top boundary is reduced, and the dissolution rate suddenly drops. This phase is referred to as the shutdown regime and has been accurately described [14, 16, 17, 60, 84]. A thorough description of the whole dissolution process is provided by [17].
In this section we will review the results relative to Darcy and pore-scale flows in the one-sided configuration, and we will focus on the dependency of the dissolution rate \(\widehat{F}\) on the flow parameters during the constant flux regime.
### Darcy flows
When the Darcy model is considered [Eqs. (13)-(15)], the flow is uniquely controlled by the Rayleigh-Darcy number _Ra_, similarly to the Rayleigh-Benard case discussed in Sec. 3.1. Two-dimensional numerical simulations agree on the value of the flux during the constant-flux regime, which was initially determined by [56] to be
\[\widehat{F}=0.017. \tag{32}\]
This observation has been later confirmed by a number of numerical studies [15, 16, 17, 85, 86, 87]. We refer to [87] for a review of literature scaling laws in the presence of variations of this problem (anisotropy, geochemistry, etc.).
In the instance of three-dimensional domains, the dynamics is analogous to that discussed above. However, due to the large computational costs, only a few numerical works are available [15, 88, 89]. A seminal work in the field is presented by [15], who performed three-dimensional simulations and estimated the flux to be higher than in the corresponding two-dimensional case (32). These results refer to \(\textit{Ra}\leq 9\times 10^{3}\), and additional data at larger Rayleigh-Darcy numbers are required to determine the exact value of \(\widehat{F}\), which has been estimated to exceed the two-dimensional value by no more than 25% [15, 87, 89]. It is apparent that the additional degree of freedom represented by the third spatial dimension adds significant complexity to the fingering phenomena [15, 88], with the flow structure being more complex and dynamical [58].
### Pore-scale and Hele-Shaw flows
The determination of \(\widehat{F}\) has been carried out beyond the Darcy model via pore-scale simulations and experiments, and via Hele-Shaw setups. As discussed in Sec. 3.1, a classical argument for Darcy convection requires that _Sh_ scales linearly with _Ra_ [67, 68]. The theoretical interpretation is that in natural convection _Sh_ is uniquely controlled by the diffusive boundary layer, and it is independent of the flow interior and any external length scale. Only for an exponent of one, i.e., _Sh_ \(\sim\) _Ra_, is it possible to have an expression for _Sh_ that is independent of \(H\) [18, 68, 90]. As a result [see also Eq. (24)], the flux \(\widehat{F}\) is expected to be independent of _Ra_, as emerges from Darcy simulations [e.g., see correlation (32)]. Despite this robust theoretical framework, a different scaling for _Sh_(_Ra_) has been found in many studies.
Most of the experimental studies investigating one-sided convection have been carried out with the aid of bead packs or Hele-Shaw cells, examples of which are reported in Figs. 8(a) and 8(b), respectively. In the former, the porous medium consists of a matrix of rigid spheres, typically made of a transparent material to allow optical access to the flow, enclosed in a transparent container. A Hele-Shaw cell, in turn, is obtained with two parallel and transparent plates separated by a narrow gap \(b\) (usually less than 1 mm thick). When the fluid velocity in the cell is sufficiently low (gap-based Reynolds number \(\ll 1\)), the flow behaves as a laminar Poiseuille flow, i.e., the gap-averaged velocity is proportional to the pressure gradient via the inverse of the viscosity and a constant \(k=b^{2}/12\), where \(k\) is defined as the equivalent permeability of the cell. Since this formulation represents an analogue of the Darcy law (14), the Hele-Shaw cell is commonly used as a tool to reproduce a flow through a porous medium. Bead-pack and Hele-Shaw experiments used to derive scaling laws are discussed in the following.
A first _Sh_(_Ra_) scaling was proposed by [19], who used experiments in glass beads to mimic one-sided convection in porous media. The fluids employed were methanol and ethylene-glycol (MEG) and water. MEG [upper fluid layer in Fig. 8(a)] is lighter than water when pure, but it presents a non-monotonic density profile as a function of the fraction of water. As a result, at the interface between the two fluids [identified by the white boundary in Fig. 8(a)], a heavier mixture forms and originates finger-like instabilities (white structures). The Sherwood number, estimated by tracking the receding interface between the two fluid layers, was measured to scale as _Sh_ \(\sim\) _Ra\({}^{4/5}\)_, and the result was explained with a phenomenological model based on a boundary layer theory: the lateral solute diffusion from the downward plumes into the upward ones reduces the local concentration gradients and the corresponding density differences driving the flow. This translates into a reduction of the flux, making _Sh_ fall below the classical scaling. An analogous approach was employed by [20], who used Hele-Shaw cells and a layer of water located vertically above a layer of propylene glycol (PG) [a similar system is shown in Fig. 8(b)]. They obtained the scaling _Sh_ \(\sim\) _Ra\({}^{0.76}\)_ and identified the plume spacing as the key parameter controlling the Sherwood number. Similar results were derived by [91] (Hele-Shaw and beads, _Sh_ \(\sim\) _Ra\({}^{0.84}\)_), [92] (Hele-Shaw, _Sh_ \(\sim\) _Ra\({}^{0.76}\)_) and [93] (Hele-Shaw, _Sh_ \(\sim\) _Ra\({}^{0.95}\)_). The discrepancy between these sublinear scalings and the linear theoretical [67, 68] and numerical findings from Darcy simulations [15, 16, 17, 60, 86] has been the subject of active investigation.
To examine this mismatch, numerical simulations and theoretical arguments were used [21]. Accurate simulations were employed to mimic the behaviour of the fluids used in the experiments (characterised by a non-monotonic density-concentration curve and a concentration-dependent viscosity), which differ from the ones classically considered in Darcy simulations (linear dependency of density on concentration, constant viscosity).
Figure 8: Examples of one-sided studies. (a) Experiment with MEG in water in bead packs [adapted with permission from 19]. (b) Experiment with propylene-glycol (PG) and water in Hele-Shaw cell [adapted with permission from 21]. (c) Darcy simulation with non-monotonic density profile [adapted with permission from 21].
A snapshot of the concentration field obtained for a Darcy simulation with a non-monotonic density profile is shown in Fig. 8(c). It was found that the dissolution flux is determined by the mean scalar dissipation rate, \(\widehat{\chi}\). Mixing in porous media has a universal character, and the non-linear behaviour observed needs to be explained by effects not present in the classical Darcy-Boussinesq model. In particular, the authors observed that several differences exist between this simple Darcy model and the experiments reporting sublinear scalings. Among others, they identified three main possible sources of discrepancy: (i) dependency of viscosity on the solute concentration, (ii) non-monotonic behaviour of fluid density with solute concentration, and (iii) compressibility effects (volume change during the process of dissolution). The conclusion of [21] is that while the concentration-dependent behaviour of viscosity has a minor effect, the non-monotonic density-concentration profile (shape of the density curve) may considerably affect the Sherwood number scaling law. The role of some of these fluid properties has been investigated subsequently and will be discussed in the following.
The scaling analysis performed by [90] for non-Boussinesq and compressible flows reveals that the scaling \(\mathit{Sh}\approx 181.02+0.165\mathit{Ra}\) best fits their data. Therefore, the authors propose that the previously reported sublinear relations could be in part a result of the relatively limited parameter range of the simulations (as in the case of [94]), or arise because the Rayleigh-Darcy number of the experiments lies below the asymptotic limit, i.e., before the classical linear scaling sets in.
To avoid a non-monotonic dependency of density on concentration, i.e. to remove the fluid properties as a possible reason for non-linear scaling, experiments in Hele-Shaw cells have been performed. Potassium permanganate (KMnO\({}_{4}\)) and water are used as analogue fluids, with solid crystals of KMnO\({}_{4}\) placed on a metal grid located on top of the cell. Water gradually dissolves the crystals, which remain in a fixed position held by the mesh, and the resulting interface between the light and the heavy fluid is always fixed and flat. This methodology, initially introduced by [79], allowed a wide range of Rayleigh-Darcy numbers to be covered. In addition, variations of volume and fluid viscosity with solute concentration are negligible. Results by [95] report a linear scaling of \(\mathit{Sh}\) with \(\mathit{Ra}\). Later studies [25, 96] indicate that for a given value of permeability the scaling \(\mathit{Sh}\sim\mathit{Ra}\) holds. In general, \(\mathit{Sh}\) may still be a function of \(\mathit{Ra}\) due to the presence of mechanical dispersion [97].
The works presented indicate that the fluid properties may not be sufficient to justify the non-linear \(\mathit{Sh}(\mathit{Ra})\) scaling observed. However, other physical mechanisms induced by the Hele-Shaw cell or by dispersion in the porous medium are not present in the classical Darcy model. These effects, labelled finite-size effects, may be responsible for the non-linear scaling observed, and will be discussed in detail in Sec. 5.
## 5 Finite-size effects
Domain features like lateral confinement, thickness-induced Hele-Shaw dispersion and pore-scale dispersion have been identified as playing a role in the non-linear scaling of \(\mathit{Sh}\) with \(\mathit{Ra}\) and in the flow structure. The influence of these finite-size effects on convection will be reviewed in this section.
### Effect of confinement
A natural question arising from numerical simulations is what happens when the domain is confined in one of the wall-parallel directions; we will address this topic here in the frame of the Rayleigh-Benard, Rayleigh-Taylor, and full reservoir-scale flow dynamics.
The flow in a porous Rayleigh-Benard system at large \(\mathit{Ra}\) consists of two distinct regions (see Sec. 3): (i) the near-wall region, characterised by the presence of protoplumes, and (ii) the interior of the flow, controlled by megaplumes. The average flow structure in each of these regions is quantified via the time- and horizontally-averaged wavenumber, \(k\). While the near-wall region is hard to describe theoretically, the interior of the flow has been well characterised. In two dimensions, stability analysis [98] of the flow interior for \(\mathit{Ra}\rightarrow\infty\) suggests that \(k\sim\mathit{Ra}^{5/14}\), in fair agreement with numerical measurements that give \(k\sim\mathit{Ra}^{0.4}\) [54]. In three dimensions, theoretical results [99] indicate that \(k\sim\mathit{Ra}^{1/2}\), which is in excellent agreement with the numerical measurements of [64] and [58], who obtained \(k\sim\mathit{Ra}^{0.52}\) and \(k\sim\mathit{Ra}^{0.49}\)
respectively. In addition, [59] observed with the aid of numerical simulations that supercells, representing clusters of protoplumes located near the boundaries, are the footprint of the megaplumes dominating the bulk of the flow. Unexpectedly, the correlation between these flow structures is observed to hold up to very high Rayleigh-Darcy numbers. This flow structure, however, may be considerably affected by the domain size.
Two-dimensional numerical simulations performed by [62] revealed that identifying the wavenumber may be complicated. Domains with low aspect ratio can dramatically reduce or even suppress convection. The study shows that the interior structure of a two-dimensional system may be strongly conditioned by the domain width, suggesting that the inter-plume spacing is not unique. The authors conclude that determining a precise high-\(\mathit{Ra}\) scaling of the interior inter-plume spacing will require extremely long simulations in very wide computational domains.
In three dimensions, the effect of domain confinement has been investigated by [58]. They performed numerical simulations at \(\mathit{Ra}=10^{4}\) in the Rayleigh-Benard configuration, in domains having variable extension in one of the wall-parallel directions, namely \(x\) in Fig. 9(a), and constant extension in the other directions (\(W=H\)). Periodic boundary conditions are applied in the wall-parallel directions. The size of the domain in direction \(x\) relative to the height is quantified by the aspect ratio \(\mathit{R}=L/H\). Four values of \(\mathit{R}\) are considered in Fig. 9, with the domain progressively increasing in size from \(\mathit{R}=1/8\) to \(\mathit{R}=1\). The corresponding temperature fields, taken at the centreline (\(z=1/2\)) and close to the bottom wall (\(z=0.005\)), are shown in Figs. 9(b)-(i). A strong confinement of the domain has dramatic effects on the flow structures. For sufficiently large domains, e.g. \(\mathit{R}=1\), the near-wall cells reported in Fig. 9(h) are randomly oriented and show a wide distribution of sizes. When the domain width is progressively reduced, the cells are strongly constrained [Figs. 9(d,f)] and eventually end up in an extremely ordered pattern [Fig. 9(b)]. The same applies to the flow structures at the centreline, which for small domains (\(\mathit{R}\leq 1/4\)) form sheet-like plumes. More quantitative results, estimated by means of the horizontal radial mean wavenumber of these simulations and
Figure 9: Influence of lateral confinement (domain width) on the development of the flow structures. Three-dimensional Rayleigh-Bénard simulations performed at \(\mathit{Ra}=10^{4}\) are shown [adapted with permission from 58, 59]. (a) Dimensionless temperature distribution (\(\vartheta\)) in a cubic domain, with 0 and 1 the values at the top and bottom boundaries, respectively, and gravity \(\mathbf{g}\) acting along \(z\). Periodic boundary conditions are applied in the wall-parallel directions \((x,y)\). The domain has dimensions \(L,W\) and \(H\) in directions \(x,y\) and \(z\), respectively. The size of the domain is progressively increased in direction \(x\), so that the aspect ratio \(\mathit{R}=L/H\) increases from \(\mathit{R}=1/8\) (panels b,c) to \(\mathit{R}=1\) (panels h,i). For each value of \(\mathit{R}\), temperature fields taken at the centreline (\(z=1/2\)) (c,e,g,i) and close to the bottom wall (\(z=0.005\)) (b,d,f,h) are shown. Note that different colorbars apply to centreline and near-wall panels.
additional larger domains (not shown here), indicate that the flow structures near the wall and in the interior of the flow are strongly constrained by the size of the domain. They found that at \(\mbox{{Ra}}=10^{4}\) the flow is independent of the size of the domain for \(\mathit{R}\geq 1\).
Decreasing the size of the computational domain in one direction will inevitably change the flow structure from a three-dimensional towards a two-dimensional character. This transition has been investigated in the frame of the Rayleigh-Taylor instability by [100]. Among other indicators, they analysed the evolution of the mixing length, i.e. the time-dependent vertical tip-to-rear finger distance, to determine whether the system exhibits a two- or three-dimensional behaviour. They observed that for sufficiently large Rayleigh-Darcy numbers (\(\mbox{{Ra}}>10^{5}\)), the growth of the mixing length is always linear in time in both two and three dimensions (note that at lower Rayleigh-Darcy numbers the growth of the mixing length may be superlinear [57, 101]). The prefactor of the mixing-length growth varies, being larger in two dimensions than in three dimensions. They performed three-dimensional numerical simulations with triply periodic boundary conditions, in which the dimension of the domain in a direction perpendicular to gravity, referred to in the following as the "thickness", is progressively reduced. Results indicate that when the thickness diminishes below a certain threshold value, the system transitions from a three-dimensional to a two-dimensional behaviour. This critical value corresponds to the wavelength associated with the most unstable mode obtained from linear stability analysis [102, 103]. The sharp transition observed in this case is remarkably different from that in turbulent convection [104]. In the turbulent case, the dimensional transition occurs dynamically, i.e. when the width of the mixing region exceeds the confined dimension, and it is smooth due to the co-existence of direct and inverse energy cascades.
The horizontal domain extension is also a parameter that dramatically affects the evolution of a buoyant current from injection to complete dissolution, e.g. in the configuration sketched in Fig. 1(c) relative to geological sequestration of carbon dioxide. Using the model for two-phase gravity currents proposed by [105], [11] analysed the effect of the domain width on the maximum horizontal extension of the current of carbon dioxide. They performed two-dimensional simulations in which the domain width is progressively increased, while keeping the domain height and the volume of fluid injected constant. It was found that the layer of CO\({}_{2}\)-rich solution may spread over a horizontal distance greater than 100 times the vertical extension of the layer, indicating that, for simulations to be width-independent, very wide domains have to be considered (width-to-height ratio \(\geq 140\)).
### Hele-Shaw flows
The working principle of the Hele-Shaw apparatus, briefly introduced in Sec. 4.2, is illustrated in Fig. 10(a). The fluid is contained between two parallel plates separated by a narrow gap of thickness \(b\), and the flow obtained in this configuration may be representative of a Darcy flow. When the flow is dominated by viscous forces (gap-based Reynolds number \(\ll 1\)), the depth-averaged fluid velocity is proportional to the vertical pressure gradient and to the inverse of the viscosity. The proportionality constant, defined as the equivalent permeability of the cell, is \(k=b^{2}/12\), and it is used to draw a link between Darcy and Hele-Shaw flows.
In convective flows, the driving force of the system is the presence of a solute with concentration \(C_{0}\leq C\leq C_{1}\), which produces a maximum density difference \(\Delta\rho\) within the domain. In this frame, the analogy between Hele-Shaw and Darcy flows has been investigated quantitatively by [97], who observed that a combination of fluid properties (Schmidt number, _Sc_), cell geometry (anisotropy ratio, \(\epsilon=\sqrt{k}/H\)) and flow velocity [\(U\), defined in (8), which depends on \(\mbox{{Ra}}\)] determines the flow regime. They considered an incompressible flow (3), and averaged the Navier-Stokes and ADE equations, respectively (4) and (6), in the direction of the gap thickness to obtain the following dimensionless system
\[\frac{\epsilon^{2}\mbox{{Ra}}}{\mbox{{Sc}}}\left[\frac{6}{5} \frac{\partial\mbox{{u}}^{\star}}{\partial t^{\star}}+\frac{54}{35}(\mbox{{u} }^{\star}\cdot\nabla)\mbox{{u}}^{\star}\right]=-\nabla p^{\star}-\mbox{{u}}^{ \star}+\] \[+C^{\star}\mbox{{k}}+\frac{6}{5}\epsilon^{2}\nabla^{2}\mbox{{u} }^{\star}-\frac{2}{35}\epsilon^{2}\mbox{{Ra}}(\mbox{{u}}^{\star}\cdot\nabla C^ {\star})\mbox{{k}} \tag{33}\]
\[\frac{\partial C^{*}}{\partial t^{*}}+{\bf u}^{*}\cdot\nabla C^{*}= \frac{1}{\mbox{\it Ra}}\nabla^{2}C^{*}+\] \[+\frac{2}{35}\epsilon^{2}\mbox{\it Ra}\,\nabla\cdot\left[({\bf u}^ {*}\cdot\nabla C^{*}){\bf u}^{*}\right]\;, \tag{34}\]
valid for \(\epsilon\) small, \(\mbox{\it Sc}\geq 1\) and \(\epsilon^{2}\mbox{\it Ra}\ll 1\). A linear dependency of density on concentration is considered. Here \({}^{*}\) indicates dimensionless variables, where the velocity scale is \(U\) defined as in (8), the length scale is \(H\), the time scale is \(H/U\) and the pressure scale is \(\mu UH/k\). The concentration is made dimensionless as \(C^{*}=(C-C_{0})/(C_{1}-C_{0})\) and \({\bf k}\) is the unit vector with direction opposite to gravity. Eqs. (33)-(34) may be respectively interpreted as a Darcy law (14) and an advection-diffusion equation (15), both with additional corrective terms taking into account the contribution of inertia and of solute redistribution due to the presence of the walls. In the frame of Hele-Shaw convection, three main regimes have been identified [97]: (i) the Darcy regime [Fig. 10(b)], when \(\epsilon\to 0\): the concentration profile across the cell gap is nearly uniform and the flow is well described by a Darcy model; (ii) the Hele-Shaw regime [Fig. 10(c)], when \(\epsilon\ll 1\), \(\epsilon^{2}\mbox{\it Ra}\ll 1\) and \(\mbox{\it Sc}\geq 1\): a gradient of concentration exists across the cell gap, but with one single finger; and (iii) the three-dimensional regime [Fig. 10(d)], when the parameters do not fall within the above-mentioned limits: inertial effects become dominant and the fluid layer in the gap is unstable, so that multiple fingers appear across the cell thickness. It is apparent that the cell geometry plays a key role in determining the flow regime and that all laboratory experiments fall either in the Hele-Shaw regime or in the three-dimensional regime. With the aid of numerical simulations, [97] provided evidence for the reduction of the scaling exponent for convective flows in the Hele-Shaw regime.
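As a minimal illustration of this classification, the Python sketch below maps a cell geometry and flow parameters onto the three regimes of [97]; the numerical thresholds standing in for "\(\ll 1\)" and "\(\epsilon\to 0\)" are our own illustrative choices, not values from the original study.

```python
def hele_shaw_regime(eps, Ra, Sc):
    """Qualitative regime classification for convection in a Hele-Shaw
    cell, after [97]. Thresholds (1e-3 and 0.1) are illustrative
    stand-ins for the asymptotic limits."""
    if Sc >= 1.0 and eps**2 * Ra < 0.1:
        if eps < 1e-3:
            return "Darcy regime"        # gap-wise uniform concentration
        return "Hele-Shaw regime"        # gap-wise gradient, single finger
    return "three-dimensional regime"    # inertia matters, multiple fingers

# Example: a 1 mm gap and a 10 cm tall cell, with eps = sqrt(k)/H, k = b^2/12.
b, H = 1e-3, 0.1
eps = (b**2 / 12.0) ** 0.5 / H
print(hele_shaw_regime(eps, Ra=1e4, Sc=500.0))  # -> Hele-Shaw regime
```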
These findings were later confirmed by the laboratory experiments of [25], where the flux was measured for different values of permeability (i.e., different \(b\)). Note that when the Schmidt number is large [as in the case of 25, where \(\mbox{\it Sc}=O(10^{3})\)], the dispersive effects dominate over the inertial terms. As a result, Eq. (33) reduces to the Darcy law (14) with additional dispersive corrections. These findings suggest that within the Hele-Shaw regime the scaling exponent is affected by the anisotropy ratio \(\epsilon\), as predicted by [97], possibly explaining the discrepancy observed between Darcy simulations [21] and Hele-Shaw experiments [20].
Finally, the theoretical work proposed by [97] has been recently generalised by [106, 107] to more complex systems characterised by the presence of two layers of fluids with non-monotonic density profiles. The framework provided in [107] makes it possible to evaluate and compare the mixing performance of different systems. They propose a universal law for the evolution of \(\mbox{\it Sh}\,/\widehat{\chi}\), which is independent of the cell geometry (\(\epsilon\)) and directly proportional to \(\mbox{\it Ra}\). Using this theoretical framework, they suggest that a possible reason for the sublinear scaling observed by [20] is the flow regime (Hele-Shaw regime) in which the experiments were performed.
### Dispersion in bead packs
Recent developments in experimental techniques have allowed accurate and non-invasive measurements of convective dissolution in three-dimensional porous media. The studies discussed in Sec. 4.2 are relative to thin domains, i.e., laboratory experiments in which the dimension of the cell in the direction perpendicular to the transparent walls is much smaller than the other two. This confinement may have an effect on the development of the flow structures (see Sec. 5.1) and on the dissolution efficiency of the system. We will present here three-dimensional measurements of convection in porous media, and discuss possible approaches to model dispersion in this context.
Figure 10: (a) Front view of convection in a Hele-Shaw cell in one-sided configuration [25], with the solute concentration being constant at the top. The fluids consist of an aqueous solution of KMnO\({}_{4}\) (purple to black) and water (white). The reference frame \((x,y,z)\) and the direction along which gravity (\({\bf g}\)) acts are also indicated. (b-d) Schematic representation of the side views of the cell (the thickness \(b\) is not to scale with respect to the height \(H\)). The three possible flow regimes identified by [97] are shown.
A remarkable contribution in the field of convection in three-dimensional porous media was presented by [108]. This work is original because of the medium used, consisting of a fibrous material, and because of the remarkable visualisations performed. Besides this work, most investigations of convection in three-dimensional porous flows involved bead packs. The emergence of tomographic imaging systems over the last years has considerably sped up research in this field. In a pioneering work by [109], magnetic resonance imaging (MRI) of three-dimensional convective flows in opaque media was presented, and plumes at low Rayleigh-Darcy numbers (\(<20\pi^{2}\)) were visualised. X-ray computed tomography (CT) imaging is now also frequently used to study the mixing of miscible fluids. [110, 111] provided correlations for the Sherwood number as a function of the Peclet and Rayleigh-Darcy numbers, and observed a sublinear scaling of _Sh_ with _Ra_, with exponents \(0.40\) and \(0.93\), respectively. The same methodology was employed by [112], who reported the emergence of characteristic patterns that closely resemble the dynamical flow structures produced by high-resolution numerical simulations. In a later study [113] the role of viscosity was also investigated. While on the one hand [112, 113] observed that the flow is heavily influenced by dispersion, on the other hand a linear scaling \(\textit{Sh}\sim\textit{Ra}\) holds, in contrast with previous studies. This discrepancy may be due to the relatively short range and small values of _Ra_ explored, well below the value at which the system is observed to attain an asymptotic linear scaling [54, 59]. Employing the same measurement technique but different fluids, [114] achieved larger Rayleigh-Darcy numbers (\(\leq 55{,}000\)). Through qualitative and quantitative observations of the flow evolution, they also observed an enhanced longitudinal spreading of the solute, but in this case a sublinear scaling for _Sh_(_Ra_) holds.
These works agree that dispersion is crucial in determining the _Sh_(_Ra_) scaling of the flow, and that non-Darcy effects should be included in the models employed [115]. Dispersion has been identified as responsible for the early onset of convection [116]. In addition, [87, 117] observed that the flow structures are influenced by the strength of dispersion and that the dissolution rate \(\widehat{F}\) increases with increasing dispersion strength. However, this finding does not apply in general and seems to be limited to the range of parameters considered [84]. With the aid of laboratory experiments, [26] proposed that, in addition to the Rayleigh-Darcy number, a flow with dispersion is controlled by a dispersive Rayleigh-Darcy number
\[\textit{Ra}_{d}=\frac{UH}{\phi D_{T}}=\frac{\textit{Ra}\,D}{D_{T}}, \tag{35}\]
with \(D_{T}\) the transverse dispersion coefficient, \(U\) the buoyancy velocity defined in Eq. (8) and \(H\) the domain height. In geological formations, assigning appropriate values to \(D_{T}\) is not trivial, and it has been a debated topic [we refer to 48 for a thorough review on this subject]. The anisotropy ratio \(r=D_{L}/D_{T}\) (see Sec. 2.2.1) is also important in determining the flow character. As a result, the parameter space for convective porous media flows with dispersion is controlled by at least three parameters: \(\textit{Ra}\), \(\textit{Ra}_{d}\) and \(r\). In order to quantify the relative importance of molecular diffusion to transverse dispersion, one can introduce the parameter [84, 118]
\[\Delta=\frac{\textit{Ra}_{d}}{\textit{Ra}}=\frac{D}{D_{T}}, \tag{36}\]
which can be used to rewrite the dispersion tensor (16) in dimensionless form as:
\[\frac{\mathbf{D}}{D}=\mathbf{I}+\frac{1}{\Delta U}\left[(r-1)\frac{\mathbf{u }\mathbf{u}}{|\mathbf{u}|}+|\mathbf{u}|\,\mathbf{I}\right]. \tag{37}\]
This expression suggests that the case of pure diffusion is recovered when \(D_{T}\ll D\), corresponding to \(\Delta\gg 1\).
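For concreteness, the sketch below assembles the dimensionless tensor (37) for a given local velocity; the velocity is assumed to be already scaled with the buoyancy velocity \(U\), and the sample values of \(r\) and \(\Delta\) are illustrative.

```python
import numpy as np

def dispersion_tensor(u, r, Delta):
    """Dimensionless dispersion tensor D/D of Eq. (37), with the velocity
    u already scaled by the buoyancy velocity U (so U = 1 here)."""
    speed = np.linalg.norm(u)
    I = np.eye(len(u))
    if speed == 0.0:
        return I  # quiescent fluid: molecular diffusion only
    return I + ((r - 1.0) * np.outer(u, u) / speed + speed * I) / Delta

# Unit vertical velocity with strong anisotropic dispersion (Delta << 1):
# the longitudinal-to-transverse ratio of the result approaches r.
print(dispersion_tensor(np.array([0.0, 1.0]), r=10.0, Delta=0.02))
```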
With specific reference to granular media, additional simplifications allow a further characterisation of the flow in the parameter space. Considering that the longitudinal dispersivity can be approximated [29, 118] as \(\alpha_{L}=D_{L}/U\approx d\), we can rewrite Eq. (35) as
\[\textit{Ra}_{d}=\frac{UH}{\phi D_{T}}=\frac{UH}{\phi U\alpha_{T}}=\frac{rH}{ d}. \tag{38}\]
In bead packs, the permeability can be inferred from the Kozeny-Carman correlation [45, 119], i.e.
\[k=\frac{d^{2}}{36k_{C}}\frac{\phi^{3}}{(1-\phi)^{2}}, \tag{39}\]
where \(k_{C}=5\) is the Carman constant for randomly packed monodispersed spheres [120]. As a result, we can provide expressions for \(\mbox{{Ra}}_{d}\) and \(\mbox{{Ra}}\) that are explicit functions of the domain \((g,H)\), fluid \((D,\mu,\Delta\rho)\) and medium \((\phi,d,r)\) properties. This information is particularly important when we characterise the flow in the three-dimensional parameter space \((\mbox{{Ra}}_{d},\mbox{{Ra}},r)\), which we will do in the following.
First, we reduce the parameter space to \((\mbox{{Ra}},\mbox{{Ra}}_{d})\) by taking into account [48] that \(r=O(10)\) is a reasonable approximation for advection-dominated systems, and we consider \(r=10\). Note that no remarkable difference in the flow structure and \(\mbox{{Sh}}\) occurs for \(r>10\), provided that \(\mbox{{Ra}}\) and \(\mbox{{Ra}}_{d}\) are sufficiently large (namely, \(10^{4}\) and \(10^{3}\), respectively [84]). For \(r\leq 1\), the flow is qualitatively similar to that observed in the absence of dispersion [54]. With respect to the remaining parameters, \(\mbox{{Ra}}\) and \(\mbox{{Ra}}_{d}\), we can rewrite both as functions of the bead diameter and find that \(\mbox{{Ra}}_{d}\sim 1/d\) and \(\mbox{{Ra}}\sim d^{2}\). This implies that if we consider an experiment in which only \(d\) varies, we are locked onto one of the green lines of the parameter space \((\mbox{{Ra}},\mbox{{Ra}}_{d})\) shown in Fig. 11, corresponding to \(\mbox{{Ra}}_{d}\sim\mbox{{Ra}}^{-1/2}\). Using realistic laboratory properties, we obtain that a possible range for the experimental parameters \((\mbox{{Ra}}_{d},\mbox{{Ra}})\) at variable \(d\) consists of the circles in Fig. 11 (each series of circles corresponds to one value of density difference, \(\Delta\rho\)). Alternatively, we consider the case in which the medium is fixed (\(d\) constant) and the fluid density contrast varies. Since \(\mbox{{Ra}}_{d}\) is independent of any fluid property, a variation of \(\Delta\rho\) corresponds to a horizontal line of the parameter space (red symbols in Fig. 11, in which each series is a different \(d\)). Finally, we consider the case of a constant value of \(\Delta\). It follows that this is achieved when \((\Delta\rho)^{-1}d^{-3}\) is constant [blue lines in Fig. 11]. This condition is extremely challenging to obtain experimentally because it implies a simultaneous variation of \(\Delta\rho\) and \(d\).
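A short Python sketch makes this locking of the experimental parameters explicit, chaining the Kozeny-Carman relation (39) with Eq. (38); the buoyancy velocity is taken as \(U=gk\Delta\rho/\mu\), consistent with the definitions recalled above, and the water-like fluid properties are illustrative assumptions.

```python
# Illustrative, water-like values; only the structure of the relations matters.
g, H = 9.81, 0.3                  # gravity [m/s^2], domain height [m]
mu, D = 1.0e-3, 1.0e-9            # viscosity [Pa s], diffusivity [m^2/s]
phi, r, k_C = 0.37, 10.0, 5.0     # porosity, anisotropy ratio, Carman constant

def Ra_pair(d, drho):
    """Return (Ra, Ra_d) for bead diameter d [m] and density contrast drho."""
    k = d**2 / (36.0 * k_C) * phi**3 / (1.0 - phi)**2  # Kozeny-Carman, Eq. (39)
    U = g * k * drho / mu                              # buoyancy velocity
    Ra = U * H / (phi * D)                             # Rayleigh-Darcy number
    Ra_d = r * H / d                                   # dispersive number, Eq. (38)
    return Ra, Ra_d

# Varying only d: Ra grows like d^2 while Ra_d decays like 1/d, so the
# experiments are locked onto a curve Ra_d ~ Ra^(-1/2) (green lines of Fig. 11).
for d in (0.5e-3, 1.0e-3, 2.0e-3):
    print(d, Ra_pair(d, drho=10.0))
```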
With the aid of numerical simulations, the problem of decoupling the governing flow parameters \((\mbox{{Ra}},\mbox{{Ra}}_{d})\) can be solved, and their relative effect on \(\mbox{{Sh}}\) or \(\widehat{F}\) can be investigated. [84] considered a Rayleigh-Benard configuration and systematically investigated the range of flow parameters indicated in Fig. 13 (red squares). The flow structure is mainly ruled by \(\Delta\), which determines the mechanism controlling convection. If \(\Delta>10^{5}\), the flow is ruled by molecular diffusion: plumes grow symmetrically [Fig. 12(a)] and the structure is analogous to the symmetric flow observed in the absence of dispersion [see Fig. 4(b-i)].
Figure 11: Parameter space \((\mbox{{Ra}},\mbox{{Ra}}_{d})\) with indication of iso-\(\Delta\) lines (blue lines). With respect to experiments in bead packs, if only \(d\) varies, the parameters \((\mbox{{Ra}},\mbox{{Ra}}_{d})\) are locked onto the green curves. An example for a realistic range of parameters is shown by symbols (circles), where each line corresponds to one value of density contrast (\(\Delta\rho\)). Conversely, if only \(\Delta\rho\) is varied in the experiments, \((\mbox{{Ra}},\mbox{{Ra}}_{d})\) are locked to horizontal lines (diamonds). Triangles indicate the slopes of the green and blue lines.
Figure 12: Concentration distribution at high \(\mbox{{Ra}}\) and \(r=10\) (Rayleigh-Darcy number and \(\Delta\) indicated within each panel). (a,c) Rayleigh-Bénard configuration [adapted with permission from 84]. (b,d) One-sided configuration [adapted with permission from 26]. When \(\Delta\gg 1\) (a,b), plumes grow vertically in a symmetric fashion (columnar flow). When \(\Delta\ll 1\) (c,d), dispersion makes the plumes expand in the horizontal direction (fan flow).
When \(\Delta<1\), mechanical dispersion dominates over molecular diffusion, and its inherent anisotropy (\(r\gg 1\)) sets the non-symmetric flow structure (fan flow) shown in Fig. 12(c), in which plumes widen as they move away from the wall. A similar behaviour is observed in the corresponding one-sided cases [Fig. 12(b,d)] by [26].
The effect of the flow structure on the Sherwood number was also quantified. Note that in the case of dispersive flows the Sherwood number contains the magnitude of the velocity at the top wall in its definition. Indeed, while the vertical component of the velocity is zero at the top (no-penetration), a non-zero velocity parallel to the wall is admitted (free-slip), which produces solute spreading due to dispersion. Alternatively, _Sh_ can be inferred from the time derivative of the total mass of solute in the domain. We report in Fig. 13 a visual interpretation of the dominant mechanism in each region of the (_Ra_, _Ra\({}_{d}\)_) space, where regions controlled by different mechanisms are separated by dashed lines. [26] found that in the Rayleigh-Benard configuration, when \(\Delta>O(1)\) molecular diffusion dominates over mechanical dispersion, although a small contribution of mechanical dispersion may increase _Sh_. When \(0.02<\Delta<O(1)\), both mechanical dispersion and molecular diffusion determine the value of _Sh_. A linear scaling _Sh_ \(\sim\) _Ra_ holds when \(\Delta<0.02\), but \(\mbox{\it Ra}_{d}\) is also important since it determines the prefactor of the scaling law, as was later observed by [121].
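These regime boundaries can be condensed into a small helper; in the sketch below the qualitative threshold \(O(1)\) is replaced by 1 for definiteness, an illustrative choice.

```python
def dispersion_regime(Ra, Ra_d):
    """Indicative regime classification in the (Ra, Ra_d) space via
    Delta = Ra_d/Ra = D/D_T. The upper boundary O(1) is taken as 1."""
    Delta = Ra_d / Ra
    if Delta > 1.0:
        return "molecular diffusion dominated"
    if Delta > 0.02:
        return "diffusion and dispersion both relevant"
    return "dispersion dominated (Sh ~ Ra, prefactor set by Ra_d)"

print(dispersion_regime(Ra=1e5, Ra_d=3e3))  # Delta = 0.03 -> mixed regime
```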
These results for dispersion-dominated flows (\(\Delta<0.02\)) could also provide an additional interpretation of the Hele-Shaw experiments of [25], where for a given value of the cell gap (i.e., one value of permeability and mechanical dispersion) the flux remains nearly constant, i.e. the prefactor is constant. On the other hand, Hele-Shaw flows do not exhibit transverse dispersion [122]; therefore we believe that such a one-to-one comparison between dispersion results for Hele-Shaw and bead-pack flows may not be appropriate.
Recent works have investigated the role of mechanical dispersion with the aid of simulations. A possible complementary approach with respect to the formulation used by [26, 84] is proposed by [123], where the dispersive Rayleigh number is replaced by a parameter quantifying the strength of longitudinal dispersion compared to molecular diffusion. More recently, [118] performed simulations in the one-sided configuration. They modelled fluids with constant viscosity and linear density-concentration profiles, and derived a linear correlation between _Sh_ and _Ra_, where the prefactor is a function of the ratio of molecular to dispersive Rayleigh-Darcy numbers, \(\mbox{\it Ra}\,/\mbox{\it Ra}_{d}\). This correlation fits their results well, but it does not fully capture the trend predicted by [84]. This difference may be due to several reasons, including the parameter space (and perhaps the different regimes) explored compared to [84], as appears from Fig. 13, where the parameters investigated in some of these studies are reported. Each of these works involves a specific configuration (Rayleigh-Benard or one-sided), a different formulation (different model for the fluids and different dimensionless parameters) and a different region of the parameter space. Therefore, providing a precise and general description of convective and dispersive flows in porous media is still not possible, and further studies systematically investigating a broad range of the (_Ra_, _Ra\({}_{d}\)_) space are required.
Finally, a novel approach consists of including the effects of dispersion also in the momentum equation. [124] used two-dimensional pore-scale and Darcy simulations to study a Rayleigh-Benard flow.
Figure 13: Range of the parameter space explored with simulations [84, 117, 118] and experiments [19, 26] with dispersion. The configuration (one-sided - OS or Rayleigh-Benard - RB) is indicated. All cases refer to \(r=10\), with the exception of [117], where different values of \(r\) are also considered. The effect of \(r\) on convection is also discussed in [84], but the corresponding data are not reported in this figure. In this parameter space, \(\Delta\) sets the flow behaviour: diffusion dominated [\(\Delta>O(1)\)], dispersion dominated [\(\Delta<0.02\)] or influenced by both diffusion and dispersion [\(0.02<\Delta<O(1)\)].
They observed that the pore-induced dispersion, which may be as strong as buoyancy, also affects the momentum transport, and that it is determined by two length scales (the pore length scale, proportional to \(\sqrt{k}\), and the domain size, \(H\)). The authors proposed a two-length-scale diffusion model, in which the pore-scale dispersion is accounted for in the momentum transport as a macroscopic diffusion term. A similar model, which is found to be valid for a wide range of porosity values and is based on an effective viscosity, has been proposed to account for pore-scale effects in advection-dominated systems in the absence of convection [125].
## 6 Summary and future perspectives
In this work, we have reviewed recent developments on convection in porous media. We focused on state-of-the-art measurements of dissolution and mixing in archetypal flow configurations. Despite the well-known mathematical formulation of the problem, the role that several physical processes (e.g., finite-size effects) have on dissolution and mixing is not yet fully understood. This is also due to the great complexity of the physics involved: convection in porous media is a non-linear phenomenon taking place in multiphase and multiscale systems, possibly located thousands of meters beneath the Earth's surface. Notwithstanding the intrinsic difficulties associated with performing reliable measurements in such systems, remarkable developments have been achieved in recent years.
The porous Rayleigh-Benard configuration, consisting of a fluid-saturated porous slab with fixed density at the top and bottom boundaries, has been extensively investigated [53, 54, 62, 63]. The governing parameter of the flow is the Rayleigh-Darcy number \(\mathit{Ra}\), a measure of the strength of convection relative to diffusion. Three-dimensional Darcy simulations performed at unprecedented Rayleigh-Darcy numbers, \(O(10^{5})\), have been used, and the existence of new flow features labelled supercells emerged [58, 59]. Two-dimensional and three-dimensional simulations have shown that ultimately a linear scaling of the dimensionless dissolution coefficient is attained, namely \(\mathit{Sh}\sim\mathit{Ra}\). While in the two-dimensional case [54, 62] this scaling sets in at \(\mathit{Ra}\approx 10^{4}\), in three-dimensional flows [58, 59] the ultimate state is expected to take place at \(\mathit{Ra}\geq 5\times 10^{5}\), which is beyond present numerical capabilities. Pore-scale simulations have revealed a more complex scenario, in which the heat/mass transfer is also influenced by porosity [30, 31], Schmidt number and the relative conductivity of fluid and solid phases [76, 77]. These extensive numerical campaigns have led to the development of physics-based correlations for \(\mathit{Sh}\) as a function of the flow parameters. In addition, the relative size of the boundary layer and the average pore space has been identified as a critical flow feature controlling pore-scale convection [30].
The second archetypal configuration considered is the one-sided configuration [16], where solute dissolves in an initially solute-free porous domain from the upper boundary, with all other boundaries being impermeable to fluid and solute. The flow is characterised by an intermediate phase in which the dissolution rate \(\widehat{F}\) is quasi-steady. While Darcy simulations report a constant, \(\mathit{Ra}\)-independent value of \(\widehat{F}\) [16, 17, 56, 60], experiments in bead packs [19] and Hele-Shaw cells [20] revealed that \(\widehat{F}\) is a function of \(\mathit{Ra}\). The discrepancy observed has been attributed to non-Darcy effects present in the experiments and not accounted for by the simulations [21]. This has stimulated further studies focusing on the role of finite-size effects observed in Hele-Shaw [25, 97, 106, 107] and bead-pack experiments [26, 84]. The analysis of recent numerical and experimental results [26, 84, 118] highlights the complexity of this system, which is controlled by at least three parameters, respectively quantifying the relative strength of (i) convection and diffusion (\(\mathit{Ra}\)), (ii) convection and dispersion (\(\mathit{Ra}_{d}\)), and (iii) longitudinal and transverse dispersion (\(r\)). The huge parameter space defined in this way and the need for both numerical and experimental studies represent a major challenge in this field.
Improvements in numerical and experimental techniques have allowed a detailed characterisation of the flow and a better understanding of the phenomena involved. The combination of theoretical modelling, numerical simulations and laboratory observations will pave the way to deriving and validating large-scale models to be employed in real geophysical and engineering situations. These
findings will be crucial to tackle problems associated with grand societal challenges, such as energy transition and climate change mitigation [7].
To conclude, in Sec. 6.1 we will briefly review recent advancements in experimental techniques, and in Sec. 6.2 we will also discuss the importance of additional effects not considered in previous sections of this paper.
### Recent developments in experimental techniques
One intrinsic challenge associated with measurements in porous media is the impossibility of optically accessing the inner regions of the flow. An overview of the experimental techniques available to perform measurements in opaque media is presented by [23]. Among the different imaging techniques employed for porous media [36], magnetic resonance imaging (MRI) [126, 127, 128] and X-ray tomography [110, 129] are the most common; they provide non-invasive and non-intrusive three-dimensional measurements of the inner flow regions. Despite these advantages, such techniques are expensive and typically lack resolution in both space and time, making fast and small-scale flows hard to measure. However, thanks to recent technological progress, these measurement techniques now allow a detailed characterisation of both medium and flow also at small scales. One example is X-ray synchrotron microtomography [130], with a spatial resolution of 3.25 \(\mu\)m and a temporal resolution of 6 s. Recent experiments [131, 132] have shown that the spatial resolution can be further reduced to 2.3 \(\mu\)m, with a technique that also allows for a higher temporal resolution. At present, similar performance is also achieved by commercial micro-CT systems. Optical measurements in three-dimensional porous media can also be performed by matching the refractive index of fluid and medium [133, 115, 134], provided that a suitable fluid is available. This is not always guaranteed, since fluids with refractive indexes of interest may come with drawbacks such as high cost or high hazard [135].
Additional challenges associated with laboratory experiments, in particular with respect to geological sequestration of carbon dioxide, consist of reproducing realistic porous media and ambient conditions. For instance, at the depths at which CO\({}_{2}\) is supercritical, the pressure is of the order of tens of bars, and performing controlled experiments with optical access is not trivial. This obstacle has recently been successfully overcome [136, 61], and the methods proposed may represent a first important step towards investigations in more complex geometries. With respect to the design and production of synthetic media at the laboratory scale, microfluidic devices mimicking porous materials are usually made of polydimethylsiloxane (PDMS), which has the drawback of being permeable to CO\({}_{2}\). A solution has recently been proposed by [137], who developed a new method to fabricate a two-dimensional porous medium (a regular array of cylinders), consisting of bonding a patterned photo-lithographed layer onto a flat base. Additional examples of manufacturing techniques for analogue porous media are provided in [138]. Real geological formations are inherently disordered and heterogeneous, and mimicking this feature in laboratory models is essential to capture the role of the medium heterogeneities in solute mixing. The technique proposed by [139] addresses this issue: it consists of a cell made of 3D-printed elementary blocks designed to be easily re-arranged to obtain a desired permeability field.
Finally, we conclude with an overview of recent developments in experimental techniques employed in Hele-Shaw cells. The relatively low cost and ease of implementation make this apparatus widely employed to study buoyancy-driven flows. Classical optical methods based on light-intensity measurements of patterns induced by density (or density-gradient) fields, such as Schlieren and related techniques [24, 140], have been combined or improved to increase the accuracy of the measurements performed. Accurate temperature [141, 142] and concentration [79, 95, 96] measurement techniques have recently been introduced. Velocity measurements have also been performed using advanced particle image velocimetry (PIV) and particle tracking velocimetry (PTV) techniques specifically designed for Hele-Shaw flows [143, 144], or with the aid of machine learning techniques, namely convolutional neural networks (CNNs) [145]. A separate (i.e., not simultaneous) measurement of scalar and velocity fields complicates the analysis of the phenomena involved and the description of the underlying physical mechanisms. Recently, novel techniques for simultaneous temperature/concentration/velocity measurements have been proposed [96, 146], which are particularly useful to enable reliable comparisons against numerical findings.
### Additional effects influencing mixing
Convection and mixing in real engineering and geophysical problems are far more complex than in the idealised conditions depicted in this review, due to non-ideal medium, fluid, and ambient conditions. Here we discuss the influence of conditions not present in the configurations previously considered, and we provide some references for the interested reader.
We focused on processes in which the Boussinesq approximation applies, i.e., the density variations induced by the presence of a scalar are only significant within the gravitational term of the momentum (Darcy) equation, and can be neglected elsewhere. In general, this may not be the case, and a criterion for the applicability of the Boussinesq approximation has been derived [28]. For instance, in the case of isothermal brine transport, fluid volume changes may be neglected when \(\mbox{{Ra}}\,\tilde{\rho}_{r}/\Delta\rho\gg 1\), where \(\Delta\rho\) is the maximum density difference and \(\tilde{\rho}_{r}\) is the reference fluid density. Interestingly, this condition is independent of \(\Delta\rho\), and it is widely fulfilled for geothermal processes, for which \(Ra\approx 10^{1}-10^{3}\) and \(\tilde{\rho}_{r}/\Delta\rho\approx 10^{2}-10^{3}\) [147]. Numerical simulations of the fully-compressible CO\({}_{2}\) sequestration process suggest that compressibility and non-Boussinesq effects do not significantly impact spreading and mixing [90]. An aspect particularly relevant when considering experiments with analogue fluids is that the mixing rate strongly depends on the shape of the fluid's density-concentration curve and, in particular, on the position of the maximum of this curve [21]. This effect, along with volume variations in the fluid phase [148], may influence the dynamics of the mixing process, and the findings discussed in this review cannot be generalised to fluids with a non-monotonic density-concentration profile or in the presence of significant volume variations.
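As a quick order-of-magnitude check of this criterion, taking the least favourable ends of the geothermal ranges just quoted gives

\[\mathit{Ra}\,\frac{\tilde{\rho}_{r}}{\Delta\rho}\approx 10^{1}\times 10^{2}=10^{3}\gg 1,\]

so the condition is comfortably satisfied, with even larger margins elsewhere in the quoted ranges.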
Geological formations are typically characterised by anisotropic and heterogeneous media. The effect of anisotropy has been well characterised by assuming that the permeability tensor is anisotropic [149, 150], and it has been shown that anisotropy is in general favourable, since it increases the rate of dissolution and anticipates the onset of convection [60, 63, 151]. These studies assume that a preferential direction exists, i.e., the permeability tensor takes a diagonal form in a reference frame that usually has one direction aligned with gravity. This simplified model does not take into account that formations have heterogeneities, which are also a source of anisotropy, and discerning these two features of the medium represents a strong simplification. It has been proposed [89, 152] that the model of an anisotropic medium discussed above (in which the permeability tensor is diagonal in some reference frame) may represent a good candidate to investigate heterogeneous media. Different models for heterogeneous formations have been introduced, falling essentially into three categories: spatially-variable permeability fields [153] (with no preferential direction), long and thin impermeable barriers [89, 152, 154], and layered formations (i.e., regions in which high- and low-permeability strata alternate) [155, 156, 157]. Although a general model for convection in heterogeneous media is not yet available, these studies provide an initial framework to understand the long-term behaviour of these systems.
Fluid properties may also affect the flow evolution and solute mixing. The effect of viscosity, for instance, may be crucial in determining the stability of a layer, and we refer to [158, 159] for a review on this topic. Another effect that is increasingly studied is the reactivity of the medium with the fluid: the solute present in the fluid may induce dissolution or precipitation, which corresponds to a variation of the medium porosity and permeability. Recently this problem has been actively investigated [160, 121, 161], also thanks to the improvement of numerical capabilities. It has been reported [87] that medium morphology modifications occurring in the presence of convective flows affect solute mixing in non-trivial ways. [162] showed that the reacting rock-CO\({}_{2}\) system may be described by a first-order chemical reaction, stimulating numerous studies on convective-reactive porous media flows, reviewed in [163, 164].
Finally, the effect of the ambient flow conditions may also be important [165]. It was observed that the presence of a background flow influences the onset of convection [166]. Experiments in the one-sided configuration [167] revealed that while convection may be hindered and suppressed, dispersion is enhanced, with an overall contribution, relative to the flux in the absence of a background flow, that can be positive, negative or neutral. [168] observed with the aid of simulations that three regimes exist, in which convection dominates, the background flow dominates, or these two contributions have the same strength. These results are relevant, since they can contribute to deriving new models suitable for the prediction of dissolution at the scale of the reservoir [11, 105, 169] and through the entire lifetime of a buoyant current in a porous formation [169, 170, 171, 172].
Hugo Ulloa and Diego Perissutti are gratefully acknowledged for the feedback provided on the early draft of this manuscript. The Referees are gratefully acknowledged for the constructive feedback provided. Duncan Hewitt, Linfeng Jiang, Shuang Liu and Yan Jin are also acknowledged for providing some of the data presented in this work. This research was funded in part by the Austrian Science Fund (FWF) [Grant J-4612]. The author acknowledges the TU Wien University Library for financial support through its Open Access Funding Program. This project has received funding from the European Union's Horizon Europe research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 101062123.
## Declarations
Data availability statement. The data supporting the findings of this study are available within the article, and any other data can be made available on reasonable request.
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit [http://creativecommons.org/licenses/by/4.0/](http://creativecommons.org/licenses/by/4.0/).
|
2307.02687 | On time-periodic solutions to an interaction problem between compressible viscous fluids and viscoelastic beams | In this paper, we study a nonlinear fluid-structure interaction problem between a viscoelastic beam and a compressible viscous fluid. The beam is immersed in the fluid which fills a two-dimensional rectangular domain with periodic boundary conditions. Under the effect of periodic forces acting on the beam and the fluid, at least one time-periodic weak solution is constructed which has a bounded energy and a fixed prescribed mass. | Ondřej Kreml, Václav Mácha, Šárka Nečasová, Srđan Trifunović | 2023-07-05T23:08:53Z | http://arxiv.org/abs/2307.02687v1 | On time-periodic solutions to an interaction problem between compressible viscous fluids and viscoelastic beams
###### Abstract
In this paper, we study a nonlinear fluid-structure interaction problem between a viscoelastic beam and a compressible viscous fluid. The beam is immersed in the fluid which fills a two-dimensional rectangular domain with periodic boundary conditions. Under the effect of periodic forces acting on the beam and the fluid, at least one time-periodic weak solution is constructed which has a bounded energy and a fixed prescribed mass.
**Keywords and phrases:** fluid-structure interaction, compressible viscous fluid, viscoelastic beam, time-periodic solutions
**AMS Mathematical Subject classification (2020):** 74F10 (Primary), 35B10, 76N06 (Secondary)
## 1 The model
Let \(L,H,T>0\) and define
\[\Gamma:=(0,L),\quad\Omega=(0,L)\times(-H,H).\]
We denote the horizontal variable by \(x\) and the vertical variable by \(z\). The fluid fills the domain \(\Omega\) and is described by its velocity \(\mathbf{u}:(0,T)\times\Omega\to\mathbb{R}^{2}\) and density \(\rho:(0,T)\times\Omega\to\mathbb{R}\), which are periodic in both the \(x\) and the \(z\) directions. The beam is immersed in the fluid and its vertical displacement is given as \(\eta:(0,T)\times\Gamma\to\mathbb{R}\), while its graph is denoted as
\[\Gamma^{\eta}(t):=\{(x,\eta(t,x)):x\in\Gamma\}.\]
In order to work on a fixed domain \(\Omega\) (note that \(\eta\) does not necessarily have values in \([-H,H]\)), let us define a \(z\)-periodic version of \(\eta\)
\[\hat{\eta}(t,x):=\eta(t,x)-2n(t,x)H,\]
where \(n(t,x)\in\mathbb{Z}\) is uniquely determined by the requirement \(\eta(t,x)-2n(t,x)H\in[-H,H)\). Its graph \(\hat{\Gamma}^{\eta}(t)\) is shown in Figure 1. The time-space cylinders corresponding to our problem will be denoted as
\[Q_{T}:=(0,T)\times\Omega,\quad\Gamma_{T}:=(0,T)\times\Gamma.\]
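As a concrete illustration of the wrapping \(\hat{\eta}=\eta-2nH\) defined above, here is a minimal numerical sketch; the helper name `wrap_displacement` and the sample values are ours, not from the paper:

```python
import numpy as np

def wrap_displacement(eta, H):
    """Return the z-periodic displacement eta_hat in [-H, H) and the shift n.

    n is the unique integer with eta - 2*n*H in [-H, H), i.e.
    n = floor((eta + H) / (2*H)).
    """
    n = np.floor((eta + H) / (2.0 * H)).astype(int)
    return eta - 2 * n * H, n

eta = np.array([0.3, 1.0, 2.5, -3.2])   # sample beam displacements
eta_hat, n = wrap_displacement(eta, H=1.0)
print(eta_hat)  # [ 0.3 -1.   0.5  0.8], all values lie in [-1, 1)
print(n)        # [ 0  1  1 -2]
```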
The governing equations for our coupled fluid-structure interaction problem read as follows:
**The viscoelastic beam equation** on \(\Gamma_{T}\):
\[\eta_{tt}+\eta_{xxxx}-\eta_{txx}=-S^{\eta}\mathbf{f}_{fl}\cdot\mathbf{e_{2}} +f. \tag{1}\]
Here \(f\) denotes a given external time-periodic force acting on the viscoelastic beam and \(\mathbf{f}_{fl}\) is the force with which the fluid acts on the beam. Moreover, \(S^{\eta}=\sqrt{1+|\eta_{x}|^{2}}\) is the Jacobian of the transformation from Eulerian to Lagrangian coordinates of the beam (i.e. from \(\Gamma^{\eta}\) to \(\Gamma\)).
**The compressible Navier-Stokes equations** on \(\bigcup_{t\in(0,T)}\{t\}\times\big{(}\Omega\setminus\hat{\Gamma}^{\eta}(t)\big{)}\):
\[\partial_{t}(\rho\mathbf{u})+\nabla\cdot(\rho\mathbf{u}\otimes \mathbf{u}) =-\nabla p(\rho)+\nabla\cdot\mathbb{S}(\nabla\mathbf{u})+\rho \mathbf{F}, \tag{2}\] \[\partial_{t}\rho+\nabla\cdot(\rho\mathbf{u}) =0,\]
where we set the pressure \(p\) for simplicity to be
\[p(\rho)=\rho^{\gamma},\]
the viscous stress tensor \(\mathbb{S}\) is given by the Newton rheological law
\[\mathbb{S}(\nabla\mathbf{u}):=\mu\big{(}\nabla\mathbf{u}+\nabla^{\tau} \mathbf{u}-\nabla\cdot\mathbf{u}\mathbb{I}\big{)}+\zeta\nabla\cdot\mathbf{u} \mathbb{I},\quad\mu,\zeta>0,\]
and \(\mathbf{F}\) is a given time-periodic force acting onto the fluid.
**The fluid-structure coupling (kinematic and dynamic, resp.)** on \(\Gamma_{T}\):
\[\eta_{t}(t,x)\mathbf{e_{2}} = \mathbf{u}(t,x,\hat{\eta}(t,x)), \tag{3}\] \[\mathbf{f}_{fl}(t,x) = \big{[}\big{[}(-p(\rho)\mathbb{I}+\mathbb{S}(\nabla\mathbf{u})) \big{]}\big{]}(t,x,\hat{\eta}(t,x))\ \nu^{\eta}(t,x), \tag{4}\]
where \(\nu^{\eta}=\frac{(-\eta_{x},1)}{\sqrt{1+|\eta_{x}|^{2}}}\) denotes the normal vector on \(\Gamma^{\eta}\) facing upwards and
\[[[A]](\cdot,z):=\lim_{\varepsilon\to 0^{+}}\big{(}A(\cdot,z-\varepsilon)-A(\cdot,z+\varepsilon)\big{)}\]

represents the jump of the quantity \(A\) in the vertical direction.

Figure 1: Two examples of the beam inside the fluid. On the top, the structure is completely contained in \(\Omega\), so \(\Gamma^{\eta}(t)=\hat{\Gamma}^{\eta}(t)\). On the bottom, the structure leaves \(\Omega\) and re-enters from the other side, so \(\Gamma^{\eta}(t)\neq\hat{\Gamma}^{\eta}(t)\) (the dashed part represents \(\Gamma^{\eta}(t)\setminus\hat{\Gamma}^{\eta}(t)\)).
**The beam boundary conditions**:
\[\eta\text{ is periodic in }x\text{ and }\eta(t,x)=0,\quad(t,x)\in(0,T)\times\{0,L\}. \tag{5}\]
**Fluid spatial periodicity**:
\[\rho,\mathbf{u}\text{ are periodic in }x\text{ and }z\text{ directions.} \tag{6}\]
**Time periodicity**:
\[\rho,\mathbf{u},\eta\text{ are periodic in time.} \tag{7}\]
## 2 Weak solution and main result
The nature of the studied problem enables us to work with two equivalent formulations. In the original formulation, the domain \(\Omega\) is fixed and the viscoelastic beam moves inside \(\Omega\). However, we may use the \(z\)-periodicity of the problem to formulate it on the moving domain \(\Omega^{\eta}(t)\) filled with the fluid, where the top and the bottom of the domain are given by the viscoelastic beam. For a given \(\eta(t,x)\) we introduce an equivalent fluid domain and the corresponding time-space cylinder
\[\Omega^{\eta}(t):=\{(x,z):x\in(0,L),\eta(t,x)<z<\eta(t,x)+2H\}, \qquad Q^{\eta}_{T}:=\bigcup_{t\in(0,T)}\{t\}\times\Omega^{\eta}(t), \tag{8}\]
Both domains are shown in Figure 2.
For a set\({}^{1}\) \(S=(a_{1},a_{1}+L_{1})\times\cdots\times(a_{n},a_{n}+L_{n})\), where \(L_{1},...,L_{n}>0\) and \(n\in\{1,2,3\}\), we introduce the spaces of differentiable periodic functions for \(k\in\mathbb{N}_{0}\cup\{\infty\}\)

Footnote 1: Here, \(S\) represents one of the sets \((0,T)\), \(\Gamma\), \(\Omega\), or some of their products.
\[C^{k}_{\#}(S):=\{f\in C^{k}(\mathbb{R}^{n}):f(x_{1},\ldots,x_{n })=f(x_{1}+L_{1},\ldots,x_{n})=...=f(x_{1},\ldots,x_{n}+L_{n})\] \[\text{ for all }(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\}.\]
We define Lebesgue and Sobolev function spaces for any \(p,q\in[1,\infty]\), \(k\in\mathbb{N}_{0}\cup\{\infty\}\) as closures in the respective norms
\[W^{k,p}_{\#}(S):=\overline{C^{\infty}_{\#}(S)}^{\|\cdot\|_{W^{k,p}(S)}}.\]
In order to accommodate the boundary conditions (5) we further introduce the spaces
\[C^{k}_{\#,0}(\Gamma):=\{\varphi\in C^{k}_{\#}(\Gamma):\varphi(0)=0\},\] \[C^{k}_{\#,0}(\Gamma_{T}):=\{\varphi\in C^{k}_{\#}(\Gamma_{T}): \varphi(t,0)=0\text{ for all }t\in\mathbb{R}\},\]
for \(k\in\mathbb{N}_{0}\cup\{\infty\}\), and the corresponding closure
\[W^{k,p}_{\#,0}(\Gamma):=\overline{C^{\infty}_{\#,0}(\Gamma)}^{\|\cdot\|_{W^{k,p}(\Gamma)}}.\]
Finally, we define
\[L^{p}_{\#}(0,T;W^{1,q}_{\#}(\Omega)) :=\{f\in L^{p}_{\#}(0,T;L^{q}_{\#}(\Omega)):\nabla f\in L^{p}_{ \#}(0,T;L^{q}_{\#}(\Omega))\},\] \[W^{1,p}_{\#}(0,T;L^{q}_{\#}(\Gamma)) :=\{f\in L^{p}_{\#}(0,T;L^{q}_{\#}(\Gamma)):\partial_{t}f\in L^{p }_{\#}(0,T;L^{q}_{\#}(\Gamma))\}.\]
As usual, \(H^{k}\) denotes Sobolev spaces \(W^{k,2}\). For a function \(f\in C^{1}_{\#}(\Omega)\) and \(\eta\in C^{1}_{\#,0}(\Gamma)\), we can define the Lagrangian trace on \(\hat{\Gamma}^{\eta}\) as
\[\gamma_{|\hat{\Gamma}^{\eta}}f(x):=f(x,\hat{\eta}(x))\]
and then extend it to a linear and continuous operator \(\gamma_{|\hat{\Gamma}^{\eta}}:H^{1}_{\#}(\Omega)\to H^{\frac{1}{2}}_{\#}(\Gamma)\). Here \(H^{\frac{1}{2}}\) denotes the Sobolev-Slobodetskii space. Finally, we denote by \(\mathbf{y}=(x,z)\) the two-dimensional space variable.
**Definition 2.1** (**Weak solution**).: _We say that \(\rho\in L^{\infty}_{\#}(0,T;L^{\gamma}_{\#}(\Omega))\), \(\mathbf{u}\in L^{2}_{\#}(0,T;H^{1}_{\#}(\Omega))\) and \(\eta\in W^{1,\infty}_{\#}(0,T;L^{2}_{\#}(\Gamma))\cap L^{\infty}_{\#}(0,T;H^{2}_{\#}(\Gamma))\cap H^{1}_{\#}(0,T;H^{1}_{\#,0}(\Gamma))\) is a weak solution to (1)-(7) if:_
1. _The kinematic coupling_ \(\gamma_{|\hat{\Gamma}^{\eta}}\mathbf{u}=\eta_{t}\mathbf{e}_{2}\) _holds on_ \(\Gamma_{T}\)_._
2. _The renormalized continuity equation_ \[\int_{Q_{T}}\rho B(\rho)(\partial_{t}\varphi+\mathbf{u}\cdot\nabla\varphi)\, \mathrm{d}\mathbf{y}\mathrm{d}t=\int_{Q_{T}}b(\rho)(\nabla\cdot\mathbf{u}) \varphi\,\mathrm{d}\mathbf{y}\mathrm{d}t\] (9) _holds for all functions_ \(\varphi\in C^{\infty}_{\#}(Q_{T})\) _and any_ \(b\in L^{\infty}(0,\infty)\cap C[0,\infty)\) _such that_ \(b(0)=0\) _with_ \(B(\rho)=B(1)+\int_{1}^{\rho}\frac{b(z)}{z^{2}}dz\)_._
3. _The coupled momentum equation_ \[\int_{Q_{T}}\rho\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi} \,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}}(\rho\mathbf{u}\otimes\mathbf{u }):\nabla\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}} \rho^{\gamma}(\nabla\cdot\boldsymbol{\varphi})\,\mathrm{d}\mathbf{y}\mathrm{d }t-\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\boldsymbol{\varphi}\, \mathrm{d}\mathbf{y}\mathrm{d}t\] \[+\int_{\Gamma_{T}}\eta_{t}\psi_{t}\,\mathrm{d}x\mathrm{d}t-\int_{ \Gamma_{T}}\eta_{xx}\psi_{xx}\,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}}\eta_{ tx}\psi_{x}\,\mathrm{d}x\mathrm{d}t=-\int_{\Gamma_{T}}f\psi\,\mathrm{d}x \mathrm{d}t-\int_{Q_{T}}\rho\mathbf{F}\cdot\boldsymbol{\varphi}\,\mathrm{d} \mathbf{y}\mathrm{d}t\] (10) _holds for all_ \(\boldsymbol{\varphi}\in C^{\infty}_{\#}(Q_{T})\) _and all_ \(\psi\in C^{\infty}_{\#,0}(\Gamma_{T})\) _such that_ \(\boldsymbol{\varphi}(t,x,\hat{\eta}(t,x))=\psi(t,x)\mathbf{e}_{2}\) _on_ \(\Gamma_{T}\)_._
We note that the choice \(b(\rho)=0\) in (9) recovers the standard weak formulation of the continuity equation. Our main result reads as follows.
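For instance, the pressure potential appearing in the energy below corresponds to the choice \(b(\rho)=\rho^{\gamma}\) (unbounded, hence only admissible for the smooth solutions considered in Section 4, cf. the remark there):

\[B(\rho)=B(1)+\int_{1}^{\rho}\frac{z^{\gamma}}{z^{2}}\,\mathrm{d}z=\frac{\rho^{\gamma-1}}{\gamma-1}+\mathrm{const},\qquad\rho B(\rho)=\frac{\rho^{\gamma}}{\gamma-1}+\mathrm{const}\cdot\rho,\]

and the linear term is harmless since the total mass \(\int_{\Omega}\rho\,\mathrm{d}\mathbf{y}\) is conserved.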
**Theorem 2.1** (**Main result**).: _Let \(H,L,T,m_{0}>0\) be given and let \(\gamma>1\). Let \(f\in L^{2}_{\#}(\Gamma_{T})\) and \(\mathbf{F}\in L^{2}_{\#}(0,T;L^{\infty}_{\#}(\Omega))\). Then, there exists at least one weak solution to (1)-(7) in the sense of Definition 2.1 such that_
\[\int_{\Omega}\rho(t)\,\mathrm{d}\mathbf{y}=m_{0}\]
_for almost all \(t\in(0,T)\) and the energy inequality_
\[-\int_{Q_{T}}\phi_{t}\left(\frac{1}{2}\rho|\mathbf{u}|^{2}+ \frac{1}{\gamma-1}\rho^{\gamma}\right)\,\mathrm{d}\mathbf{y}\mathrm{d}t-\int_{ \Gamma_{T}}\phi_{t}\left(\frac{1}{2}|\eta_{t}|^{2}+\frac{1}{2}|\eta_{xx}|^{2} \right)(t)\,\mathrm{d}x\mathrm{d}t\] \[+\int_{0}^{T}\int_{\Omega}\phi\mathbb{S}(\nabla\mathbf{u}): \nabla\mathbf{u}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{0}^{T}\int_{\Gamma} \phi|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t\] \[\leq\int_{0}^{T}\int_{\Gamma}\phi f\eta_{t}\,\mathrm{d}x\mathrm{d}t +\int_{0}^{T}\int_{\Omega}\phi\rho\mathbf{u}\cdot\mathbf{F}\,\mathrm{d}\mathbf{y }\mathrm{d}t \tag{11}\]
_holds for all \(\phi\in C^{\infty}_{\#}(0,T)\), \(\phi\geq 0\). Moreover,_
\[\sup_{t\in(0,T)}\Big{[}\int_{\Omega}\Big{(}\frac{1}{2}\rho|\mathbf{ u}|^{2}+\frac{1}{\gamma-1}\rho^{\gamma}\Big{)}\,\mathrm{d}\mathbf{y}+\int_{\Gamma} \Big{(}\frac{1}{2}|\eta_{t}|^{2}+\frac{1}{2}|\eta_{xx}|^{2}\Big{)}\,\mathrm{d}x \Big{]}(t)\\ +\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\, \mathrm{d}\mathrm{y}\mathrm{d}t+\int_{\Gamma_{T}}|\eta_{tx}|^{2}\,\mathrm{d}x \mathrm{d}t\leq C(f,\mathbf{F},\Omega,m_{0}). \tag{12}\]
**Remark 2.1** (Strategy of the proof).: _The proof of this theorem is based on a four-level approximation scheme. Following the approach from [48] (see also [33]), we decouple the coupled momentum equation into the fluid momentum equation and the structure momentum equation by penalizing the kinematic coupling condition (3). This allows us to deal with these equations separately. Then, we choose to span the fluid velocity and the structure displacement in finite time-space bases, as was done in [19] (note that this is in contrast with the fixed-point approach which was used in [18, 37]). Finally, as is standard in the theory of compressible Navier-Stokes equations, artificial diffusion is added to the fluid continuity equation and an artificial pressure is added to the fluid momentum equation. Several other terms are also added for technical reasons. In order to obtain a weak solution, four limits are performed, each of them being based on estimates that differ significantly from limit to limit due to their high sensitivity to the approximation parameters. Unlike for the initial value problem, we need to additionally ensure that an energy inequality of the form (11) is satisfied at each approximation level to obtain some important estimates, and for this we need to prove the convergence of the structure kinetic and elastic energies in each of the limits. This part is based on the improved structure displacement estimates from [40], adapted to our framework similarly as in [46]._
**Remark 2.2**.: _Throughout the proof, we will work with formulations of the problem both on \(\Omega\) and on \(\Omega^{\eta}(t)\). As both the fluid velocity \(\mathbf{u}\) and the density \(\rho\) can be represented on \(\Omega^{\eta}(t)\) equivalently, we keep the same notation for \(\mathbf{u}\) and \(\rho\) whenever we shift to the domain \(\Omega^{\eta}(t)\). Let us point out that \(\mathbf{u}\) is continuous on \(\hat{\Gamma}^{\eta}(t)\), so \(\|\mathbf{u}\|_{W^{1,p}(\Omega^{\eta}(t))}=\|\mathbf{u}\|_{W^{1,p}(\Omega)}\) for any \(p\in[1,\infty]\), while \(\rho\) may have a jump on \(\hat{\Gamma}^{\eta}(t)\), so we use \(\|\rho\|_{L^{p}(\Omega^{\eta}(t))}=\|\rho\|_{L^{p}(\Omega)}\) for \(p\in[1,\infty]\) only._
## 3 Discussion and literature overview
The mathematical theory of interaction problems between incompressible viscous fluids and thin elastic structures (plates or shells) started with the results of Beirao da Veiga [6] and Grandmont et al. [15, 21], and has continued to develop over the last two decades; see [30, 40, 12, 13, 48, 24, 28] for the existence of weak solutions, [1, 2, 23, 22, 34, 4, 24, 31, 32] for the existence of strong solutions and [25, 43] for uniqueness. The theory involving compressible viscous fluids interacting with plates and shells, on the other hand, started quite recently with the result of Schwarzacher and Breit [10], and continued with [47], where a weak solution was obtained for an interaction between a compressible viscous fluid and a nonlinear thermoelastic plate. Local-in-time regular solutions were constructed in [39, 35], while the weak-strong uniqueness for such problems was studied in [46]. In the case of heat-conducting fluids, an interaction with an elastic plate was considered in [11], where a weak solution satisfying the energy equality was constructed, and an interaction with a viscoelastic plate was considered in [36], where a strong solution with maximal regularity was constructed. The interaction of heat-conducting fluids and thermoelastic shells with heat exchange was studied in [33], where a weak solution was constructed. The case of a mixture interacting with an elastic structure was studied in [26]. A semigroup approach to the well-posedness of the problem of interaction of a linearized compressible fluid with an elastic boundary was presented in [5]. Finally, local-in-time regular solutions to interaction problems between 3D elastic solids and fluids were obtained in [16, 17, 29, 41, 8], while weak solutions were constructed in [7, 9]. We also refer the reader to a very recent result [27], where such a problem with contact allowed was studied.
With all this in mind, little attention has been given to time-periodic solutions or, more precisely, to the question of when a fluid-structure interaction model exhibits periodic behaviour under periodic forcing. This question is of great importance, since many such systems tend to behave periodically: heart beats and the air flow through the trachea are both periodic, for example. One can therefore naturally ask under what conditions such models can be expected to behave periodically. This was first studied by Casanova [14] for an interaction problem between a viscoelastic beam and an incompressible fluid, in the framework of strong solutions. Quite recently, Schwarzacher and Mindrila studied the interaction of a linear Koiter shell with an incompressible viscous fluid and obtained the existence of a weak solution with a closed rigid boundary with a no-slip condition in [37] and with a dynamic pressure boundary condition in [38]. Finally, concerning the purely fluid system, time-periodic weak solutions to the compressible Navier-Stokes system on a fixed domain were constructed in [18] for isentropic flows and in [19] for the full Navier-Stokes-Fourier system.
The main goal of this paper is to tackle this issue in the case when the fluid is compressible. This brings many challenges which do not exist in the incompressible case. The main challenge in the theory of compressible viscous fluids is dealing with the pressure, and our case is no different. The pressure estimates based on the Bogovskii operator are very sensitive to the shape of the domain (and thus to the deformations of the beam) and to many other factors, including the dimension. This directly results in the limitations of our result: the fluid is two-dimensional, the beam is viscoelastic, and the fluid domain is periodic in the horizontal and vertical directions, which a priori excludes contact for the beam.
The paper is organized as follows. In Section 4 we present a way to obtain a priori estimates assuming the solution is sufficiently smooth. This procedure is split into several steps. In Section 5 we present the approximation scheme used in the proof of Theorem 2.1 and prove the existence of a solution to the approximated system. In Section 6 we pass to the limit in the number of time basis functions \(m\to\infty\) and present uniform estimates for the arising solution independent of \(n\). In Section 7 we pass to the limit in the number of spatial basis functions \(n\to\infty\), deduce uniform bounds independent of \(\varepsilon\) and introduce the coupled momentum equation. In Section 8, we perform the limit with the penalization and artificial density diffusion parameter \(\varepsilon\to 0\) and deduce uniform bounds independent of \(\delta\). Finally, in Section 9 we pass to the limit with \(\delta\to 0\), thus removing the artificial pressure term and finishing the proof of Theorem 2.1.
## 4 A priori estimates for smooth solutions
Before we start, let us introduce the energy associated to the studied system as
\[E(t):=\int_{\Omega}\left(\frac{1}{2}\rho|\mathbf{u}|^{2}+\frac{1}{\gamma-1} \rho^{\gamma}\right)(t)\,\mathrm{d}\mathbf{y}+\int_{\Gamma}\left(\frac{1}{2} |\eta_{t}|^{2}+\frac{1}{2}|\eta_{xx}|^{2}\right)(t)\,\mathrm{d}x\]
and we emphasize that replacing \(\Omega\) with \(\Omega^{\eta}(t)\) yields the same quantity, see Remark 2.2. Further, we denote
\[\mathcal{E}:=\sup_{(0,T)}E.\]
The goal of this section is to show that smooth solutions to problem (1)-(7) satisfy the inequality (12). This will serve as a basis for the forthcoming sections, where approximate problems with similar properties are studied. We note that since we assume in this section that the solution is smooth, we are allowed to consider unbounded functions \(b\) in (9).
### Part I - estimates of \(\nabla\mathbf{u}\) and \(\eta_{tx}\)
In order to obtain the estimates, we sum up (9) with \(b(\rho)=\rho^{\gamma}\) and \(\varphi=1\), (9) with \(b(\rho)=0\) and \(\varphi=\frac{1}{2}|\mathbf{u}|^{2}\) and (10) with \((\boldsymbol{\varphi},\psi)=(\mathbf{u},\eta_{t})\) to obtain
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\,\mathrm{d}\mathbf{ y}\mathrm{d}t+\int_{\Gamma_{T}}|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t=\int_{ \Gamma_{T}}f\eta_{t}\,\mathrm{d}x\mathrm{d}t+\int_{Q_{T}}\rho\mathbf{u}\cdot \mathbf{F}\,\mathrm{d}y\mathrm{d}t\]
and thus
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\, \mathrm{d}y\mathrm{d}t+c(L)\|\eta_{t}\|_{L^{2}(0,T;H^{1}(\Gamma))}^{2}\\ \leq\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\, \mathrm{d}y\mathrm{d}t+\int_{\Gamma_{T}}|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{ d}t=\int_{\Gamma_{T}}f\eta_{t}\,\mathrm{d}x\mathrm{d}t+\int_{Q_{T}}\rho \mathbf{u}\cdot\mathbf{F}\,\mathrm{d}y\mathrm{d}t\\ \leq\|f\|_{L^{2}(\Gamma_{T})}\|\eta_{t}\|_{L^{2}(\Gamma_{T})}+\| \rho\|_{L^{\infty}(0,T;L^{p}(\Omega))}\|\mathbf{u}\|_{L^{2}(0,T;L^{q}(\Omega) )}\|\mathbf{F}\|_{L^{2}(0,T;L^{\infty}(\Omega))}\\ \leq C(f,L)+\frac{c(L)}{2}\|\eta_{t}\|_{L^{2}(0,T;H^{1}(\Gamma))} ^{2}+C(\mathbf{F})\|\rho\|_{L^{\infty}(0,T;L^{p}(\Omega))}\|\mathbf{u}\|_{L^{2 }(0,T;L^{q}(\Omega))}\]
for any \(p>1\) and \(q=\frac{p}{p-1}\), by the Poincaré inequality for \(\eta\). We have just deduced that
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\,\mathrm{d}y\mathrm{ d}t+\|\eta_{t}\|_{L^{2}(0,T;H^{1}(\Gamma))}^{2}\leq C+C\|\rho\|_{L^{\infty}(0,T;L^{p}( \Omega))}\|\mathbf{u}\|_{L^{2}(0,T;L^{q}(\Omega))}. \tag{13}\]
From here onward, we omit the dependence of constants on \(\Omega,f,\mathbf{F}\), since they are given and do not depend on functions \(\rho,\mathbf{u},\eta\).
Next, we shift to the moving domain \(\Omega^{\eta}(t)\) given in (8). We have
\[\|\eta_{t}\mathbf{e}_{2}\|_{L^{2}(0,T;H^{1}(\Omega^{\eta}(t)))}=\sqrt{2H}\,\|\eta_{t}\|_{L^{2}(0,T;H^{1}(\Gamma))}.\]
Due to the kinematic coupling, we have that \(\mathbf{u}-\eta_{t}\mathbf{e}_{2}=0\) on \(\Gamma^{\eta}(t)\) and \(\Gamma^{\eta}(t)+2H\), so by using the Korn identity on \(\Omega^{\eta}(t)\)
\[\|\nabla\mathbf{u}-\nabla(\eta_{t}\mathbf{e}_{2})\|_{L^{2}(Q_{T}^{\eta})}^{2}+\|\nabla\cdot(\mathbf{u}-\eta_{t}\mathbf{e}_{2})\|_{L^{2}(Q_{T}^{\eta})}^{2}=2\|\mathbb{D}(\mathbf{u}-\eta_{t}\mathbf{e}_{2})\|_{L^{2}(Q_{T}^{\eta})}^{2}\\ \leq C\|\mathbb{S}(\nabla\mathbf{u}-\nabla(\eta_{t}\mathbf{e}_{2}))\|_{L^{2}(Q_{T}^{\eta})}^{2}\leq C\left(\int_{Q_{T}^{\eta}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\|\eta_{t}\|_{L^{2}(0,T;H^{1}(\Gamma))}^{2}\right),\]
where \(C\) only depends on \(\mu,\zeta\). The Poincaré inequality yields
\[\|\mathbf{u}-\eta_{t}\mathbf{e}_{2}\|_{H^{1}(\Omega^{\eta}(t))}^{2}\leq C\| \nabla\mathbf{u}-\nabla(\eta_{t}\mathbf{e}_{2})\|_{L^{2}(\Omega^{\eta}(t))}^{2 }.\]
Note that the constant \(C\) is independent of \(\eta\); this follows directly from the proof of the corresponding inequality on a steady domain [3, Theorem 6.30]. We use
\[\|\eta_{t}\|_{L^{\infty}(\Gamma)}\leq C\|\eta_{t}\|_{H^{1}(\Gamma)}\]
and \(\mathbf{u}-\eta_{t}\mathbf{e}_{2}=0\) on \(\Gamma^{\eta}(t)\cup\Gamma^{\eta}(t)+2H\) to conclude
\[\|\mathbf{u}\|_{L^{2}(0,T;L^{q}(\Omega^{\eta}(t)))}^{2}\leq 2\| \mathbf{u}-\eta_{t}\mathbf{e}_{2}\|_{L^{2}(0,T;L^{q}(\Omega^{\eta}(t)))}^{2}+2\| \eta_{t}\mathbf{e}_{2}\|_{L^{2}(0,T;L^{q}(\Omega^{\eta}(t)))}^{2}\\ \leq C\|\mathbf{u}-\eta_{t}\mathbf{e}_{2}\|_{L^{2}(0,T;H^{1}( \Omega^{\eta}(t)))}^{2}+C\|\eta_{t}\mathbf{e}_{2}\|_{L^{2}(0,T;H^{1}(\Omega^{ \eta}(t)))}^{2}\\ \leq C\int_{Q_{T}^{\eta}}\mathbb{S}(\nabla\mathbf{u}):\nabla \mathbf{u}\,\mathrm{d}y\mathrm{d}t+C\|\eta_{t}\|_{L^{2}(0,T;H^{1}(\Gamma))}^{2} \tag{14}\]
for any \(1<q<\infty\). We set
\[\kappa:=\min\left\{\frac{1}{20},\frac{(\gamma-1)}{5\gamma},\frac{1}{5(\gamma-1)} \right\}, \tag{15}\]
\[\overline{p}=\overline{p}(\kappa):=\frac{2\gamma^{2}}{2\gamma^{2}-\kappa(\gamma-1)},\]
so we have
\[\overline{\theta}:=\frac{\gamma(\overline{p}-1)}{\overline{p}(\gamma-1)}=\frac{ \kappa}{2\gamma}.\]
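Indeed, this value follows by direct substitution:

\[\overline{p}-1=\frac{\kappa(\gamma-1)}{2\gamma^{2}-\kappa(\gamma-1)},\qquad\overline{\theta}=\frac{\gamma(\overline{p}-1)}{\overline{p}(\gamma-1)}=\frac{\gamma\,\kappa(\gamma-1)}{2\gamma^{2}\,(\gamma-1)}=\frac{\kappa}{2\gamma}.\]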
Then for any \(1<p<\overline{p}\) we have, for some \(\theta<\overline{\theta}\),
\[\|\rho\|_{L^{\infty}(0,T;L^{p}(\Omega^{\eta}(t)))}\leq\|\rho\|_{L^{\infty}(0,T;L^{1}(\Omega^{\eta}(t)))}^{1-\theta}\|\rho\|_{L^{\infty}(0,T;L^{\gamma}(\Omega^{\eta}(t)))}^{\theta}\leq Cm_{0}^{1-\theta}\mathcal{E}^{\frac{\theta}{\gamma}}\leq C(1+\mathcal{E}^{\frac{\kappa}{2}}). \tag{16}\]
Since
\[\|\rho\|_{L^{\infty}(0,T;L^{p}(\Omega^{\eta}(t)))}=\|\rho\|_{L^{\infty}(0,T;L^{p}(\Omega))},\quad\int_{Q_{T}^{\eta}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\,\mathrm{d}\mathbf{y}\mathrm{d}t=\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\,\mathrm{d}\mathbf{y}\mathrm{d}t,\]
the inequalities (13), (14) and (16) yield
\[\|\mathbf{u}\|_{L^{2}(0,T;H^{1}(\Omega))}^{2}+\|\eta_{t}\|_{L^{2}(0,T;H^{1}( \Gamma))}^{2}\leq C(\kappa)(1+\mathcal{E}^{\kappa}), \tag{17}\]
for the original domain, and consequently
\[\|\mathbf{u}\|_{L^{2}(0,T;L^{q}(\Omega))}^{2}\leq C(\kappa,q)(1+\mathcal{E}^{ \kappa}) \tag{18}\]
for all \(q>1\).
### Part II - circular estimates
In order to deduce the energy inequality, we sum up (9) with \(b(\rho)=\rho^{\gamma}\) and \(\varphi=\chi_{[s,t]}\), (9) with \(b(\rho)=0\) and \(\varphi=\chi_{[s,t]}\frac{1}{2}|\mathbf{u}|^{2}\) and (10) with \((\boldsymbol{\varphi},\psi)=(\chi_{[s,t]}\mathbf{u},\chi_{[s,t]}\eta_{t})\) to obtain
\[E(t)+\int_{s}^{t}\int_{\Omega}\mathbb{S}(\nabla\mathbf{u}): \nabla\mathbf{u}\,\mathrm{d}\mathbf{y}\mathrm{d}\tau+\int_{s}^{t}\int_{\Gamma }|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}\tau\\ =E(s)+\int_{s}^{t}\int_{\Gamma}f\eta_{t}\,\mathrm{d}x\mathrm{d} \tau+\int_{s}^{t}\int_{\Omega}\rho\mathbf{u}\cdot\mathbf{F}\,\mathrm{d} \mathbf{y}\mathrm{d}\tau\\ \leq E(s)+C(\kappa)(1+\mathcal{E}^{\kappa})\leq E(s)+C(\kappa)+ \kappa\mathcal{E}\]
by (16), (17), (18) and the Young inequality. We integrate again over \((0,T)\) with respect to the variable \(s\), and then take the supremum over \(t\in(0,T)\) on the left-hand side, to obtain
\[\mathcal{E}\leq C_{0}\left(1+\int_{0}^{T}E(s)\,\mathrm{d}s\right). \tag{19}\]
The constant \(C_{0}\) depends on the choice of \(\kappa\), however we recall that \(\kappa\) is already fixed. Our goal in the remaining part of the estimates is to show
\[\int_{0}^{T}E(s)\,\mathrm{d}s\leq\delta_{0}\mathcal{E}+C(\delta_{0})\]
for some \(\delta_{0}\in(0,\frac{1}{C_{0}})\).
### Part III - estimate of \(\eta_{xx}\)
In this section we need the following interpolation inequality.
**Lemma 4.1**.: _Let \(g\in H^{1}(0,T;L^{2}(\Gamma))\cap L^{2}(0,T;H^{1}(\Gamma))\). Then for any \(\alpha\in(0,1)\) it holds_
\[g\in H^{\alpha}(0,T;H^{1-\alpha}(\Gamma))\]
_and there exists a constant \(C>0\) independent of \(g\) such that_
\[\|g\|_{H^{\alpha}(0,T;H^{1-\alpha}(\Gamma))}\leq C\|g\|_{H^{1}(0,T;L^{2}( \Gamma))}^{\alpha}\|g\|_{L^{2}(0,T;H^{1}(\Gamma))}^{1-\alpha}.\]
Proof.: First, note that \(g\) can easily be extended to \(\mathbb{R}^{2}\) (also denoted as \(g\)) so that
\[\|g\|_{H^{1}(\mathbb{R};L^{2}(\mathbb{R}))}\leq C\|g\|_{H^{1}(0,T;L^{2}(\Gamma))},\quad\|g\|_{L^{2}(\mathbb{R};H^{1}(\mathbb{R}))}\leq C\|g\|_{L^{2}(0,T;H^{1}( \Gamma))}.\]
Denote as \(\mathcal{F}_{t}\), \(\mathcal{F}_{x}\) and \(\mathcal{F}_{t,x}\) the Fourier transform w.r.t. variables \(t\) and \(x\) and both \(t,x\), respectively. One has:
\[\|g\|_{H^{\alpha}(\mathbb{R};H^{1-\alpha}(\mathbb{R}))}^{2} \leq C\int_{\mathbb{R}}(1+\sigma^{2})^{\alpha}\|\mathcal{F}_{t}(g)\|_{H^{1-\alpha}(\mathbb{R})}^{2}\ d\sigma\] \[\leq C\int_{\mathbb{R}}(1+\sigma^{2})^{\alpha}\int_{\mathbb{R}}(1+\xi^{2})^{1-\alpha}|\mathcal{F}_{x}(\mathcal{F}_{t}(g))|^{2}\ d\xi d\sigma\] \[= C\int_{\mathbb{R}^{2}}(1+\sigma^{2})^{\alpha}(1+\xi^{2})^{1-\alpha}|\mathcal{F}_{t,x}(g)|^{2}\ d\xi d\sigma\] \[\leq C\left(\int_{\mathbb{R}^{2}}(1+\sigma^{2})|\mathcal{F}_{t,x}(g)|^{2}\ d\xi d\sigma\right)^{\alpha}\left(\int_{\mathbb{R}^{2}}(1+\xi^{2})|\mathcal{F}_{t,x}(g)|^{2}\ d\xi d\sigma\right)^{1-\alpha}\] \[= C\|g\|_{H^{1}(\mathbb{R};L^{2}(\mathbb{R}))}^{2\alpha}\|g\|_{L^{2}(\mathbb{R};H^{1}(\mathbb{R}))}^{2(1-\alpha)},\]
where we used Hölder's inequality with exponents \(p=\frac{1}{\alpha}\) and \(q=\frac{1}{1-\alpha}\).
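The crucial step above is Hölder's inequality applied to the pointwise splitting \(\big((1+\sigma^{2})|\mathcal{F}|^{2}\big)^{\alpha}\big((1+\xi^{2})|\mathcal{F}|^{2}\big)^{1-\alpha}\). A minimal numerical sanity check of this step on random data (ours, purely illustrative; sums stand in for the integrals):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.6          # plays the role of 1/2 + delta, any value in (0, 1)
N = 64

F2 = rng.random((N, N))          # surrogate for |F_{t,x}(g)|^2
sigma = np.fft.fftfreq(N) * N    # "time" frequencies
xi = np.fft.fftfreq(N) * N       # "space" frequencies
S, X = np.meshgrid(sigma, xi, indexing="ij")

A = (1 + S**2) * F2              # integrand of ||g||_{H^1_t L^2_x}^2
B = (1 + X**2) * F2              # integrand of ||g||_{L^2_t H^1_x}^2

lhs = np.sum(A**alpha * B**(1 - alpha))          # mixed-weight integral
rhs = np.sum(A)**alpha * np.sum(B)**(1 - alpha)  # Hoelder bound
assert lhs <= rhs * (1 + 1e-12)
print(f"{lhs:.4e} <= {rhs:.4e}")
```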
We use the test functions \((\boldsymbol{\varphi},\psi)=(\eta\mathbf{e}_{2},\eta)\) in (10) and observe that \(\nabla\cdot(\eta\mathbf{e}_{2})=0\) and
\[\int_{\Gamma_{T}}\eta_{tx}\eta_{x}\,\mathrm{d}x\mathrm{d}t=\frac{1}{2}\int_{ \Gamma_{T}}(\eta_{x}^{2})_{t}\,\mathrm{d}x\mathrm{d}t=0.\]
Consequently,
\[\|\eta_{xx}\|_{L^{2}(\Gamma_{T})}^{2}=\int_{\Gamma_{T}}|\eta_{xx} |^{2}\,\mathrm{d}x\mathrm{d}t\\ =\int_{Q_{T}}\rho\mathbf{u}\cdot\eta_{t}\mathbf{e}_{2}\,\mathrm{d }\mathbf{y}\mathrm{d}t+\int_{Q_{T}}\rho\mathbf{u}\otimes\mathbf{u}:\nabla( \eta\mathbf{e}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t-\int_{Q_{T}}\mathbb{S}( \nabla\mathbf{u}):\nabla(\eta\mathbf{e}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t+ \int_{Q_{T}}\rho\eta\mathbf{e}_{2}\cdot\mathbf{F}\,\mathrm{d}\mathbf{y} \mathrm{d}t\\ +\int_{\Gamma_{T}}|\eta_{t}|^{2}\,\mathrm{d}x\mathrm{d}t+\int_{ \Gamma_{T}}f\eta\,\mathrm{d}x\mathrm{d}t. \tag{20}\]
We fix \(1<p<\overline{p}\), denote \(q=\frac{p}{p-1}\) and estimate the terms on the right hand side as follows. First,
\[\left|\int_{Q_{T}}\rho\mathbf{u}\cdot\eta_{t}\mathbf{e}_{2}\,\mathrm{d} \mathbf{y}\mathrm{d}t\right|\leq C\|\rho\|_{L^{\infty}(0,T;L^{p}(\Omega))}\| \mathbf{u}\|_{L^{2}(0,T;L^{q}(\Omega))}\|\eta_{t}\|_{L^{2}(0,T;L^{\infty}( \Gamma))}\leq C(\kappa)\left(1+\mathcal{E}^{\frac{3\kappa}{2}}\right)\]
by using Sobolev embedding, (16), (17) and (18). In order to estimate the convective term, we utilize the following estimate
\[\|\eta_{x}\|_{L^{\infty}(0,T;L^{3q}(\Gamma))}\leq C\|\eta_{x}\|_{H^{\frac{1}{2}+\delta}(0,T;H^{\frac{1}{2}-\delta}(\Gamma))}\leq C\|\eta_{x}\|_{H^{1}(0,T;L^{2}(\Gamma))}^{\frac{1}{2}+\delta}\|\eta_{x}\|_{L^{2}(0,T;H^{1}(\Gamma))}^{\frac{1}{2}-\delta}\\ \leq C\big{(}\|\eta_{x}\|_{L^{2}(0,T;L^{2}(\Gamma))}^{\frac{1}{2}+\delta}+\|\eta_{tx}\|_{L^{2}(0,T;L^{2}(\Gamma))}^{\frac{1}{2}+\delta}\big{)}\|\eta_{x}\|_{L^{2}(0,T;H^{1}(\Gamma))}^{\frac{1}{2}-\delta}\\ \leq C\|\eta_{x}\|_{L^{2}(0,T;H^{1}(\Gamma))}+C\|\eta_{tx}\|_{L^{2}(0,T;L^{2}(\Gamma))}^{\frac{1}{2}+\delta}\|\eta_{x}\|_{L^{2}(0,T;H^{1}(\Gamma))}^{\frac{1}{2}-\delta}\\ \leq C\big{(}\|\eta_{x}\|_{L^{2}(0,T;H^{1}(\Gamma))}+\|\eta_{tx}\|_{L^{2}(0,T;L^{2}(\Gamma))}\big{)}. \tag{21}\]
Here \(\delta>0\) is sufficiently small, and we have used the Sobolev embedding, Lemma 4.1 and the Young inequality with the conjugate exponents \((\frac{1}{2}+\delta)^{-1}\) and \((\frac{1}{2}-\delta)^{-1}\). We use this estimate to write
\[\left|\int_{Q_{T}}\rho\mathbf{u}\otimes\mathbf{u}:\nabla(\eta\mathbf{e}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C\|\rho\|_{L^{\infty}(0,T;L^{p}(\Omega))}\|\mathbf{u}\|_{L^{2}(0,T;L^{3q}(\Omega))}^{2}\|\eta_{x}\|_{L^{\infty}(0,T;L^{3q}(\Gamma))}\\ \leq C(\kappa)(1+\mathcal{E}^{\frac{3\kappa}{2}})\left(\|\eta_{x}\|_{L^{2}(0,T;H^{1}(\Gamma))}+\|\eta_{tx}\|_{L^{2}(0,T;L^{2}(\Gamma))}\right)\leq C(\kappa)\left(1+\mathcal{E}^{3\kappa}\right)+\frac{1}{8}\|\eta_{xx}\|_{L^{2}(\Gamma_{T})}^{2},\]
where we have used again (16), (17), (18), and the Young inequality. The viscous term is estimated by
\[\left|\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla(\eta\mathbf{e}_ {2})\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C\|\mathbb{S}(\nabla\mathbf{u}) \|_{L^{2}(Q_{T})}\|\eta_{x}\|_{L^{2}(0,T;L^{2}(\Gamma))}\] \[\leq C\|\mathbb{S}(\nabla\mathbf{u})\|_{L^{2}(Q_{T})}^{2}+\frac{ 1}{8}\|\eta_{xx}\|_{L^{2}(\Gamma_{T})}^{2}\leq C(\kappa)\left(1+\mathcal{E}^{ \kappa}\right)+\frac{1}{8}\|\eta_{xx}\|_{L^{2}(\Gamma_{T})}^{2}\]
using (17). We also use (17) directly to estimate
\[\int_{\Gamma_{T}}|\eta_{t}|^{2}\,\mathrm{d}x\mathrm{d}t\leq C(\kappa)\left(1+ \mathcal{E}^{\kappa}\right).\]
Finally,
\[\left|\int_{Q_{T}}\rho\eta\mathbf{e}_{2}\cdot\mathbf{F}\,\mathrm{d}\mathbf{y} \mathrm{d}t\right|\leq C\|\rho\|_{L^{\infty}(0,T;L^{1}(\Omega))}\|\eta\|_{L^{ 2}(0,T;L^{\infty}(\Gamma))}\|\mathbf{F}\|_{L^{2}(0,T;L^{\infty}(\Omega))}\leq C +\frac{1}{8}\|\eta_{xx}\|_{L^{2}(\Gamma_{T})}^{2}\]
and
\[\left|\int_{\Gamma_{T}}f\eta\,\mathrm{d}x\mathrm{d}t\right|\leq\|f\|_{L^{ \infty}(\Gamma_{T})}\|\eta\|_{L^{1}(\Gamma_{T})}\leq C+\frac{1}{8}\|\eta_{xx} \|_{L^{2}(\Gamma_{T})}^{2}\]
by using the Poincaré inequality twice together with the boundary condition (5). All the estimates together with (20) yield
\[\int_{\Gamma_{T}}|\eta_{xx}|^{2}\,\mathrm{d}x\mathrm{d}t\leq C(\kappa)(1+ \mathcal{E}^{3\kappa}). \tag{22}\]
### Part IV - density/pressure estimates
Denote the Bogovskii operator as \(\mathcal{B}_{\Omega}:L_{0}^{p}(\Omega)\to W_{0}^{1,p}(\Omega)\). This operator satisfies
\[\nabla\cdot\mathcal{B}_{\Omega}[f]=f,\]
where \(L_{0}^{p}(\Omega):=\{f\in L^{p}(\Omega):\int_{\Omega}f=0\}\) and \(W_{0}^{1,p}(\Omega):=\{f\in W^{1,p}(\Omega):f\!\!\restriction_{\partial\Omega} =0\}\). Moreover,
\[\|\mathcal{B}_{\Omega}[f]\|_{W^{1,p}(\Omega)}\leq C\|f\|_{L^{p}(\Omega)}.\]
Throughout the rest of this section, we will repeatedly use the following estimate. For \(0<\alpha<\frac{1}{2}\), we have
\[\left\|\mathcal{B}_{\Omega}\left[\rho^{\alpha}-\int_{\Omega}\rho^{\alpha}\, \mathrm{d}\mathbf{y}\right]\right\|_{L^{\infty}(\Omega)}\leq C\left\|\mathcal{ B}_{\Omega}\left[\rho^{\alpha}-\int_{\Omega}\rho^{\alpha}\,\mathrm{d} \mathbf{y}\right]\right\|_{W^{1,\frac{1}{\alpha}}(\Omega)}\leq C\|\rho^{ \alpha}\|_{L^{\frac{1}{\alpha}}(\Omega)}=Cm_{0}^{\alpha}. \tag{23}\]
We cannot use \(\mathcal{B}_{\Omega}[\rho^{\alpha}-\int_{\Omega}\rho^{\alpha}]\) as a test function \(\boldsymbol{\varphi}\) in (10) since its trace on \(\Gamma^{\eta}\) is not regular enough in general. Therefore, we split the procedure into estimates near the viscoelastic structure and estimates in the interior of the fluid domain. To this end we fix \(0<h<\frac{H}{2}\) and we emphasize that constants appearing in the calculations below may depend on \(h\).
We shift to the moving domain \(\Omega^{\eta}(t)\) and we deal with the interior estimates first. Note that the function \(\mathcal{B}_{\Omega}\left[\rho^{\alpha}-\int_{\Omega}\rho^{\alpha}\,\mathrm{ d}\mathbf{y}\right]\) shifted to \(\Omega^{\eta}(t)\) does not vanish on its boundary \(\Gamma^{\eta}(t)\) and \(\Gamma^{\eta}(t)+2H\). For that reason, we define a cut-off function
\[\phi_{h}(t,x,z):=\begin{cases}\frac{z-\eta(t,x)}{h},&\text{for }\eta(t,x)<z< \eta(t,x)+h,\\ 1,&\text{for }\eta(t,x)+h<z<\eta(t,x)+2H-h,\\ \frac{2H+\eta(t,x)-z}{h},&\text{for }\eta(t,x)+2H-h<z<\eta(t,x)+2H,\end{cases}\]
and
\[\boldsymbol{\varphi}_{h}:=\phi_{h}\mathcal{B}_{\Omega}\left[\rho^{\alpha}- \int_{\Omega}\rho^{\alpha}\,\mathrm{d}\mathbf{y}\right], \tag{24}\]
where
\[0<\alpha:=\min\left\{\frac{2}{5},\frac{\gamma-1}{2}\right\}\]
is fixed from now on. We emphasize that this choice of \(\alpha\) ensures \(\alpha<\frac{1}{2}\), so we can use the estimate (23). Moreover due to (15) it holds
\[\frac{3}{2}\kappa(\gamma-1)<\alpha<\gamma-1-\kappa\gamma, \tag{25}\]
which will be important later.
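The interior cut-off \(\phi_{h}\) above admits the compact closed form \(\phi_{h}=\mathrm{clip}\big(\min\{(z-\eta)/h,\,(2H-(z-\eta))/h\},0,1\big)\); here is a minimal numpy sketch (the grid values are ours) checking that it vanishes on both beam copies and equals one in the bulk:

```python
import numpy as np

def cutoff(z, eta, H, h):
    """Piecewise-linear cut-off phi_h for the column above eta(x):
    ramps 0 -> 1 on (eta, eta + h), equals 1 in the bulk, and ramps
    1 -> 0 on (eta + 2H - h, eta + 2H)."""
    s = z - eta  # height above the lower beam copy, s in [0, 2H]
    return np.clip(np.minimum(s / h, (2 * H - s) / h), 0.0, 1.0)

H, h, eta = 1.0, 0.25, 0.3
z = eta + np.linspace(0.0, 2 * H, 9)
print(cutoff(z, eta, H, h))  # [0. 1. 1. 1. 1. 1. 1. 1. 0.]
```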
We test the coupled momentum equation (10) by \((\boldsymbol{\varphi}_{h},0)\) to obtain
\[\int_{Q_{T}^{\eta}}\rho^{\gamma+\alpha}\phi_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t=\int_{Q_{T}^{\eta}}\rho^{\gamma}\left(\int_{\Omega^{\eta}(t)}\rho^{\alpha}(t)\,\mathrm{d}\mathbf{y}\right)\phi_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ -\int_{Q_{T}^{\eta}}\rho^{\gamma}\left(\mathcal{B}_{\Omega}\left[\rho^{\alpha}-\int_{\Omega}\rho^{\alpha}\,\mathrm{d}\mathbf{y}\right]\cdot\nabla\phi_{h}\right)\,\mathrm{d}\mathbf{y}\mathrm{d}t-\int_{Q_{T}^{\eta}}\rho\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi}_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ -\int_{Q_{T}^{\eta}}\rho\mathbf{u}\otimes\mathbf{u}:\nabla\boldsymbol{\varphi}_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}^{\eta}}\mathbb{S}(\nabla\mathbf{u}):\nabla\boldsymbol{\varphi}_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t-\int_{Q_{T}^{\eta}}\rho\mathbf{F}\cdot\boldsymbol{\varphi}_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t. \tag{26}\]
We proceed to bound the terms on the right-hand side. Notice that
\[\int_{\Omega^{\eta}(t)}\rho^{\alpha}(t)\,\mathrm{d}\mathbf{y}\leq\left(\int_{ \Omega^{\eta}(t)}\rho(t)\,\mathrm{d}\mathbf{y}\right)^{\alpha}|\Omega^{\eta}( t)|^{1-\alpha}\leq Cm_{0}^{\alpha}\]
and therefore
\[\int_{Q_{T}^{\eta}}\rho^{\gamma}\left(\int_{\Omega^{\eta}(t)}\rho^{\alpha}(t)\,\mathrm{d}\mathbf{y}\right)\phi_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\leq C\mathcal{E}m_{0}^{\alpha}. \tag{27}\]
Moreover,
\[\left|\int_{Q_{T}^{\eta}}\rho^{\gamma}\mathcal{B}_{\Omega}\left[\rho^{\alpha}-\int_{\Omega}\rho^{\alpha}\,\mathrm{d}\mathbf{y}\right]\cdot\nabla\phi_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\\ \leq C\int_{0}^{T}\|\rho^{\gamma}\|_{L^{1}(\Omega^{\eta}(t))}\left\|\mathcal{B}_{\Omega}\left[\rho^{\alpha}-\int_{\Omega}\rho^{\alpha}\,\mathrm{d}\mathbf{y}\right]\right\|_{L^{\infty}(\Omega)}(1+\|\eta_{x}\|_{L^{\infty}(\Gamma)})\,\mathrm{d}t\\ \leq C\|\rho^{\gamma}\|_{L^{\infty}(0,T;L^{1}(\Omega^{\eta}(t)))}m_{0}^{\alpha}\left(1+\|\eta\|_{L^{2}(0,T;H^{2}(\Gamma))}\right)\leq C(\kappa)\left(1+\mathcal{E}^{1+\frac{3\kappa}{2}}\right). \tag{28}\]
In order to estimate the third term on the right hand side of (26), we fix \(1<p<\overline{p}\) and \(q>1\) such that \(\frac{1}{\gamma}+\frac{1}{q}+\frac{1}{p}=1\). Since the Bogovskii operator commutes with the derivative with respect to time, we deduce
\[\partial_{t}\boldsymbol{\varphi}_{h}=\phi_{h}\partial_{t} \mathcal{B}_{\Omega}\left[\rho^{\alpha}-\int_{\Omega^{\eta}(t)}\rho^{\alpha} \,\mathrm{d}\mathbf{y}\right]+\partial_{t}\phi_{h}\mathcal{B}_{\Omega}\left[ \rho^{\alpha}-\int_{\Omega^{\eta}(t)}\rho^{\alpha}\,\mathrm{d}\mathbf{y}\right]\\ =\phi_{h}\mathcal{B}_{\Omega}\left[\partial_{t}\left(\rho^{\alpha }-\int_{\Omega^{\eta}(t)}\rho^{\alpha}\,\mathrm{d}\mathbf{y}\right)\right]+ \partial_{t}\phi_{h}\mathcal{B}_{\Omega}\left[\rho^{\alpha}-\int_{\Omega^{ \eta}(t)}\rho^{\alpha}\,\mathrm{d}\mathbf{y}\right].\]
Multiplying the continuity equation by \(\alpha\rho^{\alpha-1}\) implies
\[\partial_{t}\rho^{\alpha}=-\nabla\cdot(\rho^{\alpha}\mathbf{u})+(1-\alpha) \rho^{\alpha}\nabla\cdot\mathbf{u}\]
which is used to estimate
\[\left\|\mathcal{B}_{\Omega}\left[\partial_{t}\rho^{\alpha}-\partial_{t}\int_{\Omega^{\eta}(t)}\rho^{\alpha}\,\mathrm{d}\mathbf{y}\right]\right\|_{L^{2}(0,T;L^{p}(\Omega^{\eta}(t)))}\\ =\left\|\mathcal{B}_{\Omega}\left[\nabla\cdot(\rho^{\alpha}\mathbf{u})+(\alpha-1)\rho^{\alpha}\nabla\cdot\mathbf{u}-(\alpha-1)\left(\int_{\Omega^{\eta}(t)}\rho^{\alpha}\nabla\cdot\mathbf{u}\,\mathrm{d}\mathbf{y}\right)\right]\right\|_{L^{2}(0,T;L^{p}(\Omega^{\eta}(t)))}\\ \leq\|\rho^{\alpha}\mathbf{u}\|_{L^{2}(0,T;L^{p}(\Omega^{\eta}(t)))}+C\|\rho^{\alpha}\nabla\cdot\mathbf{u}\|_{L^{2}(0,T;L^{r}(\Omega^{\eta}(t)))}\\ \leq\|\rho^{\alpha}\|_{L^{\infty}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}\|\mathbf{u}\|_{L^{2}(0,T;L^{\frac{p\gamma}{\gamma-\alpha p}}(\Omega^{\eta}(t)))}+C\|\rho^{\alpha}\|_{L^{\infty}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}\|\nabla\cdot\mathbf{u}\|_{L^{2}(0,T;L^{2}(\Omega^{\eta}(t)))}\\ \leq C(\kappa)\left(1+\mathcal{E}^{\frac{\alpha}{\gamma}+\frac{\kappa}{2}}\right),\]
where \(r=\max\{1,\frac{2p}{2+p}\}\). Since
\[\partial_{t}\phi_{h}=-\frac{1}{h}\eta_{t}\]
on the set where it is not zero, it holds that
\[\left|\int_{Q_{T}^{\eta}}\rho\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi}_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq\|\rho\|_{L^{\infty}(0,T;L^{\gamma}(\Omega^{\eta}(t)))}\|\mathbf{u}\|_{L^{2}(0,T;L^{q}(\Omega^{\eta}(t)))}\|\partial_{t}\boldsymbol{\varphi}_{h}\|_{L^{2}(0,T;L^{p}(\Omega^{\eta}(t)))}\\ \leq C(\kappa)\left(1+\mathcal{E}^{\frac{1}{\gamma}+\frac{\kappa}{2}}\right)\left(\|\phi_{h}\|_{L^{\infty}(Q_{T}^{\eta})}\mathcal{E}^{\frac{\alpha}{\gamma}+\frac{\kappa}{2}}+\|\eta_{t}\|_{L^{2}(0,T;L^{p}(\Gamma))}m_{0}^{\alpha}\right)\leq C(\kappa)\left(1+\mathcal{E}^{\frac{1}{\gamma}+\frac{\alpha}{\gamma}+\kappa}\right). \tag{29}\]
We continue with the fourth term on the right hand side of (26). Here we take \(q=\frac{2\gamma}{\gamma-1-\alpha}\) and deduce
\[\left|\int_{Q_{T}^{\eta}}\rho\mathbf{u}\otimes\mathbf{u}:\nabla\boldsymbol{\varphi}_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq\|\rho\|_{L^{\infty}(0,T;L^{\gamma}(\Omega^{\eta}(t)))}\|\mathbf{u}\|_{L^{2}(0,T;L^{q}(\Omega^{\eta}(t)))}^{2}\|\nabla\boldsymbol{\varphi}_{h}\|_{L^{\infty}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}\\ \leq C(\kappa)\mathcal{E}^{\frac{1}{\gamma}+\kappa}\left(\left\|\nabla\mathcal{B}_{\Omega}\left[\rho^{\alpha}-\int_{\Omega}\rho^{\alpha}\,\mathrm{d}\mathbf{y}\right]\right\|_{L^{\infty}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}+\left\|\nabla\phi_{h}\right\|_{L^{\infty}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}m_{0}^{\alpha}\right)\\ \leq C(\kappa)(1+\mathcal{E}^{\frac{1}{\gamma}+\kappa})\left(\|\rho^{\alpha}\|_{L^{\infty}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}+1+\|\eta_{x}\|_{L^{\infty}(0,T;L^{\frac{\gamma}{\alpha}}(\Gamma))}\right)\\ \leq C(\kappa)(1+\mathcal{E}^{\frac{1}{\gamma}+\kappa})\Big{(}\|\rho^{\alpha}\|_{L^{\infty}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}+1+\|\eta_{x}\|_{L^{2}(0,T;H^{1}(\Gamma))}+\|\eta_{tx}\|_{L^{2}(0,T;L^{2}(\Gamma))}\Big{)}\\ \leq C(\kappa)(1+\mathcal{E}^{\frac{1}{\gamma}+\kappa})\Big{(}1+\mathcal{E}^{\frac{\alpha}{\gamma}}+\mathcal{E}^{\frac{3\kappa}{2}}\Big{)}\leq C(\kappa)\left(1+\mathcal{E}^{\frac{1+\alpha}{\gamma}+\kappa}+\mathcal{E}^{\frac{1}{\gamma}+\frac{5\kappa}{2}}\right)\]
by (21) and (22). The elliptic term satisfies
\[\left|\int_{Q_{T}^{\eta}}\mathbb{S}(\nabla\mathbf{u}):\nabla\boldsymbol{\varphi}_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq\|\mathbb{S}(\nabla\mathbf{u})\|_{L^{2}(0,T;L^{2}(\Omega^{\eta}(t)))}\|\nabla\boldsymbol{\varphi}_{h}\|_{L^{2}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}\\ \leq C(\kappa)\left(1+\mathcal{E}^{\frac{\kappa}{2}}\right)\left(\left\|\nabla\mathcal{B}_{\Omega}\left[\rho^{\alpha}-\int_{\Omega}\rho^{\alpha}\,\mathrm{d}\mathbf{y}\right]\right\|_{L^{2}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}+\left\|\nabla\phi_{h}\right\|_{L^{2}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}m_{0}^{\alpha}\right)\\ \leq C(\kappa)\left(1+\mathcal{E}^{\frac{\kappa}{2}}\right)\left(\|\rho^{\alpha}\|_{L^{\infty}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}+1+\|\eta_{x}\|_{L^{2}(0,T;L^{\frac{\gamma}{\alpha}}(\Gamma))}\right)\\ \leq C(\kappa)\left(1+\mathcal{E}^{\frac{\kappa}{2}}\right)\left(1+\|\rho^{\alpha}\|_{L^{\infty}(0,T;L^{\frac{\gamma}{\alpha}}(\Omega^{\eta}(t)))}+\|\eta_{xx}\|_{L^{2}(0,T;L^{2}(\Gamma))}\right)\leq C(\kappa)(1+\mathcal{E}^{\frac{\alpha}{\gamma}+\frac{\kappa}{2}}+\mathcal{E}^{2\kappa}).\]
Finally,
\[\left|\int_{Q_{T}^{\eta}}\rho\mathbf{F}\cdot\boldsymbol{\varphi}_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C\|\rho\|_{L^{\infty}(0,T;L^{1}(\Omega^{\eta}(t)))}\|\mathbf{F}\|_{L^{2}(0,T;L^{\infty}(\Omega^{\eta}(t)))}\|\boldsymbol{\varphi}_{h}\|_{L^{\infty}(Q_{T}^{\eta})}\leq Cm_{0}^{1+\alpha}\leq C.\]
We observe that, due to (15) and (25), the largest power of \(\mathcal{E}\) in all of the above estimates is \(\mathcal{E}^{1+\frac{3\kappa}{2}}\). We combine these estimates to get
\[\int_{0}^{T}\int_{\{\eta+h<z<\eta+2H-h\}}\rho^{\gamma+\alpha}\,\mathrm{d}\mathbf{y}\mathrm{d}t\leq\int_{Q_{T}^{\eta}}\rho^{\gamma+\alpha}\phi_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\leq C(\kappa)\left(1+\mathcal{E}^{1+\frac{3\kappa}{2}}\right), \tag{30}\]
which then gives us by the interpolation of Lebesgue spaces
\[\left(\int_{0}^{T}\int_{\{\eta+h<z<\eta+2H-h\}}\rho^{\gamma}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right)^{\frac{1}{\gamma}}\\ \leq\left(\int_{0}^{T}\int_{\{\eta+h<z<\eta+2H-h\}}\rho^{\gamma+\alpha}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right)^{\frac{\theta}{\gamma+\alpha}}\left(\int_{0}^{T}\int_{\{\eta+h<z<\eta+2H-h\}}\rho\,\mathrm{d}\mathbf{y}\mathrm{d}t\right)^{1-\theta}\\ \leq C(\kappa)(1+\mathcal{E}^{1+\frac{3\kappa}{2}})^{\frac{\theta}{\gamma+\alpha}}m_{0}^{1-\theta},\]
where
\[\theta=\frac{(\gamma-1)(\gamma+\alpha)}{(\gamma+\alpha-1)\gamma}.\]
The choice of \(\kappa\) and \(\alpha\) which satisfy (15) and (25) ensures that
\[\left(1+\frac{3\kappa}{2}\right)\frac{\gamma\theta}{\gamma+\alpha}=\left(1+ \frac{3\kappa}{2}\right)\frac{\gamma-1}{\gamma+\alpha-1}<1. \tag{31}\]
We define
\[\kappa^{\prime}:=1-\left(1+\frac{3\kappa}{2}\right)\frac{\gamma-1}{\gamma+ \alpha-1}\]
which yields
\[\int_{0}^{T}\int_{\{\eta+h<z<\eta+2H-h\}}\rho^{\gamma}\,\mathrm{d}\mathbf{y} \mathrm{d}t\leq C(\kappa)\left(1+\mathcal{E}^{1-\kappa^{\prime}}\right). \tag{32}\]
Next, we deal with the near boundary estimates. Recall that we have fixed \(0<h<\frac{H}{2}\). This time we define
\[\varphi_{h}(t,x,z):=\begin{cases}z-\eta(t,x),&\text{for }\eta(t,x)<z<\eta(t,x)+h, \\ -\frac{h}{H-h}(z-(\eta(t,x)+h))+h,&\text{for }\eta(t,x)+h<z<\eta(t,x)+2H-h,\\ z-(\eta(t,x)+2H),&\text{for }\eta(t,x)+2H-h<z<\eta(t,x)+2H.\end{cases} \tag{33}\]
Note that for fixed \((t,x)\), \(\varphi_{h}(t,x,z)\) is piecewise linear in the \(z\) variable with slope equal to \(1\) near the boundary of the domain. We choose \((\boldsymbol{\varphi},\psi)=(\varphi_{h}\mathbf{e}_{2},0)\) as test functions in (10) to obtain
\[\int_{0}^{T}\int_{\{\eta<z<\eta+h\}\cup\{\eta+2H-h<z<\eta+2H\}}\rho^{\gamma}\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ =\frac{h}{H-h}\int_{0}^{T}\int_{\{\eta+h<z<\eta+2H-h\}}\rho^{\gamma}\,\mathrm{d}\mathbf{y}\mathrm{d}t-\int_{Q_{T}^{\eta}}\rho\mathbf{u}\cdot\partial_{t}(\varphi_{h}\mathbf{e}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ -\int_{Q_{T}^{\eta}}\rho\mathbf{u}\otimes\mathbf{u}:\nabla(\varphi_{h}\mathbf{e}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}^{\eta}}\mathbb{S}(\nabla\mathbf{u}):\nabla(\varphi_{h}\mathbf{e}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t-\int_{Q_{T}^{\eta}}\rho\mathbf{F}\cdot(\varphi_{h}\mathbf{e}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t. \tag{34}\]
We use (32) to bound the first term on the right-hand side. In order to bound the remaining terms, we use estimates similar to those for the interior. In fact, the estimates are now simpler, as there are no terms with the Bogovskii operator and the derivatives act directly on the function \(\varphi_{h}\) and consequently on \(\eta\). Therefore we obtain
\[\int_{0}^{T}\int_{\{\eta<z<\eta+h\}\cup\{\eta+2H-h<z<\eta+2H\}} \rho^{\gamma}\,\mathrm{d}\mathbf{y}\mathrm{d}t\leq C(\kappa)\left(1+\mathcal{E }^{1-\kappa^{\prime}}\right)+C(\kappa)\left(1+\mathcal{E}^{\frac{1}{\gamma}+ \frac{\alpha}{\gamma}+\kappa}\right)\\ +C(\kappa)\left(1+\mathcal{E}^{\frac{1+\alpha}{\gamma}+\kappa}+ \mathcal{E}^{\frac{1}{\gamma}+\frac{5\kappa}{2}}\right)+C(\kappa)\left(1+ \mathcal{E}^{\frac{\alpha}{\gamma}+\frac{\alpha}{2}}+\mathcal{E}^{2\kappa} \right)\leq C(\kappa)\left(1+\mathcal{E}^{1-\kappa^{\prime\prime}}\right), \tag{35}\]
where
\[\kappa^{\prime\prime}:=\min\left\{\kappa^{\prime},1-\kappa-\frac{1+\alpha}{ \gamma},1-\frac{1}{\gamma}-\frac{5\kappa}{2}\right\}. \tag{36}\]
The conditions (15) and (25) ensure that \(\kappa^{\prime\prime}>0\). We sum up (32) and (35) and we go back to \(\Omega\) to finally deduce
\[\int_{Q_{T}}\rho^{\gamma}\,\mathrm{d}\mathbf{y}\mathrm{d}t\leq C(\kappa)\left(1 +\mathcal{E}^{1-\kappa^{\prime\prime}}\right),\]
where \(\kappa\) and \(\kappa^{\prime\prime}\) are related through (36).
### Part V - closing the estimates
We notice that for \(q=\frac{2\gamma}{\gamma-1}\)
\[\int_{Q_{T}}\rho|\mathbf{u}|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t\leq C\|\rho \|_{L^{\infty}(0,T;L^{\gamma}(\Omega))}\|\mathbf{u}\|_{L^{2}(0,T;L^{q}( \Omega))}^{2}\leq C(\kappa)\left(1+\mathcal{E}^{\frac{1}{\gamma}+\kappa} \right).\]
Since \(\frac{1}{\gamma}+\kappa<1-\kappa^{\prime\prime}\) we finally obtain by previous estimates
\[\int_{0}^{T}E(s)\,\mathrm{d}s\leq C(\kappa)\left(1+\mathcal{E}^{1-\kappa^{ \prime\prime}}\right)\leq C(\delta_{0})+\delta_{0}\mathcal{E}\]
for any \(\delta_{0}>0\).
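In the last step we used the elementary consequence of the Young inequality: for every \(\delta_{0}>0\) and all \(x\geq 0\),
\[x^{1-\kappa^{\prime\prime}}\leq\delta_{0}x+C(\delta_{0}),\qquad C(\delta_{0})=\kappa^{\prime\prime}\left(\frac{1-\kappa^{\prime\prime}}{\delta_{0}}\right)^{\frac{1-\kappa^{\prime\prime}}{\kappa^{\prime\prime}}},\]
as one sees by maximizing the function \(x\mapsto x^{1-\kappa^{\prime\prime}}-\delta_{0}x\). This together with (19) yields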
\[\mathcal{E}\leq C_{0}\left(1+\int_{0}^{T}E(s)\,\mathrm{d}s\right)\leq C_{0}(1 +\delta_{0}\mathcal{E}+C(\delta_{0}))\]
and, consequently,
\[\mathcal{E}\leq C,\]
where \(C\) depends on \(f,\mathbf{F},m_{0},L,H\), \(h\) and the choice of \(\kappa\). However, we can choose \(h=\frac{H}{4}\), and the choice of \(\kappa\) depends only on the value of \(\gamma\), so the constant \(C\) in the end depends only on \(f,\mathbf{F},m_{0},\gamma,L\) and \(H\), i.e., on the given data and parameters of the problem.
## 5 Approximate decoupled problem
We introduce the orthogonal basis of \(L^{2}_{\#}(0,T)\) denoted by \(\{\tau_{i}(t)\}_{i\in\mathbb{N}\cup\{0\}}\), more precisely we set for \(k\in\mathbb{N}\cup\{0\}\)
\[\tau_{2k}(t)=\cos\left(\frac{2\pi kt}{T}\right),\qquad\tau_{2k+1}(t)=\sin \left(\frac{2\pi kt}{T}\right).\]
We denote by \(\{s_{i}(x)\}_{i\in\mathbb{N}}\) the orthogonal basis of \(H^{1}_{\#,0}(\Gamma)\cap H^{2}_{\#}(\Gamma)\) and by \(\{\mathbf{f}_{i}(x,z)\}_{i\in\mathbb{N}}\) the orthogonal basis of \(H^{1}_{\#}(\Omega)\). We define finite-dimensional spaces
\[\mathcal{P}^{str}_{n,m} :=\mathrm{span}\{s_{i}(x)\tau_{j}(t)\}_{1\leq i\leq n,0\leq j\leq 2 m},\] \[\mathcal{P}^{fl}_{n,m} :=\mathrm{span}\{\mathbf{f}_{i}(x,z)\tau_{j}(t)\}_{1\leq i\leq n, 0\leq j\leq 2m}.\]
We fix \(m,n\in\mathbb{N}\), we introduce parameters \(\varepsilon>0\) and \(\delta>0\), and we fix \(a\geq 5\). Here, \(\varepsilon\) denotes the artificial diffusion in the continuity equation, but it also serves as the penalization parameter between the trace of the fluid velocity field on the viscoelastic beam and the velocity of the beam itself. The parameter \(\delta\) denotes the coefficient of the artificial pressure \(\delta\rho^{a}\) in the momentum equation, and it appears in other artificial terms which help us to obtain good estimates at the beginning of the proof but which have to disappear from the equations later.
We are ready to present the approximate decoupled and penalized problem which is the starting point of our existence proof. We fix \(\beta\in(0,1)\); our goal is to find \(\rho\in C^{0,\beta}_{\#}(0,T;C^{2,\beta}_{\#}(\Omega))\cap C^{1,\beta}_{\#}( 0,T;C^{0,\beta}_{\#}(\Omega))\), \(\mathbf{u}\in\mathcal{P}^{fl}_{n,m}\) and \(\eta\in\mathcal{P}^{str}_{n,m}\) which satisfy the following identities.
1. The **structure momentum equation** \[\int_{\Gamma_{T}}\eta_{t}\psi_{t}\,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}}\eta_{ xx}\psi_{xx}\,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}}\eta_{tx}\psi_{x}\, \mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}}\frac{\eta_{t}-\mathbf{v}\cdot\mathbf{e _{2}}}{\varepsilon}\psi\,\mathrm{d}x\mathrm{d}t=-\int_{\Gamma_{T}}f\psi\, \mathrm{d}x\mathrm{d}t\] (37) holds for all \(\psi\in\mathcal{P}^{str}_{n,m}\), where \(\mathbf{v}=\gamma_{|\Gamma^{\eta}}\mathbf{u}\).
2. The **damped continuity equation** \[\partial_{t}\rho+\nabla\cdot(\rho\mathbf{u})-\varepsilon\Delta\rho+\varepsilon \rho=\varepsilon M,\] (38) complemented with periodic boundary conditions for \(\rho\) holds in the classical sense in \(\Omega\), where \(M=\frac{m_{0}}{|\Omega|}\).
3. The **fluid momentum equation** \[\delta\int_{Q_{T}}\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi} \,\mathrm{d}y\mathrm{d}t+\int_{Q_{T}}\rho\mathbf{u}\cdot\partial_{t} \boldsymbol{\varphi}\,\mathrm{d}y\mathrm{d}t+\int_{Q_{T}}\rho\mathbf{u}\otimes \mathbf{u}:\nabla\boldsymbol{\varphi}\,\mathrm{d}y\mathrm{d}t+\int_{Q_{T}}( \rho^{\gamma}+\delta\rho^{a})\nabla\cdot\boldsymbol{\varphi}\,\mathrm{d}y \mathrm{d}t\\ -\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\boldsymbol{ \varphi}\,\mathrm{d}y\mathrm{d}t-\delta\int_{Q_{T}}|\mathbf{u}|^{2}\mathbf{u} \cdot\boldsymbol{\varphi}\,\mathrm{d}y\mathrm{d}t-\varepsilon\int_{Q_{T}} \nabla\rho\otimes\boldsymbol{\varphi}:\nabla\mathbf{u}\,\mathrm{d}y\mathrm{d}t \\ +\frac{\varepsilon}{2}\int_{Q_{T}}(M-\rho)\mathbf{u}\cdot \boldsymbol{\varphi}\,\mathrm{d}y\mathrm{d}t-\int_{\Gamma_{T}}\frac{\mathbf{v} -\eta_{t}\mathbf{e_{2}}}{\varepsilon}\cdot\boldsymbol{\psi}\,\mathrm{d}x \mathrm{d}t=-\int_{Q_{T}}\rho\mathbf{F}_{\delta}\cdot\boldsymbol{\varphi}\, \mathrm{d}y\mathrm{d}t,\] (39) holds for all \(\boldsymbol{\varphi}\in\mathcal{P}^{fl}_{n,m}\), where \(\boldsymbol{\psi}=\gamma_{|\Gamma^{\eta}}\boldsymbol{\varphi}\) and \(\mathbf{v}=\gamma_{|\Gamma^{\eta}}\mathbf{u}\). Here \(\mathbf{F}_{\delta}\) denotes a smooth approximation of \(\mathbf{F}\).
### Uniform estimates
We derive the uniform estimates for solutions to the approximate problem (37)-(39). We choose \(\psi=\eta_{t}\) in (37), multiply (38) by \(\frac{\gamma}{\gamma-1}\rho^{\gamma-1}\), then by \(\frac{\delta a}{a-1}\rho^{a-1}\) and \(\frac{1}{2}|\mathbf{u}|^{2}\), and finally choose \(\boldsymbol{\varphi}=\mathbf{u}\) in (39); summing up these identities we obtain
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\, \mathrm{d}y\mathrm{d}t+\delta\int_{Q_{T}}|\mathbf{u}|^{4}\,\mathrm{d}y\mathrm{d }t+\int_{\Gamma_{T}}|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t+\varepsilon\gamma \int_{Q_{T}}\rho^{\gamma-2}|\nabla\rho|^{2}\,\mathrm{d}y\mathrm{d}t\\ +\frac{\varepsilon\gamma}{\gamma-1}\int_{Q_{T}}\rho^{\gamma}\, \mathrm{d}y\mathrm{d}t+\varepsilon\delta a\int_{Q_{T}}\rho^{a-2}|\nabla\rho|^{ 2}\,\mathrm{d}y\mathrm{d}t+\frac{\varepsilon\delta a}{a-1}\int_{Q_{T}}\rho^{a }\,\mathrm{d}y\mathrm{d}t+\frac{1}{\varepsilon}\int_{\Gamma_{T}}|\mathbf{v}- \eta_{t}\mathbf{e_{2}}|^{2}\,\mathrm{d}x\mathrm{d}t\\ =\int_{\Gamma_{T}}f\eta_{t}\,\mathrm{d}x\mathrm{d}t+\int_{Q_{T}} \rho\mathbf{u}\cdot\mathbf{F}_{\delta}\,\mathrm{d}y\mathrm{d}t+\varepsilon \int_{Q_{T}}M\frac{\gamma}{\gamma-1}\rho^{\gamma-1}\,\mathrm{d}y\mathrm{d}t+ \varepsilon\delta\int_{Q_{T}}M\frac{a}{a-1}\rho^{a-1}\,\mathrm{d}y\mathrm{d}t\\ \leq\|f\|_{L^{2}(\Gamma_{T})}\|\eta_{t}\|_{L^{2}(\Gamma_{T})}+C \|\rho\|_{L^{a}(Q_{T})}\|\mathbf{u}\|_{L^{4}(Q_{T})}\|\mathbf{F}_{\delta}\|_{L^ {\infty}(Q_{T})}\\ +\frac{\varepsilon\gamma}{4(\gamma-1)}\|\rho\|_{L^{\gamma}(Q_{T})} ^{\gamma}+\frac{\varepsilon\delta a}{4(a-1)}\|\rho\|_{L^{a}(Q_{T})}^{a}+C( \varepsilon,\delta)\\ \leq C(\varepsilon,\delta)+\frac{1}{2}\int_{\Gamma_{T}}|\eta_{tx}| ^{2}\,\mathrm{d}x\mathrm{d}t+\frac{\varepsilon\gamma}{4(\gamma-1)}\|\rho\|_{L^{ \gamma}(Q_{T})}^{\gamma}+\frac{\varepsilon\delta a}{2(a-1)}\|\rho\|_{L^{a}(Q_{ T})}^{a}+\frac{\delta}{2}\|\mathbf{u}\|_{L^{4}(Q_{T})}^{4}, \tag{40}\]
where we used
\[\|\rho\|_{L^{a}(Q_{T})}\|\mathbf{u}\|_{L^{4}(Q_{T})}\|\mathbf{F}_{ \delta}\|_{L^{\infty}(Q_{T})}\leq C\|\rho\|_{L^{a}(Q_{T})}\|\mathbf{u}\|_{L^{4} (Q_{T})}\\ \leq\frac{\varepsilon\delta a}{4(a-1)}\|\rho\|_{L^{a}(Q_{T})}^{a}+ \frac{\delta}{2}\|\mathbf{u}\|_{L^{4}(Q_{T})}^{4}+C(\varepsilon,\delta)\]
which follows from the Young inequality.
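In more detail, since \(a\geq 5\) we have \(\frac{a}{a-1}\leq\frac{5}{4}<4\), so two successive applications of the Young inequality give, for all \(x,y\geq 0\),
\[xy\leq\frac{\varepsilon\delta a}{4(a-1)}x^{a}+C(\varepsilon,\delta)\,y^{\frac{a}{a-1}}\qquad\text{and}\qquad C(\varepsilon,\delta)\,y^{\frac{a}{a-1}}\leq\frac{\delta}{2}\,y^{4}+C^{\prime}(\varepsilon,\delta).\]
Some terms on the right-hand side of (40) can now be absorbed into the left-hand side and thus we deduce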
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\, \mathrm{d}y\mathrm{d}t+\delta\int_{Q_{T}}|\mathbf{u}|^{4}\,\mathrm{d}y \mathrm{d}t+\int_{\Gamma_{T}}|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t+\varepsilon \gamma\int_{Q_{T}}\rho^{\gamma-2}|\nabla\rho|^{2}\,\mathrm{d}y\mathrm{d}t+\frac{ \varepsilon\gamma}{\gamma-1}\int_{Q_{T}}\rho^{\gamma}\,\mathrm{d}y\mathrm{d}t\\ +\varepsilon\delta a\int_{Q_{T}}\rho^{a-2}|\nabla\rho|^{2}\,\mathrm{ d}y\mathrm{d}t+\frac{\varepsilon\delta a}{a-1}\int_{Q_{T}}\rho^{a}\,\mathrm{d}y \mathrm{d}t+\frac{1}{\varepsilon}\int_{\Gamma_{T}}|\mathbf{v}-\eta_{t}\mathbf{e _{2}}|^{2}\,\mathrm{d}x\mathrm{d}t\leq C(\varepsilon,\delta).\]
Next, we integrate (38) over \(\Omega\) to deduce
\[\frac{d}{dt}\int_{\Omega}\rho(t)\,\mathrm{d}\mathbf{y}+\varepsilon\int_{\Omega} \rho(t)\,\mathrm{d}\mathbf{y}=\varepsilon m_{0},\]
whose unique time-periodic solution is
\[\int_{\Omega}\rho(t)\,\mathrm{d}\mathbf{y}=m_{0}.\]
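Indeed, writing \(m(t):=\int_{\Omega}\rho(t)\,\mathrm{d}\mathbf{y}\), the linear equation \(m^{\prime}+\varepsilon m=\varepsilon m_{0}\) has the general solution
\[m(t)=m_{0}+(m(0)-m_{0})e^{-\varepsilon t},\]
and the periodicity requirement \(m(T)=m(0)\) forces \((m(0)-m_{0})(e^{-\varepsilon T}-1)=0\), i.e. \(m(0)=m_{0}\).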
Further estimates of the density are deduced from the \(L^{p}-L^{q}\) theory for parabolic equations applied to the continuity equation (38). To this end, we estimate the term
\[\nabla\cdot(\rho\mathbf{u})=\rho\nabla\cdot\mathbf{u}+\mathbf{u}\cdot\nabla\rho\]
in \(L^{p}(0,T;L^{q}(\Omega))\) using the information we already have. The term \(\rho\nabla\cdot\mathbf{u}\) is easy, as we have bounds for \(\rho\in L^{a}(Q_{T})\) and \(\nabla\mathbf{u}\in L^{2}(Q_{T})\). For the other term we use the bounds \(\mathbf{u}\in L^{4}(Q_{T})\) and \(\nabla\rho\in L^{2}(Q_{T})\), where the latter follows from a straightforward manipulation with the continuity equation.
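For the reader's convenience, the corresponding Hölder count reads
\[\|\mathbf{u}\cdot\nabla\rho\|_{L^{\frac{4}{3}}(Q_{T})}\leq\|\mathbf{u}\|_{L^{4}(Q_{T})}\|\nabla\rho\|_{L^{2}(Q_{T})},\qquad\|\rho\,\nabla\cdot\mathbf{u}\|_{L^{\frac{2a}{a+2}}(Q_{T})}\leq\|\rho\|_{L^{a}(Q_{T})}\|\nabla\mathbf{u}\|_{L^{2}(Q_{T})},\]
and \(\frac{2a}{a+2}\geq\frac{10}{7}>\frac{4}{3}\) for \(a\geq 5\), so that \(\nabla\cdot(\rho\mathbf{u})\in L^{\frac{4}{3}}(Q_{T})\). Hence, we end up with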
\[\|\partial_{t}\rho\|_{L^{p}(0,T;L^{q}(\Omega))}+\|\Delta\rho\|_{L^{p}(0,T;L^{ q}(\Omega))}\leq C(\varepsilon,\delta)\]
for some \(p,q\in(1,2)\); more specifically, one can take \(p=q=\frac{4}{3}\). Finally, we choose \(\psi=\eta\) in (37) to obtain
\[\int_{\Gamma_{T}}|\eta_{xx}|^{2}\,\mathrm{d}x\mathrm{d}t=\frac{1}{\varepsilon }\int_{\Gamma_{T}}\mathbf{v}\cdot\mathbf{e}_{2}\eta\,\mathrm{d}x\mathrm{d}t+ \int_{\Gamma_{T}}|\eta_{t}|^{2}\,\mathrm{d}x\mathrm{d}t+\int_{\Gamma_{T}}f\eta \,\mathrm{d}x\mathrm{d}t\leq C(\varepsilon,\delta)+\frac{1}{2}\int_{\Gamma_{T }}|\eta_{xx}|^{2}\,\mathrm{d}x\mathrm{d}t.\]
To sum up, we have the following set of estimates independent of \(m,n\in\mathbb{N}\).
\[\|\eta_{xx}\|_{L^{2}(\Gamma_{T})} \leq C(\varepsilon,\delta),\] \[\|\mathbf{u}\|_{L^{4}(Q_{T})} \leq C(\varepsilon,\delta),\] \[\|\mathbf{u}\|_{L^{2}(0,T;H^{1}(\Omega))} \leq C(\varepsilon,\delta),\] \[\|\mathbf{u}\|_{L^{2}(0,T;L^{p}(\Omega))} \leq C(\varepsilon,\delta,p),\quad\text{for any }p\in(1,\infty), \tag{41}\] \[\|\rho\|_{L^{a}(Q_{T})} \leq C(\varepsilon,\delta),\] \[\|\partial_{t}\rho\|_{L^{p}(0,T;L^{q}(\Omega))} +\|\Delta\rho\|_{L^{p}(0,T;L^{q}(\Omega))} \leq C(\varepsilon,\delta,p,q),\quad\text{for some }p,q\in(1,2),\] \[\|\eta\|_{L^{2}(0,T;H^{2}(\Gamma))} \leq C(\varepsilon,\delta).\]
### Solution to the approximate problem
**Lemma 5.1**.: _Assume \(f\in L^{2}_{\#}(\Gamma_{T})\), \(\tilde{\mathbf{u}}\in\mathcal{P}^{fl}_{n,m}\), and \(\tilde{\eta}\in\mathcal{P}^{str}_{n,m}\) are given and let \(\tilde{\mathbf{v}}=\gamma_{|\Gamma^{\tilde{\eta}}}\tilde{\mathbf{u}}\) (or equivalently \(\tilde{\mathbf{v}}(t,x)=\tilde{\mathbf{u}}(t,x,\tilde{\eta}(t,x))\)). Then, the following problem_
\[\int_{\Gamma_{T}}\eta_{tt}\psi\,\mathrm{d}x\mathrm{d}t+\int_{\Gamma_{T}}\eta _{xx}\psi_{xx}\,\mathrm{d}x\mathrm{d}t+\int_{\Gamma_{T}}\eta_{tx}\psi_{x}\, \mathrm{d}x\mathrm{d}t+\int_{\Gamma_{T}}\frac{\eta_{t}-\tilde{\mathbf{v}} \cdot\mathbf{e}_{2}}{\varepsilon}\psi\,\mathrm{d}x\mathrm{d}t=\int_{\Gamma_{T }}f\psi\,\mathrm{d}x\mathrm{d}t \tag{42}\]
_for all \(\psi\in\mathcal{P}^{str}_{n,m}\) and all \(t\in(0,T)\) has a unique solution \(\eta\in\mathcal{P}^{str}_{n,m}\). Moreover, the mapping \((\tilde{\mathbf{u}},\tilde{\eta})\mapsto\eta\) is compact from \(\mathcal{P}^{fl}_{n,m}\times\mathcal{P}^{str}_{n,m}\) to \(\mathcal{P}^{str}_{n,m}\)._
Proof.: The idea is to solve (42) for \(\eta_{t}\) instead of \(\eta\). Note that, due to the time-periodicity of \(\eta\), the function \(\eta_{t}\) must be mean-value free in time and therefore cannot contain the constant-in-time function from the time basis. Therefore, we define \(S_{0}=\mathcal{P}^{str}_{n,0}=\mathrm{span}\{s_{i}(x)\}_{1\leq i\leq n}\) and \(S:=\left(\mathrm{span}\{s_{i}(x)\tau_{j}(t)\}_{1\leq i\leq n,1\leq j\leq 2m},\|\cdot\|_{L^{2}(\Gamma_{T})}\right)\), i.e. the complement of \(S_{0}\) in \(\mathcal{P}^{str}_{n,m}\), and the mappings \(B:S\times S\to\mathbb{R}\) and \(a:S\to\mathbb{R}\) as
\[B(u,v):=\int_{\Gamma_{T}}u_{t}v\,\mathrm{d}x\mathrm{d}t+\int_{ \Gamma_{T}}U_{xx}v_{xx}\,\mathrm{d}x\mathrm{d}t+\int_{\Gamma_{T}}u_{x}v_{x}\, \mathrm{d}x\mathrm{d}t+\int_{\Gamma_{T}}\frac{u}{\varepsilon}v\,\mathrm{d}x \mathrm{d}t,\] \[a(v)=\int_{\Gamma_{T}}fv\,\mathrm{d}x\mathrm{d}t+\int_{\Gamma_{T }}\frac{\tilde{\mathbf{v}}\cdot\mathbf{e}_{2}}{\varepsilon}v\,\mathrm{d}x \mathrm{d}t\]
where \(U(t,x):=\int_{0}^{t}u(s,x)\,\mathrm{d}s\). Then, our problem can be formulated as finding \(\eta_{t}=u\in S\) such that \(B(u,v)=a(v)\) for all \(v\in S\). Obviously, \(B\) is bilinear and \(a\) is bounded and linear. Moreover, by the equivalence of norms on the finite-dimensional space \(\mathcal{P}^{str}_{n,m}\), one has \(B(u,v)\leq C\|u\|_{L^{2}(\Gamma_{T})}\|v\|_{L^{2}(\Gamma_{T})}\). Finally, due to time-periodicity, one has
\[B(u,u)=||u_{x}||_{L^{2}(\Gamma_{T})}^{2}+\frac{1}{\varepsilon}||u||_{L^{2}( \Gamma_{T})}^{2}\geq C||u||_{L^{2}(\Gamma_{T})}^{2}.\]
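Indeed, the first two terms of \(B(u,u)\) vanish due to time-periodicity: since \(u_{xx}=\partial_{t}U_{xx}\) and \(U(T,\cdot)=U(0,\cdot)=0\) (recall that \(u\) is mean-value free in time),
\[\int_{\Gamma_{T}}u_{t}u\,\mathrm{d}x\mathrm{d}t=\frac{1}{2}\int_{\Gamma}\big[u^{2}\big]_{t=0}^{t=T}\,\mathrm{d}x=0,\qquad\int_{\Gamma_{T}}U_{xx}u_{xx}\,\mathrm{d}x\mathrm{d}t=\frac{1}{2}\int_{\Gamma}\big[|U_{xx}|^{2}\big]_{t=0}^{t=T}\,\mathrm{d}x=0.\]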
Therefore, the existence of \(\eta_{t}=u\in S\) follows directly from the Lax–Milgram lemma. Since \(\int_{0}^{t}\eta_{t}(s,x)\,\mathrm{d}s\) in general does not belong to the space \(S\), due to the integrals of \(\tau_{2k+1}(t)\), we find \(\eta\) in the form \(\eta(t,x)=P_{S}\left(\int_{0}^{t}\eta_{t}(s,x)\,\mathrm{d}s\right)+G(x)\), where \(P_{S}\) is the projection from \(\mathcal{P}^{str}_{n,m}\) onto the space \(S\) and \(G(x)\in S_{0}\) is a solution to the elliptic equation
\[-\int_{\Gamma_{T}}G_{xx}\psi_{xx}\,\mathrm{d}x\mathrm{d}t+\int_{\Gamma_{T}}\frac{\tilde {\mathbf{v}}\cdot\mathbf{e}_{2}}{\varepsilon}\psi\,\mathrm{d}x\mathrm{d}t=-\int_{\Gamma_ {T}}f\psi\,\mathrm{d}x\mathrm{d}t\]
for all \(\psi\in S_{0}\). The continuity of the mapping \((\tilde{\mathbf{u}},\tilde{\eta})\mapsto\eta\) is a direct consequence of the linearity of the equation, and its compactness follows since the target space is finite-dimensional.
**Lemma 5.2**.: _([19, Lemma 2]) Let \(\tilde{\mathbf{u}}\in\mathcal{P}^{fl}_{n,m}\). Then, there exists a unique solution \(\rho\) to the following problem_
\[\partial_{t}\rho+\nabla\cdot(\rho\tilde{\mathbf{u}})-\varepsilon\Delta\rho+ \varepsilon\rho=\varepsilon M.\]
_Moreover, \(\rho\in C^{\infty}_{\#}(0,T;W^{2,p}_{\#}(\Omega))\) for any \(p\in(1,\infty)\), the mapping \(\tilde{\mathbf{u}}\mapsto\rho\) is continuous and compact from \(\mathcal{P}^{fl}_{n,m}\) to \(W^{1,p}_{\#}(Q_{T})\) and \(\rho\geq 0\)._
**Lemma 5.3**.: _Let \(\tilde{\mathbf{u}}\in\mathcal{P}^{fl}_{n,m}\), \(\tilde{\eta}\in\mathcal{P}^{str}_{n,m}\) and \(\rho\in C^{\infty}_{\#}(0,T;W^{2,p}_{\#}(\Omega))\). Then, there exists a solution \(\mathbf{u}\in\mathcal{P}^{fl}_{n,m}\) of_
\[\delta\int_{Q_{T}}\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi }\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}}\rho\tilde{\mathbf{u}}\cdot \partial_{t}\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T} }\rho\tilde{\mathbf{u}}\otimes\tilde{\mathbf{u}}:\nabla\boldsymbol{\varphi} \,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}}(\rho^{\gamma}+\delta\rho^{a}) \nabla\cdot\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ -\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\boldsymbol{ \varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t-\delta\int_{Q_{T}}|\mathbf{u}|^{2} \mathbf{u}\cdot\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t- \varepsilon\int_{Q_{T}}\nabla\rho\otimes\boldsymbol{\varphi}:\nabla\tilde{ \mathbf{u}}\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ +\frac{\varepsilon}{2}\int_{Q_{T}}(M-\rho)\tilde{\mathbf{u}} \cdot\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t-\int_{\Gamma_{T}} \frac{\mathbf{v}-\tilde{\eta}_{t}\mathbf{e}_{2}}{\varepsilon}\cdot \boldsymbol{\psi}\,\mathrm{d}x\mathrm{d}t=-\int_{Q_{T}}\rho\mathbf{F}_{ \delta}\cdot\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t, \tag{43}\]
_for all \(\boldsymbol{\varphi}\in\mathcal{P}^{fl}_{n,m}\), where \(\boldsymbol{\psi}=\gamma_{|\Gamma^{\tilde{\eta}}}\boldsymbol{\varphi}\) and \(\mathbf{v}=\gamma_{|\Gamma^{\tilde{\eta}}}\mathbf{u}\). Moreover, the mapping \((\rho,\tilde{\mathbf{u}},\tilde{\eta})\mapsto\mathbf{u}\) is continuous from \(W^{1,p}_{\#}(Q_{T})\times\mathcal{P}^{fl}_{n,m}\times\mathcal{P}^{str}_{n,m}\) to \(\mathcal{P}^{fl}_{n,m}\)._
Proof.: The existence of a solution is straightforward. Indeed, (43) may be rewritten as
\[A\mathbf{u}=RHS\]
where
\[A\mathbf{u}=\mathcal{P}\left(\delta\mathbf{u}_{t}-\nabla\cdot\mathbb{S}( \nabla\mathbf{u})+\delta|\mathbf{u}|^{2}\mathbf{u}+\frac{1}{\varepsilon} \mathbf{v}\right)\]
where \(\mathcal{P}\) denotes the projection onto \(\mathcal{P}^{fl}_{n,m}\) and \(RHS\) contains all the other terms. The operator \(A\) is coercive on \(\mathcal{P}^{fl}_{n,m}\), and a classical result then yields that \(A\) is also surjective; we refer to [42, Theorem 2.6].
To prove the continuity, let \(\rho_{1},\rho_{2}\in C^{\infty}_{\#}(0,T;W^{2,p}_{\#}(\Omega))\), \(\tilde{\mathbf{u}}_{1},\tilde{\mathbf{u}}_{2}\in\mathcal{P}^{fl}_{n,m}\) and \(\tilde{\eta}_{1},\tilde{\eta}_{2}\in\mathcal{P}^{str}_{n,m}\) be given, and let \(\mathbf{u}_{1},\mathbf{u}_{2}\in\mathcal{P}^{fl}_{n,m}\) be the corresponding solutions. Denote \(\mathbf{v}_{i}=\gamma_{|\Gamma^{\tilde{\eta}_{i}}}\mathbf{u}_{i}\) for \(i=1,2\). We take the difference of the equation for \(\mathbf{u}_{1}\) tested with \(\boldsymbol{\varphi}=(\mathbf{u}_{1}-\mathbf{u}_{2})\) and the equation for \(\mathbf{u}_{2}\) tested with \(\boldsymbol{\varphi}=(\mathbf{u}_{1}-\mathbf{u}_{2})\). We emphasize that even though the test functions \(\boldsymbol{\varphi}\) in both equations are the same, the corresponding \(\boldsymbol{\psi}\) are different, as they are traces of \(\boldsymbol{\varphi}\) on the different curves given by \(\tilde{\eta}_{i}\). Since
\[\frac{1}{4}|\mathbf{u}_{1}-\mathbf{u}_{2}|^{4}\leq(|\mathbf{u}_{1}|^{2} \mathbf{u}_{1}-|\mathbf{u}_{2}|^{2}\mathbf{u}_{2})\cdot(\mathbf{u}_{1}- \mathbf{u}_{2})\]
we get
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}_{1}-\nabla\mathbf{u}_{2}): \nabla(\mathbf{u}_{1}-\mathbf{u}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{ \delta}{4}\int_{Q_{T}}|\mathbf{u}_{1}-\mathbf{u}_{2}|^{4}\,\mathrm{d}\mathbf{y} \mathrm{d}t+\frac{1}{\varepsilon}\int_{\Gamma_{T}}|\mathbf{v}_{1}-\mathbf{v}_{2 }|^{2}\,\mathrm{d}x\mathrm{d}t\\ \leq\int_{Q_{T}}(\rho_{1}\tilde{\mathbf{u}}_{1}-\rho_{2}\tilde{ \mathbf{u}}_{2})\cdot\partial_{t}(\mathbf{u}_{1}-\mathbf{u}_{2})\,\mathrm{d} \mathbf{y}\mathrm{d}t+\int_{Q_{T}}(\rho_{1}\tilde{\mathbf{u}}_{1}\otimes\tilde{ \mathbf{u}}_{1}-\rho_{2}\tilde{\mathbf{u}}_{2}\otimes\tilde{\mathbf{u}}_{2}): \nabla(\mathbf{u}_{1}-\mathbf{u}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ +\int_{Q_{T}}(\rho_{1}^{\gamma}-\rho_{2}^{\gamma}+\delta\rho_{1} ^{a}-\delta\rho_{2}^{a})\nabla\cdot(\mathbf{u}_{1}-\mathbf{u}_{2})\,\mathrm{d} \mathbf{y}\mathrm{d}t-\varepsilon\int_{Q_{T}}(\nabla\rho_{1}-\nabla\rho_{2}) \otimes(\mathbf{u}_{1}-\mathbf{u}_{2}):\nabla\tilde{\mathbf{u}}_{1}\,\mathrm{ d}\mathbf{y}\mathrm{d}t\\ +\varepsilon\int_{Q_{T}}\nabla\rho_{2}\otimes(\mathbf{u}_{1}- \mathbf{u}_{2}):\nabla(\tilde{\mathbf{u}}_{2}-\tilde{\mathbf{u}}_{1})\,\mathrm{ d}\mathbf{y}\mathrm{d}t+\frac{\varepsilon}{2}\int_{Q_{T}}M(\tilde{\mathbf{u}}_{1}- \tilde{\mathbf{u}}_{2})\cdot(\mathbf{u}_{1}-\mathbf{u}_{2})\,\mathrm{d} \mathbf{y}\mathrm{d}t\\ -\frac{\varepsilon}{2}\int_{Q_{T}}(\rho_{1}-\rho_{2})\tilde{ \mathbf{u}}_{2}\cdot(\mathbf{u}_{1}-\mathbf{u}_{2})\,\mathrm{d}\mathbf{y} \mathrm{d}t-\frac{\varepsilon}{2}\int_{Q_{T}}\rho_{1}(\tilde{\mathbf{u}}_{1}- \tilde{\mathbf{u}}_{2})\cdot(\mathbf{u}_{1}-\mathbf{u}_{2})\,\mathrm{d} \mathbf{y}\mathrm{d}t\\ +\int_{Q_{T}}(\rho_{1}-\rho_{2})\mathbf{F}_{\delta}\cdot(\mathbf{ u}_{1}-\mathbf{u}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t-\frac{1}{\varepsilon} \int_{\Gamma_{T}}(\gamma_{|\Gamma^{\tilde{\eta}_{1}}}\mathbf{u}_{1}-\gamma_{| \Gamma^{\tilde{\eta}_{2}}}\mathbf{u}_{2})\cdot(\gamma_{|\Gamma^{\tilde{\eta}_{2 }}}\mathbf{u}_{2}-\gamma_{|\Gamma^{\tilde{\eta}_{1}}}\mathbf{u}_{2})\,\mathrm{d}x \mathrm{d}t\\ -\frac{1}{\varepsilon}\int_{\Gamma_{T}}\gamma_{|\Gamma^{\tilde{ \eta}_{2}}}\mathbf{u}_{2}\cdot(\gamma_{|\Gamma^{\tilde{\eta}_{2}}}(\mathbf{u}_{2}- \mathbf{u}_{1})-\gamma_{|\Gamma^{\tilde{\eta}_{1}}}(\mathbf{u}_{2}-\mathbf{u}_{1 }))\,\mathrm{d}x\mathrm{d}t\\ +\frac{1}{\varepsilon}\int_{\Gamma_{T}}(\tilde{\eta}_{1t}-\tilde{ \eta}_{2t})\mathbf{e}_{2}\cdot\gamma_{|\Gamma^{\tilde{\eta}_{1}}}(\mathbf{u}_{1}- \mathbf{u}_{2})\,\mathrm{d}x\mathrm{d}t\\ +\frac{1}{\varepsilon}\int_{\Gamma_{T}}\tilde{\eta}_{2t}\mathbf{e} _{2}\cdot(\gamma_{|\Gamma^{\tilde{\eta}_{1}}}(\mathbf{u}_{1}-\mathbf{u}_{2})- \gamma_{|\Gamma^{\tilde{\eta}_{2}}}(\mathbf{u}_{1}-\mathbf{u}_{2}))\,\mathrm{d}x \mathrm{d}t\]
where we used that
\[\int_{\Gamma_{T}}\gamma_{|\Gamma^{\tilde{\eta}_{1}}}\mathbf{u}_{1}\cdot \gamma_{|\Gamma^{\tilde{\eta}_{1}}}(\mathbf{u}_{1}-\mathbf{u}_{2})\,\mathrm{d}x \mathrm{d}t-\int_{\Gamma_{T}}\gamma_{|\Gamma^{\tilde{\eta}_{2}}}\mathbf{u}_{2} \cdot\gamma_{|\Gamma^{\tilde{\eta}_{2}}}(\mathbf{u}_{1}-\mathbf{u}_{2})\, \mathrm{d}x\mathrm{d}t\\ =\int_{\Gamma_{T}}\underbrace{|\gamma_{|\Gamma^{\tilde{\eta}_{1}}} \mathbf{u}_{1}-\gamma_{|\Gamma^{\tilde{\eta}_{2}}}\mathbf{u}_{2}|^{2}}_{=| \mathbf{v}_{1}-\mathbf{v}_{2}|^{2}}\,\mathrm{d}x\mathrm{d}t+\int_{\Gamma_{T}}( \gamma_{|\Gamma^{\tilde{\eta}_{1}}}\mathbf{u}_{1}-\gamma_{|\Gamma^{\tilde{\eta}_ {2}}}\mathbf{u}_{2})\cdot(\gamma_{|\Gamma^{\tilde{\eta}_{2}}}\mathbf{u}_{2}- \gamma_{|\Gamma^{\tilde{\eta}_{1}}}\mathbf{u}_{2})\,\mathrm{d}x\mathrm{d}t\\ +\int_{\Gamma_{T}}\gamma_{|\Gamma^{\tilde{\eta}_{2}}}\mathbf{u}_{2} \cdot(\gamma_{|\Gamma^{\tilde{\eta}_{2}}}(\mathbf{u}_{2}-\mathbf{u}_{1})-\gamma_ {|\Gamma^{\tilde{\eta}_{1}}}(\mathbf{u}_{2}-\mathbf{u}_{1}))\,\mathrm{d}x \mathrm{d}t,\]
an identity which can be checked by expanding both sides.
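The quartic inequality employed above can be verified via the elementary identity
\[(|\mathbf{a}|^{2}\mathbf{a}-|\mathbf{b}|^{2}\mathbf{b})\cdot(\mathbf{a}-\mathbf{b})=\frac{1}{2}\left(|\mathbf{a}|^{2}+|\mathbf{b}|^{2}\right)|\mathbf{a}-\mathbf{b}|^{2}+\frac{1}{2}\left(|\mathbf{a}|^{2}-|\mathbf{b}|^{2}\right)^{2}\geq\frac{1}{4}|\mathbf{a}-\mathbf{b}|^{4},\]
where the last step uses \(|\mathbf{a}-\mathbf{b}|^{2}\leq 2(|\mathbf{a}|^{2}+|\mathbf{b}|^{2})\).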
The convective term is treated as follows
\[\int_{Q_{T}}(\rho_{1}\tilde{\mathbf{u}}_{1}\otimes\tilde{\mathbf{ u}}_{1}-\rho_{2}\tilde{\mathbf{u}}_{2}\otimes\tilde{\mathbf{u}}_{2}):\nabla(\mathbf{u}_{1}- \mathbf{u}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ \leq C\int_{Q_{T}}(|\rho_{1}-\rho_{2}|^{2}+|\tilde{\mathbf{u}}_{1}- \tilde{\mathbf{u}}_{2}|^{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t+c\int_{Q_{T}}| \nabla(\mathbf{u}_{1}-\mathbf{u}_{2})|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t\]
where \(c\) is taken small enough to absorb the corresponding term into the left-hand side using the Korn inequality, and \(C\) depends on the given functions and on \(m,n,\varepsilon,\delta\) and \(c\). The remaining terms on \(Q_{T}\) are estimated in a similar fashion. The most involved boundary term is the following
\[\frac{1}{\varepsilon}\int_{\Gamma_{T}}\gamma_{|\Gamma^{\tilde{\eta}_{2}}} \mathbf{u}_{2}\cdot(\gamma_{|\Gamma^{\tilde{\eta}_{2}}}(\mathbf{u}_{2}-\mathbf{u} _{1})-\gamma_{|\Gamma^{\tilde{\eta}_{1}}}(\mathbf{u}_{2}-\mathbf{u}_{1}))\, \mathrm{d}x\mathrm{d}t\leq C\int_{\Gamma_{T}}|\tilde{\eta}_{1}-\tilde{\eta}_{2}| \,\|\partial_{z}(\mathbf{u}_{2}-\mathbf{u}_{1})(t)\|_{L^{\infty}(\Omega)}\, \mathrm{d}x\mathrm{d}t\\ \leq C\int_{\Gamma_{T}}|\tilde{\eta}_{1}-\tilde{\eta}_{2}|^{2}\, \mathrm{d}x\mathrm{d}t+c\int_{Q_{T}}|\nabla(\mathbf{u}_{1}-\mathbf{u}_{2})|^{2} \,\mathrm{d}\mathbf{y}\mathrm{d}t,\]
where the first inequality follows from the mean value theorem and the second from the Young inequality together with the equivalence of norms on the finite-dimensional space \(\mathcal{P}^{fl}_{n,m}\). We estimate the other terms similarly and we end up with
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}_{1}-\nabla\mathbf{u}_{2}): \nabla(\mathbf{u}_{1}-\mathbf{u}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t+\delta \int_{Q_{T}}|\mathbf{u}_{1}-\mathbf{u}_{2}|^{4}\,\mathrm{d}\mathbf{y}\mathrm{d}t +\frac{1}{\varepsilon}\int_{\Gamma_{T}}|\mathbf{v}_{1}-\mathbf{v}_{2}|^{2}\, \mathrm{d}x\mathrm{d}t\\ \leq C\int_{Q_{T}}|\nabla\rho_{1}-\nabla\rho_{2}|^{2}\,\mathrm{d} \mathbf{y}\mathrm{d}t+C\int_{Q_{T}}|\rho_{1}-\rho_{2}|^{2}\,\mathrm{d}\mathbf{y }\mathrm{d}t+C\int_{Q_{T}}|\tilde{\mathbf{u}}_{1}-\tilde{\mathbf{u}}_{2}|^{2} \,\mathrm{d}\mathbf{y}\mathrm{d}t\\ +C\int_{\Gamma_{T}}|\tilde{\eta}_{1}-\tilde{\eta}_{2}|^{2}\, \mathrm{d}x\mathrm{d}t+C\int_{\Gamma_{T}}|\tilde{\eta}_{1t}-\tilde{\eta}_{2t}| ^{2}\,\mathrm{d}x\mathrm{d}t,\]
so the solution mapping is continuous.
**Lemma 5.4**.: _There exists a solution \((\rho,\mathbf{u},\eta)\) to the approximate problem (37)-(39)._

Proof.: By Lemmas 5.1-5.3, the composition of the corresponding solution operators defines a mapping \((\tilde{\mathbf{u}},\tilde{\eta})\mapsto(\mathbf{u},\eta)\) which is continuous and compact from \(\mathcal{P}^{fl}_{n,m}\times\mathcal{P}^{str}_{n,m}\) into itself. The uniform estimates of Section 5.1 provide a bounded, closed and convex set invariant under this mapping, and a version of the Schauder fixed-point theorem then yields a fixed point, which is a solution to (37)-(39).
## 6 Time basis limit \(m\to\infty\)
Denote the approximate solution obtained in the previous section by \((\rho_{m},\mathbf{u}_{m},\eta_{m})\). One obtains from (39) and (41) that \(\partial_{t}\mathbf{u}_{m}\) is bounded by a constant independent of \(m\) in \(L^{1}(0,T;\mathrm{span}\{\mathbf{f}_{i}\}_{1\leq i\leq n})\). This means that \(\mathbf{u}_{m}\) is bounded in \(L^{\infty}(0,T;\mathrm{span}\{\mathbf{f}_{i}\}_{1\leq i\leq n})\), so one can again estimate \(\partial_{t}\mathbf{u}_{m}\) in a better space \(L^{p}_{\#}(0,T;\mathrm{span}\{\mathbf{f}_{i}\}_{1\leq i\leq n})\) for any \(p<\infty\). Similarly, the equation (37) implies \(\partial_{tt}\eta_{m}\in L^{p}_{\#}(0,T;\mathrm{span}\{s_{i}\}_{1\leq i\leq n})\) for any \(p<\infty\). This together with (41) allows us to pass to the limit \(m\to\infty\) in most terms in the system (37)-(39). The following lemma allows us to pass to the limit in the trace terms.
**Lemma 6.1**.: _Let \(\mathbf{u}_{m}\rightharpoonup\mathbf{u}\) weakly in \(L^{2}_{\#}(0,T;H^{1}_{\#}(\Omega))\) and let \(\eta_{m}\rightharpoonup\eta\) weakly in \(L^{\infty}_{\#}(0,T;H^{2}_{\#}(\Gamma))\) and in \(H^{1}_{\#}(0,T;H^{1}_{\#,0}(\Gamma))\). Then_
\[\int_{\Gamma_{T}}\mathbf{u}_{m}(t,x,\eta_{m}(t,x))\cdot\boldsymbol{\psi}(t,x) \,\mathrm{d}x\mathrm{d}t\to\int_{\Gamma_{T}}\mathbf{u}(t,x,\eta(t,x))\cdot \boldsymbol{\psi}(t,x)\,\mathrm{d}x\mathrm{d}t\]
_for all \(\boldsymbol{\psi}\in C^{\infty}_{\#,0}(\Gamma_{T})\)._
Proof.: Denote \(\tilde{\mathbf{u}}_{m}(t,x,z)=\mathbf{u}_{m}(t,x,z+\eta_{m}(t,x))\). The Sobolev embedding theorem implies that \((\eta_{m})_{x}\) is bounded in \(L^{\infty}(\Gamma_{T})\) and therefore \(\tilde{\mathbf{u}}_{m}\) is bounded in \(L^{2}_{\#}(0,T;H^{1}_{\#}(\Omega))\). We extract a subsequence converging to some \(\mathbf{U}\) weakly in \(L^{2}_{\#}(0,T;H^{1}_{\#}(\Omega))\). Our aim is to identify the limit as \(\mathbf{U}(t,x,z)=\tilde{\mathbf{u}}(t,x,z):=\mathbf{u}(t,x,z+\eta(t,x))\). Denote \(\mathbf{w}_{m}:=\mathbf{u}_{m}-\mathbf{u}\). We have
\[(\tilde{\mathbf{u}}_{m}-\tilde{\mathbf{u}})(t,x,z)=\mathbf{w}_{m}(t,x,z+\eta _{m}(t,x))+\mathbf{u}(t,x,z+\eta_{m}(t,x))-\mathbf{u}(t,x,z+\eta(t,x))\]
Fix \(\boldsymbol{\varphi}\in C^{\infty}_{\#}(Q_{T})\). Then
\[\int_{Q_{T}}\mathbf{w}_{m}(t,x,z+\eta_{m}(t,x))\cdot\boldsymbol{\varphi}(t,x,z)\,\mathrm{d}\mathbf{y}\mathrm{d}t=\int_{Q_{T}}\mathbf{w}_{m}(t,x,z)\cdot \boldsymbol{\varphi}(t,x,z-\eta_{m}(t,x))\,\mathrm{d}\mathbf{y}\mathrm{d}t,\]
where we changed variables \(z\mapsto z-\eta_{m}(t,x)\) (a shift with unit Jacobian); here \(\mathbf{w}_{m}\) converges weakly in \(L^{2}_{\#}(0,T;H^{1}_{\#}(\Omega))\) to zero and \(\boldsymbol{\varphi}(t,x,z-\eta_{m}(t,x))\) converges strongly in, say, \(L^{2}_{\#}(Q_{T})\) to \(\boldsymbol{\varphi}(t,x,z-\eta(t,x))\), since \(\eta_{m}\to\eta\) uniformly in \(\Gamma_{T}\). The same property also implies
\[\mathbf{u}(t,x,z+\eta_{m}(t,x))-\mathbf{u}(t,x,z+\eta(t,x))\to 0\quad\text{ a.e. in }Q_{T}.\]
This proves that \(\tilde{\mathbf{u}}_{m}\rightharpoonup\tilde{\mathbf{u}}\) weakly in \(L^{2}_{\#}(0,T;H^{1}_{\#}(\Omega))\) and the claim of the Lemma follows.
We pass to the limit \(m\to\infty\) in (37)-(39). We denote by \((\rho,\mathbf{u},\eta)\) the limit of \((\rho_{m},\mathbf{u}_{m},\eta_{m})\). The triple \((\rho,\mathbf{u},\eta)\) fulfills
\[\rho\in W^{1,p}_{\#}(0,T;L^{q}_{\#}(\Omega))\cap L^{p}_{\#}(0,T; W^{2,q}_{\#}(\Omega)),\text{ for some }p,q\in(1,2),\] \[\mathbf{u}\in W^{1,p}_{\#}(0,T;\mathrm{span}\{\mathbf{f}_{i}\}_ {1\leq i\leq n}),\text{ for any }p<\infty,\] \[\eta\in W^{2,p}_{\#}(0,T;\mathrm{span}\{s_{i}\}_{1\leq i\leq n}), \text{ for any }p<\infty.\]
The **structure momentum equation**
\[\int_{\Gamma_{T}}\eta_{t}\psi_{t}\,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}} \eta_{xx}\psi_{xx}\,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}}\eta_{tx}\psi_{x} \,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}}\frac{\eta_{t}-\mathbf{v}\cdot \mathbf{e}_{2}}{\varepsilon}\psi\,\mathrm{d}x\mathrm{d}t=-\int_{\Gamma_{T}}f \psi\,\mathrm{d}x\mathrm{d}t \tag{45}\]
holds for all \(\psi\in C^{\infty}_{\#}(0,T;\mathrm{span}\{s_{i}\}_{1\leq i\leq n})\).
The **damped continuity equation**
\[\partial_{t}\rho+\nabla\cdot(\rho\mathbf{u})-\varepsilon\Delta\rho+\varepsilon \rho=\varepsilon M, \tag{46}\]
holds almost everywhere in \(Q_{T}\).
The **fluid momentum equation**
\[\delta\int_{Q_{T}}\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi} \,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}}\rho\mathbf{u}\cdot\partial_{t} \boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}}\rho\mathbf{u} \otimes\mathbf{u}:\nabla\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t+ \int_{Q_{T}}(\rho^{\gamma}+\delta\rho^{a})\nabla\cdot\boldsymbol{\varphi}\, \mathrm{d}\mathbf{y}\mathrm{d}t\] \[-\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\boldsymbol{ \varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t-\delta\int_{Q_{T}}|\mathbf{u}|^{2} \mathbf{u}\cdot\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t-\varepsilon \int_{Q_{T}}\nabla\rho\otimes\boldsymbol{\varphi}:\nabla\mathbf{u}\,\mathrm{d} \mathbf{y}\mathrm{d}t\] \[+\frac{\varepsilon}{2}\int_{Q_{T}}(M-\rho)\mathbf{u}\cdot \boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t-\int_{\Gamma_{T}}\frac{ \mathbf{v}-\eta_{t}\mathbf{e}_{2}}{\varepsilon}\cdot\boldsymbol{\psi}\,\mathrm{d}x \mathrm{d}t=-\int_{Q_{T}}\rho\mathbf{F}_{\delta}\cdot\boldsymbol{\varphi}\, \mathrm{d}\mathbf{y}\mathrm{d}t \tag{47}\]
holds for all \(\boldsymbol{\varphi}\in C_{\#}^{\infty}(0,T;\operatorname{span}\{\mathbf{f}_{i} \}_{1\leq i\leq n})\), where \(\boldsymbol{\psi}=\gamma_{|\Gamma^{\eta}}\boldsymbol{\varphi}\) and \(\mathbf{v}=\gamma_{|\Gamma^{\eta}}\mathbf{u}\) in both (45) and (47).
### Uniform estimates independent of \(n\)
First, we take \(\phi\in C_{\#}^{\infty}(0,T)\) and choose \(\psi=\phi\eta_{t}\) in (45), then multiply (46) by \(\frac{\gamma}{\gamma-1}\phi\rho^{\gamma-1}\), then by \(\frac{\delta a}{a-1}\phi\rho^{a-1}\) and \(\frac{1}{2}\phi|\mathbf{u}|^{2}\), and finally choose \(\boldsymbol{\varphi}=\phi\mathbf{u}\) in (47); summing up these identities we obtain
\[-\int_{0}^{T}\phi_{t}(t)E_{\delta}(t)\,\mathrm{d}t+\int_{Q_{T}} \phi\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\,\mathrm{d}\mathbf{y} \mathrm{d}t+\delta\int_{Q_{T}}\phi|\mathbf{u}|^{4}\,\mathrm{d}\mathbf{y} \mathrm{d}t+\int_{\Gamma_{T}}\phi|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t\\ +\varepsilon\gamma\int_{Q_{T}}\phi\rho^{\gamma-2}|\nabla\rho|^{2 }\,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{\varepsilon\gamma}{\gamma-1}\int_{Q_{T }}\phi\rho^{\gamma}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\varepsilon\delta a\int_ {Q_{T}}\phi\rho^{a-2}|\nabla\rho|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ +\frac{\varepsilon\delta a}{a-1}\int_{Q_{T}}\phi\rho^{a}\, \mathrm{d}\mathbf{y}\mathrm{d}t+\frac{1}{\varepsilon}\int_{\Gamma_{T}}\phi| \mathbf{v}-\eta_{t}\mathbf{e}_{2}|^{2}\,\mathrm{d}x\mathrm{d}t\\ =\int_{\Gamma_{T}}\phi f\eta_{t}\,\mathrm{d}x\mathrm{d}t+\int_{Q_{ T}}\phi\rho\mathbf{u}\cdot\mathbf{F}_{\delta}\,\mathrm{d}\mathbf{y}\mathrm{d}t+ \varepsilon\int_{Q_{T}}M\frac{\gamma}{\gamma-1}\phi\rho^{\gamma-1}\,\mathrm{d} \mathbf{y}\mathrm{d}t+\varepsilon\delta\int_{Q_{T}}M\frac{a}{a-1}\phi\rho^{a -1}\,\mathrm{d}\mathbf{y}\mathrm{d}t \tag{48}\]
where
\[E_{\delta}(t):=\int_{\Omega}\left(\frac{1}{2}\rho|\mathbf{u}|^{2}+\frac{\delta }{2}|\mathbf{u}|^{2}+\frac{1}{\gamma-1}\rho^{\gamma}+\frac{\delta}{a-1}\rho^{ a}\right)(t)\,\mathrm{d}\mathbf{y}+\int_{\Gamma}\left(\frac{1}{2}|\eta_{t}|^{2}+ \frac{1}{2}|\eta_{xx}|^{2}\right)(t)\,\mathrm{d}x. \tag{49}\]
Choose \(\phi=1\) to get
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\, \mathrm{d}\mathbf{y}\mathrm{d}t+\delta\int_{Q_{T}}|\mathbf{u}|^{4}\,\mathrm{d} \mathbf{y}\mathrm{d}t+\int_{\Gamma_{T}}|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t+ \varepsilon\gamma\int_{Q_{T}}\rho^{\gamma-2}|\nabla\rho|^{2}\,\mathrm{d} \mathbf{y}\mathrm{d}t\\ +\frac{\varepsilon\gamma}{\gamma-1}\int_{Q_{T}}\rho^{\gamma}\, \mathrm{d}\mathbf{y}\mathrm{d}t+\varepsilon\delta a\int_{Q_{T}}\rho^{a-2}| \nabla\rho|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{\varepsilon\delta a}{a-1 }\int_{Q_{T}}\rho^{a}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{1}{\varepsilon} \int_{\Gamma_{T}}|\mathbf{v}-\eta_{t}\mathbf{e}_{2}|^{2}\,\mathrm{d}x \mathrm{d}t\\ =\int_{\Gamma_{T}}f\eta_{t}\,\mathrm{d}x\mathrm{d}t+\int_{Q_{T}} \rho\mathbf{u}\cdot\mathbf{F}_{\delta}\,\mathrm{d}\mathbf{y}\mathrm{d}t+ \varepsilon\int_{Q_{T}}M\frac{\gamma}{\gamma-1}\rho^{\gamma-1}\,\mathrm{d} \mathbf{y}\mathrm{d}t+\varepsilon\delta\int_{Q_{T}}M\frac{a}{a-1}\rho^{a-1}\, \mathrm{d}\mathbf{y}\mathrm{d}t.\]
We deduce similarly to (40)
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\, \mathrm{d}\mathbf{y}\mathrm{d}t+\delta\int_{Q_{T}}|\mathbf{u}|^{4}\,\mathrm{d} \mathbf{y}\mathrm{d}t+\int_{\Gamma_{T}}|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t+ \varepsilon\gamma\int_{Q_{T}}\rho^{\gamma-2}|\nabla\rho|^{2}\,\mathrm{d}\mathbf{y} \mathrm{d}t\\ +\frac{\varepsilon\gamma}{\gamma-1}\int_{Q_{T}}\rho^{\gamma}\, \mathrm{d}\mathbf{y}\mathrm{d}t+\varepsilon\delta a\int_{Q_{T}}\rho^{a-2}| \nabla\rho|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{\varepsilon\delta a}{a-1 }\int_{Q_{T}}\rho^{a}\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ +\frac{1}{\varepsilon}\int_{\Gamma_{T}}|\mathbf{v}-\eta_{t} \mathbf{e}_{2}|^{2}\,\mathrm{d}x\mathrm{d}t\leq C(\varepsilon,\delta). \tag{50}\]
Next, we take a sequence of \(\phi_{k}\to\chi_{[s,t]}\), we integrate over \((0,T)\) w.r.t. \(s\) and take a supremum over \(t\) to deduce
\[\sup_{t\in(0,T)}E_{\delta}(t)\leq\frac{1}{T}\int_{0}^{T}E_{\delta }(s)\,\mathrm{d}s+\int_{\Gamma_{T}}|f\eta_{t}|\,\mathrm{d}x\mathrm{d}\tau\\ +\int_{Q_{T}}|\rho\mathbf{u}\cdot\mathbf{F}_{\delta}|\,\mathrm{d} \mathbf{y}\mathrm{d}\tau+\varepsilon M\int_{Q_{T}}\left(\frac{\gamma}{\gamma-1} \rho^{\gamma-1}+\frac{\delta a}{a-1}\rho^{a-1}\right)\,\mathrm{d}\mathbf{y} \mathrm{d}\tau. \tag{51}\]
The last four terms can be bounded as in (40). Moreover, (50) implies
\[\int_{Q_{T}}\left(\frac{1}{2}\rho|\mathbf{u}|^{2}+\frac{\delta}{2}|\mathbf{u}|^{2 }+\frac{1}{\gamma-1}\rho^{\gamma}+\frac{\delta}{a-1}\rho^{a}\right)\,\mathrm{d} \mathbf{y}\mathrm{d}t+\int_{\Gamma_{T}}\frac{1}{2}|\eta_{t}|^{2}\,\mathrm{d}x \mathrm{d}t\leq C(\varepsilon,\delta).\]
We choose \(\psi=\eta\) in (45) to obtain \(\int_{\Gamma_{T}}|\eta_{xx}|^{2}\leq C(\varepsilon,\delta)\). Thus, (51) and previous estimates yield
\[\sup_{t\in(0,T)}E_{\delta}(t)\leq C(\varepsilon,\delta). \tag{52}\]
We showed that (41) still holds and moreover we have additional bounds independent of \(n\in\mathbb{N}\) from (52), namely
\[\|\eta_{xx}\|_{L^{\infty}(0,T;L^{2}(\Gamma))} \leq C(\varepsilon,\delta),\] \[\|\eta_{t}\|_{L^{\infty}(0,T;L^{2}(\Gamma))} \leq C(\varepsilon,\delta),\] \[\|\mathbf{u}\|_{L^{\infty}(0,T;L^{2}(\Omega))} \leq C(\varepsilon,\delta), \tag{53}\] \[\|\sqrt{\rho}\mathbf{u}\|_{L^{\infty}(0,T;L^{2}(\Omega))} \leq C(\varepsilon,\delta),\] \[\|\rho\|_{L^{\infty}(0,T;L^{a}(\Omega))} \leq C(\varepsilon,\delta).\]
## 7 Spatial basis limit \(n\to\infty\)
Denote the solution obtained in the previous section by \((\rho_{n},\mathbf{u}_{n},\eta_{n})\). The uniform bounds (41) and (53) give rise to the convergences
\[\rho_{n}\rightharpoonup\rho\quad\text{ weakly}^{*}\text{ in }L^{\infty}_{\#}(0,T;L^{ a}_{\#}(\Omega))\quad\text{ and weakly in }W^{1,p}_{\#}(0,T;L^{q}_{\#}(\Omega))\cap L^{p}_{\#}(0,T;W^{2,q}_{\#}( \Omega)),\] \[\mathbf{u}_{n}\rightharpoonup\mathbf{u}\quad\text{ weakly}^{*}\text{ in }L^{\infty}_{\#}(0,T;L^{2}_{\#}(\Omega))\quad\text{ and weakly in }L^{2}_{\#}(0,T;H^{1}_{\#}(\Omega)),\] \[\eta_{n}\rightharpoonup\eta\quad\text{ weakly}^{*}\text{ in }L^{\infty}_{\#}(0,T;H^{2}_{\#}(\Gamma))\quad\text{ and weakly in }H^{1}_{\#}(0,T;H^{1}_{\#,0}(\Gamma)),\]
for some \(p,q\in(1,2)\). Our goal now is to pass to the limit \(n\to\infty\) in (45), (46), (47) and (48).
### Limit in the structure momentum equation
First, (45) is linear, and the weak convergences together with Lemma 6.1 (to handle the trace term) are sufficient to claim
\[\int_{\Gamma_{T}}\eta_{t}\psi_{t}\,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}} \eta_{xx}\psi_{xx}\,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}}\eta_{tx}\psi_{x} \,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}}\frac{\eta_{t}-\mathbf{v}\cdot \mathbf{e}_{2}}{\varepsilon}\psi\,\mathrm{d}x\mathrm{d}t=-\int_{\Gamma_{T}}f \psi\,\mathrm{d}x\mathrm{d}t, \tag{54}\]
for all \(\psi\in C^{\infty}_{\#,0}(\Gamma_{T})\). We have \(\|\partial_{tt}\eta_{n}\|_{L^{2}(0,T;(H^{2}_{\#,0}(\Gamma))^{*})}\leq C( \varepsilon,\delta)\) due to (45). This together with \(\|\partial_{t}\eta_{n}\|_{L^{2}_{\#}(0,T;H^{1}(\Gamma))}\leq C(\varepsilon,\delta)\) implies that
\[\partial_{t}\eta_{n}\to\partial_{t}\eta\quad\text{ strongly in }L^{2}_{\#}( \Gamma_{T}). \tag{55}\]
We choose \(\psi=\eta_{n}\) in (45) and \(\psi=\eta\) in (54) and we compare these two identities to conclude
\[\int_{\Gamma_{T}}|\partial_{xx}\eta_{n}|^{2}\,\mathrm{d}x\mathrm{d}t\to\int_{ \Gamma_{T}}|\partial_{xx}\eta|^{2}\,\mathrm{d}x\mathrm{d}t. \tag{56}\]
### Limit in the continuity equation
We proceed to a limit in the continuity equation. Estimates (41) and (53) yield that (upon passing to a suitable subsequence)
\[\partial_{t}\rho+\nabla\cdot(\rho\mathbf{u})-\varepsilon\Delta\rho+ \varepsilon\rho=\varepsilon M \tag{57}\]
almost everywhere in \(Q_{T}\). We multiply (46) by \(\rho_{n}\), integrate the resulting equation over \(Q_{T}\) and we pass to the limit \(n\to\infty\). We compare the result with (57) multiplied by \(\rho\) and integrated over \(Q_{T}\). We deduce
\[\int_{Q_{T}}|\nabla\rho_{n}|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t\to\int_{Q_{T} }|\nabla\rho|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t\]
which, combined with the weak convergence \(\nabla\rho_{n}\rightharpoonup\nabla\rho\) in \(L^{2}(Q_{T})\), implies
\[\nabla\rho_{n}\to\nabla\rho\quad\text{ strongly in }L^{2}(Q_{T}). \tag{58}\]
### Limit in the fluid momentum equation
We start with the observation that the bounds (41) allow us to bound \(\rho_{n}\mathbf{u}_{n}\) in \(L^{\frac{4a}{a+4}}(Q_{T})\), which in turn implies \(\|\nabla\rho_{n}\|_{L^{\frac{4a}{a+4}}(Q_{T})}\leq C(\varepsilon,\delta)\). Consequently, we use (47) to obtain
\[\|\partial_{t}((\delta+\rho_{n})\mathbf{u}_{n})\|_{(L^{2}_{\#}(0,T;W^{2,p}_{ \#}(\Omega)))^{*}}\leq C(\varepsilon,\delta)\]
for some \(p>2\). Moreover, the uniform bounds yield \(\|(\delta+\rho_{n})\mathbf{u}_{n}\|_{L^{\infty}(0,T;L^{\frac{2a}{a+1}}( \Omega))}\leq C(\varepsilon,\delta)\) and we infer \(\|(\delta+\rho_{n})\mathbf{u}_{n}\|_{L^{\infty}_{\#}(0,T;(W^{s,2}_{\#}(\Omega) )^{*})}\leq C(\varepsilon,\delta)\) for some \(s<1\). This however means that
\[(\delta+\rho_{n})\mathbf{u}_{n}\to(\delta+\rho)\mathbf{u}\quad\text{ strongly in }L^{\infty}_{\#}(0,T;(W^{s^{\prime},2}_{\#}(\Omega))^{*}) \tag{59}\]
for some \(s<s^{\prime}<1\), and consequently by the weak convergence \(\mathbf{u}_{n}\rightharpoonup\mathbf{u}\) in \(L^{2}_{\#}(0,T;H^{1}_{\#}(\Omega))\)
\[(\rho_{n}+\delta)\mathbf{u}_{n}\otimes\mathbf{u}_{n}\rightharpoonup(\rho+ \delta)\mathbf{u}\otimes\mathbf{u}\quad\text{ weakly in }L^{p}_{\#}(Q_{T})\text{ for some }p>1. \tag{60}\]
Since \(0\leq\frac{\rho_{n}}{\rho_{n}+\delta}<1\) and \(\rho_{n}\to\rho\) a.e. in \(Q_{T}\), one concludes that \(\frac{\rho_{n}}{\rho_{n}+\delta}\to\frac{\rho}{\rho+\delta}\) in \(L^{q}_{\#}(Q_{T})\) for any \(q\in[1,\infty)\) so
\[\frac{\rho_{n}}{\rho_{n}+\delta}(\rho_{n}+\delta)\mathbf{u}_{n}\otimes \mathbf{u}_{n}=\rho_{n}\mathbf{u}_{n}\otimes\mathbf{u}_{n}\rightharpoonup \rho\mathbf{u}\otimes\mathbf{u}\quad\text{in }L^{1}_{\#}(Q_{T}).\]
The weak convergence \(\mathbf{u}_{n}\rightharpoonup\mathbf{u}\) in \(L^{2}_{\#}(0,T;H^{1}_{\#}(\Omega))\) and the strong convergence of \(\nabla\rho_{n}\) in \(L^{2}_{\#}(Q_{T})\) obtained in (58) yield
\[\int_{Q_{T}}\nabla\rho_{n}\otimes\boldsymbol{\varphi}:\nabla\mathbf{u}_{n} \operatorname{d}\!\mathbf{y}\mathrm{d}t\to\int_{Q_{T}}\nabla\rho\otimes \boldsymbol{\varphi}:\nabla\mathbf{u}\operatorname{d}\!\mathbf{y}\mathrm{d}t,\]
for any \(\boldsymbol{\varphi}\in C^{\infty}_{\#}(Q_{T})\). The remaining terms are dealt with in a straightforward fashion by means of uniform bounds and Lemma 6.1 is used to pass to the limit in the trace term. Therefore, when we let \(n\to\infty\) in (47) we end up with
\[\delta\int_{Q_{T}}\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi }\operatorname{d}\!\mathbf{y}\mathrm{d}t+\int_{Q_{T}}\rho\mathbf{u}\cdot \partial_{t}\boldsymbol{\varphi}\operatorname{d}\!\mathbf{y}\mathrm{d}t+\int_ {Q_{T}}\rho\mathbf{u}\otimes\mathbf{u}:\nabla\boldsymbol{\varphi}\operatorname {d}\!\mathbf{y}\mathrm{d}t+\int_{Q_{T}}(\rho^{\gamma}+\delta\rho^{a})\nabla \cdot\boldsymbol{\varphi}\operatorname{d}\!\mathbf{y}\mathrm{d}t\\ -\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\boldsymbol{ \varphi}\operatorname{d}\!\mathbf{y}\mathrm{d}t-\delta\int_{Q_{T}}|\mathbf{u} |^{2}\mathbf{u}\cdot\boldsymbol{\varphi}\operatorname{d}\!\mathbf{y}\mathrm{d} t-\varepsilon\int_{Q_{T}}\nabla\rho\otimes\boldsymbol{\varphi}:\nabla \mathbf{u}\operatorname{d}\!\mathbf{y}\mathrm{d}t\\ +\frac{\varepsilon}{2}\int_{Q_{T}}(M-\rho)\mathbf{u}\cdot \boldsymbol{\varphi}\operatorname{d}\!\mathbf{y}\mathrm{d}t-\int_{\Gamma_{T}} \frac{\mathbf{v}-\eta_{t}\mathbf{e}_{2}}{\varepsilon}\cdot\boldsymbol{\psi} \operatorname{d}\!\mathbf{x}\mathrm{d}t=-\int_{Q_{T}}\rho\mathbf{F}_{\delta} \cdot\boldsymbol{\varphi}\operatorname{d}\!\mathbf{y}\mathrm{d}t, \tag{61}\]
for all \(\boldsymbol{\varphi}\in C^{\infty}_{\#}(Q_{T})\) and \(\boldsymbol{\psi}\in C^{\infty}_{\#}(\Gamma_{T})\) such that \(\boldsymbol{\varphi}(t,x,\eta(t,x))=\boldsymbol{\psi}(t,x)\) on \(\Gamma_{T}\), where \(\mathbf{v}=\gamma_{|\Gamma^{\eta}}\mathbf{u}\).
### Limit in the energy inequality
The information gathered above is clearly sufficient to pass to the limit in all terms on the right-hand side of (48). In order to pass to the limit on the left-hand side, we first note that (55) and (56), together with (60) and the information about the sequence of densities, allow us to pass to the limit in the first term on the left-hand side of (48). Finally, we assume that \(\phi\in C^{\infty}_{\#}(0,T)\) additionally satisfies \(\phi\geq 0\) and we use the weak lower semicontinuity of convex functions to deduce that, in the limit, (48) holds as an inequality
\[-\int_{0}^{T}\phi_{t}(t)E_{\delta}(t)\,\mathrm{d}t+\int_{Q_{T}} \phi\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\operatorname{d}\!\mathbf{y} \mathrm{d}t+\delta\int_{Q_{T}}\phi|\mathbf{u}|^{4}\operatorname{d}\!\mathbf{y} \mathrm{d}t+\int_{\Gamma_{T}}\phi|\eta_{tx}|^{2}\operatorname{d}\!x\mathrm{d}t\\ +\varepsilon\gamma\int_{Q_{T}}\phi\rho^{\gamma-2}|\nabla\rho|^{2} \operatorname{d}\!\mathbf{y}\mathrm{d}t+\frac{\varepsilon\gamma}{\gamma-1}\int_ {Q_{T}}\phi\rho^{\gamma}\operatorname{d}\!\mathbf{y}\mathrm{d}t+\varepsilon \delta a\int_{Q_{T}}\phi\rho^{a-2}|\nabla\rho|^{2}\operatorname{d}\!\mathbf{y} \mathrm{d}t\\ +\frac{\varepsilon\delta a}{a-1}\int_{Q_{T}}\phi\rho^{a} \operatorname{d}\!\mathbf{y}\mathrm{d}t+\frac{1}{\varepsilon}\int_{\Gamma_{T}} \phi|\mathbf{v}-\eta_{t}\mathbf{e}_{2}|^{2}\operatorname{d}\!x\mathrm{d}t\leq \int_{\Gamma_{T}}\phi f\eta_{t}\operatorname{d}\!x\mathrm{d}t\\ +\int_{Q_{T}}\phi\rho\mathbf{u}\cdot\mathbf{F}_{\delta} \operatorname{d}\!\mathbf{y}\mathrm{d}t+\varepsilon\int_{Q_{T}}M\frac{\gamma}{ \gamma-1}\phi\rho^{\gamma-1}\operatorname{d}\!\mathbf{y}\mathrm{d}t+ \varepsilon\delta\int_{Q_{T}}M\frac{a}{a-1}\phi\rho^{a-1}\operatorname{d}\! \mathbf{y}\mathrm{d}t \tag{62}\]
where \(E_{\delta}\) is defined by (49).
### Uniform bounds independent of \(\varepsilon\)
We use the energy inequality (62) to deduce estimates of \((\rho,\mathbf{u},\eta)\) independent of \(\varepsilon\). We start by taking \(\phi=1\) in (62) to get
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\,\mathrm{ d}\mathbf{y}\mathrm{d}t+\delta\int_{Q_{T}}|\mathbf{u}|^{4}\,\mathrm{d}\mathbf{y} \mathrm{d}t+\int_{\Gamma_{T}}|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t+ \varepsilon\gamma\int_{Q_{T}}\rho^{\gamma-2}|\nabla\rho|^{2}\,\mathrm{d} \mathbf{y}\mathrm{d}t\\ +\frac{\varepsilon\gamma}{\gamma-1}\int_{Q_{T}}\rho^{\gamma}\, \mathrm{d}\mathbf{y}\mathrm{d}t+\varepsilon\delta a\int_{Q_{T}}\rho^{a-2}| \nabla\rho|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{\varepsilon\delta a}{a-1 }\int_{Q_{T}}\rho^{a}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{1}{\varepsilon} \int_{\Gamma_{T}}|\mathbf{v}-\eta_{t}\mathbf{e}_{2}|^{2}\,\mathrm{d}x\mathrm{ d}t\\ \leq\int_{\Gamma_{T}}f\eta_{t}\,\mathrm{d}x\mathrm{d}t+\int_{Q_{T }}\rho\mathbf{u}\cdot\mathbf{F}_{\delta}\,\mathrm{d}\mathbf{y}\mathrm{d}t+ \varepsilon\int_{Q_{T}}M\frac{\gamma}{\gamma-1}\rho^{\gamma-1}\,\mathrm{d} \mathbf{y}\mathrm{d}t+\varepsilon\delta\int_{Q_{T}}M\frac{a}{a-1}\rho^{a-1}\, \mathrm{d}\mathbf{y}\mathrm{d}t. \tag{63}\]
The estimates here need to be more delicate than in the previous section, as we no longer directly have information about the density, independent of \(\varepsilon\), on the left-hand side of (63). Therefore we introduce (recall (49))
\[\mathcal{E}_{\delta}:=\sup_{t\in(0,T)}E_{\delta}(t). \tag{64}\]
We take \(\phi\to\chi_{[s,t]}\) in (62), we integrate over \((0,T)\) with respect to \(s\) and finally we take the supremum over \(t\) to get
\[\mathcal{E}_{\delta}\leq\frac{1}{T}\int_{0}^{T}E_{\delta}(s)\, \mathrm{d}s+\int_{\Gamma_{T}}f\eta_{t}\,\mathrm{d}x\mathrm{d}t+\int_{Q_{T}} \rho\mathbf{u}\cdot\mathbf{F}_{\delta}\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ +\varepsilon\int_{Q_{T}}M\frac{\gamma}{\gamma-1}\rho^{\gamma-1}\, \mathrm{d}\mathbf{y}\mathrm{d}t+\varepsilon\delta\int_{Q_{T}}M\frac{a}{a-1} \rho^{a-1}\,\mathrm{d}\mathbf{y}\mathrm{d}t. \tag{65}\]
Our goal is therefore to bound the terms on the right-hand sides of (63) and (65). The first, third and fourth terms on the right-hand side of (63) can be absorbed as in (40). The second term has to be estimated in a different way. Let \(p>1\) be small and let \(q=\frac{p}{p-1}\). We have
\[\int_{Q_{T}}\rho\mathbf{u}\cdot\mathbf{F}_{\delta}\,\mathrm{d} \mathbf{y}\mathrm{d}t\leq C\|\rho\|_{L^{\infty}(0,T;L^{p}(\Omega))}\|\mathbf{u }\|_{L^{2}(0,T;L^{q}(\Omega))}\leq C\|\rho\|_{L^{\infty}(0,T;L^{p}(\Omega))} \|\mathbf{u}\|_{L^{2}(0,T;H^{1}(\Omega))}\\ \leq C(s,\delta)(1+\mathcal{E}_{\delta}^{s})+\frac{\delta}{2} \left(\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\,\mathrm{d} \mathbf{y}\mathrm{d}t+\int_{Q_{T}}|\mathbf{u}|^{4}\,\mathrm{d}\mathbf{y} \mathrm{d}t\right)\]
for \(s>0\) as small as we want, where we interpolated \(L^{p}\) between \(L^{1}\) and \(L^{a}\).
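Explicitly, denoting by \(\vartheta\) the corresponding interpolation exponent, this step reads
\[\|\rho\|_{L^{p}(\Omega)}\leq\|\rho\|_{L^{1}(\Omega)}^{1-\vartheta}\|\rho\|_{L^{a}(\Omega)}^{\vartheta},\qquad\vartheta=\frac{a(p-1)}{p(a-1)},\]
so that \(\|\rho\|_{L^{\infty}(0,T;L^{p}(\Omega))}\leq C(\delta)\,m_{0}^{1-\vartheta}(1+\mathcal{E}_{\delta}^{\vartheta/a})\), and \(s=\vartheta/a\) can be made arbitrarily small by taking \(p\) close to \(1\). Provided \(\delta<1\), these terms can be absorbed into the left-hand side, which leads to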
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\, \mathrm{d}\mathbf{y}\mathrm{d}t+\delta\int_{Q_{T}}|\mathbf{u}|^{4}\,\mathrm{d} \mathbf{y}\mathrm{d}t+\int_{\Gamma_{T}}|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t +\varepsilon\gamma\int_{Q_{T}}\rho^{\gamma-2}|\nabla\rho|^{2}\,\mathrm{d} \mathbf{y}\mathrm{d}t\\ +\frac{\varepsilon\gamma}{\gamma-1}\int_{Q_{T}}\rho^{\gamma}\, \mathrm{d}\mathbf{y}\mathrm{d}t+\varepsilon\delta a\int_{Q_{T}}\rho^{a-2}| \nabla\rho|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{\varepsilon\delta a}{a-1 }\int_{Q_{T}}\rho^{a}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{1}{\varepsilon} \int_{\Gamma_{T}}|\mathbf{v}-\eta_{t}\mathbf{e}_{2}|^{2}\,\mathrm{d}x\mathrm{ d}t\\ \leq C(s,\delta)(1+\mathcal{E}_{\delta}^{s}).\]
The last four terms on the right hand side of (65) are treated the same way, hence it remains to show
\[\int_{0}^{T}E_{\delta}(s)\,\mathrm{d}s\leq C(1+\mathcal{E}_{\delta}^{\beta}) \tag{66}\]
for some \(\beta<1\).
We observe that
\[\int_{Q_{T}}\frac{1}{2}(\rho+\delta)|\mathbf{u}|^{2}\,\mathrm{d}\mathbf{y} \mathrm{d}t+\int_{\Gamma_{T}}\frac{1}{2}|\eta_{t}|^{2}\,\mathrm{d}x\mathrm{d}t \leq C(s,\delta)(1+\mathcal{E}_{\delta}^{\frac{1}{2}+\frac{s}{2}}+\mathcal{E} _{\delta}^{s}).\]
We multiply (46) by \(\rho\) and integrate over \(Q_{T}\) to get
\[\varepsilon\int_{Q_{T}}(\rho^{2}+|\nabla\rho|^{2})\,\mathrm{d}\mathbf{y} \mathrm{d}t=\int_{Q_{T}}-\frac{1}{2}\rho^{2}\nabla\cdot\mathbf{u}\,\mathrm{d} \mathbf{y}\mathrm{d}t+\int_{Q_{T}}\varepsilon M\rho\,\mathrm{d}\mathbf{y} \mathrm{d}t\\ \leq\left(\int_{Q_{T}}\rho^{4}\,\mathrm{d}\mathbf{y}\mathrm{d}t \right)^{\frac{1}{2}}\|\mathbf{u}\|_{L^{2}(0,T;H^{1}(\Omega))}+C\leq\left(\int_ {Q_{T}}\rho^{a}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right)^{\frac{2}{a}}C(s, \delta)(1+\mathcal{E}_{\delta}^{s})\leq C(s,\delta)(1+\mathcal{E}_{\delta}^{s+ \frac{2}{a}}). \tag{67}\]
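Here we used the time-periodicity of \(\rho\) and integration by parts (no boundary terms appear due to the periodic boundary conditions), namely
\[\int_{Q_{T}}\partial_{t}\rho\,\rho\,\mathrm{d}\mathbf{y}\mathrm{d}t=0,\qquad\int_{Q_{T}}\nabla\cdot(\rho\mathbf{u})\,\rho\,\mathrm{d}\mathbf{y}\mathrm{d}t=\frac{1}{2}\int_{Q_{T}}\rho^{2}\,\nabla\cdot\mathbf{u}\,\mathrm{d}\mathbf{y}\mathrm{d}t,\qquad-\int_{Q_{T}}\Delta\rho\,\rho\,\mathrm{d}\mathbf{y}\mathrm{d}t=\int_{Q_{T}}|\nabla\rho|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t.\]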
Next, we choose \(\psi=\eta\) in (54) and sum up the resulting equation with (61) with the choice \(\boldsymbol{\varphi}=\eta\mathbf{e}_{2}\). Most of the calculations can be done in the same way as in Section 4.3; however, we need to estimate several additional terms multiplied by the approximation parameters, namely
\[\left|\int_{Q_{T}}\delta\mathbf{u}\cdot\eta_{t}\mathbf{e}_{2}\,\mathrm{d} \mathbf{y}\mathrm{d}t\right|\leq C(\delta)\|\mathbf{u}\|_{L^{4}(Q_{T})}\|\eta _{t}\|_{L^{2}(0,T;L^{\infty}(\Gamma))}\leq C(s,\delta)(1+\mathcal{E}_{\delta}^ {\frac{3}{4}s}),\]
\[\left|\int_{Q_{T}}\delta|\mathbf{u}|^{2}\mathbf{u}\cdot\eta \mathbf{e}_{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C(\delta)\| \mathbf{u}\|_{L^{4}(Q_{T})}^{3}\|\eta\|_{L^{4}(\Gamma_{T})}\leq C(s,\delta)(1+ \mathcal{E}_{\delta}^{\frac{3}{4}s})(\|\eta_{t}\|_{L^{2}(\Gamma_{T})}+\|\eta_ {xx}\|_{L^{2}(\Gamma_{T})})\\ \leq C(s,\delta)(1+\mathcal{E}_{\delta}^{\frac{3}{4}s})+\frac{1}{ 16}\|\eta_{xx}\|_{L^{2}(\Gamma_{T})}^{2},\]
\[\left|\frac{\varepsilon}{2}\int_{Q_{T}}(M-\rho)\mathbf{u}\cdot \eta\mathbf{e}_{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C(\delta)(1+\| \rho\|_{L^{\infty}(0,T;L^{p}(\Omega))})\|\mathbf{u}\|_{L^{2}(0,T;L^{q }(\Omega))}\|\eta\|_{L^{2}(0,T;L^{\infty}(\Gamma))}\\ \leq C(s,\delta)(1+\mathcal{E}_{\delta}^{2s})+\frac{1}{16}\|\eta _{xx}\|_{L^{2}(\Gamma_{T})}^{2},\]
and
\[\left|\varepsilon\int_{Q_{T}}\nabla\rho\otimes(\eta\mathbf{e}_{2} ):\nabla\mathbf{u}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C(\delta)\| \sqrt{\varepsilon}\nabla\rho\|_{L^{2}(Q_{T})}\|\nabla\mathbf{u}\|_{L^{2}(Q_{T} )}\|\eta\|_{L^{\infty}(\Gamma_{T})}\\ \leq C(s,\delta)(1+\mathcal{E}_{\delta}^{2s+\frac{2}{a}})+\frac {1}{16}\|\eta_{xx}\|_{L^{2}(\Gamma_{T})}^{2}.\]
Eventually we end up with the estimate
\[\int_{\Gamma_{T}}|\eta_{xx}|^{2}\,\mathrm{d}x\mathrm{d}t\leq C(s,\delta)(1+ \mathcal{E}_{\delta}^{s^{\prime}}),\]
for some \(0<s^{\prime}<1\).
It remains to show
\[\int_{Q_{T}}\left(\frac{1}{\gamma-1}\rho^{\gamma}+\frac{\delta}{a-1}\rho^{a}\right)\,\mathrm{d}\mathbf{y}\mathrm{d}t\leq C(s,\delta)(1+\mathcal{E}_{\delta}^{s^{\prime\prime}}), \tag{68}\]
for some \(0<s^{\prime\prime}<1\), similarly to Section 4.4. To this end we use \(\boldsymbol{\varphi}_{h}\) defined in (24) as a test function in (61). As above, in the estimate of the second spatial derivatives of \(\eta\), we obtain four more terms to estimate. The term \(\delta\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi}_{h}\) is handled similarly to \(\rho\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi}_{h}\). The remaining three additional terms are easy to handle due to the estimate
\[\|\boldsymbol{\varphi}_{h}\|_{L^{\infty}(Q_{T})}\leq\left\|\mathcal{B}_{ \Omega}\left[\rho^{\alpha}-\int_{\Omega}\rho^{\alpha}\,\mathrm{d}x\right] \right\|_{L^{\infty}(Q_{T})}\leq C\]
which follows from (23). Therefore
\[\left|\int_{Q_{T}}\delta|\mathbf{u}|^{2}\mathbf{u}\cdot\boldsymbol{\varphi}_{h} \,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C(\delta)\|\mathbf{u}\|_{L^{4}(Q_ {T})}^{3}\leq C(s,\delta)(1+\mathcal{E}_{\delta}^{\frac{3}{4}s}),\]
\[\left|\frac{\varepsilon}{2}\int_{Q_{T}}(M-\rho)\mathbf{u}\cdot\boldsymbol{\varphi}_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C(\delta)(1+\|\rho\|_{L^{\infty}(0,T;L^{\gamma}(\Omega))})\|\mathbf{u}\|_{L^{2}(0,T;L^{4}(\Omega))}\leq C(s,\delta)(1+\mathcal{E}_{\delta}^{s}),\]
\[\left|\varepsilon\int_{Q_{T}}\nabla\rho\otimes\boldsymbol{\varphi}_{h}:\nabla\mathbf{u}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C(\delta)\|\sqrt{\varepsilon}\nabla\rho\|_{L^{2}(Q_{T})}\|\nabla\mathbf{u}\|_{L^{2}(Q_{T})}\leq C(s,\delta)(1+\mathcal{E}_{\delta}^{s+\frac{1}{a}}).\]
In the second part of this procedure we use the test function \(\boldsymbol{\varphi}=\varphi_{h}\mathbf{e}_{2}\) in (61) with \(\varphi_{h}\) defined in (33). The estimates are again either similar to those in Section 4.4 or to those presented above, and we recover (68). This, however, means that (66) is proved, which yields
\[\mathcal{E}_{\delta}\leq C(\delta), \tag{69}\]
and
\[\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\delta\int_{Q_{T}}|\mathbf{u}|^{4}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{\Gamma_{T}}|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t+\varepsilon\gamma\int_{Q_{T}}\rho^{\gamma-2}|\nabla\rho|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ +\frac{\varepsilon\gamma}{\gamma-1}\int_{Q_{T}}\rho^{\gamma}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\varepsilon\delta a\int_{Q_{T}}\rho^{a-2}|\nabla\rho|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{\varepsilon\delta a}{a-1}\int_{Q_{T}}\rho^{a}\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ +\frac{1}{\varepsilon}\int_{\Gamma_{T}}|\mathbf{v}-\eta_{t}\mathbf{e}_{2}|^{2}\,\mathrm{d}x\mathrm{d}t\leq C(\delta). \tag{70}\]
### Coupled back momentum equation
We sum the momentum equation (61) for test functions \((\boldsymbol{\varphi},\psi)\) and the structure momentum equation (54) for a test function \(\psi\). In this way the penalization terms cancel and we obtain that \((\rho,\mathbf{u},\eta)\) satisfies the coupled momentum equation
\[\delta\int_{Q_{T}}\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi }\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}}\rho\mathbf{u}\cdot\partial_{t} \boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}}\rho\mathbf{ u}\otimes\mathbf{u}:\nabla\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t+ \int_{Q_{T}}(\rho^{\gamma}+\delta\rho^{a})\nabla\cdot\boldsymbol{\varphi}\, \mathrm{d}\mathbf{y}\mathrm{d}t\\ -\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\boldsymbol{ \varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t-\delta\int_{Q_{T}}|\mathbf{u}|^{2} \mathbf{u}\cdot\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t- \varepsilon\int_{Q_{T}}\nabla\rho\otimes\boldsymbol{\varphi}:\nabla\mathbf{u} \,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{\varepsilon}{2}\int_{Q_{T}}(M-\rho) \mathbf{u}\cdot\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ -\int_{\Gamma_{T}}\eta_{t}\psi_{t}\,\mathrm{d}x\mathrm{d}t-\int _{\Gamma_{T}}\eta_{xx}\psi_{xx}\,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}}\eta _{tx}\psi_{x}\,\mathrm{d}x\mathrm{d}t=-\int_{\Gamma_{T}}f\psi\,\mathrm{d}x \mathrm{d}t-\int_{Q_{T}}\rho\mathbf{F}_{\delta}\cdot\boldsymbol{\varphi}\, \mathrm{d}\mathbf{y}\mathrm{d}t, \tag{71}\]
which holds for all \(\boldsymbol{\varphi}\in C_{\#}^{\infty}(Q_{T})\) and \(\psi\in C_{\#,0}^{\infty}(\Gamma_{T})\) such that \(\boldsymbol{\varphi}(t,x,\hat{\eta}(t,x))=\psi(t,x)\mathbf{e}_{2}\) on \(\Gamma_{T}\). Note, however, that at this point the problem is still not fully coupled, since we cannot ensure that \(\eta_{t}\mathbf{e}_{2}=\gamma_{|\Gamma^{\eta}}\mathbf{u}\).
### Improved estimate of \(\eta_{xx}\)
The following approach comes from [40], where improved regularity of the displacement was obtained for the interaction problem between an incompressible viscous fluid and a nonlinear Koiter shell (see also [46, Theorem 2.2] for the compressible counterpart). We start by introducing the notation \(D_{h}^{s}[\eta]\) defined as
\[D_{h}^{s}[\eta](x):=\frac{\eta(t,x+h)-\eta(t,x)}{|h|^{s-1}h},\quad s>0,h\in \mathbb{R}.\]
The idea is to take \(s<\frac{1}{4}\) and test the coupled momentum equation (71) with a suitable test function to obtain an estimate on \(\int_{\Gamma_{T}}|D_{h}^{s}[\eta_{xx}]|^{2}\,\mathrm{d}x\mathrm{d}t\) independent of \(h<h_{0}\) for some \(h_{0}>0\). The integration by parts formula for \(D_{h}^{s}\) holds for periodic functions, i.e.
\[\int_{\Gamma}D_{h}^{s}[u](x)v(x)\,\mathrm{d}x=-\int_{\Gamma}u(x)D_{-h}^{s}[v](x )\,\mathrm{d}x\]
for all periodic \(u,v\) such that the integrals are finite. We set
\[\psi_{h}(t,x)=D_{-h}^{s}[D_{h}^{s}[\eta(t,x)]]-\frac{1}{|h|^{2s}}(\eta(t,-h)+ \eta(t,h))=:\psi_{1,h}(t,x)-\psi_{2,h}(t)\]
and use \((\psi_{h}{\bf e}_{2},\psi_{h})\) as a test function couple in (71) (note that this is an admissible test function because \(\psi_{h}(t,0)=0\)). This gives rise to
\[-\int_{\Gamma_{T}}\eta_{xx}(\psi_{h})_{xx}\,\mathrm{d}x\mathrm{d}t=RHS,\]
so by taking into account that \((\psi_{h})_{xx}=D^{s}_{-h}[D^{s}_{h}[\eta_{xx}(t,x)]]\), which implies
\[\int_{\Gamma_{T}}|D^{s}_{h}[\eta_{xx}(t,x)]|^{2}\,\mathrm{d}x\mathrm{d}t=-\int _{\Gamma_{T}}\eta_{xx}(\psi_{h})_{xx}\,\mathrm{d}x\mathrm{d}t,\]
the proof will follow once we show that RHS is bounded.
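For the reader's convenience, we record the short computation behind these claims (a straightforward check, not spelled out above): unwinding the two difference quotients gives the symmetric second difference
\[D^{s}_{-h}[D^{s}_{h}[\eta]](x)=\frac{\eta(t,x+h)+\eta(t,x-h)-2\eta(t,x)}{|h|^{2s}},\]
so, using \(\eta(t,0)=0\),
\[\psi_{1,h}(t,0)=\frac{\eta(t,h)+\eta(t,-h)}{|h|^{2s}}=\psi_{2,h}(t),\qquad\text{hence}\qquad\psi_{h}(t,0)=\psi_{1,h}(t,0)-\psi_{2,h}(t)=0.\]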
First, note that
\[\|D^{s}_{-h}[D^{s}_{h}[\eta_{t}]]\|_{L^{p}(\Gamma)} \leq C\|\eta_{tx}\|_{L^{2}(\Gamma)}, \tag{72}\] \[\|D^{s}_{-h}[D^{s}_{h}[\eta_{x}]]\|_{L^{p}(\Gamma)} \leq C\|\eta_{xx}\|_{L^{2}(\Gamma)}, \tag{73}\]
for any \(p>1\) and \(s<\frac{1}{4}\) by embedding theorems (see [44, Proposition 2] and [45, Proposition 4.6]). Moreover, since \(||\eta_{tx}||_{L^{2}(\Gamma_{T})}\leq C(\delta)\), we get \(\eta_{t}\in L^{2}(0,T;C^{\frac{1}{2}}(\Gamma))\) and thus
\[\frac{\eta_{t}(t,\pm h)}{|h|^{\frac{1}{2}}}=\frac{\eta_{t}(t,\pm h)-\eta_{t}(t,0)}{|h|^{\frac{1}{2}}}\in L^{2}(0,T)\]
with its \(L^{2}\)-norm bounded by \(C(\delta)\). This means that for \(s<\frac{1}{4}\) it holds that \(\partial_{t}\psi_{2,h}\in L^{2}(0,T)\) and \(\|\partial_{t}\psi_{2,h}\|_{L^{2}(0,T)}\leq C(\delta)\). This, combined with (72), implies
\[\|(\psi_{h})_{t}\|_{L^{2}(0,T;L^{p}(\Gamma))}\leq C\left(\|\eta_{tx}\|_{L^{2}(0,T;L^{2}(\Gamma))}+\|\partial_{t}\psi_{2,h}\|_{L^{2}(0,T)}\right)\leq C(\delta), \tag{74}\]
while (73) implies
\[\|(\psi_{h})_{x}\|_{L^{\infty}(0,T;L^{p}(\Gamma))}\leq C\|\eta_{xx}\|_{L^{\infty}(0,T;L^{2}(\Gamma))}\leq C(\delta), \tag{75}\]
for any \(p>1\) and \(s<\frac{1}{4}\). Finally, since \(\|\eta_{xx}\|_{L^{\infty}(0,T;L^{2}(\Gamma))}\leq C(\delta)\), a simple first-order Taylor expansion of \(\eta\) yields
\[|\psi_{2,h}(t)|\leq C(\delta)|h|^{1-2s}\leq C(\delta),\]
so
\[\|\psi_{h}\|_{L^{\infty}(\Gamma_{T})}\leq C\left(\|\eta_{xx}\|_{L^{\infty}(0,T;L^{2}(\Gamma))}+\|\psi_{2,h}\|_{L^{\infty}(0,T)}\right)\leq C(\delta). \tag{76}\]
Now, we are ready to show that the arising terms are bounded. First, the terms involving time derivatives of \(\psi_{h}\) are bounded as follows
\[\left|\int_{Q_{T}}\rho\mathbf{u}\cdot(\partial_{t}\psi_{h}\mathbf{e}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C\|\rho\|_{L^{\infty}(0,T;L^{\gamma}(\Omega))}\|\mathbf{u}\|_{L^{2}(0,T;L^{p}(\Omega))}\|(\psi_{h})_{t}\|_{L^{2}(0,T;L^{p}(\Gamma))}\leq C(\delta)\]
for \(p=\frac{2\gamma}{\gamma-1}\) by (74), and
\[\delta\left|\int_{Q_{T}}\mathbf{u}\cdot(\partial_{t}\psi_{h}\mathbf{e}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C\delta^{\frac{3}{4}}\|\delta^{\frac{1}{4}}\mathbf{u}\|_{L^{4}(Q_{T})}\|(\psi_{h})_{t}\|_{L^{2}(\Gamma_{T})}\leq C(\delta),\] \[\left|\int_{\Gamma_{T}}\eta_{t}(\psi_{h})_{t}\,\mathrm{d}x\mathrm{d}t\right|\leq\|\eta_{t}\|_{L^{2}(\Gamma_{T})}\|(\psi_{h})_{t}\|_{L^{2}(\Gamma_{T})}\leq C(\delta),\]
by (74) and uniform bounds. Next, the pressure term vanishes since \(\nabla\cdot(D^{s}_{-h}[D^{s}_{h}[\eta]](x){\bf e}_{2})=0\). The remaining terms all include at most one spatial derivative on \(\psi_{h}\). Let us bound only the most "difficult" terms:
\[\left|\varepsilon\int_{Q_{T}}\nabla\rho\otimes(\psi_{h}{\bf e}_{2}):\nabla{ \bf u}\,\mathrm{d}y\mathrm{d}t\right|\leq\sqrt{\varepsilon}||\sqrt{ \varepsilon}\nabla\rho||_{L^{2}(Q_{T})}||\psi_{h}||_{L^{\infty}(\Gamma_{T})}|| \nabla{\bf u}||_{L^{2}(Q_{T})}\leq C(\delta)\]
by (76), and
\[\left|\int_{Q_{T}}\rho\mathbf{u}\otimes\mathbf{u}:\nabla(\psi_{h}\mathbf{e}_{2})\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C\|\rho\|_{L^{\infty}(0,T;L^{\gamma}(\Omega))}\|\mathbf{u}\|_{L^{2}(0,T;L^{p}(\Omega))}^{2}\|(\psi_{h})_{x}\|_{L^{\infty}(0,T;L^{p}(\Gamma))}\]
for \(p=\frac{3\gamma}{\gamma-1}\), by (75). The remaining terms are bounded in a similar fashion, so we conclude
\[\int_{\Gamma_{T}}|D_{h}^{s}[\eta_{xx}]|^{2}\,\mathrm{d}x\mathrm{d}t\leq C(\delta)\]
and as a direct consequence of embedding theorems and the uniform bound on \(\eta\) in \(L^{2}(0,T;H^{2}(\Gamma))\), one finally obtains
\[\|\eta\|_{L^{2}(0,T;H^{2+s}(\Gamma))}\leq C(\delta) \tag{77}\]
for any \(s<\frac{1}{4}\).
## 8 Limit \(\varepsilon\to 0\)
Denote the solutions obtained in the previous section as \((\rho_{\varepsilon},\mathbf{u}_{\varepsilon},\eta_{\varepsilon})\). The uniform bounds (69) and (70) give rise to the following weak convergences
\[\rho_{\varepsilon}\rightharpoonup\rho\quad\text{weakly}^{*}\text{ in }L^{\infty}_{\#}(0,T;L^{a}_{\#}(\Omega)),\] \[\mathbf{u}_{\varepsilon}\rightharpoonup\mathbf{u}\quad\text{ weakly in }L^{2}_{\#}(0,T;H^{1}_{\#}(\Omega)),\] \[\eta_{\varepsilon}\rightharpoonup\eta\quad\text{weakly}^{*}\text{ in }L^{\infty}_{\#}(0,T;H^{2}_{\#}(\Gamma))\quad\text{and weakly in }H^{1}_{\#}(0,T;H^{1}_{\#,0}(\Gamma)).\]
We pass to the limit in the equations (57), (71) and the energy inequality (62).
### Limit in the continuity equation
We use by now standard arguments for the continuity equation to get \(\rho_{\varepsilon}\to\rho\) in \(C_{w}([0,T];L^{a}(\Omega))\) and therefore \(\rho_{\varepsilon}\mathbf{u}_{\varepsilon}\rightharpoonup\rho\mathbf{u}\) weakly\(^{*}\) in \(L^{\infty}(0,T;L^{\frac{2a}{a+1}}(\Omega))\). Moreover, due to (67) and (69), we have \(\varepsilon\nabla\rho_{\varepsilon}\to 0\) in \(L^{2}(Q_{T})\). We conclude that the limiting functions \(\rho\) and \(\mathbf{u}\) satisfy the continuity equation in the weak sense, i.e.
\[\int_{Q_{T}}\rho(\partial_{t}\varphi+\mathbf{u}\cdot\nabla\varphi)\,\mathrm{d }\mathbf{y}\mathrm{d}t=0\]
for all \(\varphi\in C^{\infty}_{\#}(Q_{T})\). Since \(\rho\in L^{\infty}_{\#}(0,T;L^{a}_{\#}(\Omega))\) and \(a\geq 2\) we further get that the renormalized continuity equation is satisfied by \(\rho\) and \(\mathbf{u}\), i.e.
\[\int_{Q_{T}}\rho B(\rho)(\partial_{t}\varphi+\mathbf{u}\cdot\nabla\varphi)\, \mathrm{d}\mathbf{y}\mathrm{d}t=\int_{Q_{T}}b(\rho)(\nabla\cdot\mathbf{u}) \varphi\,\mathrm{d}\mathbf{y}\mathrm{d}t\]
for all functions \(\varphi\in C^{\infty}_{\#}(Q_{T})\) and any \(b\in L^{\infty}(0,\infty)\cap C[0,\infty)\) such that \(b(0)=0\) with \(B(\rho)=B(1)+\int_{1}^{\rho}\frac{b(z)}{z^{2}}dz\), see e.g. [20, Section 11.19].
### Limit in the coupled momentum equation
The limit in equation (71) is more involved. The terms integrated over \(\Gamma_{T}\) are linear and their limits are straightforward. Regarding the terms integrated over \(Q_{T}\), we start similarly to Section 7.3 and deduce from the continuity equation that
\[\|\varepsilon\nabla\rho_{\varepsilon}\|_{L^{\frac{2a}{a}}(Q_{T})}\leq C(\delta) \tag{78}\]
and we use this information to estimate
\[\|\partial_{t}((\delta+\rho_{\varepsilon})\mathbf{u}_{\varepsilon})\|_{(L^{20}_{\#}(0,T,W^{2,p}_{\#}(\Omega)))^{*}}\leq C(\delta).\]
The continuity equation implies a similar estimate for the time derivative of the density, namely
\[\|\partial_{t}\rho_{\varepsilon}\|_{(L^{\frac{10}{\sigma}}_{\#}(0,T,W^{1,2}_{\#}(\Omega)))^{*}}\leq C(\delta).\]
Using this information and the fact that the sequence of velocities is bounded in \(L^{4}(Q_{T})\) we get in particular that
\[\left|\int_{Q_{T}}\partial_{t}\rho_{\varepsilon}\mathbf{u}_{\varepsilon} \cdot\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C(\delta)\]
for any \(\boldsymbol{\varphi}\in L^{20}_{\#}(0,T,W^{2,p}_{\#}(\Omega))\). Therefore we obtain
\[\delta\|\partial_{t}\mathbf{u}_{\varepsilon}\|_{(L^{20}_{\#}(0,T,W^{2,p}_{\#}(\Omega)))^{*}} \leq\|(\delta+\rho_{\varepsilon})\partial_{t}\mathbf{u}_{ \varepsilon}\|_{(L^{20}_{\#}(0,T,W^{2,p}_{\#}(\Omega)))^{*}}\] \[\leq\|\partial_{t}((\delta+\rho_{\varepsilon})\mathbf{u}_{ \varepsilon})\|_{(L^{20}_{\#}(0,T,W^{2,p}_{\#}(\Omega)))^{*}}+\|\mathbf{u}_{ \varepsilon}\partial_{t}\rho_{\varepsilon}\|_{(L^{20}_{\#}(0,T,W^{2,p}_{\#} (\Omega)))^{*}}\leq C(\delta).\]
This bound together with the Aubin-Lions lemma is enough to pass to the limit in the term \(\delta\int_{Q_{T}}|\mathbf{u}|^{2}\mathbf{u}\cdot\boldsymbol{\varphi}\, \mathrm{d}\mathbf{y}\mathrm{d}t\). We also obtain similar convergences as in (59) and (60), where we combine the latter with the fact that
\[\mathbf{u}_{\varepsilon}\otimes\mathbf{u}_{\varepsilon}\to\mathbf{u}\otimes \mathbf{u}\quad\text{ in }L^{p}(Q_{T})\text{ for some }p>1\]
to pass to the limit in the convective term.
The only remaining term whose limit has not been properly identified is the pressure term. Regarding this term, we first observe that when deriving (68), we proved that \(\rho_{\varepsilon}^{a}\) has better-than-\(L^{1}\) integrability in the interior of the domain \(Q_{T}^{\eta}\). However, it is still possible that \(\{\rho_{\varepsilon}\}_{\varepsilon>0}\) generates concentrations near the elastic boundary. We define
\[\varphi_{h}^{\varepsilon}(t,x,z):=\begin{cases}\frac{z-\eta_{\varepsilon}(t,x )}{h},&\text{ for }\eta_{\varepsilon}(t,x)<z<\eta_{\varepsilon}(t,x)+h,\\ -\frac{1}{H-h}(z-(\eta_{\varepsilon}(t,x)+h))+1,&\text{ for }\eta_{\varepsilon}(t,x)+h<z< \eta_{\varepsilon}(t,x)+2H-h,\\ \frac{z-(\eta_{\varepsilon}(t,x)+2H)}{h},&\text{ for }\eta_{\varepsilon}(t,x)+2H-h<z< \eta_{\varepsilon}(t,x)+2H.\end{cases}\]
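A quick check of this construction (ours, for the reader's convenience): at the breakpoints \(z=\eta_{\varepsilon},\,\eta_{\varepsilon}+h,\,\eta_{\varepsilon}+2H-h,\,\eta_{\varepsilon}+2H\) the three pieces take the matching values
\[\varphi_{h}^{\varepsilon}=0,\qquad 1,\qquad-1,\qquad 0,\]
respectively (both one-sided limits agree at the interior breakpoints), so \(\varphi_{h}^{\varepsilon}\) is continuous and piecewise affine in \(z\), with \(\partial_{z}\varphi_{h}^{\varepsilon}=\frac{1}{h}\) in the two layers of width \(h\) near the boundary and \(\partial_{z}\varphi_{h}^{\varepsilon}=-\frac{1}{H-h}\) in between.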
We choose \(\boldsymbol{\varphi}=\varphi_{h}^{\varepsilon}\mathbf{e}_{2}\) in (71) (with \(\psi=0\)) and we compute similarly as in (35) to get
\[\int_{0}^{T}\int_{\{\eta<z<\eta+h\}\cup\{\eta+2H-h<z<\eta+2H\}}(\rho_{ \varepsilon}^{\gamma}+\delta\rho_{\varepsilon}^{a})\,\mathrm{d}\mathbf{y} \mathrm{d}t\leq C(\delta)h^{s}, \tag{79}\]
for some \(s>0\). Indeed, to obtain this kind of estimate it is enough to observe that all arising terms have better-than-\(L^{1}\) integrability in the space variable. Here, in particular, we use (78) once again.
Estimate (79) means that the sequence \(\{\rho_{\varepsilon}^{\gamma}+\delta\rho_{\varepsilon}^{a}\}_{\varepsilon>0}\) is uniformly integrable, so there exists its weak limit in \(L^{1}(Q_{T})\), denoted \(\overline{p_{\delta}(\rho)}\). In order to identify \(\overline{p_{\delta}(\rho)}\), one can use the standard approach on compact subsets of \(Q_{T}^{\eta}\) based on the convergence of the effective viscous flux, the renormalized continuity equation and a monotonicity argument (see [20]) in order to conclude that
\[\rho_{\varepsilon}\to\rho,\quad\text{a.e. in }Q_{T}.\]
This is enough to identify \(\overline{p_{\delta}(\rho)}\) as \(\rho^{\gamma}+\delta\rho^{a}\).
Finally, let us point out that the kinematic coupling \(\partial_{t}\eta\mathbf{e}_{2}=\gamma_{|\Gamma^{\eta}}\mathbf{u}\) is recovered due to the bound (70). We have proved that the limit functions \((\rho,\mathbf{u},\eta)\) satisfy
\[\int_{Q_{T}}(\delta+\rho)\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}}(\rho\mathbf{u}\otimes\mathbf{u}):\nabla\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{Q_{T}}(\rho^{\gamma}+\delta\rho^{a})(\nabla\cdot\boldsymbol{\varphi})\,\mathrm{d}\mathbf{y}\mathrm{d}t\\ -\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t-\delta\int_{Q_{T}}|\mathbf{u}|^{2}\mathbf{u}\cdot\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t-\int_{\Gamma_{T}}\eta_{t}\psi_{t}\,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}}\eta_{xx}\psi_{xx}\,\mathrm{d}x\mathrm{d}t-\int_{\Gamma_{T}}\eta_{tx}\psi_{x}\,\mathrm{d}x\mathrm{d}t\\ =-\int_{\Gamma_{T}}f\psi\,\mathrm{d}x\mathrm{d}t-\int_{Q_{T}}\rho\mathbf{F}_{\delta}\cdot\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t \tag{80}\]
for all \(\boldsymbol{\varphi}\in C^{\infty}_{\#}(Q_{T})\) and \(\psi\in C^{\infty}_{\#,0}(\Gamma_{T})\) such that \(\boldsymbol{\varphi}(t,x,\hat{\eta}(t,x))=\psi(t,x)\mathbf{e}_{2}\) on \(\Gamma_{T}\).
### Limit in the energy inequality
Our aim here is to pass to the limit in (62), where \(\phi\in C^{\infty}_{\#}(0,T)\), \(\phi\geq 0\). First, it is easy to pass to the limit on the right hand side, in particular the last two terms converge to zero. On the left hand side we simply discard the penalization term
\[\frac{1}{\varepsilon}\int_{\Gamma_{T}}\phi|\mathbf{v}_{\varepsilon}-(\eta_{\varepsilon})_{t}\mathbf{e}_{2}|^{2}\,\mathrm{d}x\mathrm{d}t,\]
because it is obviously non-negative. We apply the same argument for the terms
\[\varepsilon\gamma\int_{Q_{T}}\phi\rho_{\varepsilon}^{\gamma-2}|\nabla\rho_{ \varepsilon}|^{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\varepsilon\delta a\int_{Q _{T}}\phi\rho_{\varepsilon}^{a-2}|\nabla\rho_{\varepsilon}|^{2}\,\mathrm{d} \mathbf{y}\mathrm{d}t.\]
The uniform bounds (69) and (64) imply that
\[\frac{\varepsilon\gamma}{\gamma-1}\int_{Q_{T}}\phi\rho_{\varepsilon}^{\gamma}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\frac{\varepsilon\delta a}{a-1}\int_{Q_{T}}\phi\rho_{\varepsilon}^{a}\,\mathrm{d}\mathbf{y}\mathrm{d}t\to 0.\]
Next, we use the weak lower semicontinuity of convex functions to pass to the limit in the terms
\[\int_{Q_{T}}\phi\,\mathbb{S}(\nabla\mathbf{u}_{\varepsilon}):\nabla\mathbf{u}_{\varepsilon}\,\mathrm{d}\mathbf{y}\mathrm{d}t,\qquad\delta\int_{Q_{T}}\phi|\mathbf{u}_{\varepsilon}|^{4}\,\mathrm{d}\mathbf{y}\mathrm{d}t\qquad\text{and}\qquad\int_{\Gamma_{T}}\phi|(\eta_{\varepsilon})_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t.\]
We sum up (82) and (83) and by (81) we deduce
\[\frac{1}{2}\int_{\Gamma_{T}}|\partial_{t}\eta_{\varepsilon}|^{2}\phi_{t}(t)\, \mathrm{d}x\mathrm{d}t\to\frac{1}{2}\int_{\Gamma_{T}}|\partial_{t}\eta|^{2} \phi_{t}(t)\,\mathrm{d}x\mathrm{d}t.\]
Thus, \((\rho,\mathbf{u},\eta)\) satisfies
\[-\int_{0}^{T}\phi_{t}(t)E_{\delta}(t)\,\mathrm{d}t+\int_{Q_{T}}\phi\,\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\delta\int_{Q_{T}}\phi|\mathbf{u}|^{4}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{\Gamma_{T}}\phi|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t\\ \leq\int_{\Gamma_{T}}\phi f\eta_{t}\,\mathrm{d}x\mathrm{d}t+\int_{Q_{T}}\phi\rho\mathbf{u}\cdot\mathbf{F}_{\delta}\,\mathrm{d}\mathbf{y}\mathrm{d}t \tag{84}\]
for all \(\phi\in C^{\infty}_{\#}(0,T)\), \(\phi\geq 0\).
### Estimates independent of \(\delta\)
At this point, one can adjust the calculations from Section 4 to take into account the terms with \(\delta\) in (80) in order to deduce estimates independent of \(\delta\). We only list the main changes with respect to Section 4 here. The starting point is the energy inequality (84), where we first use the test function \(\phi=1\) and follow Section 4.1 to get
\[\delta\|\mathbf{u}\|_{L^{4}(Q_{T})}^{4}+\|\mathbf{u}\|_{L^{2}(0,T;H^{1}( \Omega))}^{2}+\|\eta_{t}\|_{L^{2}(0,T;H^{1}(\Gamma))}^{2}\leq C(\kappa)(1+ \mathcal{E}_{\delta}^{\kappa}). \tag{85}\]
Next, using the notation for \(E_{\delta}(t)\) and \(\mathcal{E}_{\delta}\) introduced in (49) and (64) respectively, we take a sequence of test functions \(\phi_{k}\to\chi_{[s,t]}\), pass to the limit \(k\to\infty\) and, using the calculations of Section 4.2, we get
\[\mathcal{E}_{\delta}\leq C_{0}\left(1+\int_{0}^{T}E_{\delta}(s)\,\mathrm{d}s \right).\]
All terms are handled similarly to their counterparts in Section 4.3; there are, however, two additional terms with respect to (20). These are treated as follows
\[\delta\left|\int_{Q_{T}}|\mathbf{u}|^{2}\mathbf{u}\cdot\eta\mathbf{e}_{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq\delta\|\mathbf{u}\|_{L^{4}(Q_{T})}^{3}\|\eta\|_{L^{4}(\Gamma_{T})}\\ \leq C(\kappa)(1+\mathcal{E}_{\delta}^{\frac{3\kappa}{4}})(\|\eta_{t}\|_{L^{2}(0,T;L^{4}(\Gamma))}+\|\eta\|_{L^{2}(0,T;L^{4}(\Gamma))})\leq C(\kappa)(1+\mathcal{E}_{\delta}^{\frac{3\kappa}{4}})+\frac{1}{8}\|\eta_{xx}\|_{L^{2}(\Gamma_{T})}^{2},\]
and
\[\delta\left|\int_{Q_{T}}\mathbf{u}\cdot\eta_{t}\mathbf{e}_{2}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C\|\mathbf{u}\|_{L^{2}(0,T;L^{4}(\Omega^{\eta}(t)))}\|\eta_{t}\|_{L^{2}(0,T;L^{\infty}(\Gamma))}\leq C(\kappa)\left(1+\mathcal{E}_{\delta}^{\kappa}\right).\]
Eventually we recover
\[\int_{\Gamma_{T}}|\eta_{xx}|^{2}\,\mathrm{d}x\mathrm{d}t\leq C(\kappa)(1+ \mathcal{E}_{\delta}^{3\kappa}).\]
Finally, (26) contains the additional term \(\delta\int_{Q_{T}^{\eta}}\rho^{a+\alpha}\,\mathrm{d}\mathbf{y}\mathrm{d}t\) on the left hand side and four more terms on the right hand side. Two terms arise from the \(\delta\rho^{a}\) in the pressure and these are estimated exactly as in (27) and (28). Next, similarly as in (29)
\[\delta\left|\int_{Q_{T}^{\eta}}\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi}_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C(\kappa)\left(1+\mathcal{E}_{\delta}^{\frac{\alpha}{3}+\kappa}\right),\]
and
\[\delta\left|\int_{Q_{T}^{\eta}}|\mathbf{u}|^{2}\mathbf{u}\cdot\boldsymbol{\varphi}_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\right|\leq C(1+\mathcal{E}_{\delta}^{\frac{3\kappa}{4}}).\]
We then continue as in Section 4.4 and end up with (30); thanks to the choice of parameters \(\alpha,\kappa\), we get (32). We want a similar bound also for \(\delta\rho^{a}\); however, we cannot use the same combination of parameters \(\alpha\) and \(\kappa\), because the inequality (31) might not hold if \(\gamma\) is replaced by \(a\). Therefore, we next set \(\bar{\kappa}:=\frac{1}{5(a-1)}\) and \(\bar{\alpha}:=\frac{2}{5}\), repeat the calculations of Sections 4.1-4.3 and Section 4.4 in order to deduce
\[\delta\int_{0}^{T}\int_{\{\eta+h<z<\eta+2H-h\}}\rho^{a+\bar{\alpha}}\,\mathrm{d}\mathbf{y}\mathrm{d}t\leq\delta\int_{Q_{T}^{\eta}}\rho^{a+\bar{\alpha}}\phi_{h}\,\mathrm{d}\mathbf{y}\mathrm{d}t\leq C(\bar{\kappa})\left(1+\mathcal{E}_{\delta}^{1+\frac{3\bar{\kappa}}{2}}\right).\]
By interpolation
\[\delta\int_{0}^{T}\int_{\{\eta+h<z<\eta+2H-h\}}\rho^{a}\,\mathrm{d}\mathbf{y}\mathrm{d}t\leq C(\bar{\kappa})\left(1+\mathcal{E}_{\delta}^{1-\bar{\kappa}^{\prime}}\right), \tag{86}\]
where
\[\bar{\kappa}^{\prime}:=1-\left(1+\frac{3\bar{\kappa}}{2}\right)\frac{a-1}{a+ \bar{\alpha}-1}.\]
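A quick arithmetic check (ours) confirms that this exponent is positive for any \(a\geq 2\): with \(\bar{\kappa}=\frac{1}{5(a-1)}\) and \(\bar{\alpha}=\frac{2}{5}\),
\[\left(1+\frac{3\bar{\kappa}}{2}\right)\frac{a-1}{a+\bar{\alpha}-1}=\frac{a-1+\frac{3}{10}}{a-\frac{3}{5}}=\frac{10a-7}{10a-6}<1,\qquad\text{so}\qquad\bar{\kappa}^{\prime}=\frac{1}{10a-6}>0.\]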
We continue with estimates of the pressure near the boundary using the function (33). Again, we encounter some additional terms in equation (34). To be more precise, terms \(\delta\rho^{a}\) appear both on the left hand side and in the first term on the right hand side. The left hand side provides the information we seek, while the term on the right hand side is bounded using (86). The integrals of \(\delta\mathbf{u}\cdot\partial_{t}(\varphi_{h}\mathbf{e}_{2})\) and \(\delta|\mathbf{u}|^{2}\mathbf{u}\cdot(\varphi_{h}\mathbf{e}_{2})\) yield the powers \(\mathcal{E}_{\delta}^{\kappa}\) and \(\mathcal{E}_{\delta}^{\frac{1}{2}}\), respectively. Hence, we conclude that there exists \(\kappa^{\prime\prime}>0\) such that
\[\int_{Q_{T}}(\rho^{\gamma}+\delta\rho^{a})\,\mathrm{d}\mathbf{y}\mathrm{d}t\leq C\left(1+\mathcal{E}_{\delta}^{1-\kappa^{\prime\prime}}\right).\]
Finally, in Section 4.5 we estimate \(\delta\int_{Q_{T}}|\mathbf{u}|^{2}\) by (85) and we obtain
\[\mathcal{E}_{\delta}\leq C,\qquad\int_{Q_{T}}\mathbb{S}(\nabla\mathbf{u}):\nabla\mathbf{u}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\delta\int_{Q_{T}}|\mathbf{u}|^{4}\,\mathrm{d}\mathbf{y}\mathrm{d}t+\int_{\Gamma_{T}}|\eta_{tx}|^{2}\,\mathrm{d}x\mathrm{d}t\leq C. \tag{87}\]
Similarly to Section 7.7, we obtain
\[\|\eta\|_{L^{2}(0,T;H^{2+s}(\Gamma))}^{2}\leq C, \tag{88}\]
for some \(s>0\).
## 9 Limit \(\delta\to 0\)
Denote the solution obtained in the previous section as \((\rho_{\delta},\mathbf{u}_{\delta},\eta_{\delta})\). The goal is to pass to the limit \(\delta\to 0\) to conclude that the limiting functions \((\rho,\mathbf{u},\eta)\) represent a weak solution in the sense of Definition 2.1. The uniform estimates deduced in Section 8.4 give rise to the following convergences
\[\rho_{\delta}\rightharpoonup\rho\quad\text{ weakly}^{*}\text{ in }L^{\infty}_{\#}(0,T;L^{\gamma}_{\#}(\Omega)),\] \[\mathbf{u}_{\delta}\rightharpoonup\mathbf{u}\quad\text{ weakly in }L^{2}_{\#}(0,T;H^{1}_{\#}(\Omega)),\] \[\eta_{\delta}\rightharpoonup\eta\quad\text{ weakly}^{*}\text{ in }L^{\infty}_{\#}(0,T;H^{2}_{\#}(\Gamma))\quad\text{ and weakly in }H^{1}_{\#}(0,T;H^{1}_{\#,0}(\Gamma)).\]
### Limit in the continuity equation
We employ standard arguments from the existence theory of weak solutions to the compressible Navier-Stokes equations (see e.g. [20]) to deduce that the functions \(\rho\) and \(\mathbf{u}\) satisfy the continuity equation in the weak sense, i.e.
\[\int_{Q_{T}}\rho(\partial_{t}\varphi+\mathbf{u}\cdot\nabla\varphi)\,\mathrm{d}\mathbf{y}\mathrm{d}t=0\]
for all \(\varphi\in C^{\infty}_{\#}(Q_{T})\). The validity of the renormalized continuity equation remains open at this moment since \(\rho\) may not possess enough regularity to use a direct argument.
### Limit in the coupled momentum equation
First, the kinematic coupling \(\mathbf{u}(t,x,\hat{\eta}(t,x))=\eta_{t}(t,x)\mathbf{e}_{2}\) is recovered using Lemma 6.1. Our aim is to pass to the limit \(\delta\to 0\) in (80). Once again, the terms integrated over \(\Gamma_{T}\) are linear and therefore their limits are straightforward. Estimates (87) are enough to identify \(0\) as the limit of the terms \(\int_{Q_{T}}\delta\mathbf{u}\cdot\partial_{t}\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t\) and \(\int_{Q_{T}}\delta|\mathbf{u}|^{2}\mathbf{u}\cdot\boldsymbol{\varphi}\,\mathrm{d}\mathbf{y}\mathrm{d}t\). The limit in the last term on the right hand side is easy. In the remaining terms we follow the existence theory of weak solutions to the compressible Navier-Stokes equations; the main task is to deduce the limit in the pressure term, which is closely related to the validity of the renormalized continuity equation. Both issues are resolved by means of the effective viscous flux identity and the boundedness of the oscillations defect measure. We obtain the pointwise convergence \(\rho_{\delta}\to\rho\) a.e. in \(Q_{T}\) and thus recover both (9) and (10).
### Limit in the energy inequality
Finally, we need to pass to the limit in (84) in order to prove (11). The limits of the terms on the right hand side are simple. On the left hand side we simply discard the term \(\delta\int_{Q_{T}}\phi|\mathbf{u}|^{4}\), since it is surely nonnegative, and for the second and fourth terms on the left hand side we use the lower semicontinuity of convex functions. Therefore, it remains to deal with the first term on the left hand side. First, the kinetic energy term is treated in the same way as the convective term in the coupled momentum equation. Next, it is easy to use (87) to show that the term containing \(\delta|\mathbf{u}|^{2}\) vanishes in the limit. Pointwise convergence of the densities allows us to pass to the limit in the pressure terms of \(E_{\delta}\). The improved estimate (88) allows us to pass to the limit in the last term of \(E_{\delta}\), while a procedure similar to (82)-(83) provides the necessary information to pass to the limit in the \(|\eta_{t}|^{2}\) term of \(E_{\delta}\). Thus we recover (11). The validity of (12) follows from the calculations in Section 4, with the starting point being the energy inequality (11).
**Acknowledgments:** The work of O. K., V. M. and S. N. was supported by the Praemium Academiae of Š. Nečasová and by the Czech Science Foundation (GAČR) through project GA22-01591S. The Institute of Mathematics, Czech Academy of Sciences, is supported by RVO:67985840.
|
2303.10774 | Cross-GAN Auditing: Unsupervised Identification of Attribute Level
Similarities and Differences between Pretrained Generative Models | Generative Adversarial Networks (GANs) are notoriously difficult to train
especially for complex distributions and with limited data. This has driven the
need for tools to audit trained networks in human intelligible format, for
example, to identify biases or ensure fairness. Existing GAN audit tools are
restricted to coarse-grained, model-data comparisons based on summary
statistics such as FID or recall. In this paper, we propose an alternative
approach that compares a newly developed GAN against a prior baseline. To this
end, we introduce Cross-GAN Auditing (xGA) that, given an established
"reference" GAN and a newly proposed "client" GAN, jointly identifies
intelligible attributes that are either common across both GANs, novel to the
client GAN, or missing from the client GAN. This provides both users and model
developers an intuitive assessment of similarity and differences between GANs.
We introduce novel metrics to evaluate attribute-based GAN auditing approaches
and use these metrics to demonstrate quantitatively that xGA outperforms
baseline approaches. We also include qualitative results that illustrate the
common, novel and missing attributes identified by xGA from GANs trained on a
variety of image datasets. | Matthew L. Olson, Shusen Liu, Rushil Anirudh, Jayaraman J. Thiagarajan, Peer-Timo Bremer, Weng-Keen Wong | 2023-03-19T21:54:13Z | http://arxiv.org/abs/2303.10774v2 | Cross-GAN Auditing: Unsupervised Identification of Attribute Level Similarities and Differences between Pretrained Generative Models
###### Abstract
Generative Adversarial Networks (GANs) are notoriously difficult to train, especially for complex distributions and with limited data. This has driven the need for tools to audit trained networks in human intelligible format, for example, to identify biases or ensure fairness. Existing GAN audit tools are restricted to coarse-grained, model-data comparisons based on summary statistics such as FID or recall. In this paper, we propose an alternative approach that compares a newly developed GAN against a prior baseline. To this end, we introduce _Cross-GAN Auditing_ (xGA) that, given an established "reference" GAN and a newly proposed "client" GAN, jointly identifies intelligible attributes that are either _common_ across both GANs, _novel_ to the client GAN, or _missing_ from the client GAN. This provides both users and model developers an intuitive assessment of similarity and differences between GANs. We introduce novel metrics to evaluate attribute-based GAN auditing approaches and use these metrics to demonstrate quantitatively that xGA outperforms baseline approaches. We also include qualitative results that illustrate the common, novel and missing attributes identified by xGA from GANs trained on a variety of image datasets1.
Footnote 1: Source code is available at [https://github.com/mattolson93/cross_gan_auditing](https://github.com/mattolson93/cross_gan_auditing)
## 1 Introduction
Generative Adversarial Networks (GANs) [19, 20, 12, 11] have become ubiquitous in a range of high impact commercial and scientific applications [13, 6, 8, 9, 10]. With this pro
life use comes a growing need for investigative tools that are able to evaluate, characterize and differentiate one GAN model from another, especially since such differences can arise from a wide range of factors - biases in training data, model architectures and hyper parameters used in training etc. In practice, this has been mostly restricted to comparing two or more GAN models against the dataset they were trained on using summary metrics such as Frechet Inception Distance (FID) [16] and precision/recall [20] scores.
However, in many real world scenarios, different models may not even be trained on the same dataset, thereby making such summary metrics incomparable. More formally, if we define the model comparison problem as one being between a known - and presumably well vetted - _reference_ GAN and a newly developed _client_ GAN. For example, the reference GANs can correspond to models purchased from public market places such as AWS [1], Azure [3], or GCP [2], or to community-wide standards. Furthermore, there is a critical need for more fine-grained, interpretable, investigative tools in the context of fairness and accountability. Broadly, these class of methods can be studied under the umbrella of AI model _auditing_[4, 7, 32]. Here, the interpretability is used in the context to indicate that the proposed auditing result will involves of human intelligible attributes, rather than summary statistic that do not have explicit association with meaningful semantics.
While auditing classifiers has received much attention in the past [32], GAN auditing is still a relatively new research problem with existing efforts focusing on model-data comparisons, such as identifying how faithfully a GAN recovers the original data distribution [4]. In contrast, we are interested in developing a more general framework that enables a user to visually audit a "client" GAN model with respect the "reference". This framework is expected to support different kinds of auditing tasks: (i) comparing different GAN models trained on the same dataset (e.g. StyleGAN3-Rotation and StyleGAN3-Translate on FFHQ); (ii) comparing models trained on datasets with different biases (e.g., StyleGAN with race imbalance vs StyleGAN with age imbalance); and finally (iii) comparing models trained using datasets that contain challenging distribution shifts (e.g., CelebA vs Toons). Since these tools are primarily intended for human experts and auditors, interpretability is critical. Hence, it is natural to perform auditing in terms of human intelligible attributes. Though there has been encouraging progress in automatically discovering such attributes from a single GAN in the recent years [40, 43, 28, 39, 14] they are not applicable to our setting with multiple GANs.
**Proposed work** We introduce cross-GAN auditing (xGA), an unsupervised approach for identifying attribute similarities and differences between client GANs and reference models (which could be pre-trained and potentially unrelated). Since the GANs are trained independently, their latent spaces are disparate and encode different attributes, and thus they are not directly comparable. Consequently, discovering attributes is only one part of the solution; we also need to 'align' humanly meaningful and commonly occurring attributes across the individual latent spaces.
Our audit identifies three distinct sets of attributes: (a) common: attributes that exist in both client and reference models; (b) novel: attributes encoded only in the client model; (c) missing: attributes present only in the reference. In order to identify common attributes, xGA exploits the fact that shared attributes should induce similar changes in the resulting images across both the models. On the other hand, to discover novel/missing attributes, xGA leverages the key insight that attribute manipulations unique to one GAN can be viewed as out of distribution (OOD) to the other GAN. Using empirical studies with a variety of StyleGAN models and benchmark datasets, we demonstrate that xGA is effective in providing a fine-grained characterization of generative models.
**Contributions** (i) We present the first cross-GAN auditing framework that uses a unified, attribute-centric method to automatically discover common, novel, and missing attributes from two or more GANs; (ii) Using an external, robust feature space for optimization, xGA produces high-quality attributes and achieves effective alignment even across challenging distribution shifts; (iii) We introduce novel metrics to evaluate attribute-based GAN auditing approaches; and (iv) We evaluate xGA using StyleGANs trained on CelebA, AFHQ, FFHQ, Toons, Disney and MetFaces, and also provide a suite of controlled experiments to evaluate cross-GAN auditing methods.
## 2 Related Work
**Attribute Discovery** Several approaches have been successful in extracting attribute directions in StyleGAN's latent space in the past few years. InterfaceGAN [33] used an external classifier and human annotations to label sampled images in order to build a simple linear model that captures the attribute direction in a GAN's latent space. GANSpace [14] applies PCA to intermediate feature representations to find the largest factors of variation and then projects these directions onto a GAN's latent space. Similarly, SeFa [34] directly captures these directions via matrix factorization of the affine mapping weights in StyleGAN, which identifies directions of large change without the need to sample the latent space. An alternative strategy is to directly learn the interpretable directions through a jointly-trained predictive model, by assuming that the more predictive variations are more likely to be semantically meaningful [39], or that a Hessian penalty [28] or Jacobian regularization [40] in the image space enables learning of such directions. LatentCLR [43] used a similar optimization framework, but instead of training a separate predictive model, it leveraged
the GAN's internal representation and adopted a contrastive loss [11] for attribute discovery.
**Model Auditing** With increased awareness of the societal impact of machine learning models, there is growing interest in characterizing and criticizing model behavior under the broad umbrella of auditing [32, 42]. There has been relatively little work on auditing generative models. For example, [4] introduce a new performance metric for generative models that measures fidelity, diversity, and generalization. Another related work is from Bau et al. [7], who investigate what a GAN cannot generate, whereas our interest is in distinguishing a client GAN from a reference GAN.
**Interpretation of Domain Shift** Some of the most closely related work comes from methods that aim to characterize domain shift [26, 27], but these methods are limited to specific settings: they either rely on human intervention [26] or require a disentangled generator as input [27]. An indirect way to obtain aligned attributes is via _aligned GANs_ - GANs where one is fine-tuned from the other [41, 29]. Here, the attribute directions are inherent to the child models, eliminating the need for joint discovery to identify similar attributes. However, obtaining an _aligned GAN_ through a separate fine-tuning process for attribute discovery across distributions is neither practical nor feasible.
## 3 Methods
We approach GAN auditing as performing attribute-level comparison to a reference GAN. For simplicity, we consider the setup where there is a single reference and client model to perform auditing, though xGA can be used even with multiple reference or client models (see experiments). Let us define the reference and client generators as \(\mathcal{G}_{r}:\mathcal{Z}_{r}\mapsto\mathcal{X}_{r}\) and \(\mathcal{G}_{c}:\mathcal{Z}_{c}\mapsto\mathcal{X}_{c}\) respectively. Here, \(\mathcal{Z}_{r}\) and \(\mathcal{Z}_{c}\) refer to the corresponding latent spaces and the generators are trained to approximate the data distributions \(P_{r}(\mathrm{x})\) and \(P_{c}(\mathrm{x})\). Our formulation encompasses the scenario where \(P_{r}(\mathrm{x})=P_{c}(\mathrm{x})\) but the model architectures are different, or the challenging setting of \(P_{r}(\mathrm{x})\neq P_{c}(\mathrm{x})\) (e.g., CelebA faces vs Met Faces datasets).
The key idea of xGA is to audit a client model \(\mathcal{G}_{c}\) via attribute (i.e., directions in the latent space) comparison to a reference model, in lieu of computing summary scores (e.g., FID, recall) from the synthesized images. In order to enable a fine-grained, yet interpretable, analysis of GANs, xGA performs automatic discovery and categorization of latent attributes: (i) _common_: attributes that are shared by both the models; (ii) _missing_: attributes that are captured by \(\mathcal{G}_{r}\), but not \(\mathcal{G}_{c}\); (iii) _novel_: attributes that are encoded in \(\mathcal{G}_{c}\) but not observed in the reference model. We express this new categorization scheme in figure 2. Together, these latent attributes can provide a holistic characterization of GANs, while circumventing the need for customized metrics or human-centric analysis.
**Latent attributes**: Following state-of-the-art approaches such as LatentCLR [43], we define attributes as direction vectors in the latent space of a GAN. For any sample \(\mathrm{z}\in\mathcal{Z}_{c}\) and a direction vector \(\delta_{n}\), we can induce attribute-specific manipulation to the corresponding image as
\[\mathcal{D}:(\mathrm{z},\delta_{n})\rightarrow\mathrm{z}+\alpha\delta_{n}, \text{ where }\delta_{n}=\frac{\mathbf{M}_{n}\mathrm{z}}{\|\mathbf{M}_{n}\mathrm{z}\|}, \tag{1}\]
for a scalar \(\alpha\), and a learnable matrix \(\mathbf{M}_{n}\). In other words, we consider the attribute change to be a linear model defined by the learnable direction \(\delta_{n}\). The manipulated image can then be obtained as \(\mathcal{G}_{c}(\mathcal{D}(\mathrm{z},\delta_{n}))\), or in shorter notation \(\mathcal{G}_{c}(\mathrm{z},\delta_{n})\). Note that these latent attributes are not pre-specified and are discovered as part of the auditing process.
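To make (1) concrete, the following is a minimal PyTorch sketch of such a learnable direction model; the class name, the per-attribute linear maps, and the fixed \(\alpha\) are our illustrative assumptions rather than the authors' exact implementation (see the linked source code for that).

```python
import torch
import torch.nn as nn

class DirectionModel(nn.Module):
    """Learnable latent directions, delta_n = M_n z / ||M_n z|| (Eq. 1).

    Hypothetical sketch: one learnable matrix M_n per attribute; the
    hyper-parameter alpha controls the edit strength.
    """

    def __init__(self, num_directions: int, latent_dim: int, alpha: float = 3.0):
        super().__init__()
        # One linear map per attribute direction (no bias, as in Eq. 1).
        self.maps = nn.ModuleList(
            nn.Linear(latent_dim, latent_dim, bias=False)
            for _ in range(num_directions)
        )
        self.alpha = alpha

    def forward(self, z: torch.Tensor, n: int) -> torch.Tensor:
        # Conditional direction delta_n(z), normalized to unit length.
        delta = self.maps[n](z)
        delta = delta / delta.norm(dim=-1, keepdim=True)
        # Manipulated latent code: D(z, delta_n) = z + alpha * delta_n.
        return z + self.alpha * delta
```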
### Common Attribute Discovery
Identifying common attributes between the client and reference GAN models is challenging, since it requires that the latent directions are _aligned_, i.e., the exact same semantic change must be induced in unrelated latent spaces. When distilling from a parent model, i.e., training Toons from Faces, attributes appear to align naturally, even under severe distribution shifts [41]. However, this does not hold true when the two models are trained independently, which requires us to solve the joint problem of identifying the attributes as well as explicitly aligning them.
Formally, for a common attribute, we want the semantic change (in the generated images) induced by manipulating any sample \(\mathrm{z}\in\mathcal{Z}_{c}\) along a direction \(\delta\) in the client GAN's latent space to match the change in the direction \(\bar{\delta}\) from the reference GAN's latent space for any \(\bar{\mathrm{z}}\in\mathcal{Z}_{r}\). In other words, \(\mathrm{S}(\mathcal{G}_{c}(\mathrm{z},\delta),\mathcal{G}_{c}(\mathrm{z})) \approx\mathrm{S}(\mathcal{G}_{r}(\bar{\mathrm{z}},\bar{\delta}),\mathcal{G} _{r}(\bar{\mathrm{z}})),\forall\;z\in\mathcal{Z}_{c},\bar{\mathrm{z}}\in \mathcal{Z}_{r}\). Here, \(\mathrm{S}\) denotes an _oracle_ detector (e.g., human subject test) which measures the semantic changes between the original sample and that obtained by manipulating the common attribute.
Figure 2: A table showing the proposed xGA modifications to typical contrastive loss with a simple two attribute model.
However, in practice, such a semantic change detector is not accessible and we need to construct a surrogate mechanism to quantify the alignment, i.e.,
\[\min_{\delta_{n},\bar{\delta}_{n}}\mathcal{L}\bigg{(}\mathcal{G}_{c}(\mathrm{z}, \delta_{n}),\mathcal{G}_{r}(\bar{\mathrm{z}},\bar{\delta}_{n})\bigg{)},\forall \mathrm{z}\in\,\mathcal{Z}_{c},\forall\bar{\mathrm{z}}\in\,\mathcal{Z}_{r}, \tag{2}\]
for a common attribute pair \((\delta_{n},\bar{\delta}_{n})\). Any choice of the loss function \(\mathcal{L}\) must satisfy two key requirements: (a) identify high-quality, latent directions within each of the latent spaces; (b) encourage cross-GAN alignment such that similar attributes end up being strongly correlated under the loss function. For example, in the case of a single GAN, the LatentCLR [43] approach learns distinct directions using a contrastive objective that defines positive samples as those that have all been perturbed in the same direction, while manipulations in all other directions are considered negative2. However, this approach is not suitable for our setting because of a key limitation - alignment requires us to operate in a common feature space so that semantics across the two models are comparable. To address this, we first modify the objective to operate in the latent space of an external, pre-trained feature extractor \(\mathcal{F}\). In order to support alignment even in the scenario where \(P_{c}(\mathrm{x})\neq P_{r}(\mathrm{x})\), we can choose \(\mathcal{F}\) that is robust to commonly occurring distributional shifts.
Footnote 2: Other single GAN methods could be adapted, but LatentCLR’s flexible loss requires less computation without the need to enforce orthogonality at every learning step.
Our approach works on mini-batches of \(B\) samples each, randomly drawn from \(\mathcal{Z}_{c}\) and \(\mathcal{Z}_{r}\) respectively. For the \(i^{\text{th}}\) sample in a mini-batch from \(\mathcal{Z}_{c}\), let us define the vector \(\mathrm{h}_{i}^{n}\) as the divergence between the output of the GAN before and after perturbing along the \(n^{\text{th}}\) latent direction, computed in the feature space of \(\mathcal{F}\), _i.e._, \(\mathrm{h}_{i}^{n}=\mathcal{F}(\mathcal{G}_{c}(\mathrm{z}_{i},\delta_{n}))-\mathcal{F}(\mathcal{G}_{c}(\mathrm{z}_{i}))\). Similarly, we define the divergence \(\bar{\mathrm{h}}_{j}^{n}=\mathcal{F}(\mathcal{G}_{r}(\bar{\mathrm{z}}_{j},\bar{\delta}_{n}))-\mathcal{F}(\mathcal{G}_{r}(\bar{\mathrm{z}}_{j}))\) for the reference GAN. Next, we measure the semantic similarity between the divergence vectors as \(g(\mathrm{h}_{i}^{n},\bar{\mathrm{h}}_{j}^{n})=\exp(\cos(\mathrm{h}_{i}^{n},\bar{\mathrm{h}}_{j}^{n})/\tau)\), where \(\tau\) is the temperature parameter, and \(\cos\) refers to cosine similarity. Now, the loss function for inferring a common attribute can be written as
\[\mathcal{L}_{\text{xent}}(\delta_{n},\bar{\delta}_{n},\lambda_{a})=\\ -\log\frac{\sum\limits_{i=1}^{B}\sum\limits_{j\neq i}^{B}g(\mathrm{h}_{i}^{n},\mathrm{h}_{j}^{n})+g(\bar{\mathrm{h}}_{i}^{n},\bar{\mathrm{h}}_{j}^{n})+\lambda_{a}g(\bar{\mathrm{h}}_{i}^{n},\mathrm{h}_{j}^{n})}{\sum\limits_{i=1}^{B}\sum\limits_{j=1}^{B}\sum\limits_{l=1}^{N}\mathds{1}_{[l\neq n]}\bigg{(}g(\mathrm{h}_{i}^{l},\mathrm{h}_{j}^{n})+g(\bar{\mathrm{h}}_{i}^{l},\bar{\mathrm{h}}_{j}^{n})+g(\bar{\mathrm{h}}_{i}^{l},\mathrm{h}_{j}^{n})\bigg{)}} \tag{3}\]
Here \(N\) denotes the total number of attributes. While the first two terms in the numerator are aimed at identifying distinct attributes from \(\mathcal{G}_{c}\) and \(\mathcal{G}_{r}\), the third term enforces the pair \((\delta_{n},\bar{\delta}_{n})\) to induce similar semantic change. When the \(\lambda_{a}\) parameter is set to \(0\), this optimization reinforces self-similarity of the attributes, without cross-similar semantics. The terms in the denominator are based on the negative pairs (divergences from different latent directions) to enable contrastive training.
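A simplified sketch of how (3) can be computed for a single attribute \(n\) is shown below. The helper averages pairwise similarities over the whole batch (including the diagonal) rather than enumerating pairs exactly as in (3), so it should be read as an approximation of the stated loss, not the authors' implementation; it also assumes \(N>1\) so the negative set is non-empty.

```python
import torch
import torch.nn.functional as F

def xent_align_loss(h, h_bar, n, lambda_a=0.1, tau=0.5):
    """Approximate alignment loss of Eq. (3) for attribute n.

    h, h_bar: [N, B, D] feature-space divergences for the client and
    reference GANs (N attributes, batch size B, feature dim D).
    """
    def g(a, b):
        # exp(cos(a, b) / tau), averaged over all cross-batch pairs.
        sim = F.cosine_similarity(a.unsqueeze(1), b.unsqueeze(0), dim=-1)
        return torch.exp(sim / tau).mean()

    # Numerator: self-similarity within each GAN plus cross-GAN similarity.
    pos = g(h[n], h[n]) + g(h_bar[n], h_bar[n]) + lambda_a * g(h_bar[n], h[n])

    # Denominator: similarity to divergences of all *other* directions.
    neg = sum(
        g(h[l], h[n]) + g(h_bar[l], h_bar[n]) + g(h_bar[l], h[n])
        for l in range(h.shape[0]) if l != n
    )
    return -torch.log(pos / neg)
```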
### Novel & Missing Attribute Discovery
A key component of our GAN auditing framework is the discovery of interpretable attributes that are unique to or missing from the client GAN's latent space. This allows practitioners to understand the novelty and limitations of a GAN model with respect to a well-established reference GAN. To this end, we exploit the key intuition that images synthesized by manipulating an attribute specific to the client model can manifest as out-of-distribution (OOD) to the reference model (and vice versa).
In order to characterize the OOD nature of such realizations, we define a likelihood score in the feature space from \(\mathcal{F}\), which indicates whether a given sample is out of distribution. More specifically, we use the Density Ratio Estimation (DRE) [25, 36] method that seeks to approximate the ratio: \(\gamma(\mathrm{x})=\frac{P(\mathrm{x})}{Q(\mathrm{x})}\) for any sample \(\mathrm{x}\). When the ratio is low, it is likely that \(\mathrm{x}\) is from the distribution \(Q\) and hence OOD to \(P\). We choose DRE, specifically the Kullback-Leibler Importance Estimation Procedure (KLIEP) [37], over other scoring functions because it is known to be highly effective at accurately detecting outliers [24].
We pre-train two separate DRE models to approximate \(\gamma_{c}(\mathrm{z})\), and \(\gamma_{r}(\bar{\mathrm{z}})\), wherein we treat data from \(\mathcal{F}(\mathcal{G}_{c}(\mathrm{z}))\) as \(P\) and \(\mathcal{F}(\mathcal{G}_{r}(\bar{\mathrm{z}}))\) as \(Q\) for the former, and vice versa for the latter. These DRE models are implemented as 2-layer MLP networks, \(f^{\mathrm{c}}_{\text{dre}}(.),f^{\mathrm{r}}_{\text{dre}}(.)\), such that
\[\hat{\gamma}_{c}(\mathrm{z})=f^{c}_{\text{dre}}(\mathcal{F}(\mathcal{G}_{c}(\mathrm{z})))\text{ and }\hat{\gamma}_{r}(\bar{\mathrm{z}})=f^{r}_{\text{dre}}(\mathcal{F}(\mathcal{G}_{r}(\bar{\mathrm{z}}))), \tag{4}\]
where \(\mathcal{F}\) is the same feature extractor from (3). We pass the
Figure 3: A diagram of our xGA model. \(\mathcal{G}_{r}\), \(\mathcal{G}_{c}\), and \(\mathcal{F}\) are fixed pretrained models. \(\delta_{n}\) and \(\bar{\delta}_{n}\) are direction models trained to learn aligned attributes between the two Generators using the features of \(\mathcal{F}\), and \(f_{dre}\) are regularization models for unique attributes.
output of the MLPs through a softplus (\(\varphi(\mathrm{x})=\log(1+e^{\mathrm{x}})\)) function to ensure non-negativity. As stated previously, we use the KLIEP method to train the DRE models. Following Section 4.1 of [24], the KLIEP loss used for training is defined as:
\[\mathcal{L}_{\text{KLIEP}}^{c}=\frac{1}{T_{2}}\sum_{j=1}^{T_{2}}\hat{\gamma}_{c }\left(\bar{\mathrm{z}}_{j}\right)-\frac{1}{T_{1}}\sum_{i=1}^{T_{1}}\ln\hat{ \gamma}_{c}(\mathrm{z}_{i}), \tag{5}\]
where \(\bar{\mathrm{z}}_{j}\) and \(\mathrm{z}_{i}\) are random samples drawn from the latent spaces \(\mathcal{Z}_{r}\) and \(\mathcal{Z}_{c}\) respectively (with \(T_{1}\) and \(T_{2}\) total samples). Similarly, we can define the KLIEP loss term for the reference model as:
\[\mathcal{L}_{\text{KLIEP}}^{r}=\frac{1}{T_{1}}\sum_{i=1}^{T_{1}}\hat{\gamma}_ {r}\left(\mathrm{z}_{i}\right)-\frac{1}{T_{2}}\sum_{j=1}^{T_{2}}\ln\hat{ \gamma}_{r}(\bar{\mathrm{z}}_{j}). \tag{6}\]
We also investigated using log-loss functions to train the DRE models, but found them to be consistently inferior to the KLIEP losses (see supplement for details). Finally, we use the pre-trained DRE models to identify novel and missing attributes. For a given attribute \(n\) in the reference GAN, we enforce its uniqueness by utilizing the client DRE model, \(\mathcal{L}_{\text{Unique}}^{r}(\bar{\delta}_{n})=f^{c}_{\text{dre}}(\mathcal{F}(\mathcal{G}_{r}(\bar{\mathrm{z}},\bar{\delta}_{n})))\), and similarly, for a given attribute in the client GAN, we use the reference DRE model, \(\mathcal{L}_{\text{Unique}}^{c}(\delta_{n})=f^{r}_{\text{dre}}(\mathcal{F}(\mathcal{G}_{c}(\mathrm{z},\delta_{n})))\). Note that we interpret the novel attributes of the reference GAN as the missing attributes of the client GAN.
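The following sketch illustrates the DRE component: a 2-layer MLP with a softplus output, trained with the KLIEP objective of (5)-(6). The hidden width is our assumption; the text specifies only the depth and the softplus non-negativity constraint.

```python
import torch
import torch.nn as nn

class DensityRatioMLP(nn.Module):
    """2-layer MLP density-ratio estimator with a softplus output.

    Hidden width (256) is an assumption, not stated in the paper.
    """

    def __init__(self, feat_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Softplus(),  # ensures non-negative ratio estimates
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats).squeeze(-1)

def kliep_loss(ratio_on_p: torch.Tensor, ratio_on_q: torch.Tensor) -> torch.Tensor:
    """KLIEP objective as in Eq. (5): ratio estimates evaluated on samples
    from the numerator distribution P and the denominator distribution Q."""
    return ratio_on_q.mean() - torch.log(ratio_on_p).mean()
```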
### Overall Objective
We now present the overall objective of xGA to identify \(N_{c}\) common, \(N_{n}\) novel and \(N_{m}\) missing attributes simultaneously. Denoting the total number of attributes \(N=N_{c}+\text{max}(N_{n},N_{m})\), the total loss can be written as:
\[\mathcal{L}_{\text{xGA}}=\sum_{n=1}^{N}\mathcal{L}_{\text{xent}}(\delta_{n},\bar{\delta}_{n},\mathds{1}_{[n\leq N_{c}]}\lambda_{a})+\lambda_{b}\bigg{[}\sum_{p=N_{c}+1}^{N_{c}+N_{n}}\mathcal{L}_{\text{Unique}}^{c}(\delta_{p})+\sum_{q=N_{c}+1}^{N_{c}+N_{m}}\mathcal{L}_{\text{Unique}}^{r}(\bar{\delta}_{q})\bigg{]}\]
Here, the hyper-parameter \(\lambda_{b}\) is the penalty for enforcing the attributes between the two latent spaces to be disparate (missing/novel). We set \(g(.,.)=0\) in \(\mathcal{L}_{\text{xent}}\) if one of the direction vectors does not exist (i.e., when \(N_{n}\neq N_{m}\)).
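Putting the pieces together, a hypothetical assembly of \(\mathcal{L}_{\text{xGA}}\) could look as follows; it reuses the sketches above (`xent_align_loss` and two `DensityRatioMLP` instances) and assumes the feature-space divergences and manipulated-image features are computed elsewhere in the training loop.

```python
def xga_total_loss(h, h_bar, feats_c, feats_r, dre_c, dre_r,
                   n_common, n_novel, n_missing,
                   lambda_a=0.1, lambda_b=1.0):
    # h, h_bar: [N, B, D] divergences; feats_c[p] / feats_r[q]: features of
    # images manipulated along direction p / q (computed elsewhere).
    total = 0.0
    n_total = n_common + max(n_novel, n_missing)
    for n in range(n_total):
        # The cross-GAN term (lambda_a) is active only for common attributes.
        la = lambda_a if n < n_common else 0.0
        total = total + xent_align_loss(h, h_bar, n, lambda_a=la)
    # Novel client directions should look OOD to the reference DRE ...
    for p in range(n_common, n_common + n_novel):
        total = total + lambda_b * dre_r(feats_c[p]).mean()
    # ... and missing (reference) directions OOD to the client DRE.
    for q in range(n_common, n_common + n_missing):
        total = total + lambda_b * dre_c(feats_r[q]).mean()
    return total
```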
## 4 Experiments
In order to systematically evaluate the efficacy of our proposed GAN audit approach, we consider a suite of GAN models trained using several benchmark datasets. In this section, we present both qualitative and quantitative assessments of xGA, and additional results are included in the Supplementary Material.
### Datasets and GAN Models
For most experiments, we used a StyleGANv2 [20] trained on the CelebA [23] dataset as our reference GAN model. This choice is motivated by its widespread use, by the availability of fine-grained, ground-truth attributes for each of the face images in CelebA, and by the need to ensure that this model is fully independent of the other client GANs (e.g., ToonGAN is fine-tuned from the FFHQ GAN). In one experiment for the AFHQ dataset, we used a StyleGANv2 trained using only _cat_ images from AFHQ as the reference. Also, we considered FFHQ-trained StyleGANv3 [18] and non-StyleGAN architectures such as GANformer [17] for defining the reference (see supplement).
In our empirical study, we constructed a variety of (StyleGANv2) client models and performed xGA: (i) \(5\) trained with different CelebA subsets constructed by excluding images specific to a chosen attribute (hat, glasses, male, female and beard); (ii) \(2\) trained with CelebA subsets constructed by excluding images containing any of a chosen set of attributes (beards\(|\)hats, smiles\(|\)glasses\(|\)ties); (iii) \(3\) transferred GANs for Met Faces, cartoons [5], and Disney images [5] respectively.
### Training Settings
In all our experiments, xGA training is carried out for \(10,000\) iterations with random samples drawn from \(\mathcal{Z}_{c}\) and \(\mathcal{Z}_{r}\). We fixed the desired number of attributes to be \(N_{c}=12\), \(N_{n}=4\) and \(N_{m}=4\). Note, this choice was to enable training xGA on a single 15GB Tesla T4 GPU. With the StyleGAN2 models, our optimization takes \(4\) hours; StyleGAN3 takes \(12\) hours due to gradient check-pointing. For all latent directions \(\{\delta_{n}\}\) and \(\{\bar{\delta}_{n}\}\), we set \(\alpha=3\) and this controls how far we manipulate each sample in a given direction. In each iteration, the effective batch size was \(10\), wherein \(2\) samples were used to construct a positive pair and a subset of \(5\) directions were randomly chosen for updating (enforced due to memory constraints). We used the Adam [22] optimizer with learning rate \(0.001\) to update the latent direction parameters. Note, all other model parameters (generators, feature extractor, DRE models) were fixed and never updated. Following common practice with StyleGANs, the attributes are modeled in the style space and the generator's outputs are appropriately resized to fit the size requirements of the chosen feature extractor.
For our optimization objective, we set the hyper-parameter \(\lambda_{a}=0.1\) in \(\mathcal{L}_{\text{xGA}}\). To perform DRE training, we used 2-layer MLPs trained via the Adam optimizer for \(1000\) iterations to minimize the KLIEP losses specified in (5) and (6). At each step, we constructed batches of \(32\) samples from both reference and client GANs, and projected them into the feature space of \(\mathcal{F}\). Lastly, we set \(\lambda_{b}=1.0\); we explore tuning this parameter in the supplement and find the results to be relatively insensitive to it.
### Evaluation: Common Attribute Discovery
We begin by evaluating the ability of xGA to recover common attributes across reference and client models. As mentioned earlier, the choice of the feature extractor is critical for effective alignment. More specifically, \(\mathcal{F}\) must be sufficiently expressive to uncover aligned attributes from both client and reference models. Furthermore, it is important to handle potential distribution shifts across the datasets used to train the GAN models. Hence, a feature extractor that is robust to commonly occurring distribution shifts is expected to achieve effective alignment via (3). In fact, we make the interesting observation that performing attribute discovery in such an external feature space leads to improved disentanglement in the inferred latent directions. For all results reported here, we used a ResNet variant trained to be adversarially robust to style variations [35]. Please refer to the ablation in Section 4.5 for a comparison of different choices.
**Qualitative results** In Figure 4, we show several examples of common attributes identified by xGA for different client-reference pairs; we observe that xGA finds non-trivial attributes. For example, the “sketchify” attribute, which naturally occurs in Met Faces (a dataset of paintings), is surprisingly encoded even in the reference CelebA GAN (which only consists of photos of people). We also show examples of other interesting attributes such as “orange fur” in the case of dog-GAN \(\times\) cat-GAN or “blonde hair” in the case of Disney-GAN \(\times\) CelebA-GAN. These results indicate that our proposed alignment objective, when coupled with a robust feature space, can effectively reveal common semantic directions across the client and reference models. We include several additional examples in the supplement.
**Quantitative results** To perform more rigorous quantitative comparisons, we set up a controlled experiment using \(7\) client models corresponding to different CelebA subsets (obtained by excluding images pertinent to specific characteristics). As discussed earlier, we use a standard CelebA StyleGANv2 as the common reference model across all \(7\) experiments. Next, we introduce a score of merit for common attribute discovery, based on the intuition that images perturbed along the same attribute will result in similar prediction changes when measured through an “oracle” attribute classifier [23].
We first generate a batch of random samples from the latent spaces of client and reference GANs, and manipulate them along a common attribute direction \((\delta_{n},\bar{\delta}_{n})\) inferred using xGA. In other words, we synthesize pairs of original and attribute-manipulated images from the two GANs and for each pair, we measure the discrepancy in the predictions from an "oracle" attribute classifier. Mathematically, this can be expressed as \(\mathrm{a}_{i}^{n}=|\mathcal{C}(\mathcal{G}_{c}(\mathrm{z}_{i},\delta_{n}))- \mathcal{C}(\mathcal{G}_{c}(\mathrm{z}_{i}))|\) and \(\bar{\mathrm{a}}_{j}^{n}=|\mathcal{C}(\mathcal{G}_{r}(\bar{\mathrm{z}}_{j}, \bar{\delta}_{n}))-\mathcal{C}(\mathcal{G}_{r}(\bar{\mathrm{z}}_{j}))|\), where \(\mathcal{C}\) is the attribute classifier trained using the labeled CelebA dataset. Finally, we define an alignment score that compares the expected prediction discrepancy across the two GANs using cosine similarity (higher value indicates alignment).
\[\mathcal{A}_{\text{score}}= \mathbb{E}_{n}\bigg{[}\cos\bigg{(}\mathbb{E}_{i}[\mathrm{a}_{i} ^{n}],\mathbb{E}_{j}[\bar{\mathrm{a}}_{j}^{n}]\bigg{)}\bigg{]}, \tag{7}\]
where the inner expectations are w.r.t. the batch of samples and the outer expectation is w.r.t. the \(N_{c}\) common attributes.
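In array form, the score in (7) is only a few lines. The following is a minimal sketch under assumptions of our own: the function name, array shapes, and the convention that the oracle classifier \(\mathcal{C}\) outputs \(K\) attribute probabilities per image are illustrative, not part of the xGA implementation.

```
import numpy as np

def alignment_score(a_client, a_reference):
    """A_score from Eq. (7).

    a_client:    (N_c, B_c, K) discrepancies |C(G_c(z_i, delta_n)) - C(G_c(z_i))|.
    a_reference: (N_c, B_r, K) analogous discrepancies for the reference GAN,
                 paired with the client along the common-attribute axis n.
    """
    mu_c = a_client.mean(axis=1)       # inner expectation E_i[a_i^n], shape (N_c, K)
    mu_r = a_reference.mean(axis=1)    # inner expectation E_j[abar_j^n]
    cos = (mu_c * mu_r).sum(-1) / (
        np.linalg.norm(mu_c, axis=-1) * np.linalg.norm(mu_r, axis=-1) + 1e-12)
    return cos.mean()                  # outer expectation over the N_c attributes
```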
We implement \(5\) baseline approaches that apply state-of-the-art attribute discovery methods to the client and reference GANs (independently), and subsequently perform greedy, post-hoc alignment. In particular, we consider the SeFa [34], Voynov [39], LatentCLR [43], Jacobian [40], and Hessian [28] methods for attribute discovery. Given the attributes for the two GANs, we use predictions from the “oracle” attribute classifier to measure the degree of alignment between every pair of directions: the pair with the highest cosine similarity score is selected as the first common attribute. Next, we use the remaining latent directions to greedily pick the next attribute, and this process is repeated until we obtain \(N_{c}=12\) attributes. We compute the alignment score from (7)
Figure 4: Visualizing common attributes discovered using xGA for different client-reference GAN pairs. For each case, we illustrate one common attribute (indicated by our description in green) with two random samples from the GAN latent space.
and report results from the \(7\) controlled experiments in Table 1. Interestingly, we find that, despite using the “oracle” classifier for alignment, the performance of the baseline methods is significantly inferior to xGA. This clearly evidences the efficacy of our optimization strategy.
### Evaluation: Novel/Missing Attribute Discovery
In this section, we study the effectiveness of xGA in discovering novel (only present in the client) and missing (only present in the reference model) attributes.
**Qualitative results** We first show results for novel attribute discovery for different client GANs in Figure 5. xGA produces highly intuitive results by identifying attributes that are unlikely to occur in the reference GAN. For example, “cartoon eyes” and “sculptures” are found to be unique to the Disney and Met Faces GANs, when compared to CelebA. Next, we performed missing attribute discovery on the controlled CelebA experiments, where we know precisely which attribute is not encoded by the client GAN w.r.t. the reference (standard CelebA StyleGANv2). As described earlier, the client models are always trained on a subset of the data used by the reference model and, by design, there are no novel attributes. Figure 6 shows examples for the different missing attributes. We find that xGA successfully reveals each of the missing client attributes, even though the data distributions \(P_{c}(\mathrm{x})\) and \(P_{r}(\mathrm{x})\) are highly similar (except for a specific missing attribute).
**Quantitative results** To benchmark xGA in missing attribute discovery, we use the \(7\) controlled CelebA client models and audit with respect to the reference CelebA GAN. We denote the set of attributes (one or more) which are explicitly excluded in each client model by \(\mathcal{M}\). In order to evaluate how well xGA identifies the excluded attributes, we introduce a metric based on mean reciprocal rank (MRR) [30, 38]. For each of the \(N_{m}\) missing attributes from xGA, we compute the average semantic discrepancy from the "oracle" attribute classifier as,
\[\mathrm{a}^{n}=\mathbb{E}_{i}[|\mathcal{C}(\mathcal{G}_{c}(\mathrm{z}_{i}, \delta_{n}))-\mathcal{C}(\mathcal{G}_{c}(\mathrm{z}_{i}))|].\]
Denoting the rank of a missing attribute \(m\in\mathcal{M}\) in the difference vector \(\mathrm{a}^{n}\) as \(\text{rank}(m,\mathrm{a}^{n})\), we can define the attribute recovery (for both missing/novelty) score as:
\[\mathcal{R}_{\text{Score}}=\mathbb{E}_{m}\bigg{[}\max_{n}\bigg{(}\frac{1}{ \text{rank}(m,\mathrm{a}^{n})}\bigg{)}\bigg{]} \tag{8}\]
In Table 1, we show results for missing attribute discovery based on this score. We observe that xGA significantly outperforms all baselines in identifying the missing attribute across the suite of client GANs.
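Computing \(\mathcal{R}_{\text{Score}}\) is equally mechanical. A minimal sketch (again with illustrative names and shapes of our own choosing, not from the xGA codebase):

```
import numpy as np

def recovery_score(a, missing):
    """R_score from Eq. (8).

    a:       (N_m, K) array; a[n, k] is the average change in the oracle
             classifier's k-th attribute along the n-th missing direction.
    missing: indices of the ground-truth excluded attributes (the set M).
    """
    # ranks[n, k] = 1-based rank of attribute k when a[n] is sorted descending
    order = np.argsort(-a, axis=1)
    ranks = np.empty_like(order)
    np.put_along_axis(ranks, order, np.arange(1, a.shape[1] + 1)[None, :], axis=1)
    # best reciprocal rank over the N_m directions, averaged over M
    return np.mean([np.max(1.0 / ranks[:, m]) for m in missing])
```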
\begin{table}
\begin{tabular}{r c c} Method & \(\mathcal{A}_{\text{score}}\) (\(\uparrow\)) & \(\mathcal{R}_{\text{score}}\) (\(\uparrow\)) \\ \hline SeFa + G. S & \(0.382\pm 0.042\) & \(0.167\pm 0.165\) \\ Voynov + G. S & \(0.544\pm 0.033\) & \(0.254\pm 0.246\) \\ LatentCLR + G. S & \(0.543\pm 0.031\) & \(0.297\pm 0.326\) \\ Hessian + G. S & \(0.567\pm 0.065\) & \(0.224\pm 0.273\) \\ Jacobian + G. S & \(0.502\pm 0.024\) & \(0.233\pm 0.201\) \\ \hline xGA & \(\mathbf{0.660\pm 0.147}\) & \(\mathbf{0.411\pm 0.193}\) \\ \end{tabular}
\end{table}
Table 1: **Common and Missing attribute discovery**. The average alignment scores from the \(7\) controlled CelebA experiments. Note, we report both the mean and standard deviations (\(\pm\) std) for each case, and “G. S” refers to the greedy strategy that we use for alignment.
Figure 5: Visualizing novel attributes in different client GANs characterized by challenging distribution shifts with respect to the reference GAN (CelebA or AFHQ Cat-GANs). In each case, we show image manipulation in the attribute direction for two random samples from the latent space.
Figure 6: Using multiple clients trained with different subsets of CelebA data (one of the face attributes explicitly dropped), we find that, in all cases, xGA accurately recovers the missing attribute.
### Analysis
In this section, we examine the key components of xGA to understand its behavior better.
**Impact of the Choice of \(\mathcal{F}\)** We start by studying the choice of the external feature space used to perform attribute discovery. For this analysis, we consider the case where \(\mathcal{G}_{r}=\mathcal{G}_{c}\), wherein xGA simplifies to the standard setting of attribute discovery with a single GAN model (we set \(\lambda_{b}=0\)), as in SeFa and LatentCLR. We make the interesting observation that using a robust latent space leads to improved diversity and disentanglement in the inferred attributes, when compared to the native latent space of StyleGAN. To quantify this behavior, we consider two evaluation metrics based on the predictions for a batch of synthesized images \(\mathcal{G}_{c}(z,\delta_{n})\) from the “oracle” attribute classifier. First, for each latent direction \(\delta_{n}\), the average prediction entropy \(\mathcal{H}_{\text{score}}\)[23] is defined as:
\[\mathcal{H}_{\text{score}}=\mathbb{E}_{n}\bigg{[}\mathbb{E}_{i}\bigg{[} \texttt{Entropy}\big{(}\mathcal{C}(\mathcal{G}_{c}(\texttt{z}_{i},\delta_{n} ))\big{)}\bigg{]}\bigg{]} \tag{9}\]
Second, the deviation in the predictions across all latent directions \(\mathcal{D}_{\text{score}}\) is defined in (10), where \(K\) is the total number of attributes in the "oracle" classifier \(\mathcal{C}\):
\[\mathcal{D}_{\text{score}}=\sum_{k=1}^{K}\texttt{Variance}\bigg{[}\bigg{\{} \mathbb{E}_{i}[\mathcal{C}(\mathcal{G}_{c}(\texttt{z}_{i},\delta_{n})]\bigg{\}} _{n=1}^{N}\bigg{]}_{k} \tag{10}\]
When the entropy is low, it indicates that the semantic manipulation is concentrated on a specific attribute, and hence disentangled. On the other hand, when the deviation is high, it reflects high diversity in the inferred latent directions.
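Both metrics reduce to simple reductions over the classifier outputs. A minimal sketch (our own illustrative code; `preds` is assumed to hold \(K\) attribute probabilities per manipulated image):

```
import numpy as np

def entropy_and_deviation(preds, eps=1e-12):
    """H_score (Eq. 9) and D_score (Eq. 10).

    preds: (N, B, K) array with preds[n, i] = C(G_c(z_i, delta_n)).
    """
    # Eq. (9): per-prediction entropy, averaged over samples i and directions n
    h_score = (-(preds * np.log(preds + eps)).sum(axis=-1)).mean()
    # Eq. (10): variance across directions of the mean predictions,
    # summed over the K oracle attributes
    d_score = preds.mean(axis=1).var(axis=0).sum()
    return h_score, d_score
```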
For this analysis, we considered the following feature extractors for implementing xGA: (i) vanilla ResNet-50 trained on ImageNet [15]; (ii) a robust variant of ResNet-50 trained with advBN [35]; (iii) ResNet-50 trained via CLIP [31]. Table 2 shows the performance of the three feature extractors on attribute discovery with our \(7\) CelebA GANs trained using different data subsets. Note that we scale all entropy and deviation scores by \(100\) for ease of readability. Strikingly, in terms of both the entropy and deviation scores, performing attribute discovery in an external feature space is significantly superior to carrying out the optimization in the native style space (all baselines). As expected, LatentCLR produces the most disentangled attributes among the baselines, and regardless of the choice of \(\mathcal{F}\), xGA leads to significant improvements. More importantly, the key benefit of xGA becomes more apparent from the improvements in the deviation score over the baselines. In the supplement, we include examples of the attributes inferred using all the methods. Finally, among the different choices for \(\mathcal{F}\), the advBN ResNet-50 performs the best in terms of both metrics and hence was used in all our experiments.
**Single GAN Qualitative Results** Figure 7 visualizes a shortened example of the top \(3\) attributes (those inducing the largest changes in the “oracle” classifier predictions). This example shows a clear improvement from using a pretrained feature extractor, as xGA identifies the most diverse semantic changes. Complete results, with all discovered attributes for all methods, are shown in the supplement.
**Extending xGA to compare multiple GANs** Though all our experiments used a single client model w.r.t. a reference, our method can be readily extended to perform comparative analysis of multiple GANs, with the only constraint arising from GPU memory, since all generators need to be loaded into memory for optimization. We performed a
\begin{table}
\begin{tabular}{c c c} Method & \(\mathcal{H}_{\text{score}}\) (\(\downarrow\)) & \(\mathcal{D}_{\text{score}}\) (\(\uparrow\)) \\ \hline SeFa [34] & \(4.006\pm 0.259\) & \(1.031\pm 0.077\) \\ LatentCLR [43] & \(2.348\pm 0.203\) & \(0.749\pm 0.929\) \\ Voynov [39] & \(2.508\pm 0.069\) & \(0.585\pm 0.725\) \\ Hessian [28] & \(2.707\pm 0.145\) & \(0.642\pm 0.795\) \\ Jacobian [40] & \(2.675\pm 0.070\) & \(0.661\pm 0.826\) \\ \hline xGA (ViT) & \(1.988\pm 0.068\) & \(3.072\pm 3.845\) \\ xGA (MAE ViT) & \(2.102\pm 0.035\) & \(3.103\pm 3.814\) \\ xGA (CLIP ViT) & \(2.091\pm 0.041\) & \(3.135\pm 3.901\) \\ xGA (ResNet-50) & \(1.901\pm 0.060\) & \(3.111\pm 3.852\) \\ xGA (Clip ResNet-50) & \(2.033\pm 0.038\) & \(3.121\pm 3.863\) \\ xGA (advBN ResNet-50) & \(\mathbf{1.881}\pm 0.057\) & \(\mathbf{3.153}\pm 3.904\) \\ \hline \end{tabular}
\end{table}
Table 2: **Choice of the feature space for attribute discovery**. Using an external feature space is superior to GAN’s native style space, in terms of both entropy (\(\times 100\)) and deviation metrics. In this experiment, we set \(\mathcal{G}_{r}=\mathcal{G}_{c}\), and aggregate the metrics from the set of controlled CelebA StyleGANs.
Figure 7: Comparing xGA on single GAN attribute discovery with existing approaches, we find that more diverse and novel attributes can be found simply by using an external feature space. We exploit this for effective alignment across two GAN models.
proof-of-concept experiment by discovering common attributes across \(3\) different independently trained StyleGANs, as shown in Figure 8. For this setup, we expanded the cost function outlined in (3) to include \(3\) pairwise alignment terms from the \(3\) GANs for contrastive training, in addition to an extra independent term from the third model. While beyond the scope of the current work, scaling xGA is an important direction for future research.
## 5 Discussion
We introduced the first cross-GAN auditing framework, which utilizes a novel optimization technique to jointly infer common, novel and missing attributes for a client GAN w.r.t any reference GAN. Through a large suite of datasets and GAN models, we demonstrate that the proposed method (i) consistently leads to higher quality (disentangled & diverse) attributes, (ii) effectively reveals shared attributes even across challenging distribution shifts, and (iii) accurately identifies the novel/missing attributes in our controlled experiments (i.e., known ground truth).
**Limitations** First, similar to other optimization-based attribute discovery approaches [39, 43], there is no guarantee that all prevalent factors are captured, though our controlled empirical studies clearly demonstrate the efficacy of xGA over existing approaches. Second, while using an external feature space enhances the performance of attribute discovery, this becomes an additional component that must be tuned. While we found advBN ResNet-50 to be a reasonable choice for a variety of face datasets (and AFHQ), a more systematic solution will expand the utility of our approach to other applications.
## Acknowledgements
This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The project is directly supported by LDRD 22-ERD-006, and DOE HRRL. The manuscript is reviewed and released under LLNL-PROC-832985.
|
2305.03486 | Iterative $α$-(de)Blending: a Minimalist Deterministic Diffusion
Model | We derive a minimalist but powerful deterministic denoising-diffusion model.
While denoising diffusion has shown great success in many domains, its
underlying theory remains largely inaccessible to non-expert users. Indeed, an
understanding of graduate-level concepts such as Langevin dynamics or score
matching appears to be required to grasp how it works. We propose an
alternative approach that requires no more than undergrad calculus and
probability. We consider two densities and observe what happens when random
samples from these densities are blended (linearly interpolated). We show that
iteratively blending and deblending samples produces random paths between the
two densities that converge toward a deterministic mapping. This mapping can be
evaluated with a neural network trained to deblend samples. We obtain a model
that behaves like deterministic denoising diffusion: it iteratively maps
samples from one density (e.g., Gaussian noise) to another (e.g., cat images).
However, compared to the state-of-the-art alternative, our model is simpler to
derive, simpler to implement, more numerically stable, achieves higher quality
results in our experiments, and has interesting connections to computer
graphics. | Eric Heitz, Laurent Belcour, Thomas Chambon | 2023-05-05T12:56:37Z | http://arxiv.org/abs/2305.03486v1 | # Iterative \(\alpha\)-(de)Blending: a Minimalist Deterministic Diffusion Model
###### Abstract.
We derive a minimalist but powerful deterministic denoising-diffusion model. While denoising diffusion has shown great success in many domains, its underlying theory remains largely inaccessible to non-expert users. Indeed, an understanding of graduate-level concepts such as Langevin dynamics or score matching appears to be required to grasp how it works. We propose an alternative approach that requires no more than undergrad calculus and probability. We consider two densities and observe what happens when random samples from these densities are blended (linearly interpolated). We show that iteratively blending and deblending samples produces random paths between the two densities that converge toward a deterministic mapping. This mapping can be evaluated with a neural network trained to deblend samples. We obtain a model that behaves like deterministic denoising diffusion: it iteratively maps samples from one density (e.g., Gaussian noise) to another (e.g., cat images). However, compared to the state-of-the-art alternative, our model is simpler to derive, simpler to implement, more numerically stable, achieves higher quality results in our experiments, and has interesting connections to computer graphics.
Footnote †: _SIGGRAPH '23 Conference Proceedings, August 6-10, 2023, Los Angeles, CA, USA_
_Then came deterministic diffusion models._ Langevin SDE variants describe an equilibrium between noise injection and noise removal. Nullifying the noise injection in these SDEs yields _Ordinary Differential Equations_ (ODEs), also called _Probability Flow ODEs_ [12], that simply describe the deterministic trajectory of a noisy sample projected back onto the true density. For instance, Denoising Diffusion Implicit Models (DDIMs) [13] are the ODE variants of DDPMs. These ODEs provide a smooth, deterministic mapping between the Gaussian noise density and the true density. Deterministic diffusion models have recently been proposed because an ODE requires far fewer solver iterations than its SDE counterpart. Furthermore, a deterministic mapping presents multiple practical advantages because samples are uniquely determined by their prior Gaussian noise. For instance, they can be edited or interpolated via the Gaussian noise.
_Is there a simpler approach to deterministic diffusion?_ The point of the above story is that, in the recent line of work on diffusion models, stochastic diffusion models came _first_ and deterministic diffusion models came _after_, framed as special cases of the stochastic ones. Hence they inherited the underlying mindset and mathematical framework. As a result, knowledge of advanced concepts such as Langevin dynamics, score matching, how they relate to Gaussian noise, etc., appears to be required to understand recent deterministic diffusion models. We argue that this is an unnecessary detour for something that can be framed in a much simpler and more general way. We propose a fresh take on deterministic diffusion with another mindset, using only basic sampling concepts.
* **Simpler derivation.** We derive a deterministic, diffusion-like model based on the sampling interpretation of blending and deblending. We call it Iterative \(\alpha\)-(de)Blending (IADB) in reference to the computer graphics \(\alpha\)-blending technique that composes images with a transparency parameter [14]. Our model defines a mapping between arbitrary densities (of finite variance).
* **Practical improvements.** We show that, when the initial density is Gaussian, the mappings defined by IADB are exactly the same as the ones defined by DDIM [13], but with several benefits. First, our derivation leads to a more numerically stable sampling formulation. Second, our experiments show that IADB consistently outperforms DDIM in terms of final FID scores for several datasets and is more stable for a small number of sampling steps.
* **Theoretical improvements.** A side effect of our derivation is that, in contrast to DDIM, IADB does not require the assumption that the initial density is Gaussian, which is a significant generalization. Furthermore, our derivation leads to a stochastic mapping algorithm that is reminiscent of computer graphics applications.
## 2. Blending and deblending as sampling
_Initial densities._ We consider two densities \(p_{0},p_{1}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{+}\) represented, respectively, by the red triangle and the green square in Figure 2. We denote their corresponding samples as \(x_{0}\sim p_{0}\) and \(x_{1}\sim p_{1}\). For independent samples \(x_{0}\) and \(x_{1}\), we use the notation \((x_{0},x_{1})\sim p_{0}\times p_{1}\).
_Definition of \(\alpha\)-blending._ We use \(p_{\alpha}\) to refer to the density of the blended samples \(x_{\alpha}=(1-\alpha)\,x_{0}+\alpha\,x_{1}\) obtained by blending random samples \((x_{0},x_{1})\sim p_{0}\times p_{1}\) with a parameter \(\alpha\in[0,1]\).
_Definition of \(\alpha\)-deblending._ We call the inverse sampling operation \(\alpha\)-deblending, i.e., generating random \(x_{0}\) and \(x_{1}\) from the initial densities that could have been \(\alpha\)-blended to a point \(x_{\alpha}\). Formally, it means sampling random _posteriors_\((x_{0},x_{1})_{(x_{\alpha},\alpha)}\sim(p_{0}\times p_{1})_{(x_{\alpha}, \alpha)}\). The key property is that if \(x_{\alpha}\in\mathbb{R}^{d}\) is a **fixed** point, the posterior densities _are not_ the initial densities \(p_{0}\times p_{1}\). However, if \(x_{\alpha}\sim p_{\alpha}\) is a **random** sample, the posterior densities _are_ the initial densities. This follows directly from the _law of total probability_ illustrated in Figure 3. In other words, \(\alpha\)-deblending a random sample \(x_{\alpha}\sim p_{\alpha}\) is equivalent to sampling \((x_{0},x_{1})\sim p_{0}\times p_{1}\).
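In code, \(\alpha\)-blending is a one-line operation on batches of samples, which is all the sampling interpretation requires; \(\alpha\)-deblending is its inverse sampling problem. A minimal sketch (the two 1D densities are arbitrary stand-ins chosen for illustration):

```
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, size=100_000)   # x_0 ~ p_0 (any finite-variance density)
x1 = rng.normal(2.0, 0.5, size=100_000)     # x_1 ~ p_1

def alpha_blend(x0, x1, alpha):
    """Draw samples x_alpha ~ p_alpha by blending independent samples of p_0 and p_1."""
    return (1.0 - alpha) * x0 + alpha * x1

x_half = alpha_blend(x0, x1, 0.5)           # a batch of samples from p_{1/2}
```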
Figure 3. **The law of total probability.** Intuitively, deblending a **fixed**\(x_{\alpha}\in\mathbb{R}^{d}\) means sampling only in a subset of the initial densities. However, if \(x_{\alpha}\sim p_{\alpha}\) is **random**, all these subsets are merged and the sampling occurs in the initial densities as if we had directly sampled \((x_{0},x_{1})\sim p_{0}\times p_{1}\).
Figure 2. **Blending and deblending as sampling operations.**
_Definition of \(\alpha\)-(de)blending._ Let's consider two blending parameters \(\alpha_{1},\alpha_{2}\in[0,1]\). Using the previous proposition, we can chain \(\alpha_{1}\)-deblending and \(\alpha_{2}\)-blending to map a random sample \(x_{\alpha_{1}}\sim p_{\alpha_{1}}\) to a random sample \(x_{\alpha_{2}}\sim p_{\alpha_{2}}\). Indeed, by sampling posteriors for a random sample \(x_{\alpha_{1}}\sim p_{\alpha_{1}}\), we obtain random samples \((x_{0},x_{1})\sim(p_{0}\times p_{1})\) from the initial densities, and blending them with parameter \(\alpha_{2}\) provides a random sample \(x_{\alpha_{2}}\sim p_{\alpha_{2}}\). This is illustrated in Figure 4.
## 3. Iterative \(\alpha\)-(de)blending (IADB)
Our objective is to define a deterministic mapping such that i.i.d. samples \(x_{0}\sim p_{0}\) passed through the mapping produce i.i.d. samples \(x_{1}\sim p_{1}\). We introduce Iterative \(\alpha\)-(de)Blending (IADB), an iterative algorithm that can be implemented stochastically or deterministically. Our main result is that both variants converge toward the same limit, which yields a deterministic mapping between the densities \(p_{0}\) and \(p_{1}\), as shown in Figure 5.
_Algorithm 1: iterative \(\alpha\)-(de)blending (stochastic)._ Let's consider a number of iterations \(T\) and evenly distributed blending parameters \(\alpha_{t}=t/T\), \(t\in\{0,..,T\}\). This algorithm creates a sequence \((x_{\alpha_{t}}\sim p_{\alpha_{t}},\,t\in\{0,..,T\})\) that starts with a random sample \(x_{\alpha_{0}}=x_{0}\sim p_{0}\) and ends with a random sample \(x_{\alpha_{T}}=x_{1}\sim p_{1}\) by applying \(\alpha\)-(de)blending iteratively. In each iteration, \(x_{\alpha_{t}}\sim p_{\alpha_{t}}\) is \(\alpha_{t}\)-deblended by sampling random posteriors, which are then \(\alpha_{t+1}\)-blended to obtain a new sample \(x_{\alpha_{t+1}}\sim p_{\alpha_{t+1}}\). End to end, this algorithm provides a stochastic mapping between samples \(x_{0}\sim p_{0}\) and samples \(x_{1}\sim p_{1}\).
```
0: \(x_{\alpha_{0}}=x_{0}\sim p_{0}\), \(T\), \(\alpha_{t}:=\frac{t}{T}\)
for \(t=0,..,T-1\) do
    sample \((x_{0},x_{1})\sim(p_{0}\times p_{1})_{(x_{\alpha_{t}},\alpha_{t})}\)
    \(x_{\alpha_{t+1}}=(1-\alpha_{t+1})\,x_{0}+\alpha_{t+1}\,x_{1}\)
end for
```
**Algorithm 1** Iterative \(\alpha\)-(de)blending (stochastic)

_Algorithm 2: iterative \(\alpha\)-(de)blending (deterministic)._ This variant replaces the random posterior samples of Algorithm 1 by their expectations: starting from \(x_{\alpha_{0}}=x_{0}\sim p_{0}\), each iteration computes the average posterior samples \(\bar{x}_{0}=\mathbb{E}\left[x_{0}\,|\,x_{\alpha_{t}},\alpha_{t}\right]\) and \(\bar{x}_{1}=\mathbb{E}\left[x_{1}\,|\,x_{\alpha_{t}},\alpha_{t}\right]\) and moves the sample along the segment they define: \(x_{\alpha_{t+1}}=(1-\alpha_{t+1})\,\bar{x}_{0}+\alpha_{t+1}\,\bar{x}_{1}\). End to end, this provides a deterministic mapping between samples of \(p_{0}\) and samples of \(p_{1}\).

[MISSING_PAGE_POST]
Figure 6. Both algorithms step iteratively by moving the samples along segments defined by their posterior densities. The difference is that Algorithm 1 uses segments between random posterior samples, which creates stochastic paths, while Algorithm 2 uses the segment between the average of the posterior samples, which creates deterministic paths. As the number of steps \(T\) increases, the randomness of the stochastic paths averages out and they converge toward the deterministic paths.
_Connection to computer graphics applications._ Figures 7 and 8 show how the mapping behaves in 2D. The deterministic mapping defined by the limit of the algorithm is a transport map (also called an area-preserving parameterization) that could potentially be of interest for common computer graphics applications such as parameterizing, sampling, and stippling. We believe that showing the connection is interesting, but our point here is not to make competitive claims for these applications. Instead, our focus is on using this mapping for deterministic denoising diffusion, as presented in Section 4.
## 4. Learning iterative \(\alpha\)-(de)blending
In this section, we explain how to use iterative \(\alpha\)-(de)blending in a machine learning context, where we train a neural network \(D_{\theta}\) to predict the average posterior samples used in Algorithm 2.
### Variant Formulations of Iterative \(\alpha\)-(de)Blending
A direct transposition of Algorithm 2 means learning the averages of both posterior samples \(\bar{x}_{0}\) and \(\bar{x}_{1}\). However, one is implicitly given by the other, such that it is not necessary to learn both, and variants of Alg. 2 are possible. The fact that multiple, theoretically equivalent, variants are possible is pointed out by Salimans and Ho (2022). However, they are not equivalent in practice. In Table 1, we summarize four variants derived in Appendix B of our supplemental and compare their practical properties. Variant (a) is the vanilla transposition of Algorithm 2. It is highly unstable because, instead of being a numerical update of the current sample \(x_{\alpha_{t}}\), the new sample \(x_{\alpha_{t+1}}\) is computed from the outputs of the neural network. The residual learning errors of the network accumulate at each step, and the larger the number of steps \(T\), the more this variant diverges. Variants (b) and (c) consist of learning either only \(\bar{x}_{0}\) or \(\bar{x}_{1}\). The sampling suffers from numerical instability near \(\alpha_{t}=0\) and \(\alpha_{t}=1\) because of the respective divisions by \(\alpha_{t}\) and \(1-\alpha_{t}\). We recommend using variant (d), which consists of learning the average difference vector \(\bar{x}_{1}-\bar{x}_{0}\). It is a direct transposition of the ODE defined in Equation 2. This variant updates the current samples at each iteration without any division, making it the most stable variant for both training and sampling.
### Training and Sampling
Following variant (d) of Table 1, we train the neural network \(D_{\theta}\) to predict the average difference vector between the posterior samples. Our learning objective is defined by
\[\min_{\theta}\ \operatorname*{\mathbb{E}}_{\alpha,x_{\alpha}}\left[\left\|D_{ \theta}\left(x_{\alpha},\alpha\right)-\operatorname*{\mathbb{E}}_{(x_{0},x_{1} )_{(x_{\alpha},\alpha)}}\left[x_{1}-x_{0}\right]\right\|^{2}\right]. \tag{3}\]
Note that the minimizer of the expected squared distance to the samples of a distribution is the average of that distribution, so regressing against individual samples recovers the average. We obtain the equivalent objective
\[\min_{\theta}\ \operatorname*{\mathbb{E}}_{\alpha,x_{\alpha},(x_{0},x_{1} )_{(x_{\alpha},\alpha)}}\left[\left\|D_{\theta}\left(x_{\alpha},\alpha\right)- \left(x_{1}-x_{0}\right)\right\|^{2}\right]. \tag{4}\]
Finally, as explained in Section 2, sampling \(x_{\alpha}\sim p_{\alpha}\) first and then \((x_{0},x_{1})_{(x_{\alpha},\alpha)}\) is equivalent to sampling \((x_{0},x_{1})\sim(p_{0},p_{1})\) and blending them to obtain \(x_{\alpha}\sim p_{\alpha}\). With this, we obtain our final learning objective
\[\min_{\theta}\ \operatorname*{\mathbb{E}}_{\alpha,x_{0},x_{1}}\left[\left\|D_{ \theta}\left(\left(1-\alpha\right)x_{0}+\alpha x_{1},\alpha\right)-\left(x_{1 }-x_{0}\right)\right\|^{2}\right], \tag{5}\]
which we use to optimize \(\theta\) in Algorithm 3. In Algorithm 4, we iteratively map samples \(x_{0}\sim p_{0}\) to samples \(x_{1}\sim p_{1}\) in the same way as in Algorithm 2, where we use the neural network \(D_{\theta}\) to obtain the average posterior difference.
```
0: \(x_{0}\sim p_{0}\), \(x_{1}\sim p_{1}\), \(\alpha\sim\mathcal{U}_{[0,1]}\)
\(x_{\alpha}=(1-\alpha)\,x_{0}+\alpha\,x_{1}\)
\(l=\left\|D_{\theta}\left(x_{\alpha},\alpha\right)-\left(x_{1}-x_{0}\right)\right\|^{2}\)
backprop from \(l\) and update \(\theta\)
```
**Algorithm 3** Training

```
0: \(x_{0}\sim p_{0}\), \(T\), \(\alpha_{t}:=\frac{t}{T}\)
for \(t=0,..,T-1\) do
    \(x_{\alpha_{t+1}}=x_{\alpha_{t}}+\left(\alpha_{t+1}-\alpha_{t}\right)D_{\theta}\left(x_{\alpha_{t}},\alpha_{t}\right)\)
end for
```
**Algorithm 4** Sampling
Figure 8. **Deterministic mapping with Algorithm 2.** We use the deterministic mapping to warp blue-noise samples in the unit square to arbitrary densities.
Figure 7. **Stochastic mapping with Algorithm 1.** We map a uniform density on a square (\(p_{0}\)) to a uniform density on a disk (\(p_{1}\)). The checkerboard pattern shows the randomness of the resulting mapping. The larger the number of steps \(T\), the more the mapping converges and reveals a smooth parameterization of the disk.
## 5. Experiments with Analytic Densities
_Experiments with 1D densities._ In Figure 9, we experiment with analytic 1D densities, where the expectation \(\bar{x}_{1}-\bar{x}_{0}\) can be computed analytically rather than being learnt by a neural network \(D_{\theta}\). The experiment confirms that the analytic version matches the reference and that the neural network trained with the \(l_{2}\) norm approximates the same mapping. We also tested training the neural network with the \(l_{1}\) norm, which makes the neural network approximate the median of \(x_{1}-x_{0}\) rather than its average. The resulting mapping does not match the reference. This confirms that learning the average via \(l_{2}\) training is a key component of our model, as explained in Section 4.2.
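When both endpoint densities are Gaussian, the drift \(\bar{x}_{1}-\bar{x}_{0}\) has a closed form, which makes the analytic experiment easy to reproduce. The sketch below is our own worked 1D example (not code from the paper); the drift formula follows from standard conditioning of jointly Gaussian variables:

```
import numpy as np

m0, s0 = -2.0, 1.0        # p_0 = N(m0, s0^2)
m1, s1 = 3.0, 0.5         # p_1 = N(m1, s1^2)

def drift(x, a):
    """E[x1 - x0 | x_alpha = x] for independent Gaussian endpoints."""
    mu = (1 - a) * m0 + a * m1                       # mean of p_alpha
    var = (1 - a) ** 2 * s0 ** 2 + a ** 2 * s1 ** 2  # variance of p_alpha
    e_x0 = m0 + (1 - a) * s0 ** 2 * (x - mu) / var   # E[x0 | x_alpha]
    e_x1 = m1 + a * s1 ** 2 * (x - mu) / var         # E[x1 | x_alpha]
    return e_x1 - e_x0

rng = np.random.default_rng(0)
x = rng.normal(m0, s0, size=100_000)   # x_{alpha_0} = x_0 ~ p_0
T = 256
for t in range(T):                     # deterministic IADB (Algorithm 2)
    x = x + (1.0 / T) * drift(x, t / T)

print(x.mean(), x.std())               # ~ (3.0, 0.5) up to discretization error
```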
_Experiments with 2D densities._ Figure 10 shows that the intermediate blended densities \(p_{\alpha}\) computed by our mapping match the reference blended densities. Figure 11 shows how our algorithm maps the samples of \(p_{0}\) to samples of \(p_{1}\). These results demonstrate that IADB computes valid mappings between arbitrary densities.
## 7. Discussion
_Improved sampler._ We experimented with IADB in its vanilla setting with a uniform blending schedule and a first-order ODE solver. It readily benefits from orthogonal improvements brought to denoising diffusion, such as better blending schedules and higher-order ODE solvers (Karras et al., 2022). For instance, Algorithm 5 provides an improved version of Algorithm 4 with a 2nd-order Runge-Kutta integration and a cosine schedule.
```
0: \(x_{0}\sim p_{0}\), \(T\), \(\alpha_{t}:=1-\cos\left(\frac{t}{T}\frac{\pi}{2}\right)\)
for \(t=0,..,T-1\) do
    \(x_{\alpha_{t+\frac{1}{2}}}=x_{\alpha_{t}}+\left(\alpha_{t+\frac{1}{2}}-\alpha_{t}\right)D_{\theta}\left(x_{\alpha_{t}},\alpha_{t}\right)\)
    \(x_{\alpha_{t+1}}=x_{\alpha_{t}}+\left(\alpha_{t+1}-\alpha_{t}\right)D_{\theta}\left(x_{\alpha_{t+\frac{1}{2}}},\alpha_{t+\frac{1}{2}}\right)\)
end for
```
**Algorithm 5** Sampling (2nd-order Runge-Kutta, cosine schedule)
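A PyTorch sketch of Algorithm 5 in the same illustrative style as above (the half-step parameter follows the cosine schedule evaluated at \(t+\frac{1}{2}\); `D` is the trained deblending network):

```
import math
import torch

@torch.no_grad()
def sample_rk2_cosine(D, x0, T=64):
    """Algorithm 5: 2nd-order Runge-Kutta steps on the cosine schedule
    alpha_t = 1 - cos((t / T) * (pi / 2))."""
    sched = lambda s: 1.0 - math.cos(0.5 * math.pi * s / T)
    x = x0
    for t in range(T):
        a, a_half, a_next = sched(t), sched(t + 0.5), sched(t + 1)
        full = lambda v: torch.full((x.shape[0], *([1] * (x.dim() - 1))), v,
                                    device=x.device)
        x_half = x + (a_half - a) * D(x, full(a))       # half step
        x = x + (a_next - a) * D(x_half, full(a_half))  # full step from x_{alpha_t}
    return x
```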
_Stochastic Differential Equations (SDEs)._ The random sequence computed by the stochastic version of IADB presented in Algorithm 1 is a Markov chain. This algorithm might therefore appear reminiscent of stochastic diffusion models (Song et al., 2021) based on SDEs. However, IADB is not related to an SDE. Indeed, SDEs model stochastic behaviors at the infinitesimal scale, while our mapping is stochastic for discrete steps and becomes a deterministic ODE in the infinitesimal limit.
_Non-Gaussian denoising diffusion._ Some previous works have focused on replacing Gaussian noise with other noise distributions, such as the generalised normal (exponential power) distribution (Deasy et al., 2021) or the Gamma distribution (Nachmani et al., 2021). Our more general derivation works with any finite-variance density rather than specific noise alternatives. Peluchetti (2022) proposes a more general SDE framework. Our ODE can be derived from his SDE by nullifying the stochastic component and following its aggregation method. In this respect, our ODE is not entirely new. However, our derivation is new and incomparably simpler, which is the main point of this paper.
## 8. Conclusion
The objective of this work was to find a simple and intuitive way to approach deterministic denoising diffusion. Using only simple sampling concepts, we derived Iterative \(\alpha\)-(De)Blending (IADB), a deterministic diffusion model based on a sampling interpretation of blending and deblending. We have seen that our model defines exactly the same mapping as DDIM (Song et al., 2021), the state-of-the-art competitor in deterministic denoising diffusion. This yields a positive answer to the question asked in the introduction _"Is there a simpler approach to deterministic diffusion?"_. Indeed, it is possible to derive the same result without leveraging any knowledge about Langevin dynamics, score matching, SDEs, etc. Getting there was the whole point of this paper. Furthermore, our simpler IADB derivation provides both practical and theoretical gains. It has led to a more numerically stable formulation that produces better FID scores than DDIM and has revealed that DDIM's Gaussian assumption is theoretically unnecessary.
|
2310.06990 | Nonabelian embedding tensors on 3-Lie algebras and 3-Leibniz-Lie
algebras | In this paper, first we introduce the notion of a nonabelian embedding tensor
on the 3-Lie algebra. Then, we introduce the notion of a 3-Leibniz-Lie algebra,
which is the underlying algebraic structure of a nonabelian embedding tensor on
the 3-Lie algebra, and can also be viewed as a nonabelian generalization of a
3-Leibniz algebra. Next we develop the cohomology of nonabelian embedding
tensors on 3-Lie algebras with coefficients in a suitable representation and
use the first cohomology group to characterize infinitesimal deformations.
Finally, we investigate nonabelian embedding tensors on 3-Lie algebras induced
by Lie algebras. | Wen Teng, Xiansheng Dai | 2023-08-16T02:43:46Z | http://arxiv.org/abs/2310.06990v1 | # Nonabelian embedding tensors on 3-Lie algebras and 3-Leibniz-Lie algebras
# Nonabelian embedding tensors on 3-Lie algebras and 3-Leibniz-Lie algebras
**Wen Teng\({}^{1}\), Xiansheng Dai\({}^{2}\)**
1. School of Mathematics and Statistics, Guizhou University of Finance and Economics
Guiyang 550025, P. R. of China
E-mail: [email protected] (Wen Teng)
2. School of Mathematical Sciences, Guizhou Normal University
Guizhou 550005, P. R. of China
E-mail:[email protected] (Xiansheng Dai)
**Abstract** In this paper, first we introduce the notion of a nonabelian embedding tensor on the 3-Lie algebra. Then, we introduce the notion of a 3-Leibniz-Lie algebra, which is the underlying algebraic structure of a nonabelian embedding tensor on the 3-Lie algebra, and can also be viewed as a nonabelian generalization of a 3-Leibniz algebra. Next we develop the cohomology of nonabelian embedding tensors on 3-Lie algebras with coefficients in a suitable representation and use the first cohomology group to characterize infinitesimal deformations. Finally, we investigate nonabelian embedding tensors on 3-Lie algebras induced by Lie algebras.
**Key words:** 3-Lie algebra; 3-Leibniz-Lie algebra; nonabelian embedding tensor; cohomology.
**2020 MSC:**17A42, 17B56, 17B38, 17B40
## 1 Introduction
The concept of the embedding tensor [26] and the related tensor hierarchies provides a useful tool for constructing supergravity theories and higher gauge theories [6]. See [4, 5, 8, 9, 10, 11, 12, 13] and the references therein for a great deal of literature on embedding tensors and related tensor hierarchies. In [20], the authors first observed the mathematical essence behind the embedding tensor and proved that an embedding tensor naturally produces a Leibniz algebra. In physical applications, they observed that the construction of the corresponding gauge theory relies more on the Leibniz algebra than on the embedding tensor itself. In [27], Sheng, Tang and Zhu considered cohomology, deformations and homotopy
theory for embedding tensors and Lie-Leibniz triples. Later on, the deformation and cohomology theory of embedding tensors on 3-Lie algebras was developed in [16]. Tang and Sheng [30] first proposed the nonabelian embedding tensor on Lie algebras, which is a nonabelian generalization of the embedding tensor, and identified Leibniz-Lie algebras as the algebraic structure behind nonabelian embedding tensors. Furthermore, the nonabelian embedding tensor on Lie algebras has been extended to the Hom setting in [32].
On the other hand, Filippov [14] first introduced the concepts of 3-Lie algebras and, more generally, \(n\)-Lie algebras (also called Filippov algebras). In recent years, 3-Lie algebras have been widely studied and applied in mathematics and physics, including string theory, Nambu mechanics and M2-branes [2, 15, 17, 25]. Further research on 3-Lie algebras can be found in [1, 3, 18, 19, 21, 22, 23, 24, 28, 31, 33, 34, 35, 36] and the references cited therein. Motivated by the nonabelian embedding tensors of [30] and by the importance of 3-Lie algebras and their cohomology and deformation theories, we study nonabelian embedding tensors on 3-Lie algebras in this paper.
This paper is organized as follows. Section 2 first recalls some basic notions of 3-Lie algebras and 3-Leibniz algebras. Then we introduce the coherent action of a 3-Lie algebra on another 3-Lie algebra and the notion of nonabelian embedding tensors on 3-Lie algebras with respect to a coherent action. In Section 3, the notion of a 3-Leibniz-Lie algebra is introduced as the basic algebraic structure underlying a nonabelian embedding tensor on a 3-Lie algebra. Naturally, a 3-Leibniz-Lie algebra induces a 3-Leibniz algebra. In Section 4, the cohomology theory of nonabelian embedding tensors on 3-Lie algebras is introduced. As an application, we characterize infinitesimal deformations using the first cohomology group. In Section 5, we investigate nonabelian embedding tensors on 3-Lie algebras induced by Lie algebras.
All vector spaces and algebras considered in this paper are over a field \(\mathbb{K}\) of characteristic 0.
## 2 Nonabelian embedding tensors on 3-Lie algebras
This section recalls some basic notions of 3-Lie algebras and 3-Leibniz algebras. After that, we introduce the coherent action of a 3-Lie algebra on another 3-Lie algebra, and we introduce the concept of a nonabelian embedding tensor on a 3-Lie algebra with respect to a coherent action, as a nonabelian generalization of embedding tensors on 3-Lie algebras [16].
**Definition 2.1**.: (see [14]) A 3-Lie algebra is a pair \((L,[-,-,-]_{L})\) consisting of a vector space \(L\) and a skew-symmetric trilinear map \([-,-,-]_{L}:\wedge^{3}L\to L\) such that
\[[l_{1},l_{2},[l_{3},l_{4},l_{5}]_{L}]_{L}=[[l_{1},l_{2},l_{3}]_{L},l_{4},l_{5} ]_{L}+[l_{3},[l_{1},l_{2},l_{4}]_{L},l_{5}]_{L}+[l_{3},l_{4},[l_{1},l_{2},l_{5} ]_{L}]_{L}, \tag{2.1}\]
for any \(l_{i}\in L\).
A homomorphism between two 3-Lie algebras \((L_{1},[-,-,-]_{L_{1}})\) and \((L_{2},[-,-,-]_{L_{2}})\) is a linear map \(f:L_{1}\to L_{2}\) satisfying \(f([l_{1},l_{2},l_{3}]_{L_{1}})=[f(l_{1}),f(l_{2}),f(l_{3})]_{L_{2}},\ \forall l_{1},l_{2},l_{3}\in L_{1}\).
**Definition 2.2**.: (1) (see [19]) A representation of a 3-Lie algebra \((L,[-,-,-]_{L})\) on a vector space \(H\) is a linear map \(\rho:\wedge^{2}L\to\mathrm{End}(H)\), such that
\[\rho([l_{1},l_{2},l_{3}]_{L},l_{4})=\rho(l_{2},l_{3})\rho(l_{1},l_ {4})+\rho(l_{3},l_{1})\rho(l_{2},l_{4})+\rho(l_{1},l_{2})\rho(l_{3},l_{4}), \tag{2.2}\] \[\rho(l_{1},l_{2})\rho(l_{3},l_{4})=\rho(l_{3},l_{4})\rho(l_{1},l_ {2})+\rho([l_{1},l_{2},l_{3}]_{L},l_{4})+\rho(l_{3},[l_{1},l_{2},l_{4}]_{L}), \tag{2.3}\]
for all \(l_{1},l_{2},l_{3},l_{4}\in L\). We also denote a representation of \(L\) on \(H\) by \((H;\rho)\).
(2) A coherent action of a 3-Lie algebra \((L,[-,-,-]_{L})\) on another 3-Lie algebra \((H,[-,-,-]_{H})\) is a linear map \(\rho:\wedge^{2}L\to\mathrm{End}(H)\) satisfying Eqs. (2.2), (2.3) and
\[\rho(l_{1},l_{2})[h_{1},h_{2},h_{3}]_{H}= [\rho(l_{1},l_{2})h_{1},h_{2},h_{3}]_{H}+[h_{1},\rho(l_{1},l_{2}) h_{2},h_{3}]_{H}+[h_{1},h_{2},\rho(l_{1},l_{2})h_{3}]_{H}, \tag{2.4}\] \[[\rho(l_{1},l_{2})h_{1},h_{2},h_{3}]_{H}= 0, \tag{2.5}\]
for all \(l_{1},l_{2},l_{3}\in L\) and \(h_{1},h_{2},h_{3}\in H\). We denote a coherent action of \(L\) on \(H\) by \((H,[-,-,-]_{H};\rho^{\dagger})\).
**Example 2.3**.: Let \((H,[-,-,-]_{H})\) be a 3-Lie algebra. Define \(ad:\wedge^{2}H\to\mathrm{End}(H)\) by \(ad(h_{1},h_{2})(h):=[h_{1},h_{2},h]_{H},\ \forall\ h_{1},h_{2},h\in H\). Then \((H;ad)\) is a representation of \((H,[-,-,-]_{H})\), which is called the adjoint representation. Furthermore, if \(ad\) satisfies
\[[ad(h_{1},h_{2})h_{1}^{\prime},h_{2}^{\prime},h_{3}^{\prime}]_{H}=0,\forall\ h_{1}^{ \prime},h_{2}^{\prime},h_{3}^{\prime}\in H,\]
then \((H,[-,-,-]_{H};ad^{\dagger})\) is a coherent adjoint action of \((H,[-,-,-]_{H})\).
**Definition 2.4**.: (see [7]) A 3-Leibniz algebra is a vector space \(\mathcal{L}\) together with a trilinear operation \([-,-,-]_{\mathcal{L}}:\mathcal{L}\otimes\mathcal{L}\otimes\mathcal{L}\to \mathcal{L}\) such that
\[[l_{1},l_{2},[l_{3},l_{4},l_{5}]_{\mathcal{L}}]_{\mathcal{L}}=[[l_{1},l_{2},l_ {3}]_{\mathcal{L}},l_{4},l_{5}]_{\mathcal{L}}+[l_{3},[l_{1},l_{2},l_{4}]_{ \mathcal{L}},l_{5}]_{\mathcal{L}}+[l_{3},l_{4},[l_{1},l_{2},l_{5}]_{\mathcal{L }}]_{\mathcal{L}},\]
for any \(l_{i}\in\mathcal{L}\).
**Proposition 2.5**.: _Let \((L,[-,-,-]_{L})\) and \((H,[-,-,-]_{H})\) be two 3-Lie algebras and \(\rho:\wedge^{2}L\to\mathrm{End}(H)\) a bilinear map. Then \((H,[-,-,-]_{H};\rho^{\dagger})\) is a coherent action of \(L\) if and only if \(L\oplus H\) is a 3-Leibniz algebra under the following map:_
\[[l_{1}+h_{1},l_{2}+h_{2},l_{3}+h_{3}]_{\rho}:=[l_{1},l_{2},l_{3}]_{L}+\rho(l_{1 },l_{2})h_{3}+[h_{1},h_{2},h_{3}]_{H},\]
_for any \(l_{1},l_{2},l_{3}\in L\) and \(h_{1},h_{2},h_{3}\in H\). \((L\oplus H,[-,-,-]_{\rho})\) is called the nonabelian hemisemidirect product 3-Leibniz algebra, and denoted by \(L\ltimes_{\rho}H\)._
Proof.: For all \(l_{1},l_{2},l_{3},l_{4},l_{5}\in L\) and \(h_{1},h_{2},h_{3},h_{4},h_{5}\in H\), by Eqs. (2.1), (2.3), (2.4) and (2.5), we have
\[[l_{1}+h_{1},l_{2}+h_{2},[l_{3}+h_{3},l_{4}+h_{4},l_{5}+h_{5}]_{ \rho}]_{\rho}-[[l_{1}+h_{1},l_{2}+h_{2},l_{3}+h_{3}]_{\rho},l_{4}+h_{4},l_{5}+h_ {5}]_{\rho}\] \[-[l_{3}+h_{3},[l_{1}+h_{1},l_{2}+h_{2},l_{4}+h_{4}]_{\rho},l_{5}+h_ {5}]_{\rho}-[l_{3}+h_{3},l_{4}+h_{4},[l_{1}+h_{1},l_{2}+h_{2},l_{5}+h_{5}]_{ \rho}]_{\rho}\] \[= [l_{1},l_{2},[l_{3},l_{4},l_{5}]_{L}]_{L}+\rho(l_{1},l_{2})\rho(l_ {3},l_{4})h_{5}+\rho(l_{1},l_{2})[h_{3},h_{4},h_{5}]_{H}+[h_{1},h_{2},\rho(l_{ 3},l_{4})h_{5}]_{H}\] \[+[h_{1},h_{2},[h_{3},h_{4},h_{5}]_{H}]_{H}-[[l_{1},l_{2},l_{3}]_{L },l_{4},l_{5}]_{L}-\rho([l_{1},l_{2},l_{3}]_{L},l_{4})h_{5}-[\rho(l_{1},l_{2}) h_{3},h_{4},h_{5}]_{H}\] \[-[[h_{1},h_{2},h_{3}]_{H},h_{4},h_{5}]_{H}-[l_{3},[l_{1},l_{2},l_{ 4}]_{L},l_{5}]_{L}-\rho(l_{3},[l_{1},l_{2},l_{4}]_{L})h_{5}-[h_{3},\rho(l_{1},l_ {2})h_{4},h_{5}]_{H}\] \[-[h_{3},[h_{1},h_{2},h_{4}]_{H},h_{5}]_{H}-[l_{3},l_{4},[l_{1},l_{2 },l_{5}]_{L}]_{L}-\rho(l_{3},l_{4})\rho(l_{1},l_{2})h_{5}-\rho(l_{3},l_{4})[h_{ 1},h_{2},h_{5}]_{H}\] \[-[h_{3},h_{4},\rho(l_{1},l_{2})h_{5}]_{H}-[h_{3},h_{4},[h_{1},h_{2 },h_{5}]_{H}]_{H}\] \[= [h_{1},h_{2},\rho(l_{3},l_{4})h_{5}]_{H}-\rho(l_{3},l_{4})[h_{1},h_ {2},h_{5}]_{H}\] \[= 0.\]
Thus, \((L\oplus H,[-,-,-]_{\rho})\) is a 3-Leibniz algebra.
The converse can be proved similarly. We omit the details.
**Definition 2.6**.: (1) A nonabelian embedding tensor on a 3-Lie algebra \((L,[-,-,-]_{L})\) with respect to a coherent action \((H,[-,-,-]_{H};\rho^{\dagger})\) is a linear map \(\Lambda:H\to L\) satisfying the following equation:
\[[\Lambda h_{1},\Lambda h_{2},\Lambda h_{3}]_{L}= \Lambda(\rho(\Lambda h_{1},\Lambda h_{2})h_{3}+[h_{1},h_{2},h_{3 }]_{H}), \tag{2.6}\]
for any \(h_{1},h_{2},h_{3}\in H\).
(2) A nonabelian embedding tensor 3-Lie algebra is a triple \((H,L,\Lambda)\) consisting of a 3-Lie algebra \((L,[-,-,-]_{L})\), a coherent action \((H,[-,-,-]_{H};\rho^{\dagger})\) of \(L\) and a nonabelian embedding tensor \(\Lambda:H\to L\). We denote a nonabelian embedding tensor 3-Lie algebra \((H,L,\Lambda)\) by the notation \(H\stackrel{{\Lambda}}{{\longrightarrow}}L\).
(3) Let \(H\stackrel{{\Lambda_{1}}}{{\longrightarrow}}L\) and \(H\stackrel{{\Lambda_{2}}}{{\longrightarrow}}L\) be two nonabelian embedding tensor 3-Lie algebras. Then a homomorphism from \(H\stackrel{{\Lambda_{1}}}{{\longrightarrow}}L\) to \(H\stackrel{{\Lambda_{2}}}{{\longrightarrow}}L\) consists of two 3-Lie algebra homomorphisms \(f_{L}:L\to L\) and \(f_{H}:H\to H\) satisfying the following equations
\[\Lambda_{2}\circ f_{H}= f_{L}\circ\Lambda_{1}, \tag{2.7}\] \[f_{H}(\rho(l_{1},l_{2})h)= \rho(f_{L}(l_{1}),f_{L}(l_{2}))f_{H}(h). \tag{2.8}\]
for \(l_{1},l_{2}\in L\) and \(h\in H.\) Furthermore, if \(f_{L}\) and \(f_{H}\) are invertible, \((f_{L},f_{H})\) is called an isomorphism from \(H\stackrel{{\Lambda_{1}}}{{\longrightarrow}}L\) to \(H\stackrel{{\Lambda_{2}}}{{\longrightarrow}}L\).
**Remark 2.7**.: If \((H,[-,-,-]_{H})\) is an abelian 3-Lie algebra, then \(\Lambda\) reduces to an embedding tensor on the 3-Lie algebra \(L\) (see [16]). In addition, if \(\rho=0\), then \(\Lambda\) is a 3-Lie algebra homomorphism from \(H\) to \(L\).
**Example 2.8**.: Let \(H\) be a 4-dimensional linear space spanned by \(\alpha_{1},\alpha_{2},\alpha_{3}\) and \(\alpha_{4}\). We define a skew-symmetric trilinear map \([-,-,-]_{H}:\wedge^{3}H\to H\) by
\[[\alpha_{1},\alpha_{2},\alpha_{3}]_{H}=\alpha_{4}.\]
Then \((H,[-,-,-]_{H})\) is a 3-Lie algebra. It is obvious that \((H,[-,-,-]_{H};ad^{\dagger})\) is a coherent adjoint action of \((H,[-,-,-]_{H})\). Moreover, for \(k\in\mathbb{K}\),
\[\Lambda=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&2k&0\\ 0&0&0&k\end{array}\right)\]
is a nonabelian embedding tensor on the 3-Lie algebra \((H,[-,-,-]_{H})\) with respect to the coherent adjoint action \((H,[-,-,-]_{H};ad^{\dagger})\).
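As a numerical sanity check of Example 2.8, the following Python sketch verifies Eq. (2.6) on all basis triples. The helper `bracket` (the trilinear skew-symmetric extension of \([\alpha_{1},\alpha_{2},\alpha_{3}]_{H}=\alpha_{4}\), computed via a \(3\times 3\) determinant) and the sample value \(k=3\) are illustrative choices for this check only.

```python
import itertools

import numpy as np

k = 3.0  # any scalar in the ground field; Example 2.8 works for every k
Lam = np.diag([1.0, 1.0, 2 * k, k])  # the nonabelian embedding tensor above
e4 = np.array([0.0, 0.0, 0.0, 1.0])  # the basis vector alpha_4

def bracket(x, y, z):
    """[x, y, z]_H: trilinear skew-symmetric extension of [a1, a2, a3]_H = a4.

    Only the alpha_1 ^ alpha_2 ^ alpha_3 component survives, i.e. the
    determinant of the first three coordinates of (x, y, z).
    """
    return np.linalg.det(np.stack([x[:3], y[:3], z[:3]])) * e4

basis = np.eye(4)
for i, j, m in itertools.product(range(4), repeat=3):
    h1, h2, h3 = basis[i], basis[j], basis[m]
    # Eq. (2.6) with rho = ad:
    # [Lam h1, Lam h2, Lam h3]_H = Lam(ad(Lam h1, Lam h2)h3 + [h1, h2, h3]_H)
    lhs = bracket(Lam @ h1, Lam @ h2, Lam @ h3)
    rhs = Lam @ (bracket(Lam @ h1, Lam @ h2, h3) + bracket(h1, h2, h3))
    assert np.allclose(lhs, rhs), (i, j, m)
print("Eq. (2.6) holds on all basis triples")
```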
Next we use graphs to describe nonabelian embedding tensors on 3-Lie algebras.
**Theorem 2.9**.: _A linear map \(\Lambda:H\to L\) is a nonabelian embedding tensor on a 3-Lie algebra \((L,[-,-,-]_{L})\) with respect to the coherent action \((H,[-,-,-]_{H};\rho^{\dagger})\) if and only if the graph \(Gr(\Lambda)=\{\Lambda h+h\ |\ h\in H\}\) is a subalgebra of the nonabelian hemisemidirect product 3-Leibniz algebra \(L\ltimes_{\rho}H\)._
Proof.: Let \(\Lambda:H\to L\) be a linear map. Then for all \(h_{1},h_{2},h_{3}\in H\), we have
\[[\Lambda h_{1}+h_{1},\Lambda h_{2}+h_{2},\Lambda h_{3}+h_{3}]_{\rho}=[ \Lambda h_{1},\Lambda h_{2},\Lambda h_{3}]_{L}+\rho(\Lambda h_{1},\Lambda h_{ 2})h_{3}+[h_{1},h_{2},h_{3}]_{H},\]
Thus, the graph \(Gr(\Lambda)=\{\Lambda h+h\ |\ h\in H\}\) is a subalgebra of the nonabelian hemisemidirect product 3-Leibniz algebra \(L\ltimes_{\rho}H\) if and only if \(\Lambda\) satisfies Eq. (2.6), which implies that \(\Lambda\) is a nonabelian embedding tensor on \(L\) with respect to the coherent action \((H,[-,-,-]_{H};\rho^{\dagger})\).
Because \(H\) and \(Gr(\Lambda)\) are isomorphic as linear spaces, there is an induced 3-Leibniz algebra structure on \(H\).
**Corollary 2.10**.: _Let \(H\stackrel{{\Lambda}}{{\longrightarrow}}L\) be a nonabelian embedding tensor 3-Lie algebra, and define a linear map \([-,-,-]_{\Lambda}:\wedge^{3}H\to H\) by_
\[[h_{1},h_{2},h_{3}]_{\Lambda}=\rho(\Lambda h_{1},\Lambda h_{2})h_{3}+[h_{1},h _{2},h_{3}]_{H}, \tag{2.9}\]
_for any \(h_{1},h_{2},h_{3}\in H\). Then \((H,[-,-,-]_{\Lambda})\) is a 3-Leibniz algebra. Moreover, \(\Lambda\) is a homomorphism from the 3-Leibniz algebra \((H,[-,-,-]_{\Lambda})\) to the 3-Lie algebra \((L,[-,-,-]_{L})\). The 3-Leibniz algebra \((H,[-,-,-]_{\Lambda})\) is called the descendent 3-Leibniz algebra._
**Proposition 2.11**.: _Let \((f_{L},f_{H})\) be a homomorphism from \(H\xrightarrow{\Lambda_{1}}L\) to \(H\xrightarrow{\Lambda_{2}}L\). Then \(f_{H}\) is a homomorphism of descendent 3-Leibniz algebra from \((H,[-,-,-]_{\Lambda_{1}})\) to \((H,[-,-,-]_{\Lambda_{2}})\)._
Proof.: For all \(h_{1},h_{2},h_{3}\in H,\) by Eqs. (2.7), (2.8) and (2.9), we have
\[f_{H}([h_{1},h_{2},h_{3}]_{\Lambda_{1}})= f_{H}(\rho(\Lambda_{1}h_{1},\Lambda_{1}h_{2})h_{3}+[h_{1},h_{2},h_{3}] _{H})\] \[= \rho(f_{L}(\Lambda_{1}h_{1}),f_{L}(\Lambda_{1}h_{2}))f_{H}(h_{3}) +f_{H}([h_{1},h_{2},h_{3}]_{H})\] \[= \rho(\Lambda_{2}f_{L}(h_{1}),\Lambda_{2}f_{L}(h_{2}))f_{H}(h_{3}) +[f_{H}(h_{1}),f_{H}(h_{2}),f_{H}(h_{3})]_{H}\] \[= [f_{H}(h_{1}),f_{H}(h_{2}),f_{H}(h_{3})]_{\Lambda_{2}}.\]
The proof is finished.
## 3 3-Leibniz-Lie algebras
In this section, we introduce the concept of 3-Leibniz-Lie algebra as the basic algebraic structure of nonabelian embedding tensor 3-Lie algebra.
**Definition 3.1**.: A 3-Leibniz-Lie algebra \((H,[-,-,-]_{H},\{-,-,-\}_{H})\) consists of a 3-Lie algebra \((H,[-,-,-]_{H})\) and a trilinear product \(\{-,-,-\}_{H}:\wedge^{3}H\to H\) such that
\[\{h_{1},h_{2},\{h_{3},h_{4},h_{5}\}_{H}\}_{H}=\{\{h_{1},h_{2},h_{3}\}_{H},h_{4},h_{5}\}_{H}+\{h_{3},\{h_{1},h_{2},h_{4}\}_{H},h_{5}\}_{H}\\ +\{h_{3},h_{4},\{h_{1},h_{2},h_{5}\}_{H}\}_{H}+\{[h_{1},h_{2},h_{3}]_{H},h_{4},h_{5}\}_{H}+\{h_{3},[h_{1},h_{2},h_{4}]_{H},h_{5}\}_{H}, \tag{3.1}\]
\[\{h_{1},h_{2},[h_{3},h_{4},h_{5}]_{H}\}_{H}=[\{h_{1},h_{2},h_{3}\}_{H},h_{4},h_{5}]_{H}=0, \tag{3.2}\]
for any \(h_{1},h_{2},h_{3},h_{4},h_{5}\in H\).
A homomorphism between two 3-Leibniz-Lie algebras \((H_{1},[-,-,-]_{H_{1}},\{-,-,-\}_{H_{1}})\) and \((H_{2},[-,-,-]_{H_{2}},\{-,-,-\}_{H_{2}})\) is a 3-Lie algebra homomorphism \(f:(H_{1},[-,-,-]_{H_{1}})\rightarrow(H_{2},[-,-,-]_{H_{2}})\) such that \(f(\{h_{1},h_{2},h_{3}\}_{H_{1}})=\{f(h_{1}),f(h_{2}),f(h_{3})\}_{H_{2}},\ \ \forall h_{1},h_{2},h_{3}\in H_{1}\).
**Remark 3.2**.: A 3-Leibniz algebra \((H,\{-,-,-\}_{H})\) is naturally a 3-Leibniz-Lie algebra if the 3-Lie algebra \((H,[-,-,-]_{H})\) is abelian.
**Example 3.3**.: Let \((H,[-,-,-]_{H})\) be a 4-dimensional 3-Lie algebra given in Example 2.8. We define a trilinear product \(\{-,-,-\}_{H}:\wedge^{3}H\to H\) by
\[\{\alpha_{i_{1}},\alpha_{i_{2}},\alpha_{i_{3}}\}_{H}=(-1)^{i_{1} +i_{2}+i_{3}}\alpha_{4},\quad i_{1},i_{2},i_{3}\in\{1,2,3\},\] \[\{\alpha_{4},\alpha_{j_{1}},\alpha_{j_{2}}\}_{H}=\{\alpha_{j_{1}},\alpha_{4},\alpha_{j_{2}}\}_{H}=\{\alpha_{j_{1}},\alpha_{j_{2}},\alpha_{4} \}_{H}=0,\ j_{1},j_{2}\in\{1,2,3,4\}.\]
Then \((H,[-,-,-]_{H},\{-,-,-\}_{H})\) is a 3-Leibniz-Lie algebra.
The following theorem shows that a 3-Leibniz-Lie algebra naturally induces a 3-Leibniz algebra.
**Theorem 3.4**.: _Let \((H,[-,-,-]_{H},\{-,-,-\}_{H})\) be a 3-Leibniz-Lie algebra. Then the trilinear product \(\langle-,-,-\rangle_{H}:\wedge^{3}H\to H\) given by_
\[\langle h_{1},h_{2},h_{3}\rangle_{H}:=[h_{1},h_{2},h_{3}]_{H}+\{h_{1},h_{2},h_{3 }\}_{H}, \tag{3.3}\]
_for any \(h_{1},h_{2},h_{3}\in H,\) defines a 3-Leibniz algebra structure on \(H\), which is denoted by \((H,\langle-,-,-\rangle_{H})\) and called the subadjacent 3-Leibniz algebra._
Proof.: For any \(h_{1},h_{2},h_{3},h_{4},h_{5}\in H,\) by Eqs. (2.1), (3.1), (3.2) and (3.3), we have
\[\langle h_{1},h_{2},\langle h_{3},h_{4},h_{5}\rangle_{H}\rangle_{H }-\langle\langle h_{1},h_{2},h_{3}\rangle_{H},h_{4},h_{5}\rangle_{H}-\langle h _{3},\langle h_{1},h_{2},h_{4}\rangle_{H},h_{5}\rangle_{H}\] \[-\langle h_{3},h_{4},\langle h_{1},h_{2},h_{5}\rangle_{H}\rangle_ {H}\] \[= [h_{1},h_{2},[h_{3},h_{4},h_{5}]_{H}]_{H}+[h_{1},h_{2},\{h_{3},h_{ 4},h_{5}\}_{H}]_{H}+\{h_{1},h_{2},[h_{3},h_{4},h_{5}]_{H}\}_{H}\] \[+\{h_{1},h_{2},\{h_{3},h_{4},h_{5}\}_{H}\}_{H}-[[h_{1},h_{2},h_{3} ]_{H},h_{4},h_{5}]_{H}-[\{h_{1},h_{2},h_{3}\}_{H},h_{4},h_{5}]_{H}\] \[-\{[h_{1},h_{2},h_{3}]_{H},h_{4},h_{5}\}_{H}-\{\{h_{1},h_{2},h_{3} \}_{H},h_{4},h_{5}\}_{H}-[h_{3},[h_{1},h_{2},h_{4}]_{H},h_{5}]_{H}\] \[-[h_{3},\{h_{1},h_{2},h_{4}\}_{H},h_{5}]_{H}-\{h_{3},[h_{1},h_{2},h _{4}]_{H},h_{5}\}_{H}-\{h_{3},\{h_{1},h_{2},h_{4}\}_{H},h_{5}\}_{H}\] \[-[h_{3},h_{4},[h_{1},h_{2},h_{5}]_{H}]_{H}-[h_{3},h_{4},\{h_{1},h_ {2},h_{5}\}_{H}]_{H}-\{h_{3},h_{4},[h_{1},h_{2},h_{5}]_{H}\}_{H}\] \[-\{h_{3},h_{4},\{h_{1},h_{2},h_{5}\}_{H}\}_{H}\] \[= \{h_{1},h_{2},\{h_{3},h_{4},h_{5}\}_{H}\}_{H}-\{[h_{1},h_{2},h_{3} ]_{H},h_{4},h_{5}\}_{H}-\{\{h_{1},h_{2},h_{3}\}_{H},h_{4},h_{5}\}_{H}\] \[-\{h_{3},[h_{1},h_{2},h_{4}]_{H},h_{5}\}_{H}-\{h_{3},\{h_{1},h_{2}, h_{4}\}_{H},h_{5}\}_{H}-\{h_{3},h_{4},\{h_{1},h_{2},h_{5}\}_{H}\}_{H}\] \[= 0.\]
Hence, \((H,\langle-,-,-\rangle_{H})\) is a 3-Leibniz algebra.
The following theorem shows that a nonabelian embedding tensor 3-Lie algebra induces a 3-Leibniz-Lie algebra.
**Theorem 3.5**.: _Let \(H\stackrel{{\Lambda}}{{\longrightarrow}}L\) be a nonabelian embedding tensor 3-Lie algebra. Then \((H,[-,-,-]_{H},\{-,-,-\}_{\Lambda})\) is a 3-Leibniz-Lie algebra, where_
\[\{h_{1},h_{2},h_{3}\}_{\Lambda}:=\rho(\Lambda h_{1},\Lambda h_{2})h_{3}, \tag{3.4}\]
_for any \(h_{1},h_{2},h_{3}\in H\)._
Proof.: For all \(h_{1},h_{2},h_{3},h_{4},h_{5}\in H,\) by Eqs. (2.3), (2.6) and (3.4), we have
\[\{\{h_{1},h_{2},h_{3}\}_{\Lambda},h_{4},h_{5}\}_{\Lambda}+\{h_{3}, \{h_{1},h_{2},h_{4}\}_{\Lambda},h_{5}\}_{\Lambda}+\{h_{3},h_{4},\{h_{1},h_{2},h_ {5}\}_{\Lambda}\}_{\Lambda}\] \[+\{[h_{1},h_{2},h_{3}]_{H},h_{4},h_{5}\}_{\Lambda}+\{h_{3},[h_{1},h_{2},h_{4}]_{H},h_{5}\}_{\Lambda}-\{h_{1},h_{2},\{h_{3},h_{4},h_{5}\}_{ \Lambda}\}_{\Lambda}\] \[= \rho(\Lambda\rho(\Lambda h_{1},\Lambda h_{2})h_{3},\Lambda h_{4}) h_{5}+\rho(\Lambda h_{3},\Lambda\rho(\Lambda h_{1},\Lambda h_{2})h_{4})h_{5}+ \rho(\Lambda h_{3},\Lambda h_{4})\rho(\Lambda h_{1},\Lambda h_{2})h_{5}\] \[+\rho(\Lambda[h_{1},h_{2},h_{3}]_{H},\Lambda h_{4})h_{5}+\rho( \Lambda h_{3},\Lambda[h_{1},h_{2},h_{4}]_{H})h_{5}-\rho(\Lambda h_{1},\Lambda h _{2})\rho(\Lambda h_{3},\Lambda h_{4})h_{5}\]
\[= \rho(\Lambda\rho(\Lambda h_{1},\Lambda h_{2})h_{3},\Lambda h_{4})h_{5} +\rho(\Lambda h_{3},\Lambda\rho(\Lambda h_{1},\Lambda h_{2})h_{4})h_{5}+\rho( \Lambda h_{3},\Lambda h_{4})\rho(\Lambda h_{1},\Lambda h_{2})h_{5}\] \[+\rho([\Lambda h_{1},\Lambda h_{2},\Lambda h_{3}]_{L}-\Lambda\rho( \Lambda h_{1},\Lambda h_{2})h_{3},\Lambda h_{4})h_{5}+\rho(\Lambda h_{3},[ \Lambda h_{1},\Lambda h_{2},\Lambda h_{4}]_{L}\] \[-\Lambda\rho(\Lambda h_{1},\Lambda h_{2})h_{4})h_{5}-\rho(\Lambda h _{1},\Lambda h_{2})\rho(\Lambda h_{3},\Lambda h_{4})h_{5}\] \[= \rho(\Lambda h_{3},\Lambda h_{4})\rho(\Lambda h_{1},\Lambda h_{2}) h_{5}+\rho([\Lambda h_{1},\Lambda h_{2},\Lambda h_{3}]_{L},\Lambda h_{4})h_{5}+\rho( \Lambda h_{3},[\Lambda h_{1},\Lambda h_{2},\Lambda h_{4}]_{L})h_{5}\] \[-\rho(\Lambda h_{1},\Lambda h_{2})\rho(\Lambda h_{3},\Lambda h_{4 })h_{5}\] \[= 0.\]
Furthermore, by Eqs. (2.4) and (2.5), we have
\[\{h_{1},h_{2},[h_{3},h_{4},h_{5}]_{H}\}_{\Lambda} =\rho(\Lambda h_{1},\Lambda h_{2})[h_{3},h_{4},h_{5}]_{H}=0,\] \[[\{h_{1},h_{2},h_{3}\}_{\Lambda},h_{4},h_{5}]_{H} =[\rho(\Lambda h_{1},\Lambda h_{2})h_{3},h_{4},h_{5}]_{H}=0.\]
Thus, \((H,[-,-,-]_{H},\{-,-,-\}_{\Lambda})\) is a 3-Leibniz-Lie algebra.
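In the setting of Example 2.8 (taking \(L=H\) with the coherent adjoint action), the induced product of Theorem 3.5 is \(\{h_{1},h_{2},h_{3}\}_{\Lambda}=[\Lambda h_{1},\Lambda h_{2},h_{3}]_{H}\). The following sketch checks the identities (3.1) and (3.2) for this instance on all basis quintuples; the helper names and the value of \(k\) are illustrative choices.

```python
import itertools

import numpy as np

k = 2.0
Lam = np.diag([1.0, 1.0, 2 * k, k])  # embedding tensor of Example 2.8
e4 = np.array([0.0, 0.0, 0.0, 1.0])

def lie(x, y, z):
    """[x, y, z]_H with [a1, a2, a3]_H = a4, extended trilinearly."""
    return np.linalg.det(np.stack([x[:3], y[:3], z[:3]])) * e4

def leib(x, y, z):
    """{x, y, z}_Lambda = rho(Lam x, Lam y) z for the adjoint action."""
    return lie(Lam @ x, Lam @ y, z)

basis = np.eye(4)
for idx in itertools.product(range(4), repeat=5):
    h1, h2, h3, h4, h5 = (basis[i] for i in idx)
    lhs = leib(h1, h2, leib(h3, h4, h5))  # left side of Eq. (3.1)
    rhs = (leib(leib(h1, h2, h3), h4, h5) + leib(h3, leib(h1, h2, h4), h5)
           + leib(h3, h4, leib(h1, h2, h5)) + leib(lie(h1, h2, h3), h4, h5)
           + leib(h3, lie(h1, h2, h4), h5))
    assert np.allclose(lhs, rhs)
    assert np.allclose(leib(h1, h2, lie(h3, h4, h5)), 0)  # Eq. (3.2)
    assert np.allclose(lie(leib(h1, h2, h3), h4, h5), 0)
print("Identities (3.1) and (3.2) hold on all basis quintuples")
```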
**Proposition 3.6**.: _Let \((f_{L},f_{H})\) be a homomorphism from \(H\xrightarrow{\Lambda_{1}}L\) to \(H\xrightarrow{\Lambda_{2}}L\). Then \(f_{H}\) is a homomorphism of 3-Leibniz-Lie algebras from \((H,[-,-,-]_{H},\{-,-,-\}_{\Lambda_{1}})\) to \((H,[-,-,-]_{H},\{-,-,-\}_{\Lambda_{2}})\)._
Proof.: For any \(h_{1},h_{2},h_{3}\in H\), by Eqs. (2.7), (2.8) and (3.4), we have
\[f_{H}(\{h_{1},h_{2},h_{3}\}_{\Lambda_{1}})= f_{H}(\rho(\Lambda_{1}h_{1},\Lambda_{1}h_{2})h_{3})\] \[= \rho(f_{L}(\Lambda_{1}h_{1}),f_{L}(\Lambda_{1}h_{2}))f_{H}(h_{3})\] \[= \rho(\Lambda_{2}f_{H}(h_{1}),\Lambda_{2}f_{H}(h_{2}))f_{H}(h_{3})\] \[= \{f_{H}(h_{1}),f_{H}(h_{2}),f_{H}(h_{3})\}_{\Lambda_{2}}.\]
The proof is finished.
## 4 Cohomology theory of nonabelian embedding tensors on 3-Lie algebras
In this section, we recall some basic results on representations and cohomologies of 3-Leibniz algebras. We construct a representation of the descendent 3-Leibniz algebra \((H,[-,-,-]_{\Lambda})\) on the vector space \(L\), and define the cohomologies of a nonabelian embedding tensor on 3-Lie algebras. As an application, we characterize infinitesimal deformations using the first cohomology group.
**Definition 4.1**.: A representation of the 3-Leibniz algebra \((\mathcal{H},[-,-,-]_{\mathcal{H}})\) is a vector space \(V\) equipped with three actions
\[\mathfrak{l}:\mathcal{H}\otimes\mathcal{H}\otimes V \to V,\] \[\mathfrak{m}:\mathcal{H}\otimes V\otimes\mathcal{H} \to V,\] \[\mathfrak{r}:V\otimes\mathcal{H}\otimes\mathcal{H} \to V,\]
satisfying for any \(a_{1},a_{2},a_{3},a_{4},a_{5}\in\mathcal{H}\) and \(u\in V\)
\[\mathfrak{l}(a_{1},a_{2},\mathfrak{l}(a_{3},a_{4},u))=\mathfrak{l}([a_{1},a_{2},a_{3}]_{\mathcal{H}},a_{4},u)+\mathfrak{l}(a_{3},[a_{1},a_{2},a_{4}]_{\mathcal{H}},u)+\mathfrak{l}(a_{3},a_{4},\mathfrak{l}(a_{1},a_{2},u)), \tag{4.1}\] \[\mathfrak{l}(a_{1},a_{2},\mathfrak{m}(a_{3},u,a_{5}))=\mathfrak{m}([a_{1},a_{2},a_{3}]_{\mathcal{H}},u,a_{5})+\mathfrak{m}(a_{3},\mathfrak{l}(a_{1},a_{2},u),a_{5})+\mathfrak{m}(a_{3},u,[a_{1},a_{2},a_{5}]_{\mathcal{H}}),\] (4.2) \[\mathfrak{l}(a_{1},a_{2},\mathfrak{r}(u,a_{4},a_{5}))=\mathfrak{r}(\mathfrak{l}(a_{1},a_{2},u),a_{4},a_{5})+\mathfrak{r}(u,[a_{1},a_{2},a_{4}]_{\mathcal{H}},a_{5})+\mathfrak{r}(u,a_{4},[a_{1},a_{2},a_{5}]_{\mathcal{H}}),\] (4.3) \[\mathfrak{m}(a_{1},u,[a_{3},a_{4},a_{5}]_{\mathcal{H}})=\mathfrak{r}(\mathfrak{m}(a_{1},u,a_{3}),a_{4},a_{5})+\mathfrak{m}(a_{3},\mathfrak{m}(a_{1},u,a_{4}),a_{5})+\mathfrak{l}(a_{3},a_{4},\mathfrak{m}(a_{1},u,a_{5})),\] (4.4) \[\mathfrak{r}(u,a_{2},[a_{3},a_{4},a_{5}]_{\mathcal{H}})=\mathfrak{r}(\mathfrak{r}(u,a_{2},a_{3}),a_{4},a_{5})+\mathfrak{m}(a_{3},\mathfrak{r}(u,a_{2},a_{4}),a_{5})+\mathfrak{l}(a_{3},a_{4},\mathfrak{r}(u,a_{2},a_{5})). \tag{4.5}\]
For \(n\geq 1\), denote the \(n\)-cochains of the 3-Leibniz algebra \((\mathcal{H},[-,-,-]_{\mathcal{H}})\) with coefficients in a representation \((V;\mathfrak{l},\mathfrak{m},\mathfrak{r})\) by
\[\mathcal{C}^{n}_{3\mathrm{Leib}}(\mathcal{H},V)=\mathrm{Hom}(\overbrace{\wedge^{2}\mathcal{H}\otimes\cdots\otimes\wedge^{2}\mathcal{H}}^{n-1}\otimes\mathcal{H},V).\]
The coboundary map \(\delta:\mathcal{C}^{n}_{3\mathrm{Leib}}(\mathcal{H},V)\to\mathcal{C}^{n+1}_{3\mathrm{Leib}}(\mathcal{H},V)\) is defined, for \(A_{i}=a_{i}\wedge b_{i}\in\wedge^{2}\mathcal{H},1\leq i\leq n\) and \(c\in\mathcal{H}\), by
\[(\delta\varphi)(A_{1},A_{2},\cdots,A_{n},c)\] \[= \sum_{1\leq j<k\leq n}(-1)^{j}\varphi(A_{1},\cdots,\widehat{A_{j}},\cdots,A_{k-1},a_{k}\wedge[a_{j},b_{j},b_{k}]_{\mathcal{H}}+[a_{j},b_{j},a_{k}]_{\mathcal{H}}\wedge b_{k},\cdots,A_{n},c)\] \[+\sum_{j=1}^{n}(-1)^{j}\varphi(A_{1},\cdots,\widehat{A_{j}},\cdots,A_{n},[a_{j},b_{j},c]_{\mathcal{H}})\] \[+\sum_{j=1}^{n}(-1)^{j+1}\mathfrak{l}(A_{j},\varphi(A_{1},\cdots,\widehat{A_{j}},\cdots,A_{n},c))\] \[+(-1)^{n+1}(\mathfrak{m}(a_{n},\varphi(A_{1},\cdots,A_{n-1},b_{n}),c)+\mathfrak{r}(\varphi(A_{1},\cdots,A_{n-1},a_{n}),b_{n},c)).\]
It was proved in [7, 31] that \(\delta^{2}=0\). Therefore, \((\oplus_{n=1}^{+\infty}\mathcal{C}^{n}_{3\mathrm{Leib}}(\mathcal{H},V),\delta)\) is a cochain complex.
Let \(H\xrightarrow{\Lambda}L\) be a nonabelian embedding tensor \(3\)-Lie algebra. By Corollary 2.10, \((H,[-,-,-]_{\Lambda})\) is a \(3\)-Leibniz algebra. Next we give a representation of \((H,[-,-,-]_{\Lambda})\) on \(L\).
**Lemma 4.2**.: _With the above notations, define three actions_
\[\mathfrak{l}_{\Lambda}:H\otimes H\otimes L\to L,\mathfrak{m}_{\Lambda}:H \otimes L\otimes H\to L,\mathfrak{r}_{\Lambda}:L\otimes H\otimes H\to L,\]
_by_
\[\mathfrak{l}_{\Lambda}(h_{1},h_{2},l) =[\Lambda h_{1},\Lambda h_{2},l]_{L},\] \[\mathfrak{m}_{\Lambda}(h_{1},l,h_{2}) =[\Lambda h_{1},l,\Lambda h_{2}]_{L}-\Lambda\rho(\Lambda h_{1},l )h_{2},\] \[\mathfrak{r}_{\Lambda}(l,h_{1},h_{2}) =[l,\Lambda h_{1},\Lambda h_{2}]_{L}-\Lambda\rho(l,\Lambda h_{1})h _{2},\]
_for any \(h_{1},h_{2}\in H,l\in L.\) Then \((L;\mathfrak{l}_{\Lambda},\mathfrak{m}_{\Lambda},\mathfrak{r}_{\Lambda})\) is a representation of the descendent 3-Leibniz algebra \((H,[-,-,-]_{\Lambda})\)._
Proof.: For all \(h_{1},h_{2},h_{3},h_{4},h_{5}\in H\) and \(l\in L\), by Eqs. (2.1), (2.3)-(2.6) and (2.9), we have
\[\mathfrak{l}_{\Lambda}(h_{1},h_{2},\mathfrak{l}_{\Lambda}(h_{3},h_{4},l))-\mathfrak{l}_{\Lambda}([h_{1},h_{2},h_{3}]_{\Lambda},h_{4},l)-\mathfrak{l}_{\Lambda}(h_{3},[h_{1},h_{2},h_{4}]_{\Lambda},l)-\mathfrak{l}_{\Lambda}(h_{3},h_{4},\mathfrak{l}_{\Lambda}(h_{1},h_{2},l))\] \[= [\Lambda h_{1},\Lambda h_{2},[\Lambda h_{3},\Lambda h_{4},l]_{L}]_{L}-[[\Lambda h_{1},\Lambda h_{2},\Lambda h_{3}]_{L},\Lambda h_{4},l]_{L}-[\Lambda h_{3},[\Lambda h_{1},\Lambda h_{2},\Lambda h_{4}]_{L},l]_{L}\] \[-[\Lambda h_{3},\Lambda h_{4},[\Lambda h_{1},\Lambda h_{2},l]_{L}]_{L}\] \[= 0,\] \[\mathfrak{l}_{\Lambda}(h_{1},h_{2},\mathfrak{m}_{\Lambda}(h_{3},l,h_{5}))-\mathfrak{m}_{\Lambda}([h_{1},h_{2},h_{3}]_{\Lambda},l,h_{5})-\mathfrak{m}_{\Lambda}(h_{3},\mathfrak{l}_{\Lambda}(h_{1},h_{2},l),h_{5})-\mathfrak{m}_{\Lambda}(h_{3},l,[h_{1},h_{2},h_{5}]_{\Lambda})\] \[= [\Lambda h_{1},\Lambda h_{2},[\Lambda h_{3},l,\Lambda h_{5}]_{L}]_{L}-[\Lambda h_{1},\Lambda h_{2},\Lambda\rho(\Lambda h_{3},l)h_{5}]_{L}-[[\Lambda h_{1},\Lambda h_{2},\Lambda h_{3}]_{L},l,\Lambda h_{5}]_{L}\] \[+\Lambda\rho([\Lambda h_{1},\Lambda h_{2},\Lambda h_{3}]_{L},l)h_{5}-[\Lambda h_{3},[\Lambda h_{1},\Lambda h_{2},l]_{L},\Lambda h_{5}]_{L}+\Lambda\rho(\Lambda h_{3},[\Lambda h_{1},\Lambda h_{2},l]_{L})h_{5}\] \[-[\Lambda h_{3},l,[\Lambda h_{1},\Lambda h_{2},\Lambda h_{5}]_{L}]_{L}+\Lambda\rho(\Lambda h_{3},l)\rho(\Lambda h_{1},\Lambda h_{2})h_{5}+\Lambda\rho(\Lambda h_{3},l)[h_{1},h_{2},h_{5}]_{H}\] \[= -[\Lambda h_{1},\Lambda h_{2},\Lambda\rho(\Lambda h_{3},l)h_{5}]_{L}+\Lambda\rho([\Lambda h_{1},\Lambda h_{2},\Lambda h_{3}]_{L},l)h_{5}+\Lambda\rho(\Lambda h_{3},[\Lambda h_{1},\Lambda h_{2},l]_{L})h_{5}\] \[+\Lambda\rho(\Lambda h_{3},l)\rho(\Lambda h_{1},\Lambda h_{2})h_{5}+\Lambda\rho(\Lambda h_{3},l)[h_{1},h_{2},h_{5}]_{H}\] \[= -\Lambda\rho(\Lambda h_{1},\Lambda h_{2})\rho(\Lambda h_{3},l)h_{5}-\Lambda[h_{1},h_{2},\rho(\Lambda h_{3},l)h_{5}]_{H}+\Lambda\rho(\Lambda h_{1},\Lambda h_{2})\rho(\Lambda h_{3},l)h_{5}\] \[+\Lambda\rho(\Lambda h_{3},l)[h_{1},h_{2},h_{5}]_{H}\] \[= -\Lambda[h_{1},h_{2},\rho(\Lambda h_{3},l)h_{5}]_{H}+\Lambda\rho(\Lambda h_{3},l)[h_{1},h_{2},h_{5}]_{H}\] \[= \Lambda[\rho(\Lambda h_{3},l)h_{1},h_{2},h_{5}]_{H}+\Lambda[h_{1},\rho(\Lambda h_{3},l)h_{2},h_{5}]_{H}\] \[= 0,\]
which imply that Eqs. (4.1) and (4.2) hold. Similarly, we can prove that Eqs. (4.3), (4.4) and (4.5) are true. The proof is finished.
**Proposition 4.3**.: _Let \(H\xrightarrow{\Lambda_{1}}L\) and \(H\xrightarrow{\Lambda_{2}}L\) be two nonabelian embedding tensor 3-Lie algebras and \((f_{L},f_{H})\) a homomorphism from \(H\xrightarrow{\Lambda_{1}}L\) to \(H\xrightarrow{\Lambda_{2}}L\). Then the induced representation \((L;\mathfrak{l}_{\Lambda_{1}},\mathfrak{m}_{\Lambda_{1}},\mathfrak{r}_{\Lambda_{1}})\) of the descendent 3-Leibniz algebra \((H,[-,-,-]_{\Lambda_{1}})\) and the induced representation \((L;\mathfrak{l}_{\Lambda_{2}},\mathfrak{m}_{\Lambda_{2}},\mathfrak{r}_{\Lambda_{2}})\) of the descendent 3-Leibniz algebra \((H,[-,-,-]_{\Lambda_{2}})\) satisfy the following equations:_
\[f_{L}(\mathfrak{l}_{\Lambda_{1}}(h_{1},h_{2},l))= \mathfrak{l}_{\Lambda_{2}}(f_{H}(h_{1}),f_{H}(h_{2}),f_{L}(l)), \tag{4.6}\] \[f_{L}(\mathfrak{m}_{\Lambda_{1}}(h_{1},l,h_{2}))= \mathfrak{m}_{\Lambda_{2}}(f_{H}(h_{1}),f_{L}(l),f_{H}(h_{2})),\] (4.7) \[f_{L}(\mathfrak{r}_{\Lambda_{1}}(l,h_{1},h_{2}))= \mathfrak{r}_{\Lambda_{2}}(f_{L}(l),f_{H}(h_{1}),f_{H}(h_{2})),\ \forall h_{1},h_{2}\in H,l\in L. \tag{4.8}\]
_In other words, the diagrams formed by \(f_{L}\), \(f_{H}\) and the actions \(\mathfrak{l}\), \(\mathfrak{m}\), \(\mathfrak{r}\) commute._
Proof.: For all \(h_{1},h_{2}\in H,l\in L\), by Eqs. (2.7) and (2.8) we have
\[f_{L}(\mathfrak{l}_{\Lambda_{1}}(h_{1},h_{2},l))= f_{L}([\Lambda_{1}h_{1},\Lambda_{1}h_{2},l]_{L})=[f_{L}(\Lambda_{1}h_{1}),f_{L}( \Lambda_{1}h_{2}),f_{L}(l)]_{L}\] \[= [\Lambda_{2}f_{H}(h_{1}),\Lambda_{2}f_{H}(h_{2}),f_{L}(l)]_{L}\] \[= \mathfrak{l}_{\Lambda_{2}}(f_{H}(h_{1}),f_{H}(h_{2}),f_{L}(l)),\] \[f_{L}(\mathfrak{m}_{\Lambda_{1}}(h_{1},l,h_{2}))= f_{L}([\Lambda_{1}h_{1},l,\Lambda_{1}h_{2}]_{L}-\Lambda_{1}\rho( \Lambda_{1}h_{1},l)h_{2})\] \[= [f_{L}(\Lambda_{1}h_{1}),f_{L}(l),f_{L}(\Lambda_{1}h_{2})]_{L}-f_ {L}(\Lambda_{1}\rho(\Lambda_{1}h_{1},l)h_{2})\] \[= [\Lambda_{2}f_{H}(h_{1}),f_{L}(l),\Lambda_{2}f_{H}(h_{2})]_{L}- \Lambda_{2}f_{H}(\rho(\Lambda_{1}h_{1},l)h_{2})\] \[= [\Lambda_{2}f_{H}(h_{1}),f_{L}(l),\Lambda_{2}f_{H}(h_{2})]_{L}- \Lambda_{2}\rho(\Lambda_{2}f_{H}(h_{1}),f_{L}(l))f_{H}(h_{2})\] \[= \mathfrak{m}_{\Lambda_{2}}(f_{H}(h_{1}),f_{L}(l),f_{H}(h_{2})).\]
The remaining equation, Eq. (4.8), can be proved similarly.
For \(n\geq 1\), let \(\delta_{\Lambda}:\mathcal{C}^{n}_{\rm 3Leib}(H,L)\to\mathcal{C}^{n+1}_{\rm 3Leib}(H,L)\) be the coboundary operator of the \(3\)-Leibniz algebra \((H,[-,-,-]_{\Lambda})\) with coefficients in the representation \((L;\mathfrak{l}_{\Lambda},\mathfrak{m}_{\Lambda},\mathfrak{r}_{\Lambda})\). More precisely, for all \(\phi\in\mathcal{C}^{n}_{\rm 3Leib}(H,L),\mathfrak{H}_{i}=u_{i}\wedge v_{i}\in \wedge^{2}H,1\leq i\leq n\) and \(w\in H\), we have
\[(\delta_{\Lambda}\phi)(\mathfrak{H}_{1},\mathfrak{H}_{2},\cdots, \mathfrak{H}_{n},w)\] \[= \sum_{1\leq j<k\leq n}(-1)^{j}\phi(\mathfrak{H}_{1},\cdots, \widehat{\mathfrak{H}_{j}},\cdots,\mathfrak{H}_{k-1},u_{k}\wedge[u_{j},v_{j},v _{k}]_{\Lambda}+[u_{j},v_{j},u_{k}]_{\Lambda}\wedge v_{k},\cdots,\mathfrak{H}_ {n},w)\] \[+\sum_{j=1}^{n}(-1)^{j}\phi(\mathfrak{H}_{1},\cdots,\widehat{ \mathfrak{H}_{j}},\cdots,\mathfrak{H}_{n},[u_{j},v_{j},w]_{\Lambda})\] \[+\sum_{j=1}^{n}(-1)^{j+1}\mathfrak{l}_{\Lambda}(\mathfrak{H}_{j},\phi(\mathfrak{H}_{1},\cdots,\widehat{\mathfrak{H}_{j}},\cdots,\mathfrak{H} _{n},w))\] \[+(-1)^{n+1}(\mathfrak{m}_{\Lambda}(u_{n},\phi(\mathfrak{H}_{1}, \cdots,\mathfrak{H}_{n-1},v_{n}),w)+\mathfrak{r}_{\Lambda}(\phi(\mathfrak{H}_ {1},\cdots,\mathfrak{H}_{n-1},u_{n}),v_{n},w)).\]
In particular, for \(\phi\in\mathcal{C}^{1}_{\rm 3Leib}(H,L):=\mathrm{Hom}(H,L)\) and \(u_{1},v_{1},w\in H,\) we have
\[(\delta_{\Lambda}\phi)(u_{1},v_{1},w)= -\phi([u_{1},v_{1},w]_{\Lambda})+\mathfrak{l}_{\Lambda}(u_{1},v_ {1},\phi(w))+\mathfrak{m}_{\Lambda}(u_{1},\phi(v_{1}),w)+\mathfrak{r}_{ \Lambda}(\phi(u_{1}),v_{1},w)\] \[= -\phi([u_{1},v_{1},w]_{\Lambda})+[\Lambda u_{1},\Lambda v_{1}, \phi(w)]_{L}+[\Lambda u_{1},\phi(v_{1}),\Lambda w]_{L}\] \[-\Lambda\rho(\Lambda u_{1},\phi(v_{1}))w+[\phi(u_{1}),\Lambda v_ {1},\Lambda w]_{L}-\Lambda\rho(\phi(u_{1}),\Lambda v_{1})w.\]
For any \((a_{1},a_{2})\in\mathcal{C}^{0}_{\rm 3Leib}(H,L):=\wedge^{2}L\), we define \(\delta_{\Lambda}:\mathcal{C}^{0}_{\rm 3Leib}(H,L)\to\mathcal{C}^{1}_{\rm 3Leib}(H,L),(a_{1},a_{2})\mapsto\delta_{\Lambda}(a_{1},a_{2})\) by
\[\delta_{\Lambda}(a_{1},a_{2})u=\Lambda\rho(a_{1},a_{2})u-[a_{1},a_{2},\Lambda u ]_{L},\forall u\in H.\]
**Proposition 4.4**.: _Let \(H\stackrel{{\Lambda}}{{\longrightarrow}}L\) be a nonabelian embedding tensor 3-Lie algebra. Then \(\delta_{\Lambda}(\delta_{\Lambda}(a_{1},a_{2}))=0\) for any \(a_{1}\wedge a_{2}\in\wedge^{2}L\)._
Proof.: For any \(u_{1},v_{1},w\in H\), by Eqs. (2.1)-(2.6) and (2.9) we have
\[\delta_{\Lambda}(\delta_{\Lambda}(a_{1},a_{2}))(u_{1},v_{1},w)\] \[= -\delta_{\Lambda}(a_{1},a_{2})([u_{1},v_{1},w]_{\Lambda})+[\Lambda u_{1},\Lambda v_{1},\delta_{\Lambda}(a_{1},a_{2})(w)]_{L}+[\Lambda u_{1},\delta_{\Lambda}(a_{1},a_{2})(v_{1}),\Lambda w]_{L}\] \[-\Lambda\rho(\Lambda u_{1},\delta_{\Lambda}(a_{1},a_{2})(v_{1}))w+[\delta_{\Lambda}(a_{1},a_{2})(u_{1}),\Lambda v_{1},\Lambda w]_{L}-\Lambda\rho(\delta_{\Lambda}(a_{1},a_{2})(u_{1}),\Lambda v_{1})w\] \[= -\Lambda\rho(a_{1},a_{2})[u_{1},v_{1},w]_{\Lambda}+[a_{1},a_{2},[\Lambda u_{1},\Lambda v_{1},\Lambda w]_{L}]_{L}+[\Lambda u_{1},\Lambda v_{1},\Lambda\rho(a_{1},a_{2})w]_{L}\] \[-[\Lambda u_{1},\Lambda v_{1},[a_{1},a_{2},\Lambda w]_{L}]_{L}+[\Lambda u_{1},\Lambda\rho(a_{1},a_{2})v_{1},\Lambda w]_{L}-[\Lambda u_{1},[a_{1},a_{2},\Lambda v_{1}]_{L},\Lambda w]_{L}\] \[-\Lambda\rho(\Lambda u_{1},\Lambda\rho(a_{1},a_{2})v_{1})w+\Lambda\rho(\Lambda u_{1},[a_{1},a_{2},\Lambda v_{1}]_{L})w+[\Lambda\rho(a_{1},a_{2})u_{1},\Lambda v_{1},\Lambda w]_{L}\] \[-[[a_{1},a_{2},\Lambda u_{1}]_{L},\Lambda v_{1},\Lambda w]_{L}-\Lambda\rho(\Lambda\rho(a_{1},a_{2})u_{1},\Lambda v_{1})w+\Lambda\rho([a_{1},a_{2},\Lambda u_{1}]_{L},\Lambda v_{1})w\] \[= -\Lambda\rho(a_{1},a_{2})\rho(\Lambda u_{1},\Lambda v_{1})w-\Lambda\rho(a_{1},a_{2})[u_{1},v_{1},w]_{H}+\Lambda\rho(\Lambda u_{1},\Lambda v_{1})\rho(a_{1},a_{2})w\] \[+\Lambda[u_{1},v_{1},\rho(a_{1},a_{2})w]_{H}+\Lambda\rho(\Lambda u_{1},\Lambda\rho(a_{1},a_{2})v_{1})w+\Lambda[u_{1},\rho(a_{1},a_{2})v_{1},w]_{H}\] \[-\Lambda\rho(\Lambda u_{1},\Lambda\rho(a_{1},a_{2})v_{1})w+\Lambda\rho(\Lambda u_{1},[a_{1},a_{2},\Lambda v_{1}]_{L})w+\Lambda\rho(\Lambda\rho(a_{1},a_{2})u_{1},\Lambda v_{1})w\] \[+\Lambda[\rho(a_{1},a_{2})u_{1},v_{1},w]_{H}-\Lambda\rho(\Lambda\rho(a_{1},a_{2})u_{1},\Lambda v_{1})w+\Lambda\rho([a_{1},a_{2},\Lambda u_{1}]_{L},\Lambda v_{1})w\] \[= -\Lambda\rho(a_{1},a_{2})\rho(\Lambda u_{1},\Lambda v_{1})w+\Lambda\rho(\Lambda u_{1},\Lambda v_{1})\rho(a_{1},a_{2})w+\Lambda\rho(\Lambda u_{1},\Lambda\rho(a_{1},a_{2})v_{1})w\] \[-\Lambda\rho(\Lambda u_{1},\Lambda\rho(a_{1},a_{2})v_{1})w+\Lambda\rho(\Lambda u_{1},[a_{1},a_{2},\Lambda v_{1}]_{L})w+\Lambda\rho(\Lambda\rho(a_{1},a_{2})u_{1},\Lambda v_{1})w\] \[-\Lambda\rho(\Lambda\rho(a_{1},a_{2})u_{1},\Lambda v_{1})w+\Lambda\rho([a_{1},a_{2},\Lambda u_{1}]_{L},\Lambda v_{1})w\] \[= 0.\]
Therefore, we deduce that \(\delta_{\Lambda}(\delta_{\Lambda}(a_{1},a_{2}))=0\).
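This vanishing can also be confirmed numerically in the running example (\(L=H\), \(\rho=ad\) and \(\Lambda\) as in Example 2.8). The sketch below implements \(\delta_{\Lambda}\) on 0-cochains and on 1-cochains exactly as displayed above; the helper names and the random choice of \(a_{1},a_{2}\) are illustrative.

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)
k = 1.5
Lam = np.diag([1.0, 1.0, 2 * k, k])  # Example 2.8, with L = H and rho = ad
e4 = np.array([0.0, 0.0, 0.0, 1.0])

def br(x, y, z):
    """[x, y, z]_H = [x, y, z]_L in this example, extended trilinearly."""
    return np.linalg.det(np.stack([x[:3], y[:3], z[:3]])) * e4

def delta0(a1, a2):
    """delta_Lambda(a1, a2): the 1-cochain u -> Lam rho(a1,a2)u - [a1,a2,Lam u]_L."""
    return lambda u: Lam @ br(a1, a2, u) - br(a1, a2, Lam @ u)

def delta1(phi):
    """delta_Lambda on 1-cochains, following the displayed formula above."""
    def out(u1, v1, w):
        # [u1, v1, w]_Lambda = rho(Lam u1, Lam v1)w + [u1, v1, w]_H
        bracket_lam = br(Lam @ u1, Lam @ v1, w) + br(u1, v1, w)
        return (-phi(bracket_lam)
                + br(Lam @ u1, Lam @ v1, phi(w))
                + br(Lam @ u1, phi(v1), Lam @ w)
                - Lam @ br(Lam @ u1, phi(v1), w)
                + br(phi(u1), Lam @ v1, Lam @ w)
                - Lam @ br(phi(u1), Lam @ v1, w))
    return out

a1, a2 = rng.normal(size=4), rng.normal(size=4)
dd = delta1(delta0(a1, a2))
basis = np.eye(4)
for i, j, m in itertools.product(range(4), repeat=3):
    assert np.allclose(dd(basis[i], basis[j], basis[m]), 0)
print("delta_Lambda(delta_Lambda(a1, a2)) = 0 on all basis triples")
```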
Now we develop the cohomology theory of a nonabelian embedding tensor \(\Lambda\) on the 3-Lie algebra \((L,[-,-,-]_{L})\) with respect to the coherent action \((H,[-,-,-]_{H};\rho^{\dagger})\).
For \(n\geq 0,\) define the set of \(n\)-cochains of \(\Lambda\) by \(\mathcal{C}_{\Lambda}^{n}(H,L):=\mathcal{C}_{\text{3Leib}}^{n}(H,L)\). Then \((\oplus_{n=0}^{\infty}\mathcal{C}_{\Lambda}^{n}(H,L),\delta_{\Lambda})\) is a cochain complex.
For \(n\geq 1,\) we denote the set of \(n\)-cocycles by \(\mathcal{Z}_{\Lambda}^{n}(H,L)\), the set of \(n\)-coboundaries by \(\mathcal{B}_{\Lambda}^{n}(H,L)\) and the \(n\)-th cohomology group of the nonabelian embedding tensor \(\Lambda\) by \(\mathcal{H}_{\Lambda}^{n}(H,L)=\mathcal{Z}_{\Lambda}^{n}(H,L)/\mathcal{B}_{ \Lambda}^{n}(H,L)\).
**Proposition 4.5**.: _Let \(H\stackrel{{\Lambda_{1}}}{{\longrightarrow}}L\) and \(H\stackrel{{\Lambda_{2}}}{{\longrightarrow}}L\) be two nonabelian embedding tensor 3-Lie algebras and let \((f_{L},f_{H})\) be a homomorphism from \(H\stackrel{{\Lambda_{1}}}{{\longrightarrow}}L\) to \(H\stackrel{{\Lambda_{2}}}{{\longrightarrow}}L\) in which \(f_{H}\) is invertible. We define a map \(\Psi:\mathcal{C}_{\Lambda_{1}}^{n}(H,L)\to\mathcal{C}_{\Lambda_{2}}^{n}(H,L)\) by_
\[\Psi(\phi)(\mathfrak{H}_{1},\mathfrak{H}_{2},\cdots,\mathfrak{H}_{n-1},w)=f_{L} \big{(}\phi(f_{H}^{-1}(u_{1})\wedge f_{H}^{-1}(v_{1}),\cdots,f_{H}^{-1}(u_{n-1}) \wedge f_{H}^{-1}(v_{n-1}),f_{H}^{-1}(w))\big{)},\]
_for any \(\phi\in\mathcal{C}_{\Lambda_{1}}^{n}(H,L),\mathfrak{H}_{i}=u_{i}\wedge v_{i}\in \wedge^{2}H,1\leq i\leq n-1\) and \(w\in H\). Then \(\Psi:(\mathcal{C}_{\Lambda_{1}}^{n+1}(H,L),\delta_{\Lambda_{1}})\to(\mathcal{C}_ {\Lambda_{2}}^{n+1}(H,L),\delta_{\Lambda_{2}})\) is a cochain map._
_That is, \(\delta_{\Lambda_{2}}\circ\Psi=\Psi\circ\delta_{\Lambda_{1}}\), i.e., the corresponding diagram of cochain complexes commutes._
_Consequently, it induces a homomorphism \(\Psi^{*}\) from the cohomology group \(\mathcal{H}^{n+1}_{\Lambda_{1}}(H,L)\) to \(\mathcal{H}^{n+1}_{\Lambda_{2}}(H,L)\)._
Proof.: For any \(\phi\in\mathcal{C}^{n}_{\Lambda_{1}}(H,L),\mathfrak{H}_{i}=u_{i}\wedge v_{i}\in \wedge^{2}H,1\leq i\leq n\) and \(w\in H\), by Eqs. (4.6)-(4.8) and Proposition 2.11, we have
\[(\delta_{\Lambda_{2}}\Psi(\phi))(\mathfrak{H}_{1},\mathfrak{H}_{ 2},\cdots,\mathfrak{H}_{n},w)\] \[= \sum_{1\leq j<k\leq n}(-1)^{j}\Psi(\phi)(\mathfrak{H}_{1},\cdots, \widehat{\mathfrak{H}_{j}},\cdots,\mathfrak{H}_{k-1},u_{k}\wedge[u_{j},v_{j},v _{k}]_{\Lambda_{2}}+[u_{j},v_{j},u_{k}]_{\Lambda_{2}}\wedge v_{k},\cdots, \mathfrak{H}_{n},w)\] \[+\sum_{j=1}^{n}(-1)^{j}\Psi(\phi)(\mathfrak{H}_{1},\cdots, \widehat{\mathfrak{H}_{j}},\cdots,\mathfrak{H}_{n},[u_{j},v_{j},w]_{\Lambda_ {2}})\] \[+\sum_{j=1}^{n}(-1)^{j+1}\mathfrak{l}_{\Lambda_{2}}(\mathfrak{H} _{j},\Psi(\phi)(\mathfrak{H}_{1},\cdots,\widehat{\mathfrak{H}_{j}},\cdots, \mathfrak{H}_{n},w))\] \[+(-1)^{n+1}\mathfrak{m}_{\Lambda_{2}}(u_{n},\Psi(\phi)(\mathfrak{ H}_{1},\cdots,\mathfrak{H}_{n-1},v_{n}),w)\] \[+(-1)^{n+1}\mathfrak{r}_{\Lambda_{2}}(\Psi(\phi)(\mathfrak{H}_{1 },\cdots,\mathfrak{H}_{n-1},u_{n}),v_{n},w)\] \[= \sum_{1\leq j<k\leq n}(-1)^{j}f_{L}(\phi(f_{H}^{-1}(u_{1})\wedge f _{H}^{-1}(v_{1}),\cdots,\widehat{\mathfrak{H}_{j}},\cdots,f_{H}^{-1}(u_{k-1}) \wedge f_{H}^{-1}(v_{k-1}),\] \[f_{H}^{-1}(u_{k})\wedge f_{H}^{-1}([u_{j},v_{j},v_{k}]_{\Lambda_ {2}})+f_{H}^{-1}([u_{j},v_{j},u_{k}]_{\Lambda_{2}})\wedge f_{H}^{-1}(v_{k}),f_ {H}^{-1}(u_{n})\wedge f_{H}^{-1}(v_{n}),f_{H}^{-1}(w))\] \[+\sum_{j=1}^{n}(-1)^{j}f_{L}(\phi(f_{H}^{-1}(u_{1})\wedge f_{H}^{ -1}(v_{1}),\cdots,\widehat{\mathfrak{H}_{j}},\cdots,f_{H}^{-1}(u_{n})\wedge f _{H}^{-1}(v_{n}),f_{H}^{-1}([u_{j},v_{j},w]_{\Lambda_{2}})))\] \[+\sum_{j=1}^{n}(-1)^{j+1}\mathfrak{l}_{\Lambda_{2}}(\mathfrak{H} _{j},f_{L}(\phi(f_{H}^{-1}(u_{1})\wedge f_{H}^{-1}(v_{1}),\cdots,\widehat{ \mathfrak{H}_{j}},\cdots,f_{H}^{-1}(u_{n})\wedge f_{H}^{-1}(v_{n}),f_{H}^{-1} (w))))\] \[+(-1)^{n+1}\mathfrak{m}_{\Lambda_{2}}(u_{n},f_{L}(\phi(f_{H}^{-1} (u_{1})\wedge f_{H}^{-1}(v_{1}),\cdots,f_{H}^{-1}(u_{n-1})\wedge f_{H}^{-1}(v_ {n-1}),f_{H}^{-1}(v_{n}))),w)\] \[+(-1)^{n+1}\mathfrak{r}_{\Lambda_{2}}(f_{L}(\phi(f_{H}^{-1}(u_{1} )\wedge f_{H}^{-1}(v_{1}),\cdots,f_{H}^{-1}(u_{n-1})\wedge f_{H}^{-1}(v_{n-1}), f_{H}^{-1}(u_{n}))),v_{n},w)\]
\[= \Psi(\delta_{\Lambda_{1}}\phi)(\mathfrak{H}_{1},\mathfrak{H}_{2},\cdots,\mathfrak{H}_{n},w).\]

Hence \(\delta_{\Lambda_{2}}\circ\Psi=\Psi\circ\delta_{\Lambda_{1}}\). The proof is finished.

**Definition 4.6**.: Let \(H\stackrel{{\Lambda}}{{\longrightarrow}}L\) be a nonabelian embedding tensor 3-Lie algebra and \(\Lambda_{1}:H\to L\) a linear map. If \(\Lambda_{t}=\Lambda+t\Lambda_{1}\) is a nonabelian embedding tensor on \((L,[-,-,-]_{L})\) with respect to the coherent action \((H,[-,-,-]_{H};\rho^{\dagger})\) for all \(t\), then \(\Lambda_{t}\) is called an infinitesimal deformation of \(\Lambda\). By Eq. (2.6), this holds if and only if, for any \(u_{1},u_{2},u_{3}\in H\),

\[[\Lambda_{1}u_{1},\Lambda u_{2},\Lambda u_{3}]_{L}+[\Lambda u_{1},\Lambda_{1}u_{2},\Lambda u_{3}]_{L}+[\Lambda u_{1},\Lambda u_{2},\Lambda_{1}u_{3}]_{L}\] \[= \Lambda_{1}\rho(\Lambda u_{1},\Lambda u_{2})u_{3}+\Lambda\rho(\Lambda_{1}u_{1},\Lambda u_{2})u_{3}+\Lambda\rho(\Lambda u_{1},\Lambda_{1}u_{2})u_{3}+\Lambda_{1}[u_{1},u_{2},u_{3}]_{H}, \tag{4.9}\] \[[\Lambda_{1}u_{1},\Lambda_{1}u_{2},\Lambda u_{3}]_{L}+[\Lambda_{1}u_{1},\Lambda u_{2},\Lambda_{1}u_{3}]_{L}+[\Lambda u_{1},\Lambda_{1}u_{2},\Lambda_{1}u_{3}]_{L}\] \[= \Lambda_{1}\rho(\Lambda_{1}u_{1},\Lambda u_{2})u_{3}+\Lambda_{1}\rho(\Lambda u_{1},\Lambda_{1}u_{2})u_{3}+\Lambda\rho(\Lambda_{1}u_{1},\Lambda_{1}u_{2})u_{3},\] (4.10) \[[\Lambda_{1}u_{1},\Lambda_{1}u_{2},\Lambda_{1}u_{3}]_{L}=\Lambda_{1}\rho(\Lambda_{1}u_{1},\Lambda_{1}u_{2})u_{3}. \tag{4.11}\]
From Eq. (4.11) it follows that the map \(\Lambda_{1}\) is an embedding tensor on the 3-Lie algebra \((L,[-,-,-])\) with respect to the representation \((H;\rho)\) (see [16]). It follows from Eq. (4.9) that \(\Lambda_{1}\in\mathcal{C}^{1}_{\Lambda}(H,L)\) is a 1-cocycle in the cohomology complex of \(\Lambda\). Thus the cohomology class of \(\Lambda_{1}\) defines an element in \(\mathcal{H}^{1}_{\Lambda}(H,L)\).
Let \(\Lambda_{t}=\Lambda+t\Lambda_{1}\) and \(\Lambda^{\prime}_{t}=\Lambda+t\Lambda^{\prime}_{1}\) be two infinitesimal deformations of \(\Lambda\). They are said to be equivalent if there exists \(a_{1}\wedge a_{2}\in\wedge^{2}L\) such that the pair \((id_{L}+tad(a_{1},a_{2}),id_{H}+t\rho(a_{1},a_{2}))\) is a homomorphism from \(H\stackrel{{\Lambda_{t}}}{{\longrightarrow}}L\) to \(H\stackrel{{\Lambda^{\prime}_{t}}}{{\longrightarrow}}L\). That is, the following conditions must hold:
(1) the maps \(id_{L}+tad(a_{1},a_{2}):L\to L\) and \(id_{H}+t\rho(a_{1},a_{2}):H\to H\) are two 3-Lie algebra homomorphisms,
(2) the pair \((id_{L}+tad(a_{1},a_{2}),id_{H}+t\rho(a_{1},a_{2}))\) satisfies:
\[(id_{H}+t\rho(a_{1},a_{2}))(\rho(a,b)u)\] \[=\rho((id_{L}+tad(a_{1},a_{2}))(a),(id_{L}+tad(a_{1},a_{2}))(b))( id_{H}+t\rho(a_{1},a_{2}))(u), \tag{4.12}\] \[(\Lambda+t\Lambda^{\prime}_{1})(id_{H}+t\rho(a_{1},a_{2}))(u)=(id _{L}+tad(a_{1},a_{2}))(\Lambda+t\Lambda_{1})(u), \tag{4.13}\]
for all \(a,b\in L,u\in H.\) It is easy to see that the condition (4.13) gives rise to
\[\Lambda_{1}u-\Lambda^{\prime}_{1}u=\Lambda\rho(a_{1},a_{2})u-[a_{1},a_{2},\Lambda u]_{L}=\delta_{\Lambda}(a_{1},a_{2})u\in{\cal C}^{1}_{\Lambda}(H,L).\]
This shows that \(\Lambda_{1}\) and \(\Lambda^{\prime}_{1}\) are cohomologous. Thus their cohomology classes are the same in \({\cal H}^{1}_{\Lambda}(H,L)\).
Conversely, any 1-cocycle \(\Lambda_{1}\) gives rise to the infinitesimal deformation \(\Lambda+t\Lambda_{1}\). Furthermore, we have the following result.
**Theorem 4.7**.: _Let \(\Lambda:H\to L\) be a nonabelian embedding tensor on a 3-Lie algebra \((L,[-,-,-]_{L})\) with respect to a coherent action \((H,[-,-,-]_{H};\rho^{\dagger})\). Then there is a bijection between the set of all equivalence classes of infinitesimal deformations of \(\Lambda\) and the first cohomology group \({\cal H}^{1}_{\Lambda}(H,L)\)._
## 5 Nonabelian embedding tensors on 3-Lie algebras induced by Lie algebras
Motivated by the construction of 3-Lie algebras from Lie algebras, in this section we provide and investigate nonabelian embedding tensors on 3-Lie algebras induced by Lie algebras. Recall from [3] that, given a Lie algebra and a trace map, one can construct a 3-Lie algebra. Let \((L,[-,-]_{L})\) be a Lie algebra and \(L^{*}\) the dual of \(L\). \(\varsigma\in L^{*}\) is called a trace map if it satisfies \(\varsigma([l_{1},l_{2}]_{L})=0\), for any \(l_{1},l_{2}\in L\). We define the ternary bracket \([-,-,-]_{L_{\varsigma}}\) by
\[[l_{1},l_{2},l_{3}]_{L_{\varsigma}}=\varsigma(l_{1})[l_{2},l_{3}]_{L}+\varsigma (l_{2})[l_{3},l_{1}]_{L}+\varsigma(l_{3})[l_{1},l_{2}]_{L},\ \forall l_{1},l_{2},l_{3}\in L.\]
This 3-Lie algebra is denoted by \(L_{\varsigma}\).
A coherent action of a Lie algebra \((L,[-,-]_{L})\) on a Lie algebra \((H,[-,-]_{H})\) is a Lie algebra homomorphism \(\rho:L\rightarrow{\rm Der}(H)\) that satisfies \([\rho(l)h_{1},h_{2}]_{H}=0,\ \forall l\in L,h_{1},h_{2}\in H.\) See [30] for more details. We denote a coherent action of \((L,[-,-]_{L})\) by \((H,[-,-]_{H};\rho^{\dagger})\).
**Proposition 5.1**.: _Let \((H,[-,-]_{H};\rho^{\dagger})\) be a coherent action of a Lie algebra \((L,[-,-]_{L})\) and \(\varsigma_{L},\varsigma_{H}\) be two trace maps, that is, two linear maps satisfying_
\[\varsigma_{L}([l_{1},l_{2}]_{L})=0,\varsigma_{H}([h_{1},h_{2}]_{H})=0,\ \ \forall l_{1},l_{2}\in L,h_{1},h_{2}\in H.\]
_To simplify notation, \(\varsigma_{L}\) and \(\varsigma_{H}\) are denoted by the same symbol \(\varsigma\). Then \((H,[-,-,-]_{H_{\varsigma}};\rho_{\varsigma}^{\dagger})\) is a coherent action of the 3-Lie algebra \(L_{\varsigma}\), where \(\rho_{\varsigma}:\wedge^{2}L\to\mathrm{End}(H)\) is defined by_
\[\rho_{\varsigma}(l_{1},l_{2})=\varsigma(l_{1})\rho(l_{2})-\varsigma(l_{2}) \rho(l_{1}),\ \ \forall l_{1},l_{2}\in L.\]
Proof.: In light of [36, Proposition 5.6], \((H;\rho_{\varsigma})\) is a representation of the 3-Lie algebra \(L_{\varsigma}\). We only need to check that \(\rho_{\varsigma}\) satisfies Eqs. (2.4) and (2.5). For all \(l_{1},l_{2}\in L\) and \(h_{1},h_{2},h_{3}\in H\), we have
\[[\rho_{\varsigma}(l_{1},l_{2})h_{1},h_{2},h_{3}]_{H_{\varsigma}}+[h_{1},\rho_{\varsigma}(l_{1},l_{2})h_{2},h_{3}]_{H_{\varsigma}}+[h_{1},h_{2},\rho_{\varsigma}(l_{1},l_{2})h_{3}]_{H_{\varsigma}}\] \[= [(\varsigma(l_{1})\rho(l_{2})-\varsigma(l_{2})\rho(l_{1}))h_{1},h_{2},h_{3}]_{H_{\varsigma}}+[h_{1},(\varsigma(l_{1})\rho(l_{2})-\varsigma(l_{2})\rho(l_{1}))h_{2},h_{3}]_{H_{\varsigma}}\] \[+[h_{1},h_{2},(\varsigma(l_{1})\rho(l_{2})-\varsigma(l_{2})\rho(l_{1}))h_{3}]_{H_{\varsigma}}\] \[= \varsigma(l_{1})\varsigma(\rho(l_{2})h_{1})[h_{2},h_{3}]_{H}+\varsigma(l_{1})\varsigma(h_{2})[h_{3},\rho(l_{2})h_{1}]_{H}+\varsigma(l_{1})\varsigma(h_{3})[\rho(l_{2})h_{1},h_{2}]_{H}\] \[-\varsigma(l_{2})\varsigma(\rho(l_{1})h_{1})[h_{2},h_{3}]_{H}-\varsigma(l_{2})\varsigma(h_{2})[h_{3},\rho(l_{1})h_{1}]_{H}-\varsigma(l_{2})\varsigma(h_{3})[\rho(l_{1})h_{1},h_{2}]_{H}\] \[+\varsigma(l_{1})\varsigma(h_{1})[\rho(l_{2})h_{2},h_{3}]_{H}+\varsigma(l_{1})\varsigma(\rho(l_{2})h_{2})[h_{3},h_{1}]_{H}+\varsigma(l_{1})\varsigma(h_{3})[h_{1},\rho(l_{2})h_{2}]_{H}\] \[-\varsigma(l_{2})\varsigma(h_{1})[\rho(l_{1})h_{2},h_{3}]_{H}-\varsigma(l_{2})\varsigma(\rho(l_{1})h_{2})[h_{3},h_{1}]_{H}-\varsigma(l_{2})\varsigma(h_{3})[h_{1},\rho(l_{1})h_{2}]_{H}\] \[+\varsigma(l_{1})\varsigma(h_{1})[h_{2},\rho(l_{2})h_{3}]_{H}+\varsigma(l_{1})\varsigma(h_{2})[\rho(l_{2})h_{3},h_{1}]_{H}+\varsigma(l_{1})\varsigma(\rho(l_{2})h_{3})[h_{1},h_{2}]_{H}\] \[-\varsigma(l_{2})\varsigma(h_{1})[h_{2},\rho(l_{1})h_{3}]_{H}-\varsigma(l_{2})\varsigma(h_{2})[\rho(l_{1})h_{3},h_{1}]_{H}-\varsigma(l_{2})\varsigma(\rho(l_{1})h_{3})[h_{1},h_{2}]_{H}\] \[= \varsigma(l_{1})\varsigma(\rho(l_{2})h_{1})[h_{2},h_{3}]_{H}-\varsigma(l_{2})\varsigma(\rho(l_{1})h_{1})[h_{2},h_{3}]_{H}+\varsigma(l_{1})\varsigma(\rho(l_{2})h_{2})[h_{3},h_{1}]_{H}\] \[-\varsigma(l_{2})\varsigma(\rho(l_{1})h_{2})[h_{3},h_{1}]_{H}+\varsigma(l_{1})\varsigma(\rho(l_{2})h_{3})[h_{1},h_{2}]_{H}-\varsigma(l_{2})\varsigma(\rho(l_{1})h_{3})[h_{1},h_{2}]_{H}\] \[= (\varsigma(l_{1})\rho(l_{2})-\varsigma(l_{2})\rho(l_{1}))(\varsigma(h_{1})[h_{2},h_{3}]_{H}+\varsigma(h_{2})[h_{3},h_{1}]_{H}+\varsigma(h_{3})[h_{1},h_{2}]_{H})\] \[= \rho_{\varsigma}(l_{1},l_{2})[h_{1},h_{2},h_{3}]_{H_{\varsigma}},\] \[[\rho_{\varsigma}(l_{1},l_{2})h_{1},h_{2},h_{3}]_{H_{\varsigma}}\] \[= [\varsigma(l_{1})\rho(l_{2})h_{1}-\varsigma(l_{2})\rho(l_{1})h_{1},h_{2},h_{3}]_{H_{\varsigma}}\] \[= \varsigma(l_{1})[\rho(l_{2})h_{1},h_{2},h_{3}]_{H_{\varsigma}}-\varsigma(l_{2})[\rho(l_{1})h_{1},h_{2},h_{3}]_{H_{\varsigma}}\] \[= \varsigma(l_{1})\varsigma(\rho(l_{2})h_{1})[h_{2},h_{3}]_{H}+\varsigma(l_{1})\varsigma(h_{2})[h_{3},\rho(l_{2})h_{1}]_{H}+\varsigma(l_{1})\varsigma(h_{3})[\rho(l_{2})h_{1},h_{2}]_{H}\] \[-\varsigma(l_{2})\varsigma(\rho(l_{1})h_{1})[h_{2},h_{3}]_{H}-\varsigma(l_{2})\varsigma(h_{2})[h_{3},\rho(l_{1})h_{1}]_{H}-\varsigma(l_{2})\varsigma(h_{3})[\rho(l_{1})h_{1},h_{2}]_{H}\] \[= 0.\]
The proof is finished.
A nonabelian embedding tensor on a Lie algebra \((L,[-,-]_{L})\) with respect to a coherent action \((H,[-,-]_{H};\rho^{\dagger})\) is a linear map \(\Lambda:H\to L\) such that \([\Lambda h_{1},\Lambda h_{2}]_{L}=\Lambda(\rho(\Lambda h_{1})h_{2}+[h_{1},h_{2}]_{H} ),\forall h_{1},h_{2}\in H\) (see [30]).
**Theorem 5.2**.: _Let \(\Lambda:H\to L\) be a nonabelian embedding tensor on a Lie algebra \((L,[-,-]_{L})\) with respect to a coherent action \((H,[-,-]_{H};\rho^{\dagger})\), and let \(\varsigma_{L},\varsigma_{H}\) be two trace maps satisfying \(\varsigma_{L}(\Lambda h)=\varsigma_{H}(h)\) for any \(h\in H\). To simplify notation, \(\varsigma_{L}\) and \(\varsigma_{H}\) are denoted by the same symbol \(\varsigma\). Then \(\Lambda:H\to L\) is a nonabelian embedding tensor on the 3-Lie algebra \((L,[-,-,-]_{L_{\varsigma}})\) with respect to the coherent action \((H,[-,-,-]_{H_{\varsigma}};\rho^{\dagger}_{\varsigma})\)._
Proof.: For all \(h_{1},h_{2},h_{3}\in H\), we have
\[[\Lambda h_{1},\Lambda h_{2},\Lambda h_{3}]_{L_{\varsigma}}\] \[= \varsigma(\Lambda h_{1})[\Lambda h_{2},\Lambda h_{3}]_{L}+ \varsigma(\Lambda h_{2})[\Lambda h_{3},\Lambda h_{1}]_{L}+\varsigma(\Lambda h _{3})[\Lambda h_{1},\Lambda h_{2}]_{L}\] \[= \varsigma(\Lambda h_{1})(\Lambda\rho(\Lambda h_{2})h_{3}+\Lambda [h_{2},h_{3}]_{H})+\varsigma(\Lambda h_{2})(\Lambda\rho(\Lambda h_{3})h_{1}+ \Lambda[h_{3},h_{1}]_{H})\] \[+\varsigma(\Lambda h_{3})(\Lambda\rho(\Lambda h_{1})h_{2}+ \Lambda[h_{1},h_{2}]_{H})\] \[= \varsigma(\Lambda h_{1})\Lambda\rho(\Lambda h_{2})h_{3}+\varsigma (\Lambda h_{2})\Lambda\rho(\Lambda h_{3})h_{1}+\varsigma(\Lambda h_{3})\Lambda \rho(\Lambda h_{1})h_{2}+\varsigma(\Lambda h_{2})\Lambda[h_{3},h_{1}]_{H}\] \[+\varsigma(\Lambda h_{1})\Lambda[h_{2},h_{3}]_{H}+\varsigma( \Lambda h_{3})\Lambda[h_{1},h_{2}]_{H}.\]
On the other hand, we have
\[\Lambda(\rho_{\varsigma}(\Lambda h_{1},\Lambda h_{2})h_{3}+[h_{1}, h_{2},h_{3}]_{H_{\varsigma}})\] \[= \Lambda(\varsigma(\Lambda h_{1})\rho(\Lambda h_{2})h_{3}-\varsigma (\Lambda h_{2})\rho(\Lambda h_{1})h_{3}+\varsigma(h_{1})[h_{2},h_{3}]_{H}+ \varsigma(h_{2})[h_{3},h_{1}]_{H}+\varsigma(h_{3})[h_{1},h_{2}]_{H}).\]
Thus, \([\Lambda h_{1},\Lambda h_{2},\Lambda h_{3}]_{L_{\varsigma}}=\Lambda(\rho_{ \varsigma}(\Lambda h_{1},\Lambda h_{2})h_{3}+[h_{1},h_{2},h_{3}]_{H_{\varsigma}})\), which implies that \(\Lambda:H\to L\) is a nonabelian embedding tensor on \((L,[-,-,-]_{L_{\varsigma}})\) with respect to \((H,[-,-,-]_{H_{\varsigma}};\rho^{\dagger}_{\varsigma})\).
**Definition 5.3**.: (See [30]) A Leibniz-Lie algebra \((H,[-,-]_{H},\rhd)\) consists of a Lie algebra \((H,[-,-]_{H})\) and a binary product \(\rhd:H\otimes H\to H\) such that
\[h_{1}\rhd(h_{2}\rhd h_{3}) =(h_{1}\rhd h_{2})\rhd h_{3}+h_{2}\rhd(h_{1}\rhd h_{3})+[h_{1},h_ {2}]_{H}\rhd h_{3}\] \[h_{1}\rhd[h_{2},h_{3}]_{H} =[h_{1}\rhd h_{2},h_{3}]_{H}=0,\]
for all \(h_{1},h_{2},h_{3}\in H\).
**Theorem 5.4**.: _Let \((H,[-,-]_{H},\rhd)\) be a Leibniz-Lie algebra and \(\varsigma\) be a trace map, that is a linear map satisfying_
\[\varsigma([h_{1},h_{2}]_{H})=0,\varsigma(h_{1}\rhd h_{2})=0,\ \ \forall h_{1},h_{2}\in H.\]
_Define two ternary brackets by_
\[[h_{1},h_{2},h_{3}]_{H_{\varsigma}} =\varsigma(h_{1})[h_{2},h_{3}]_{H}+\varsigma(h_{2})[h_{3},h_{1}]_ {H}+\varsigma(h_{3})[h_{1},h_{2}]_{H},\] \[\{h_{1},h_{2},h_{3}\}_{H_{\varsigma}} =\varsigma(h_{1})h_{2}\rhd h_{3}-\varsigma(h_{2})h_{1}\rhd h_{3}, \forall h_{1},h_{2},h_{3}\in H.\]
_Then \((H,[-,-,-]_{H_{\varsigma}},\{-,-,-\}_{H_{\varsigma}})\) is a 3-Leibniz-Lie algebra._
Proof.: For any \(h_{1},h_{2},h_{3},h_{4},h_{5}\in H\), we have
\[\{\{h_{1},h_{2},h_{3}\}_{H_{\varsigma}},h_{4},h_{5}\}_{H_{\varsigma} }+\{h_{3},\{h_{1},h_{2},h_{4}\}_{H_{\varsigma}},h_{5}\}_{H_{\varsigma}}+\{h_{3},h_{4},\{h_{1},h_{2},h_{5}\}_{H_{\varsigma}}\}_{H_{\varsigma}}\] \[+\{[h_{1},h_{2},h_{3}]_{H_{\varsigma}},h_{4},h_{5}\}_{H_{\varsigma} }+\{h_{3},[h_{1},h_{2},h_{4}]_{H_{\varsigma}},h_{5}\}_{H_{\varsigma}}-\{h_{1},h_{2},\{h_{3},h_{4},h_{5}\}_{H_{\varsigma}}\}_{H_{\varsigma}}\] \[= \varsigma(h_{1})\varsigma(h_{2}\rhd h_{3})h_{4}\rhd h_{5}-\varsigma (h_{4})\varsigma(h_{1})(h_{2}\rhd h_{3})\rhd h_{5}-\varsigma(h_{2})\varsigma(h _{1}\rhd h_{3})h_{4}\rhd h_{5}\] \[+\varsigma(h_{4})\varsigma(h_{2})(h_{1}\rhd h_{3})\rhd h_{5}+ \varsigma(h_{3})\varsigma(h_{1})(h_{2}\rhd h_{4})\rhd h_{5}-\varsigma(h_{1}) \varsigma(h_{2}\rhd h_{4})h_{3}\rhd h_{5}\] \[-\varsigma(h_{3})\varsigma(h_{2})(h_{1}\rhd h_{4})\rhd h_{5}+ \varsigma(h_{2})\varsigma(h_{1}\rhd h_{4})h_{3}\rhd h_{5}+\varsigma(h_{1}) \varsigma(h_{3})h_{4}\rhd(h_{2}\rhd h_{5})\] \[-\varsigma(h_{1})\varsigma(h_{4})h_{3}\rhd(h_{2}\rhd h_{5})- \varsigma(h_{2})\varsigma(h_{3})h_{4}\rhd(h_{1}\rhd h_{5})+\varsigma(h_{2}) \varsigma(h_{4})h_{3}\rhd(h_{1}\rhd h_{5})\] \[+\varsigma(h_{1})\varsigma([h_{2},h_{3}]_{H})h_{4}\rhd h_{5}- \varsigma(h_{4})\varsigma(h_{1})[h_{2},h_{3}]_{H}\rhd h_{5}+\varsigma(h_{2}) \varsigma([h_{3},h_{1}]_{H})h_{4}\rhd h_{5}\] \[-\varsigma(h_{4})\varsigma(h_{2})[h_{3},h_{1}]_{H}\rhd h_{5}+ \varsigma(h_{3})\varsigma([h_{1},h_{2}]_{H})h_{4}\rhd h_{5}-\varsigma(h_{4}) \varsigma(h_{3})[h_{1},h_{2}]_{H}\rhd h_{5}\] \[+\varsigma(h_{3})\varsigma(h_{1})[h_{2},h_{4}]_{H}\rhd h_{5}- \varsigma(h_{1})\varsigma([h_{2},h_{4}]_{H})h_{3}\rhd h_{5}+\varsigma(h_{3}) \varsigma(h_{2})[h_{4},h_{1}]_{H}\rhd h_{5}\] \[-\varsigma(h_{2})\varsigma([h_{4},h_{1}]_{H})h_{3}\rhd h_{5}+ \varsigma(h_{3})\varsigma(h_{4})[h_{1},h_{2}]_{H}\rhd h_{5}-\varsigma(h_{4}) \varsigma([h_{1},h_{2}]_{H})h_{3}\rhd h_{5}\] \[-\varsigma(h_{1})\varsigma(h_{3})h_{2}\rhd(h_{4}\rhd h_{5})+ \varsigma(h_{2})\varsigma(h_{3})h_{1}\rhd(h_{4}\rhd h_{5})+\varsigma(h_{1}) \varsigma(h_{4})h_{2}\rhd(h_{3}\rhd h_{5})\] \[-\varsigma(h_{2})\varsigma(h_{4})h_{1}\rhd(h_{3}\rhd h_{5})\] \[= 0,\] \[\{h_{1},h_{2},[h_{3},h_{4},h_{5}]_{H_{\varsigma}}\}_{H_{\varsigma}}\] \[= \varsigma(h_{1})\varsigma(h_{2}\rhd h_{3})[h_{4},h_{5}]_{H}+ \varsigma(h_{4})\varsigma(h_{1})[h_{5},h_{2}\rhd h_{3}]_{H}+\varsigma(h_{5}) \varsigma(h_{1})[h_{2}\rhd h_{3},h_{4}]_{H}\] \[-\varsigma(h_{2})\varsigma(h_{1}\rhd h_{3})[h_{4},h_{5}]_{H}- \varsigma(h_{4})\varsigma(h_{2})[h_{5},h_{1}\rhd h_{3}]_{H}-\varsigma(h_{5}) \varsigma(h_{2})[h_{1}\rhd h_{3},h_{4}]_{H}\] \[= 0.\]
The proof is finished.
**Acknowledgments.** The paper is supported by the Foundation of Science and Technology of Guizhou Province(Grant Nos. [2018]1020, ZK[2022]031, ZK[2023]025), the National Natural Science Foundation of China (Grant No. 12161013).
|
2310.17645 | PubDef: Defending Against Transfer Attacks From Public Models | Adversarial attacks have been a looming and unaddressed threat in the
industry. However, through a decade-long history of the robustness evaluation
literature, we have learned that mounting a strong or optimal attack is
challenging. It requires both machine learning and domain expertise. In other
words, the white-box threat model, religiously assumed by a large majority of
the past literature, is unrealistic. In this paper, we propose a new practical
threat model where the adversary relies on transfer attacks through publicly
available surrogate models. We argue that this setting will become the most
prevalent for security-sensitive applications in the future. We evaluate the
transfer attacks in this setting and propose a specialized defense method based
on a game-theoretic perspective. The defenses are evaluated under 24 public
models and 11 attack algorithms across three datasets (CIFAR-10, CIFAR-100, and
ImageNet). Under this threat model, our defense, PubDef, outperforms the
state-of-the-art white-box adversarial training by a large margin with almost
no loss in the normal accuracy. For instance, on ImageNet, our defense achieves
62% accuracy under the strongest transfer attack vs only 36% of the best
adversarially trained model. Its accuracy when not under attack is only 2%
lower than that of an undefended model (78% vs 80%). We release our code at
https://github.com/wagner-group/pubdef. | Chawin Sitawarin, Jaewon Chang, David Huang, Wesson Altoyan, David Wagner | 2023-10-26T17:58:08Z | http://arxiv.org/abs/2310.17645v2 | # Defending Against Transfer Attacks From Public Models
###### Abstract
Adversarial attacks have been a looming and unaddressed threat in the industry. However, through a decade-long history of the robustness evaluation literature, we have learned that mounting a strong or optimal attack is challenging. It requires both machine learning and domain expertise. In other words, the white-box threat model, religiously assumed by a large majority of the past literature, is unrealistic. In this paper, we propose a new practical threat model where the adversary relies on **transfer attacks through publicly available surrogate models**. We argue that this setting will become the most prevalent for security-sensitive applications in the future. We evaluate the transfer attacks in this setting and propose a specialized defense method based on a game-theoretic perspective. The defenses are evaluated under 24 public models and 11 attack algorithms across three datasets (CIFAR-10, CIFAR-100, and ImageNet). Under this threat model, our defense, PubDef, outperforms the state-of-the-art white-box adversarial training by a large margin with **almost no loss in the normal accuracy**. For instance, on ImageNet, our defense achieves 62% accuracy under the strongest transfer attack vs only 36% of the best adversarially trained model. Its accuracy when not under attack is only 2% lower than that of an undefended model (78% vs 80%). We release our code at [https://github.com/wagner-group/pubdef](https://github.com/wagner-group/pubdef).
## 1 Introduction
Current ML models are fragile: they are susceptible to adversarial examples (Biggio et al., 2013; Szegedy et al., 2014; Goodfellow et al., 2015), where a small imperceptible change to an image radically changes its classification. This has stimulated a profusion of research on ML-based methods to improve the robustness to adversarial attacks. Unfortunately, progress has slowed and fallen far short of what is needed to protect systems in practice (Hendrycks et al., 2022). In this paper, we articulate a new approach that we hope will lead to pragmatic improvements in resistance to attacks.
In the literature, we can find two defense strategies: systems-level defenses and ML-level defenses. Systems-level defenses include controls such as keeping the model weights secret, returning only the predicted class and not confidence scores, monitoring use of the model to detect anomalous patterns, etc. Unfortunately, systems-level defenses have proven inadequate on their own: for instance, transfer attacks can successfully attack a target model even without knowing its weights. Therefore, most research focuses on ML-level defenses, where we try to build models that are more robust against such attacks, for example through novel architectures and/or training methods. Researchers made early progress on ML-level defenses, with the introduction of adversarial training (Madry et al., 2018), but since then progress has slowed dramatically, and there is no clear path to achieving strong adversarial robustness in any deployable model.
Current ML-level defenses suffer from two major problems: first, they have an unacceptable negative impact on clean accuracy (accuracy when not under attack), and second, they focus on a threat model that is increasingly recognized to be unrealistic (Gilmer et al., 2018; Goodfellow, 2018; Hendrycks et al., 2022; Apruzzese et al., 2023). These problems are closely related: prevailing academic threat
models are unrealistic as they grant the attacker excessive powers that are hard to realize in real life, making it too difficult to achieve strong security against such unrealistic attacks. Because of the existing trade-off between adversarial robustness and clean accuracy, this in turn means achieving any non-trivial adversarial robustness currently requires unacceptable degradation to clean accuracy.
We advocate for a different approach, captured in slogan form as follows:
\[\begin{array}{ccccc}\text{Secure ML}&=&\text{Systems-level defenses}&+&\text{Realistic threat model}\\ &&+&\text{ML-level defenses against those threats}\end{array}\]
We propose using all available systems-level defenses. We articulate a concrete threat model, influenced by what attacks cannot be stopped by systems-level defenses. Specifically, we propose _security against transfer attacks from public models_ (TAPM; Fig. 1(a)) as the threat model we focus on. The TAPM threat model focuses on transfer attacks where the adversary starts with a publicly available model, attacks the public model, and then hopes that this attack will "transfer", i.e., will also be effective against the target model. Because public models are often widely available, e.g., in model zoos, this kind of transfer attack is particularly easy to mount and thus particularly important to defend against. Under the TAPM threat model, we assume neither the model weights nor training set are known to the attacker, and the attacker cannot train their own model or mount query-based attacks that involve querying the target model many times. These assumptions are driven partly by what kinds of attacks can be prevented or mitigated by existing systems-level defenses.
Finally, we introduce PubDef (Fig. 1(c)), a new method for training models that will be secure against transfer attacks from public models. PubDef models are attractive for practical deployment. For instance, they achieve clean accuracy close to that of an undefended model, so there is little loss in performance when not under attack. When under attack (via transfer from public models), adversarial accuracy remains fairly high: 88.6% for CIFAR-10 (almost 20 points higher than any previous defense), 50.8% for CIFAR-100 (18 points higher), and 62.3% for ImageNet (26 points higher than any previous defense). While our defense is not perfect and is not appropriate in all scenarios, we believe it is a pragmatic defense that can be deployed without major loss of clean accuracy, while making life as difficult for attackers as possible within that constraint.
Figure 1: (a) Proposed threat model: transfer attack with public source models (TAPM). We consider a low-cost black-box adversary who generates adversarial examples from publicly available models with a known attack algorithm. (b) Our approach is based on stopping each major category of attack with a combination of multiple mechanisms. (c) Our defense, PubDef, trains the defended model to resist transfer attacks from several publicly available source models. Our model is robust to a wide range of transfer attacks, including both those from source models that were trained against and others that were not trained against, while also maintaining high clean accuracy.
## 2 Related Work
We provide an introduction to several types of attacks and threat models seen in the literature, for comparison to our new threat model.
**White-box attacks.** In this threat model, the attacker is assumed to know everything about the target model, including all model weights. This is the most studied threat model in the literature. Adversarial training (Madry et al., 2018) has been the primary defense against white-box adversarial examples. However, adversarial training sacrifices a considerable amount of clean accuracy (Tsipras et al., 2019), rendering it unattractive to deploy in practice.
**Transfer attacks.** Papernot et al. (2016) first demonstrated the threat of transfer attacks: adversarial examples generated on one ML model (the surrogate) can successfully fool another model if both models are trained on the same task. Liu et al. (2017); Tramer et al. (2017); Demontis et al. (2019) propose various methods for quantifying the degree of attack transferability, including the distance to the decision boundary as well as the angle between the gradients of two models. A great number of transfer attack algorithms have been proposed over the years (Zhao et al., 2022), e.g., using momentum during optimization (Dong et al., 2018; Lin et al., 2020; Wang et al., 2021), applying data augmentation (Xie et al., 2019; Wang et al., 2021; Lin et al., 2020), and alternative loss functions (Zhang et al., 2022; Huang et al., 2019). In this threat model, researchers often assume that the training set for the defended model is available to the attacker, and the attacker can either train their own surrogate models or use publicly available models as a surrogate.
Adversarial training can defend against transfer attacks but at the cost of excessive degradation to clean accuracy. One can also combine adversarial training with an ensemble of models to provide robustness against both white-box and transfer attacks (Tramer et al., 2018; Adam et al., 2018; Pang et al., 2019; Yang et al., 2020; 2021). We compare our method to DVERGE (Yang et al., 2020) and TRS (Yang et al., 2021), the two state-of-the-art defenses against transfer attacks.
**Query-based attacks.** Consider the setting where the attacker can query the target model (submit an input and obtain the classifier's output for this input), but does not know the model's weights. This threat model is particularly relevant for classifiers made available via an API or cloud service. Attacks can iteratively make a series of queries to learn the decision boundary of the model and construct an adversarial example (Brendel et al., 2018; Ilyas et al., 2018; Andriushchenko et al., 2020). Systems-level defenses include returning only hard-label predictions (the top-1 predicted class but not the confidence level), rate-limiting, and monitoring queries to detect query-based attacks (Biggio et al., 2013; Goodfellow, 2019; Chen et al., 2020; Li et al., 2020).
## 3 Threat Model
We first define our threat model for this paper: _transfer attack with public models_ (TAPM). It is designed to capture a class of attacks that are especially easy and low-cost to mount, do not require great sophistication, and are not easily prevented by existing defenses. It fills in a part of the attack landscape that has not been captured by other well-known threat models (e.g., white-box, query-based, and transfer attack). Under TAPM, the adversary has the following capabilities:
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Defenses} & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} & \multicolumn{2}{c}{ImageNet} \\ \cline{2-7} & Clean & Adv. & Clean & Adv. & Clean & Adv. \\ \hline No defense & 96.3 & 0.0 & 81.5 & 0.0 & 80.4 & 0.0 \\ Best white-box adv. train & 85.3 & 68.8 & 68.8 & 32.8 & 63.0 & 36.2 \\ DVERGE + adv. train & 87.6 & 59.6 & 6.3 & 2.1\({}^{*}\) & & \\ TRS + adv. train & 86.9 & 66.7 & 63.9 & 39.1 & & \\
**PubDef (ours)** & 96.1 (+10.8) & 88.6 (+19.8) & 76.2 (+7.4) & 50.8 (+18.0) & 78.6 (+15.6) & 63.0 (+26.8) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Clean and adversarial accuracy of PubDef vs. the best previously published defenses against transfer attacks. Adversarial accuracy is measured in the TAPM threat model. “White-box adv. train” are the most robust models from RobustBench which share the same architecture as PubDef. DVERGE (Yang et al., 2020) and TRS (Yang et al., 2021) are two state-of-the-art defenses against transfer attacks. *DVERGE is designed for CIFAR-10 and is difficult to train on the other datasets. TRS/DVERGE with adversarial training is not included for ImageNet due to its computation cost.
1. They have white-box access to all publicly available models trained for the same task. They can mount a transfer attack, using any public model as the surrogate.
2. They cannot train or fine-tune a neural network. This might be because a reasonable training set is not publicly available, or because the training process requires substantial expertise and resources that outweigh the economic gain of the attack.
3. They can submit one or more adversarial inputs to the target model but cannot run query-based attacks. This assumption is particularly well-suited to security-sensitive tasks, e.g., authentication and malware detection, where the adversary is caught immediately if the attack fails, or to systems where other effective defenses against query-based attacks can be deployed.
We assume that the defender is also aware of the same set of public models.
**Notation.** Let \(\mathcal{S}=\{\mathsf{S}_{1},\ldots,\mathsf{S}_{s}\}\) denote a finite set of all public models on the same task and \(\mathcal{A}=\{\mathsf{A}_{1},\ldots,\mathsf{A}_{a}\}\) a set of known transfer attack algorithms. An attack generates an \(\ell_{p}\)-norm bounded adversarial example from a source model \(\mathsf{S}\) and an input sample \(x\):
\[x_{\mathrm{adv}}=\mathsf{A}(\mathsf{S},(x,y))\quad\text{such that}\quad\left\|x_{ \mathrm{adv}}-x\right\|_{p}\leq\epsilon \tag{1}\]
where \(\mathsf{S}\in\mathcal{S}\) is a source or surrogate model. A transfer attack \(x_{\mathrm{adv}}\) is uniquely defined by a pair \((\mathsf{S},\mathsf{A})\). The attack is then evaluated on a target model \(\mathsf{T}\notin\mathcal{S}\) and considered successful if \(\mathsf{T}(x_{\mathrm{adv}})\neq y\).
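For concreteness, the following is a minimal PyTorch-style sketch of Eq. (1) with PGD standing in for the attack algorithm \(\mathsf{A}\); the hyperparameters and function names are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def pgd_transfer_attack(source, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft an l_inf-bounded adversarial example on a white-box *source*
    model (Eq. 1); its transferability is then tested on the target."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(source(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                     # keep valid pixel range
    return x_adv.detach()

# The attack (S, A) succeeds against a target T if T misclassifies x_adv:
# success = target(x_adv).argmax(dim=1) != y
```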
## 4 Game-Theoretic Perspective
We begin by motivating our defense through a game-theoretic lens. Prior work has formulated adversarial robustness as a two-player zero-sum game (Araujo et al., 2020; Meunier et al., 2021; Rathun et al., 2022) but under different threat models and contexts. Under our TAPM setup, the attacker's strategy is naturally discrete and finite. The attacker chooses a source model \(\mathsf{S}\in\mathcal{S}\) and an attack algorithm \(\mathsf{A}\in\mathcal{A}\) and obtains an adversarial sample \(x_{\mathrm{adv}}\) (as defined in Eq. (1)). Essentially, each pair \((\mathsf{S},\mathsf{A})\) corresponds to one of \(|\mathcal{S}|\cdot|\mathcal{A}|=s\cdot a\) attack strategies. We will describe two versions of the game with different defender strategies.
### Simple Game
As a warm-up, we will first consider a discrete defense strategy where the defender trains \(s\cdot a\) models, one against each of the attack strategies. Denote a defender's model by \(\mathsf{T}\in\mathcal{T}\) where \(|\mathcal{T}|=s\cdot a\). The defender's strategy is to choose \(\mathsf{T}\) to classify a given \(x_{\mathrm{adv}}\) where \(\mathsf{T}\) is trained to minimize the expected risk of both the normal samples and the transfer adversarial samples \(x_{\mathrm{adv}}\) from Eq. (1).
\[\operatorname*{arg\,min}_{\theta}\;\mathbb{E}_{x,y}\left[\mathsf{L}(\mathsf{T }_{\theta}(x),y)+\mathsf{L}(\mathsf{T}_{\theta}(x_{\mathrm{adv}}),y)\right] \tag{2}\]
Note that this formulation is similar to the well-known adversarial training (Goodfellow et al., 2015; Madry et al., 2018) except that \(x_{\mathrm{adv}}\) is independent of \(\theta\), i.e., of the model being trained. The payoff of the defender is defined as _the expected accuracy_ on \(x_{\mathrm{adv}}\) chosen by the attacker:
\[r_{D}(\boldsymbol{\pi}_{A},\boldsymbol{\pi}_{D})=\mathbb{E}_{\mathsf{T}\sim \boldsymbol{\pi}_{D}}\mathbb{E}_{(\mathsf{S},\mathsf{A})\sim\boldsymbol{\pi} _{A}}\mathbb{E}_{x,y}[1\left\{\mathsf{T}(\mathsf{A}(\mathsf{S},(x,y)))=y \right\}] \tag{3}\]
where \(\boldsymbol{\pi}_{A},\boldsymbol{\pi}_{D}\) are _mixed_ (i.e., potentially randomized) strategies for the attacker and the defender, respectively. In other words, \(\boldsymbol{\pi}_{A},\boldsymbol{\pi}_{D}\) each represents a multinomial distribution over the \(s\cdot a\) pure (i.e., non-randomized) strategies. The attacker's payoff is \(r_{A}(\boldsymbol{\pi}_{A},\boldsymbol{\pi}_{D})=-r_{D}(\boldsymbol{\pi}_{A},\boldsymbol{\pi}_{D})\). The payoff matrix \(\boldsymbol{R}\in\mathbb{R}^{sa\times sa}\) is defined by \(\boldsymbol{R}_{i,j}=r_{D}(\boldsymbol{e}_{i},\boldsymbol{e}_{j})\).
As an example, we empirically compute \(\boldsymbol{R}\) (Fig. 2) choosing \(\mathcal{S}=\{\mathsf{S}_{1},\ldots,\mathsf{S}_{4}\}\) to be four public models and \(\mathcal{A}\) as only the PGD attack (Madry et al., 2018). We will later describe how these models are chosen in Section 5.2. The defender also has four models \(\mathcal{T}=\{\mathsf{T}_{1},\ldots,\mathsf{T}_{4}\}\), where \(\mathsf{T}_{i}\) is adversarially trained to be robust against transfer attacks from \(\mathsf{S}_{i}\). Notice that the diagonal entries are large because \(\mathsf{T}_{i}\) is trained on the attack from \(\mathsf{S}_{i}\).
Figure 2: The payoff matrix of the simple game.
Von Neumann's minimax theorem guarantees the existence of a Nash equilibrium, i.e., an optimal strategy for each player (v. Neumann, 1928) (see Appendix A.3). The optimal strategy can be efficiently computed using linear programming (van den Brand, 2020). For the payoff matrix in Fig. 2, the expected payoff for the optimal strategy is 73.0, meaning that when both the attacker and the defender choose their strategies optimally, the target model can achieve 73.0% accuracy on average. This is reasonable, but as we show next, we can do better.
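Given an empirically estimated payoff matrix like the one in Fig. 2, the defender's optimal mixed strategy can be computed with an off-the-shelf LP solver. A minimal SciPy sketch (the matrix `R` is an assumed input):

```python
import numpy as np
from scipy.optimize import linprog

def defender_minimax(R):
    """Defender's optimal mixed strategy for a zero-sum game with payoff
    matrix R (rows: defender strategies, columns: attacker strategies).
    Maximize v subject to (R^T pi)_j >= v for all j, sum(pi) = 1, pi >= 0."""
    n, m = R.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                # minimize -v <=> maximize v
    A_ub = np.hstack([-R.T, np.ones((m, 1))])   # v - (R^T pi)_j <= 0, all j
    b_ub = np.zeros(m)
    A_eq = np.ones((1, n + 1))
    A_eq[0, -1] = 0.0                           # probabilities sum to one
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n + [(None, None)]      # v is unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]                 # (mixed strategy, game value)
```

Run on the 4×4 matrix of Fig. 2, this procedure yields the 73.0% equilibrium payoff quoted above.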
### Complex Game
Now we make the defender's action space more flexible: the defender can choose their model's weights arbitrarily, instead of being limited to one of \(s\cdot a\) models. We extend the loss function in Eq. (2) to represent the loss against a transfer attack chosen according to mixed strategy \(\pi_{A}\):
\[\operatorname*{arg\,min}_{\theta}\;\operatorname{\mathbb{E}}_{x,y}\left[\mathsf{L}(f_{\theta}(x),y)+\sum_{i=1}^{s\cdot a}\;\pi_{i}\mathsf{L}\left(f_{\theta}\left(x_{\mathrm{adv},i}\right),y\right)\right] \tag{4}\]
where \(x_{\mathrm{adv},i}\) is an attack generated by the \(i\)-th attack strategy, and the attacker's (mixed) strategy is given by \(\boldsymbol{\pi}=(\pi_{1},\ldots,\pi_{sa})\) representing a probability distribution over the \(s\cdot a\) (pure) attack strategies. Note that we can recover the simple game if the adversary is restricted to choosing \(\pi_{i}\)'s s.t. \(\pi_{i}=1\) for a single \(i\) and 0 otherwise. However, when \(\boldsymbol{\pi}\) represents any probability distribution, the reward function is no longer linear in \(\boldsymbol{\pi}\) so von Neumann's minimax theorem no longer applies. A Nash equilibrium may exist, but there is no known efficient algorithm to compute it.1
Footnote 1: If we discretize each \(\pi_{i}\), the equilibrium is still guaranteed to exist by Nash’s theorem (Nash, 1951), but we need to deal with an exponential (in \(sa\)) number of defense models.
One naive strategy for the defender is to assume the attacker will choose uniformly at random from all \(s\cdot a\) attacks and find the best response, i.e., find model weights \(\theta\) that minimize Eq. (4) when \(\pi_{i}=1/sa\) for all \(i\)'s. This amounts to adversarially training a model against this particular (mixed) attacker strategy. In this case, the defender's payoff (adversarial accuracy) against each of the four attacks turns out to be \([96.3,90.4,94.6,96.0]\), for the payoff matrix in Fig. 2. This means the defender achieves over 90% accuracy against all four transfer attacks, which is a significant improvement over the equilibrium of the simple game (73%). This suggests that while we may not be able to solve the complex game optimally, it already enables a much better strategy for the defender. In Section 5, we will explore several heuristics to approximately find a "local" equilibrium.
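A direct reading of Eq. (4) as a training-step loss, in sketch form; because the transfer examples do not depend on \(\theta\), they can be pre-generated once (names are illustrative):

```python
import torch.nn.functional as F

def pubdef_loss(model, x, y, x_advs, pi):
    """Eq. (4): clean loss plus a pi-weighted sum of losses on
    pre-generated transfer adversarial examples, one per (S, A) pair."""
    loss = F.cross_entropy(model(x), y)
    for w, x_adv in zip(pi, x_advs):  # one batch per attack strategy
        loss = loss + w * F.cross_entropy(model(x_adv), y)
    return loss
```

Setting `pi` to a one-hot vector recovers the simple game's per-attack training, while the uniform vector gives the naive best response described above.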
## 5 Our Practical Defense
### Loss Function and Weighting Constants
We propose several heuristics for solving the complex game (Section 4.2) that do not require explicitly computing the full payoff matrix. Instead, the defender trains only one model with adjustable weights \(\pi_{i}\)'s as defined in Eq. (4). We experiment with the three main methods below and report the best one unless stated otherwise.
**1. Fixed**: Here, all \(\pi_{i}\)'s are fixed to \(1/sa\). We experiment with two very similar loss functions: (1) All: The model is trained on all transfer attacks (pairs of \((\mathsf{S},\mathsf{A})\)) simultaneously. This is exactly the same as Eq. (4). (2) Random: Randomly sample one pair of \((\mathsf{S},\mathsf{A})_{i}\) at each training iteration.
**2. Top-\(k\)**: This scheme modifies All by taking, at each iteration of training, the top \(k\) pairs of \((\mathsf{S},\mathsf{A})_{i}\) that maximize the loss on the current defense model being trained. Effectively, in each batch, we attack the current model weights with all \(s\cdot a\) attacks, choose the \(k\) most effective attacks, and set their \(\pi_{i}\)'s to \(1/k\) and the other \(\pi_{i}\)'s to 0 (see the sketch after this list). For \(k=1\), this is a minimax problem similar to adversarial training, but the maximization is over the choice of transfer attacks instead of the perturbation:
\[\operatorname*{arg\,min}_{\theta}\;\operatorname{\mathbb{E}}_{x,y}\left[\mathsf{L}(f_{\theta}(x),y)+\max_{i\in[s\cdot a]}\;\mathsf{L}\left(f_{\theta}\left(x_{\mathrm{adv},i}\right),y\right)\right] \tag{5}\]
**3. Dynamic weights**: This method can be considered a smooth version of the top-\(k\). Instead of setting each \(\pi_{i}\) to either 1 or 0, we dynamically adjust it in proportion to the overall loss or error rate
of the attack \((\mathsf{S},\mathsf{A})_{i}\). We call these methods DynamicLoss and DynamicAcc respectively. We use an exponential moving average \(\mu\) with a decaying factor \(\alpha\) to estimate the loss and the accuracy:
\[\mu_{i}^{t+1}=(1-\alpha)\mu_{i}^{t}+\alpha\,\mathsf{L}\left(f_{\theta}\left(x_{\mathrm{adv},i}\right),y\right)\quad\text{(DynamicLoss)} \tag{6}\]
\[\mu_{i}^{t+1}=(1-\alpha)\mu_{i}^{t}+\alpha\,\mathbb{E}_{x,y}\left[\mathbbm{1}\left\{f_{\theta}\left(x_{\mathrm{adv},i}\right)\neq y\right\}\right]\quad\text{(DynamicAcc)} \tag{7}\]
\[\pi_{i}=\frac{\mu_{i}}{\sum_{j=1}^{s\cdot a}\mu_{j}}\quad\text{(normalize to }[0,1]\text{)} \tag{8}\]
We can normalize the \(\pi_{i}\)'s by their sum because both the loss and the error rate are non-negative.
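The three weighting schemes reduce to a few lines of NumPy. In this sketch, `losses` and `signal` are per-attack statistics measured on the current model, and the decay-factor value is an assumption (it is not fixed in this excerpt):

```python
import numpy as np

def fixed_weights(n):
    """Fixed (All): uniform weight 1/(s*a) on every transfer attack."""
    return np.full(n, 1.0 / n)

def topk_weights(losses, k=1):
    """Top-k (Eq. 5 when k=1): weight only the k currently most
    damaging attacks; all other pi_i are set to zero."""
    pi = np.zeros(len(losses))
    pi[np.argsort(losses)[-k:]] = 1.0 / k
    return pi

def dynamic_weights(mu, signal, alpha=0.01):
    """DynamicLoss/DynamicAcc (Eqs. 6-8): exponential moving average of
    each attack's loss or error rate, normalized to a distribution."""
    mu = (1 - alpha) * mu + alpha * signal
    return mu, mu / mu.sum()
```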
### Defender's Source Model Selection
Given that the set of publicly available models is known to both the attacker and the defender, the most natural choice for the defender is to train against _all_ publicly available models. However, the computation cost can be prohibitive. We show that we can achieve nearly as good performance by choosing only a small subset of the publicly available models. However, finding an optimal set of source models is non-trivial without trying out all possible combinations.
Intuitively, to be robust against a wide range of transfer attacks, the defender should train the target model against a diverse set of source models and algorithms. The "diversity" of a set of models is challenging to define. Natural approaches include using a diverse set of architectures (e.g., ConvNet vs Transformer), a diverse set of (pre-)training methodologies (e.g., supervised vs unsupervised), and a diverse set of data augmentation strategies. In our experiments, we found that the _training procedure_--namely (1) normal, (2) \(\ell_{\infty}\)-adversarial, (3) \(\ell_{2}\)-adversarial, or (4) corruption-robust training--has the largest effect on the defense. For the rest of the paper, we categorize the source models into one of these four groups.2 We will discuss the implications of this grouping in Sections 6.2 and 7.1.
Footnote 2: For CIFAR-100 and ImageNet, it is hard to find \(\ell_{2}\)-adversarially trained models that are publicly available, so we exclude this group and only consider the remaining three (normal, \(\ell_{\infty}\), and corruption).
This motivates a simple yet surprisingly effective heuristic that we use for selecting the set of source models: when training PubDef, we use one source model from each group (four source models in total for CIFAR-10 and three for CIFAR-100 and ImageNet). In more detail, we first choose four source models: the public model that is most robust against \(\ell_{\infty}\) white-box attacks, the public model that is most robust against \(\ell_{2}\) white-box attacks, the public model that is most corruption robust, and one arbitrary public model that is normally trained. Then, we compute the adversarial accuracy against transfer attacks from every publicly available model. If the adversarial accuracy against transfer attacks from some other public model \(\mathsf{S}^{\prime}\) is significantly lower than the adversarial accuracy against transfer attacks from \(\mathsf{S}\) (the chosen model in the same group as \(\mathsf{S}^{\prime}\)), then we swap in \(\mathsf{S}^{\prime}\) and remove \(\mathsf{S}\). We made one swap for CIFAR-100 and ImageNet and no swap for CIFAR-10. We find that this simple heuristic works well in practice and performs better than a random subset (Section 7.1).
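In pseudocode form, the selection heuristic might look as follows. The `group` and `robustness` fields, the `target_eval` oracle, and the 5-point swap threshold are illustrative assumptions; in practice the swap step requires first training a defense against the initial picks:

```python
GROUPS = ("normal", "linf", "l2", "corruption")  # 3 groups for CIFAR-100/ImageNet

def select_sources(public_models, target_eval):
    """Pick one source model per training-procedure group, then swap in
    any same-group model that transfers much better against the defense."""
    chosen = {g: max((m for m in public_models if m.group == g),
                     key=lambda m: m.robustness)  # 'normal' pick is arbitrary
              for g in GROUPS}
    for m in public_models:
        # target_eval(S): adversarial accuracy vs. transfer attacks from S
        if target_eval(m) < target_eval(chosen[m.group]) - 5.0:
            chosen[m.group] = m                   # significantly stronger attacker
    return list(chosen.values())
```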
## 6 Experiments
### Setup
**Metrics.** The two most common metrics used to compare models in the past literature are clean and adversarial accuracy. The clean accuracy is simply the accuracy on the test set, with no attack. There are multiple ways to measure the adversarial accuracy under our threat model depending on the attacker's strategy (e.g., average or worst-case). We conservatively assume that the attacker knows the defender's strategy and chooses the best pair of \((\mathsf{S},\mathsf{A})\) to run the attack.3 In other words, we report the adversarial accuracy against the worst-case TAPM attack.
Footnote 3: This assumption is realistic because, with limited queries, the attacker can intelligently pick a good source model using the method from Maho et al. (2023), for example.
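In sketch form, the reported number is the minimum accuracy over all \((\mathsf{S},\mathsf{A})\) pairs; here `target` is assumed to return a predicted label and `A(S, (x, y))` follows Eq. (1):

```python
def worst_case_adv_accuracy(target, sources, algos, dataset):
    """Adversarial accuracy against the worst-case TAPM attack: the
    attacker tries every (source, algorithm) pair and keeps the best."""
    def accuracy(S, A):
        hits = sum(int(target(A(S, (x, y))) == y) for x, y in dataset)
        return hits / len(dataset)
    return min(accuracy(S, A) for S in sources for A in algos)
```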
**Baseline defenses.** We compare PubDef to the best white-box adversarially trained model from RobustBench that has the same architecture. For CIFAR-10, CIFAR-100, and ImageNet, the best are Addepalli et al. (2022b), Addepalli et al. (2022a), and Salman et al. (2020), respectively. Additionally, we evaluate the two defenses with the strongest results against transfer attacks: DVERGE (Yang
et al., 2020) and TRS (Yang et al., 2021). These use an ensemble of models for greater diversity, so that an attack that succeeds on one might fail on another, hopefully making transfer attacks harder.
**Public source models.** For each dataset, we select 24 public pre-trained models: 12 normally trained and 12 robustly trained models. The normal group comes from multiple public model zoos including Hugging Face (Face, 2023), timm (Wightman, 2019), and (for CIFAR-10/100) two Github repositories (Chen, 2023; Phan, 2023). The robust models are all hosted on RobustBench (Croce et al., 2020). We select a wide variety of models based on their architecture (e.g., assorted ConvNet, ViT (Dosovitskiy et al., 2021), Swin (Liu et al., 2021), BeiT (Bao et al., 2022), ConvMixer (Trockman and Kolter, 2023), zero-shot CLIP (Radford et al., 2021), etc.) and training methods to ensure high diversity. We try our best to not select two models with the same architecture or from the same paper.
**Attack algorithms.** We evaluate the robustness of the defenses with 11 different attack algorithms. All of the attacks are gradient-based, but they utilize different techniques for improving the transferability of the attacks, e.g., momentum, data augmentation, intermediate representation, etc. Please refer to Appendix A.1 for the complete list of both the source models and the attack algorithms.
**PubDef training.** We adversarially train the defended model by selecting a subset of source models according to the heuristic in Section 5.2 and then using a PGD transfer attack on that subset. We use a WideResNet-34-10 architecture for CIFAR-10/100 and ResNet-50 for ImageNet. We do not use any extra data or generated data for training. Appendix A.1.3 has the complete training details.
### Results
**PubDef is more robust to all 264 transfer attacks than the baselines by a large margin.** We generate 264 transfer attacks, one for each of 11 attack algorithms and 24 source models, and evaluate PubDef according to the most successful attack. For CIFAR-10, PubDef achieves 20 percentage points higher adversarial accuracy than the best previously published defense, with comparable clean accuracy to an undefended model (Table 1). For CIFAR-100, PubDef achieves 10-18 p.p. higher robustness and 7-12 p.p. higher clean accuracy than the best prior defense. For ImageNet, PubDef achieves 26 p.p. better adversarial accuracy and 15 p.p. better clean accuracy than adversarial training; its clean accuracy is only 2 p.p. lower than an undefended model. It is beyond our resources to train TRS and DVERGE for ImageNet, due to the combination of ensembling and adversarial training.

Figure 3: Adversarial accuracy of PubDef against 264 transfer attacks (24 source models \(\times\) 11 attack algorithms) on ImageNet. \(\mathbf{\Theta}\) denotes the source models this defense is trained against. We cannot produce the NA attack on timm’s VGG model (shown as “n/a”) because of its in-place operation.
Fig. 3 shows all adversarial accuracies of PubDef by pairs of \((\mathsf{S},\mathsf{A})\) on ImageNet. Here, the overall best attack is NI-Admix-TI-DIM from a ResNet-50 on Hugging Face. M-PGD, Admix, and NI-Admix-TI-DIM are generally the strongest attack algorithms across source models and datasets. Surprisingly, PGD usually performs well above average compared to the other more sophisticated attacks. Appendix A.5.2 shows more detail.
**PubDef maintains high clean accuracy.** Compared to the state-of-the-art top-1 clean accuracy for undefended models, our defense experiences only a 0-5% drop in clean accuracy (Table 1). Compare this to white-box adversarial training, which suffers an 11-18% drop. We emphasize that the minimal drop in clean accuracy is one of the most attractive properties of PubDef, making it far more practical than white-box adversarial training.
**PubDef generalizes well to unseen source models and attack algorithms.** Our defense is trained against only the PGD attack and either four (CIFAR-10) or three (CIFAR-100, ImageNet) source models. This amounts to four potential transfer attacks out of 264. Table 2 shows that PubDef generalizes incredibly well to the 260 unseen attacks, losing only 1.7%, 1.8%, and 7.6% in robustness across the three datasets. This result implies that these 264 transfer attacks may be much more “similar” than the community expected; see Section 7.2.
**PubDef should be trained with one source model from each of the four groups.** Table 3 shows an ablation study, where we omit one of the four source models during training. We see that including at least one source model from each of the four groups is important for strong robustness. Including at least one \(\ell_{2}\)-robust and at least one corruption-robust model in the source set seems particularly important. Without them, adversarial accuracy drops by 28.5% or 31.7%, respectively. We provide further evidence to support this finding with a more sophisticated ablation study (Appendix A.4) that controls for the number of source models (Fig. 8).
**Training PubDef against more source models is not necessarily better.** Fig. 4 shows that adding more source models (8, 12, or 24) to PubDef increases the adversarial accuracy by only \(\sim\)1%, and
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Src. & Algo. & CIFAR-10 & CIFAR-100 & ImageNet \\ \hline Seen & Seen & 90.3 & 52.6 & 70.6 \\ Seen & Unseen & & 52.6 (\(-\)0.0) & 68.6 (\(-\)2.0) \\ Unseen & Seen & 90.3 (\(-\)0.0) & 50.8 (\(-\)1.8) & 63.0 (\(-\)7.6) \\ Unseen & Unseen & 88.6 (\(-\)1.7) & 50.8 (\(-\)1.8) & 63.0 (\(-\)7.6) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Adversarial accuracy of PubDef under seen/unseen transfer attacks. Seen attacks (seen src. and seen algo.) are the 3–4 attacks that were used to train our defense; unseen attacks are all others from the set of 264 possible attacks. They are categorized by whether the source models (src.) and the attack algorithms (algo.) are seen. All non-PGD attacks are unseen attack algorithms.
\begin{table}
\begin{tabular}{l c c} \hline \hline Defender Src. Model Groups & Clean & Adv. \\ \hline All groups & **96.1** & **88.6** \\ All groups but normal & 95.4 & 83.4 (\(-\)5.2) \\ All groups but \(\ell_{\infty}\) & 95.3 & 80.6 (\(-\)8.0) \\ All groups but \(\ell_{2}\) & 95.0 & 60.1 (\(-\)28.5) \\ All groups but corruption & 94.9 & 56.9 (\(-\)31.7) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Effects on accuracy when excluding one (out of four) defender’s source models from PubDef trained on CIFAR-10.
Figure 4: Clean and adversarial accuracy on four PubDef models trained with 4 (\(4\times 1\)), 8 (\(4\times 2\)), 12 (\(4\times 3\)), and 24 (All) source models. “\(4\times m\)” means \(m\) source models are chosen from each of the four groups.
it also decreases clean accuracy by \(\sim\)1%. This suggests that our simple heuristic of selecting one source model per group is not only necessary but also sufficient for training PubDef.
**A simple loss function suffices.** For CIFAR-10 and CIFAR-100, the Random training loss achieves the best results (Table 4). For ImageNet, All is slightly better than Random and only slightly worse than the best result (DynamicLoss). We use these training methods in all of our evaluations.
## 7 Discussion
### Ablation Studies
**Random source model selection.** We experiment with two random methods for choosing the source models for training PubDef. In three out of four cases, PubDef with the random selection method still outperforms white-box adversarial training by roughly 10 p.p., but in all cases, it is worse than our default selection method. This result lets us conclude that (i) our simple selection scheme in Section 5.2 is much more effective than random selection and (ii) all of the source model groups should be represented, which is in line with Section 6.2. We refer to Appendix A.4.1 for more details.
**Attacks from ensembles of the public source models.** A more sophisticated adversary could use an ensemble of public models to generate a transfer attack, which has been shown to improve the attack transferability (Liu et al., 2017; Gubri et al., 2022). We experiment with this approach by creating three ensembles of four source models (one from each group) and generate adversarial samples with all 11 attack algorithms. On CIFAR-10, the best attack out of these attempts turns out to be weaker than the best attack from a single model (91.7% vs 88.6% adversarial accuracy on PubDef). We leave more sophisticated ensemble-based attacks (e.g., Chen et al. (2023b;a)) as future work.
### Generalization and Adversarial Subspace
**Surprising generalization.** In Section 6.2, PubDef demonstrates excellent robustness against a broad range of TAPM attacks, even the transfer attacks from a source model and/or an attack algorithm that were not trained against. We suspect that the surprising generalization of PubDef indicates a low-dimensional structure underlying transfer adversarial examples. We visualize our intuition in Appendix A.5.3. In this section, we investigate this phenomenon more closely.
**Generalization with one source model.** We train four PubDef models each against only one source model (not one per group). The adversarial accuracy by groups in Fig. 6 shows an unexpected result: **training against either an \(\ell_{2}\) or corruption source model alone provides above 55% accuracy against the best attack**. Furthermore, training against the \(\ell_{2}\) (resp. corruption) source model provides over 80% robustness against the \(\ell_{\infty}\) (resp. normal) group. This generalization effect does not necessarily hold in reverse (e.g., training against a \(\ell_{\infty}\) source model yields little robustness to the \(\ell_{2}\) group). Some source models are better than others to train PubDef with.
To probe this hypothesized low-dimensional structure, we attempt to quantify it using two metrics: cosine similarity and principal component analysis (PCA). Fig. 5 shows the pairwise cosine similarity values among all 264 attacks aggregated by the four source model groups and averaged over all CIFAR-10 test samples. The cosine similarity is notably higher when comparing adversarial examples generated within the same group than across two groups, especially for the \(\ell_{\infty}\) and the \(\ell_{2}\) adversarial training groups (0.23 and 0.24, respectively). The cosine similarity is, however, lower in the normal and the corruption groups. PCA analysis also supports this observation, showing evidence for a low-dimensional linear subspace for the \(\ell_{\infty}\) and the \(\ell_{2}\) groups. We defer the detailed discussion to Appendix A.5.3.
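A sketch of the cosine-similarity analysis behind Fig. 5. Here `perturbations` maps each attack to its flattened perturbation \(x_{\mathrm{adv}}-x\) for a single test image (the paper averages over all test samples), and `groups` assigns each attack to one of the four source-model groups:

```python
import numpy as np

def group_cosine_similarity(perturbations, groups):
    """Mean pairwise cosine similarity of adversarial perturbations,
    aggregated by (unordered) pairs of source-model groups."""
    unit = {k: v / np.linalg.norm(v) for k, v in perturbations.items()}
    sims, keys = {}, list(unit)
    for i, a in enumerate(keys):
        for b in keys[i + 1:]:
            pair = tuple(sorted((groups[a], groups[b])))
            sims.setdefault(pair, []).append(float(unit[a] @ unit[b]))
    return {pair: float(np.mean(vals)) for pair, vals in sims.items()}
```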
### Practical Considerations
PubDef is intended to stop a specific class of attacks that are particularly easy to mount, and that are not stopped by any reasonable systems-level defense. However, it has many limitations:
Figure 5: Cosine sim. among pairs of adversarial perturbations by source model group.
1. PubDef is not robust to white-box attacks and is only suitable if model weights can be kept secret. If the model is deployed to users, then attackers can reverse-engineer it to find the weights and mount a white-box attack (Liang et al., 2016; Tencent Keen Security Lab, 2019).
2. A sufficiently dedicated and well-financed attacker can likely train their own surrogate model, e.g., by somehow assembling a suitable training set and paying annotators or using the target model to label it, then mount a transfer attack from this private surrogate, potentially bypassing our defense.
3. We treat query-based attacks as an orthogonal concern and rely on existing defenses against query-based attacks. It is not yet clear whether known defenses will be effective against a knowledgeable attacker (Feng et al., 2023).
4. We only defend against \(\ell_{\infty}\)-norm-bounded attacks. Many researchers have argued persuasively that we also need to consider broader attack strategies (Gilmer et al., 2018; Kaufmann et al., 2023), which we have not examined in this work.
Despite these limitations, PubDef has several practical benefits: (1) Minimal drop in clean accuracy: The robustness gain of PubDef is (almost) free! This makes it almost ready to deploy in the real world. (2) Low training cost: Adversarial training takes 5-10\(\times\) longer due to the adversarial example generation inside of the training loop. In contrast, PubDef training is much faster, as transfer attacks can be pre-generated prior to the training, only need to be done once, and can be done in parallel.
## 8 Conclusion
In this paper, we propose a pragmatic method for achieving as much robustness as possible, in situations where any more than minimal decrease in clean accuracy is unacceptable. We identify transfer attacks from public source models (TAPM) as a particularly important class of adversarial attacks, and we devise a new method, PubDef, for defending against it. Putting everything together yields a plausible defense against adversarial examples by aligning ML defenses with the most feasible attacks in practice that existing systems-level defenses cannot prevent. We hope other researchers will build on these ideas to achieve even stronger protections against adversarial examples.
### Acknowledgements
This work was supported in part by funds provided by the National Science Foundation (under grant 2229876), the KACST-UCB Center for Secure Computing, the Department of Homeland Security, IBM, the Noyce Foundation, Google, Open Philanthropy, and the Center for AI Safety Compute Cluster. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors.
|
2308.02764 | Dataopsy: Scalable and Fluid Visual Exploration using Aggregate Query
Sculpting | We present aggregate query sculpting (AQS), a faceted visual query technique
for large-scale multidimensional data. As a "born scalable" query technique,
AQS starts visualization with a single visual mark representing an aggregation
of the entire dataset. The user can then progressively explore the dataset
through a sequence of operations abbreviated as P6: pivot (facet an aggregate
based on an attribute), partition (lay out a facet in space), peek (see inside
a subset using an aggregate visual representation), pile (merge two or more
subsets), project (extracting a subset into a new substrate), and prune
(discard an aggregate not currently of interest). We validate AQS with
Dataopsy, a prototype implementation of AQS that has been designed for fluid
interaction on desktop and touch-based mobile devices. We demonstrate AQS and
Dataopsy using two case studies and three application examples. | Md Naimul Hoque, Niklas Elmqvist | 2023-08-05T01:51:22Z | http://arxiv.org/abs/2308.02764v1 | # Dataopsy: Scalable and Fluid Visual Exploration using Aggregate Query Sculpting
###### Abstract
We present _aggregate query sculpting_ (AQS), a faceted visual query technique for large-scale multidimensional data. As a "born scalable" query technique, AQS starts visualization with a single visual mark representing an aggregation of the entire dataset. The user can then progressively explore the dataset through a sequence of operations abbreviated as \(\mathbb{P}^{6}\): _pivot_ (facet an aggregate based on an attribute), _partition_ (lay out a facet in space), _peek_ (see inside a subset using an aggregate visual representation), _pile_ (merge two or more subsets), _project_ (extracting a subset into a new substrate), and _prune_ (discard an aggregate not currently of interest). We validate AQS with Dataopsy, a prototype implementation of AQS that has been designed for fluid interaction on desktop and touch-based mobile devices. We demonstrate AQS and Dataopsy using two case studies and three application examples.
Multidimensional data visualization, multivariate graphs, visual queries, visual exploration.
## 1 Introduction
_E pluribus unum_--Latin for "out of many, one"--is one of the official mottos of the United States, and also happens to be a dominant strategy for managing scale in data visualization: through _aggregation_ of many data items into one visual mark [15]. Visualizing today's real-world datasets, from the Facebook social network to the billion-parameter large language models (LLMs) of Jurassic-1 and GPT-3, is more or less impractical when using a representation that insists on using one mark per data item--so-called _unit visualizations_ [38]. This fact is exacerbated by today's trend towards mobility in data analysis [28] using devices that have pathologically small screens [21]. And finally, even if we had enough pixels, there is a limit to the number of data items that the human perceptual system can interpret effectively [15, 33]. What is truly needed to tackle the next generation of data visualization challenges are techniques that are "born scalable"; i.e., that have been designed to be scalable from inception.
In this paper, we propose such a "born scalable" visualization technique that we call _aggregate query sculpting_ (AQS). AQS is primarily designed for multidimensional tabular data, but can also be used for multivariate networks (i.e., links connecting data items or nodes). Unlike typical overviews containing thousands or tens of thousands of data items, the initial AQS view is a substrate containing a single _supernode_ (aggregate node) representing all of the data items or nodes in the dataset (Figure 1a). From this point on, the goal of AQS is to provide the user with a set of fluid interactions to split the supernode into smaller data subsets during exploration, similar to how a sculptor will iteratively cut a large piece of clay into pieces to shape and mold separately. There are six such interactions, and they are summarized by the acronym \(\mathbb{P}^{6}\) for _pivot, partition, peek, pile, project_, and _prune_. Pivoting splits a supernode into facet supernodes based on a data attribute; partitioning lays out facet nodes in visual space; peeking shows the data distribution inside a node; piling merges two or more nodes; projection extracts a subset into a new visual substrate where \(\mathbb{P}^{6}\) operations can continue; and pruning eliminates undesired supernodes.
To validate the utility of aggregate query sculpting, we present Dataopsy, a web-based prototype implementation of AQS capable of visualizing thousands of entities in a standard web browser. Dataopsy and its AQS concepts draw on many existing ideas. Pivoting is based on graph pivoting techniques from PivotGraph [48] as well as Tableau (née Polaris [46]). Microsoft's SandDance [12] and Google's Facets [20, 49] use a similar attribute-based 2D layout, but Dataopsy is designed for networks and maintains aggregate supernodes instead of a unit representation. Furthermore, the projection technique, which is inspired by Shneiderman and Aris's semantic substrates [3, 44], enables creating multiple linked substrates in the same visual space to avoid inelegant deep facet nesting, which is a problem for PivotGraph, Polaris, SandDance, and facet browsers. Finally, the Dataopsy interface is designed for fluid interaction on desktop and tablet devices. The result is a smooth and scalable data exploration application for multivariate data, synthesizing features no comparable tool possesses.
We present two case studies and three application examples involving Dataopsy to demonstrate the utility of aggregate query sculpting. In the first case study, two data scientists and algorithmic fairness researchers used Dataopsy to evaluate machine bias in the Adult income dataset [17, 6]. In the second case study, a creative writer used Dataopsy to navigate a complex set of scenes, locations, chapters, characters, and events from a fiction novel and sculpted it for adaptation to a screenplay format. As an application example, we showed how an analyst could use Dataopsy to understand the linguistic properties of inter-community conflicts from 300,000 Reddit posts [27]. We then analyzed 1.7 billion taxi rides in New York City to identify hotspots for rides [34]. Finally, we used the VisPub [24] dataset to analyze the IEEE VIS scientific community over the years. Overall, the case studies and examples show the generalizability and scalability of AQS and Dataopsy in exploring diverse multidimensional datasets.
## 2 Background
Building and managing queries is as old as visualization itself; the "zoom and filter" part of Shneiderman's visual information seeking mantra [43] refers to controlling which data items to show on the screen, primarily to manage scale. In this section, we review the related work on query management and visual information seeking, including for multivariate datasets, faceted browsing, and multivariate graphs.
### Multivariate or Multidimensional Visual Exploration
Multidimensional datasets consist of many attributes per observation, and are routinely found in both tabular and network applications. For example, the U.S. Census dataset of citizen demographics includes hundreds of attributes capturing individuals living in the United States, including properties such as age, gender, education, annual income, and marital status. Searching and filtering such multivariate datasets was a challenging prospect, often involving writing SQL queries, until Williamson and Shneiderman proposed _dynamic queries_ using double-ended range sliders [50]. These sliders enabled selecting an interval in both quantitative and--later--categorical axes [2] in the dataset.
Significant work has since been conducted on searching and querying multidimensional datasets. Many visual query techniques are intimately tied to a visual representation. For example, axis filtering [41] is designed for filtering on the axes in a parallel coordinate plot. ExPlates [25] _spatializes_ multidimensional interaction into 2D space. As the name implies, the ScatterDice system [14] is based on scatterplot matrices and introduces the concept of _query sculpting_ where a dataset is filtered from different angles until the final desired result is reached. The idea was later generalized in the VisDock [7] cross-cutting interaction library and became a fundamental feature of the Keshif visual data browser [51, 52]. In this paper, we build on query sculpting but generalize it to visual aggregates, where massive datasets have been hierarchically grouped into aggregation trees for scalability [15].
Polaris [46] and FromDaDy [23] were early examples of highly interactive multivariate query and visualization systems. Most multivariate visualizations are _unit visualizations_ [38] in that they represent each data item with exactly one visual mark. More recently, Microsoft's SandDance [12] and Google Facets [20, 49] enable using similar interaction, layout, and query techniques to visualize multivariate datasets, such as machine learning training data. ATOM [38] was designed as a declarative grammar for building unit visualizations. We differ from prior works in several dimensions. _First_, instead of unit marks, we use aggregated marks to scale unit visualization to large datasets. _Second_, AQS introduces six interactions for iterative exploration of the aggregated marks. While some of these interactions are motivated by prior works, their combination provides analytical capabilities that no prior work possesses. For example, Polaris's Cross and Nest operations motivated our pivot and partitioning operations. However, Cross and Nest can create inelegant nesting and visual clutter. We solve this limitation by integrating the Projection operation, motivated by semantic substrates [3, 44]. _Finally_, prior works primarily focus on desktop applications, whereas AQS is suitable for data analysis on touch-based mobile devices and extends to network analysis.
### Faceted Browsing
Faceted browsing [53], where a corpus is explored along one or more conceptual dimensions, was introduced as an alternative to keyword search and image similarity for browsing large-scale image repositories. The idea was quickly generalized to any multidimensional dataset and then adopted by many internet search providers, particularly for e-commerce and real estate websites.
Of course, faceted browsing is a powerful idea with applications to many information retrieval and query research problems. One of the early applications was FacetMap [45], a highly visual and dynamic visualization that summarizes the current state of the filters and search results based on a space-filling rectilinear layout approach. FacetMap shares many similarities with Dataposy and AQS, but our approach uses a single set of vertical and horizontal axis mappings for displaying dimensions. For this reason, Dataposy yields visual representations that are more stable and easier to understand. Nevertheless, we draw inspiration from FacetMap's aggregated and scalable visual encoding.
Several other research tools are based on faceted browsing. FacetLens [29] builds on FacetMap and uses a similar representation, but supports a visual comparison view as well as more advanced pivoting operations. FacetZoom [9] enables smooth exploration of hierarchical metadata using a continuous zooming interaction. Finally, PivotPaths [11] provides a fluid and highly interactive browsing experience that externalizes the links (or paths) between different facet values. These existing tools all served as inspiration for our work in this paper.
### Multivariate Graphs
From a data visualization perspective, the leap from table to network is small: all you need are relations connecting entities [33]. Practically, this means adding a second "edge table" linking keys in the original node table. While layout techniques for tables and networks are mostly radically different, one approach is shared: _attribute-based layout_ [35, 48], where the position of a node on a geometric axis is dependent on a specific attribute associated with the node. Not surprisingly, this kind of visual mapping is commonly applied to data points when mapping a data table to visual space, such as in a scatterplot.
A canonical example of attribute-based layout is PivotGraph [48], which uses vertical and horizontal space to unpack a single aggregated node in a node-link diagram into 2D space. Aggregated edges show relations in the resulting graph. Our work in this paper draws heavily on PivotGraph, but generalizes the idea to both tabular as well as network data,
and also introduces several new operations to improve on the idea.
GraphDice [4] is another example of attribute-based layout: it is essentially a network version of the original ScatterDice [14] system discussed above. However, unlike PivotGraph, GraphDice is a unit visualization system with one visual mark per data item. While our Dataopsy system is based on visual aggregates, we borrow the faceted navigation supported by the GraphDice tool for our work.
Scale is a perpetual problem for graph visualization. ASK-GraphView [1] supports interactive visual exploration of node-link diagrams consisting of millions of nodes through the use of clustering and animation. ZAME [13] instead uses adjacency matrices and a level-of-detail pyramid to support massive scale. DOI graphs [47] handle the problem by showing only subsets of graphs. Finally, Refinery [26] uses a similar form of "associative browsing" to support browsing on limited neighborhoods of a massive heterogeneous graph.
Heterogeneous--or multimodal--graphs are a specialized subset of multivariate graphs because one attribute governs the _type_ of the node; e.g., students and courses in a registrar's database; books, magazines, and digital media in a library database; or go-karts, trainers, and drivers in a racing club roster. Aris and Shneiderman studied how to best visualize such multimodal data by separating it into individual _semantic substrates_ [3, 44], one for each node type. Ghani et al. [19] studied the use of visualization techniques for heterogeneous data for social network analysis, presenting an approach akin to parallel coordinate displays for network data in response. Finally, Ploceus [31] generalizes the idea of linked tables yielding heterogeneous networks, presenting an algebra and an interactive visualization system to support it. All of these existing tools and techniques were influential in our design of AQS. However, none of them provide the same kind of fluid, highly interactive, and scalable approach to visual exploration and querying that AQS and Dataopsy do.
## 3 Aggregate Query Sculpting
_Aggregate query sculpting_ (AQS) is an iterative filtering technique for multivariate data. It draws inspiration from the ScatterDice technique [14] where the _query sculpting_ concept was introduced based on the metaphor of a sculptor repeatedly chiseling away at a block of stone until the sculpture is complete. However, while the original implementation of the technique was intended for unit visualizations [38] such as scatterplots, where each individual data point is represented by a unique visual mark, aggregate query sculpting, as the name suggests, is designed for aggregated visual representations where individual marks can represent many--potentially thousands or even millions--of data items. We use the term "born scalable" to refer to visual representations and interaction techniques that were designed for massive scale.
Here we describe the data model for aggregate query sculpting and the fundamental \(\mathbb{P}^{6}\) operations in abstract terms. In the following section, we discuss how to implement the \(\mathbb{P}^{6}\) operations in the web-based Dataopsy prototype tool for multivariate data.
### _Design Rationale_
We designed aggregate query sculpting by harnessing prior art from the literature with three specific design goals (DG1-DG3) in mind:
* **Born scalable (DG1):** Realistic datasets cannot be visualized as unit visualizations [38] because of both technical and perceptual limitations. Instead, robust visual representations must be designed from the ground up using visual and data aggregation [15].
* **Fluid interaction (DG2):** We envision an iterative and progressive filtering method based on fluid interaction [16], where user actions promote flow [8], are based on direct manipulation [42], and minimize the gulfs of execution and evaluation [36].
* **Faceted browsing (DG3):** Multidimensional datasets are easily navigated, filtered, and queried using _faceted search_ [53] where filters can be expressed across multiple hierarchical dimensions (_facets_).
### _Data Model_
All AQS operations are applied to a multidimensional dataset \(\mathbb{D}^{\prime}\subseteq\mathbb{D}\), where \(\mathbb{D}\) is the full dataset currently being visualized. The subset \(\mathbb{D}^{\prime}\) is called a _supernode_ even if the data is not relational. Supernodes that are part of networks also have _superlinks_ \(\mathbb{E}^{\prime}\): edge subsets drawn from the full edge set \(\mathbb{E}\). Some AQS operations operate purely on a data level, whereas others operate on a visual level, and others still operate on both. Furthermore, the operations tend to apply either to entire rows or entire columns--or the entire dataset--in a substrate based on the visual layout. We discuss these details for each operation below.
### _Visual Representation_
A multidimensional visual representation managed using aggregate query sculpting includes one or more _semantic substrates_[3, 44]: a 2D visual space of any geometric dimension. Additional substrates can be laid out depending on the nature of the data and the user's wishes; for example, two substrates can share a horizontal axis, making it useful to stack the substrates vertically (one above the other).
Each substrate contains one or more supernodes \(\mathbb{D}^{\prime}\) (which could potentially be the entire dataset \(\mathbb{D}\) if only one substrate is in use). Inside the substrate, visual aggregates representing datasets are organized in a regular 2D grid with row and column headers. Each supernode \(\mathbb{D}^{\prime}\) is represented by a single visual mark (DG1); this is typically a simple 2D geometric shape such as a circle. The size of the underlying data can be conveyed using multiple different methods (sometimes redundantly), such as using a label, a color scale, or the size of the shape. When the dataset is a multivariate network, the relationships (edges) between entities (vertices) in the underlying network are also aggregated into superlinks that are represented using single link marks (DG1). Again, the number of aggregated edges can be conveyed using color or thickness.
### _Interaction_
We envision aggregate query sculpting as a highly interactive and fluid [16] query technique where the user rapidly performs multiple operations to identify the data they are interested in (DG2). Furthermore, operations can be performed on individual rows or columns, or entire groups of rows or columns by using the partitioning hierarchy (Figure 2). To facilitate such rapid, direct, and reversible interaction, we suggest including an interaction stack where users can easily overview, undo, and redo individual operations.
### _Query Operations_
We define aggregate query sculpting based on six fundamental sculpting operations that we call \(\mathbb{P}^{6}\) for _pivot_, _partition_, _peek_, _pile_, _project_, and _prune_. Each operation is applied on a row or column basis to ensure a consistent grid layout. For the treatment below, assume that \(\mathbb{D}\) is a multidimensional dataset consisting of cars with standard dimensions such as gas mileage, acceleration, weight, cylinders, origin, etc.
* **Pivot.** The _pivot_ operation splits a supernode \(\mathbb{D}^{\prime}\) into \(N\) disjoint supernodes \(\{\mathbb{D}^{\prime}_{1},\ldots,\mathbb{D}^{\prime}_{N}\}\) based on a group criterion; see the code sketch after this list. This is a generalized form of faceted browsing (DG3). Some group criteria use nominal data types; for example, we could imagine pivoting \(\mathbb{D}^{\prime}\) based on the number of cylinders, resulting in three supernodes \(\mathbb{D}^{\prime}_{i}\) for 4, 6, and 8 cylinders (Figure 3). Alternatively, we could pivot by binning a quantitative value, such as five intervals of gas mileage for \([0,10),[10,20),[20,30),[30,40)\), and \([40,\infty)\) miles per gallon.

Fig. 2: **Operation scope. Aggregate query sculpting operations are performed on entire facets, which means that the scope will be entire rows or columns (or dimensions that are currently hidden).**
* **Partition.** The _partition_ operation is a pure visual operation for laying out supernodes \(\mathbb{D}^{\prime}_{i}\) in 2D space along a vertical or horizontal geometric axis. Partitioning will use the entire available space along the chosen geometric axis in the current substrate. This is typically done by allocating an equal amount of visual space to each supernode, although it is also possible to allocate visual space proportional to the size of each supernode (i.e., the number of items in each supernode). The approach is similar to PivotGraph [48] and Polaris [46], but supports nesting in multiple levels. If the chosen geometric axis has already been used for partitioning, the next level of partitioning will be nested. For example, if we first partition the horizontal axis based on the three sets of cylinders (4, 6, and 8), we can then partition each of these three categories based on the four gas mileage groups; see Figure 4.
* **Peek.** Sometimes the user wants to see inside a supernode without pivoting and partitioning. The _peek_ operation transforms the visual representation of one or all aggregate marks into a _glyph representation_ [15] showing the contents of each mark based on some axis. For example, peeking can change the color-coded circles into pie charts showing the origin (U.S., Europe, or Asia) of each group of cars pivoted and partitioned based on number of cylinders and then gas mileage.
* **Pile.** _Piling_ merges two or more selected supernodes into a single supernode (i.e., as the union of the selected supernodes), potentially enabling the user to name the resulting supernode. This can be useful when an automatic binning operation yields too many individual supernodes, some of which are meaningless on their own. For example, the user could choose to pile the \([0,10)\) and \([10,20)\) gas mileage supernodes into a single supernode that they name "poor fuel economy" (Figure 6).
* **Project.** Partitioning multiple pivots into the same geometric axis will eventually yield deep nesting and an explosion of supernode combinations. To reduce clutter, the _project_ operation enables selecting a subset of the data and projecting it onto a new semantic substrate that is laid out independently of the originating substrate. The selected data is subtracted from the original substrate, ensuring that the substrates remain disjoint. For example, the user could select all of the low fuel economy cars and project them onto a new substrate to enable further exploration while avoiding adding to the existing nested hierarchy of partitions (Figure 7).
* **Prune.** Finally, _prune_ allows for eliminating (e.g., hiding; all actions are reversible) selected supernodes from view. The operation is similar to the FromDaDy multidimensional visualization tool [23]. It can be applied to entire nested hierarchies, or to specific data values. For example, the user could easily eliminate all U.S. cars from the low fuel economy substrate by pruning on that data value in the origin dimension (Figure 8).
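To make the data-level semantics of these operations concrete, here is a minimal pandas sketch of pivot, pile, and prune on the cars example. The file and column names are placeholders, and this is not Dataopsy's actual (web-based) implementation:

```python
import pandas as pd

cars = pd.read_csv("cars.csv")  # hypothetical cars dataset

# Pivot on a nominal attribute: facet the single supernode by cylinders.
supernodes = cars.groupby("cylinders").size()  # aggregate counts per facet

# Pivot by binning a quantitative attribute (the five mileage intervals).
bins = [0, 10, 20, 30, 40, float("inf")]
cars["mpg_bin"] = pd.cut(cars["mpg"], bins, right=False)

# Nested partitioning corresponds to grouping on both criteria at once.
grid = cars.groupby(["cylinders", "mpg_bin"]).size()

# Pile: merge the two lowest-mileage bins into one named supernode.
cars["mpg_bin"] = cars["mpg_bin"].cat.add_categories("poor fuel economy")
cars.loc[cars["mpg"] < 20, "mpg_bin"] = "poor fuel economy"

# Prune: hide all U.S. cars from the current substrate.
substrate = cars[cars["origin"] != "US"]
```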
## 4 Dataopsy: AQS for Multidimensional Data
We developed Dataopsy, a web-based visual analytics tool, to demonstrate AQS in practice. Dataopsy can be used in a standard web browser on any medium to large screen device. For example, Figure 9 shows Dataopsy on a Samsung Galaxy S8 tablet device. In this section, we describe the visual interface of Dataopsy as well as the query actions and interactions it supports. We also include a video demonstration in the supplemental materials.
Fig. 4: **Nested axis partitioning. Example showing how to nest multiple pivoted axes inside a single geometric axis.**
Fig. 5: **Peeking into supernodes. Representing each visual aggregate as a pie chart showing the origin of each subset of cars.**
Fig. 3: **Pivoting and partitioning supernodes. Laying out pivoted supernodes along the horizontal axis.**
Fig. 6: **Piling supernodes. Grouping the two lowest-mileage supernodes of cars into a single supernode.**
Fig. 7: **Projecting to a new substrate. Extracting low fuel economy cars to a new semantic substrate for continued exploration.**
### _Card Design_
The central user interface (UI) component of Dataopsy is a card (Figure 0(c)), following the design of the popular UI component with the same name.1 Cards are typically used to group relevant information into a modular container. We chose cards as our core component because our system must support semantic substrates: views with identical functionality but different underlying data subsets.
Footnote 1: [https://www.nngroup.com/articles/cards-component/](https://www.nngroup.com/articles/cards-component/)
The default card, called Main, contains all data points in a single supernode (Figure 0(a)). From this initial card, the user can use the Projection operation to create a new substrate, which is then projected in a new card with an identical design. The cards in our system are flexible. By default, they align horizontally and take 2/3 of the horizontal and vertical dimensions of the screen, but users can change their order, collapse them, or delete them at any time. Each card has two sub-components: a header and a body.
#### 4.1.1 Card Header
The header contains styled icons to support \(\mathbb{P}^{6}\) (Figure 0(b)). We do not include a separate icon for pivoting since partitioning, i.e., laying out the visual marks in the 2D space, directly depends on pivoting. Instead, we provide two icons with dropdowns to partition the horizontal and vertical axes. Clicking the prune icon deletes selected data points from the card. Similarly, clicking the project icon copies selected data points from the current card and opens a new card with the copied data. The pile option combines selected data points together. Users can optionally provide a name for the merged categories using a popup. A user selects data points by directly interacting with the visualization (described in Section 4.3).
All AQS operations are saved in an interaction log or stack (Figure 0(d)). Using this stack, a user can go back and forth between any stage of the exploration process. Further, we provide options to save and download the current state of the data as a CSV file. This is helpful for exporting data after transforming the original data using pruning and piling. Finally, a user can optionally configure data attributes, such as defining alphabetical or numerical sorting for the attributes.
#### 4.1.2 Card Body
The card body contains the SVG container for the visualization. We describe the visualization design within the card body next.
### _Visual Representation_
Dataopsy uses a 2D grid view to lay out the supernodes along the horizontal and vertical axes. Figure 11 shows an example representation of 300,000 posts among different communities on Reddit [27]. We describe the details of the representation below.
#### 4.2.1 Axis Labels
We used hierarchical grouping to place the labels on the geometric axes. The order of the variables in the hierarchy depends on the order in which the user added them for partitioning. For example, in Figure 11, we first add the readability index and then sentiment on the vertical axis. We also place variable names on the axis whenever space permits. On the vertical axis, we only show the first variable name, as nested variable names may become cluttered.
#### 4.2.2 Supernodes
The visual marks in Dataopsy are typically aggregations of multiple data points, in contrast to the typical one mark per data point. We call these visual marks _supernodes_, although they may or may not have links depending on whether the domain is a network. In the current implementation of Dataopsy, we represent the supernodes with circles; however, they could be any 2D geometric shape, such as rectangles.
Dataopsy automatically determines the radius of the circles based on Equation 1.
\[\begin{split} S&=\min(width/N_{x},height/N_{y})\\ r&=\begin{cases}S,&\text{if }S>\alpha\\ \alpha,&\text{otherwise}\end{cases}\end{split} \tag{1}\]
Here \(N_{x}\) and \(N_{y}\) are the numbers of categories on the horizontal and vertical axes. \((width,height)\) is the dimension of the SVG, inherited from the card body. We set \(\alpha\), the minimum possible radius, to 5. When \(r=\alpha\), we update the size of the SVG by using Equation 2. However, we do not change the size of the card body; instead, we wrap the extended SVG within the card body and provide scrollbars to see the extended contents (see supplement for an example). This allows us to scale the representation for a large number of categories and avoid visual clutter.
\[\begin{split} width&=N_{x}\cdot r\\ height&=N_{y}\cdot r\end{split} \tag{2}\]
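For illustration, a direct Python transcription of Equations 1 and 2 (function and variable names are ours, not Dataopsy's API):

```python
def supernode_layout(width, height, n_x, n_y, alpha=5):
    """Circle radius and (possibly extended) SVG size, per Equations 1 and 2."""
    s = min(width / n_x, height / n_y)
    r = s if s > alpha else alpha
    if r == alpha:
        # Too many categories: extend the SVG and rely on scrollbars.
        width, height = n_x * r, n_y * r
    return r, width, height

# A 900x600 card body with a 200x300 category grid falls back to the minimum radius.
print(supernode_layout(900, 600, 200, 300))  # (5, 1000, 1500)
```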
By default, each circle encodes the number of data points in the supernode using a linear color scale. However, the user can transform
Fig. 10: **Different components of the card header.** (a) Buttons and dropdowns as styled icons for supporting \(\mathbb{P}^{6}\). (b) Two checkboxes to toggle seeing links and link arrows. (c) A sample dropdown containing the data attributes that opens when a user clicks on either \(\clubsuit\) partition or \(\clubsuit\) peek icons. (d) Interaction log recorded by Dataopsy. This dropdown opens when a user clicks on the Log icon. The user can go back and forth between different stages using dropdown. (e) A histogram of the target variable. A user can select and prune data using this histogram (e.g., pruning data points with less than 5 occurrences).
Fig. 9: **Dataopsy on a Samsung Galaxy S8 tablet.** The tool is particularly powerful on a mobile tablet device with touch or pen interaction.
the circles into pie charts for seeing distribution along a new dimension using the \(\circledR\) peek operation. Figure 11 shows one example of \(\circledR\) peeking, where we see the number of conflicts arising from the Reddit posts in pie charts. We also place the number of data points at the center of the circles whenever space permits. For example, Figure 9 and 12 show the numbers in the circles whereas Figure 11 does not show the numbers due to the small size of the circles. The user can always see the numbers in a popup when hovering over the circles or pie charts.
#### 4.2.3 Superlinks
Similar to the concept of supernodes, we call aggregated links connecting the supernodes _superlinks_. We encode edge weights with the thickness of the links. We used D3's arc function to draw the links. To reduce visual clutter, we bundle the links and set their default color to light gray with opacity 0.3. Despite these design decisions, too many links can still create clutter. To avoid that, superlinks are hidden by default. A user can choose to see all links by toggling the "Show Links" option (Figure 10b). Similarly, a user can choose to see direction arrows for the links by toggling "Show Link Arrows." Finally, on hovering over a node, we also highlight links originating and ending at the node with light purple and green colors, respectively (see Figures 13 and 15).
### Interactions
We designed fluid interactions to help users perform AQS operations. There are multiple ways to interact with the visualization in Dataopsy.
The first is to interact with the axis labels. On hovering over an axis label, Dataopsy highlights all supernodes belonging to that row or column. Clicking a label toggles selection of the whole row or column. After selection, a user can use the icon buttons in the card header (Figure 10) to prune, project, or pile the selected values.
Similar to the axis labels, a user can interact with the supernodes directly. On hovering over a node, we highlight the axis labels relevant to the node and show the number of data points belonging to the node in a popup. If the data contains links, we also show the links originating and ending at the node. On clicking a node, Dataopsy toggles selection of the data points belonging to the node and allows the user to prune, project, or pile them.
### Implementation Notes
Dataopsy is currently a web-based prototype. We used Python running in a Flask server as our backend. We used D3 for rendering the visualization in the frontend. All user interactions are supported by JavaScript. The source code and a demo of Dataopsy are available here: [https://github.com/tornmoycsedu/Dataopsy](https://github.com/tornmoycsedu/Dataopsy)
## 5 Case Studies and Application Examples
Section 4 demonstrated how AQS can be used to analyze large-scale social media data (Figure 11). In this section, we demonstrate AQS using Dataopsy in four more scenarios: two case studies involving participants and two application examples.
### Case Study: Data Exploration and Fairness Evaluation
ML and fairness researchers and practitioners often need to assess how their models perform with respect to sensitive attributes (e.g., gender and race) in their datasets. This is important since ML models can inherit biases from datasets and propagate these biases in sensitive domains (e.g., loan approval, hiring, and healthcare allocation) [6, 18].
We worked with two data scientists and fairness researchers to explore how AQS and Dataopsy can be used to evaluate fairness of an ML model. The first participant (P1) is a male research scientist at a large technology company with a Ph.D. in Computer Science and more than 7 years of research experience in data science, visual analytics, and algorithmic fairness. The second participant (P2) is a female Ph.D. student of Computer Science with more than 4 years of research experience in data science and algorithmic fairness.
After a discussion with the participants, we decided to use the Adult Income dataset in this study. We chose this dataset as it is widely used in the algorithmic fairness and visual analytics literature [6, 17, 18]. The dataset contains 45,222 data points where each data point represents a person described by 14 attributes recorded from the U.S. 1994 census. Here, the prediction task is to classify if a person's income will be greater or less than $50,000 based on attributes such as age, gender, education, marital status, etc.
The first author of this paper met with the participants separately over Zoom. Before the meetings, we asked participants to train a classification model using the dataset. Both participants used Logistic Regression to train the model. Their models achieved 86% accuracy across the training and test sets (70%-30% split). Participants saved the predicted labels and original attributes in a CSV file.
Each study session started with a training phase where participants explored different features of Dataopsy using a training dataset. We encouraged participants to ask questions at this stage. After training, participants uploaded their saved CSV files to Dataopsy and analyzed the model performance. Participants followed a think-aloud protocol during the study. The sessions ended with semi-structured interviews focusing on the utility, limitations, and future directions of Dataopsy.
#### 5.1.1 Results and Feedback
Evaluating Intersectional Fairness.To evaluate the fairness of the trained model, P1 started by partitioning the horizontal axis into the _train_ and _test_ sets and the vertical axis into _male_ and _female_ individuals (Figure 12a). P1 immediately noticed that the dataset is highly skewed towards men. Suspecting the skewed dataset might impact accuracy across the subsets, P1 visualized the ratio of accurate predictions using the peek action (Figure 12b). However, the model performed better for females in terms of accuracy. To investigate further, P1 added race on the vertical axis to partition male and female individuals (Figure 12c). P1 noticed that accuracies are consistent across training and test sets for all subsets. Among the subsets, _Female White_ and _Male White_ have the highest number of data points. P1 selected these two subsets and used the project action to create a new substrate.
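Outside Dataopsy, the same partition-and-peek query can be expressed in a few lines of pandas; the file and column names below are hypothetical stand-ins for the participants' saved CSV:

```python
import pandas as pd

# Hypothetical CSV of saved predictions; column names are illustrative.
df = pd.read_csv("adult_predictions.csv")  # columns: split, sex, race, y_true, y_pred
df["correct"] = df["y_true"] == df["y_pred"]

# Partition rows by (sex, race) and columns by split, then peek at accuracy.
accuracy = df.pivot_table(index=["sex", "race"], columns="split",
                          values="correct", aggfunc="mean")
counts = df.pivot_table(index=["sex", "race"], columns="split",
                        values="correct", aggfunc="count")
print(accuracy.round(3))
print(counts)
```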
Figure 11: **Two-dimensional grid view with hierarchical nesting.** Analyzing linguistic properties of 300,000 inter-community posts on Reddit [27]. Each row in this dataset contains information about a Reddit post from a source to a target community. We partition the vertical axis by the use of anger-related words, according to Linguistic Inquiry and Word Count (LIWC), and the sentiment of the posts, and the horizontal axis by the readability index. We peek at the variable of interest: whether or not a post starts a conflict between the source and target community. Posts with higher use of anger-related words give rise to more conflicts among communities. Also, posts that are difficult to read (readability index > 100) have less chance of starting a conflict.
P1 then restarted the analysis.
### Case Study: Adapting a Novel into a Screenplay
The writer (W1) chose _Anne of Green Gables_ by L. M. Montgomery, a famous children's book of around 400 pages. We used BookNLP2 to extract different types of entities (Persons, Facilities, and Locations) from the story. Similar to our previous case study, this study included a training phase at the start of the session. We then asked the writer to visualize the collection of entities using Dataopsy and devise a skeleton of the proposed adapted screenplay with the help of the AQS operations.
Footnote 2: [https://github.com/booknlp/booknlp](https://github.com/booknlp/booknlp)
#### 5.2.1 Results and Feedback
Obtaining an Overview from Different Perspectives. W1 started exploring the story by partitioning the horizontal axis into the 35 chapters of the book. They then partitioned the vertical axis by the type of entities (persons, places, etc.), followed by the actual entities (Figure 13). W1 then quickly hovered over several entities to see how they are connected to each other.
W1 stated that partitioning helped them see the story from different perspectives. During the session, we noticed W1 continuously changing the partitioning order. For example, sometimes W1 used entities and chapters to partition the vertical axis linearly. Other times, W1 used chapters on the horizontal axis and entities on the vertical.
Iterative Pruning and Piling. W1 found pruning and piling to be the most useful operations for developing a skeleton for the adapted screenplay. W1 started by pruning entities with low frequency and dependency in the story. Pruning allowed W1 to remove several plotlines, characters, and places. W1 also used piling to reduce the size of the story. For example, after pruning several entities of a chapter, W1 piled (i.e., merged) the chapter with the previous chapter.
Recommendation for Dataopsy as a Writing Support Tool. W1 provided several suggestions for adopting AQS and Dataopsy in a writing support tool. One expected suggestion was to include a text editor and link the text with entities in the visualization. This would allow writers to see the context in the text and make informed decisions before pruning or piling. Another suggestion was to include social relations (e.g., brother, mother) as a feature so that writers can decide which characters to prune from a social circle.
### Example: Understanding Taxi Trips in New York City
We present an application example on the New York City (NYC) taxi ride dataset [34] to demonstrate how AQS and Dataopsy scale to large datasets. The dataset contains every reported trip from 2009 to 2022 in NYC (approximately 1.7 billion trips). We chose this application because it is a large dataset (69 GB) with many facets for exploration.
The size of the dataset yields a range of analyses to perform. For this example, we show how past taxi rides can be analyzed to identify hotspots and devise a policy for allocating taxis in 2022. Lockdowns and a lack of passengers during the COVID-19 pandemic (2020-2021) heavily disrupted NYC taxi service.3 Many taxi drivers changed their profession during this time. As the world reopened after the pandemic, analyzing past taxi rides can inform the allocation policy of taxis throughout the city.
Footnote 3: [https://www.cnn.com/2021/01/09/us/yellow-taxi-drivers-new-york-covid/](https://www.cnn.com/2021/01/09/us/yellow-taxi-drivers-new-york-covid/)
We used Dask, a Python library for parallel computing, to conduct the backend analysis. After loading the dataset, we first partition the vertical axis by Year (Figure 14a). Applying the partition took 30 seconds in Dataopsy. The number of trips has gradually declined over the years. As expected, we see a significant drop in 2020-2021. For demonstration purposes, we only select data from the first two months of 2022. As taxi rides should presumably return to pre-pandemic status, we project the trips from the most recent pre-pandemic years (2016-2019) for further investigation. We then partition the selected trips (430M) by their pickup and dropoff boroughs (Figure 14b). As expected, most taxi rides originated and ended in Manhattan. We can project the Manhattan trips (300M) and further partition these trips by their pickup and dropoff zones within Manhattan (Figure 14c). We can now clearly see the hotspots within Manhattan. Note that the default _width_ and _height_ of the SVG are extended using Equation 2 due to the large number of zones. Please see the full screenshot of Figure 14c in the supplement. We can further explore the hotspots (e.g., projecting and partitioning a hotspot by month).
Figure 14: **Exploring taxi rides in New York City (NYC).** (a) We partition the whole dataset (around 1.7 billion trips) by Year. We select the trips (around 430M) from the pre-pandemic years (2016-2019) and project them into a new view. (b) We partition the pre-pandemic trips by their pickup and dropoff boroughs. Most trips (300M) originated and ended in Manhattan. (c) We can further drill down on the Manhattan trips by projecting them onto a new view and partitioning them by pickup and dropoff zones within Manhattan. On hover, we see the number of trips for a hotspot: trips between Lenox Hill and Upper East Side North.
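The backend queries for this example reduce to grouped aggregations; a minimal Dask sketch is below (the file path and column names are hypothetical, as the actual TLC files use service-specific schemas):

```python
import dask.dataframe as dd

# Hypothetical layout of the trip records; real TLC files differ per service.
trips = dd.read_parquet("nyc_taxi/*.parquet",
                        columns=["pickup_datetime", "pu_borough", "do_borough"])

# Partition by Year: one supernode per year.
per_year = trips.groupby(trips["pickup_datetime"].dt.year).size().compute()

# Project the pre-pandemic trips (2016-2019) onto a new substrate ...
pre = trips[trips["pickup_datetime"].dt.year.between(2016, 2019)]

# ... and partition them by pickup and dropoff borough.
od_matrix = pre.groupby(["pu_borough", "do_borough"]).size().compute()
print(per_year)
print(od_matrix.sort_values(ascending=False).head())
```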
### Example: Scientometric Analysis of IEEE VIS Publications
Our final example is a scientometric analysis of IEEE VIS publications using the VisPub [24, 40] dataset. This example was designed to show how AQS can be used to analyze multivariate networks. We can explore this dataset at different levels of aggregation. For example, Figure 15a shows a citation network between four conference tracks: Vis, InfoVis, VAST, and SciVis. On hover, Dataopsy highlights references originating at InfoVis (light purple) and papers citing InfoVis papers (light green). We noticed that InfoVis papers cite their own papers the most. VAST papers also cite many papers from VAST, although they cite many InfoVis papers too (Figure 15b).
## 6 Discussion
Here we discuss design implications, limitations, and future work relevant to AQS and Dataopsy.
Changing Perspective with Partitioning.We noticed that participants often used different combinations to partition data, even within the same session. One key observation is that the order of partitioning can change the perspective even if the underlying data is the same. It can potentially affect what information or insights people see first. This phenomenon gives rise to several interesting questions for the VIS community, such as _How exactly does the order of pivoting and partitioning impact the data exploration process?_ and _Is there an optimal order to hierarchically nest the partitions?_ Prior work on finding optimal orderings for parallel coordinates is inspiring in this case [39, 54].
Designing and Evaluating Fluid Interaction.Our case studies show promise for interaction design for exploring multivariate data. The set of interactions supported in AQS (\(\mathbb{P}^{6}\)) is larger than in a typical visualization system. Although AQS was not formally evaluated in the case studies, participants used words such as "cool," "nice," and "wow" to describe the usability of Dataopsy. Our future work will focus on evaluating Dataopsy in comparison to similar methods (e.g., Google Facets [20, 49]). We can ask users to find answers to queries (e.g., What percentage of white, married, and female individuals were accurately labeled by the model?) and measure efficiency by counting and comparing the number of steps taken to answer the queries using different methods. We can further use NASA-TLX [22] and SUS [5] to evaluate the perceived workload and usability of Dataopsy.
Trade-offs between Aggregated and Unit Representation.AQS is a top-down technique where the exploration starts with a single mark aggregating all data items. This strong aggregation enables us to scale analysis to large datasets. However, compared to unit visualizations, there are fewer chances of serendipitous findings using our approach. Due to the lack of an overview, users need to have prior knowledge and hypotheses to construct the queries. We can partially address this limitation by introducing a recommender system that can recommend subsets using anomaly detection algorithms and prior user interactions [6, 37].
Scalability.As a theoretical concept, AQS is scalable to any number of data points. However, as an early prototype, Dataopsy currently lacks a few engineering features for handling "really big" datasets. For example, despite using parallel computing, for the NYC taxi ride dataset, on average, it took 30 seconds for Dataopsy to respond to the data operations (e.g., partitioning). There are established methods to handle such large datasets in visualization displays [30, 10, 32], which could further improve response time.
Visualizing Quantitative Values.Dataopsy has limitations for analyzing quantitative values. For example, when visualizing a quantitative attribute using peeking, Dataopsy uses binning and categorical color scales to show the distribution in a pie chart, which can be challenging to decode. One option is to extend the supported visual mark types (e.g., histograms in rectangles) to resolve this issue. We will follow the example of prior work such as Polaris [46] to integrate this feature into our tool. Another relevant problem with peeking is that it becomes difficult to measure the size of the nodes (i.e., cardinality) without the color saturation (Figure 11). A solution could be using circular curves along the circles/pies to indicate size. Another possible solution is using varying circle sizes to represent cardinality.
Adopting Aggregate Query Sculpting.We recommend that practitioners and researchers adopt AQS if the following conditions are met:
* The data has a sufficient number of facets (>=2).
* The number of data points is too large for unit visualization.
* The goal is to obtain higher-level insights and patterns rather than finding lower-level similarities between individual data points. (For example, AQS may not be feasible for finding clusters in an embedding space.)
* Domain-specific functionalities for the application are easy to integrate with AQS.
## 7 Conclusion
We have presented Aggregate Query Sculpting (AQS), a novel interaction technique for visualizing and exploring multivariate data. The goal of our work was to address the challenges of large-scale data containing many attributes. Visualizing such datasets using unit visualizations (e.g., scatter plots) often results in visual clutter and inelegant representations. We propose aggregation as the key to solving this issue. As a born-scalable technique, AQS initially aggregates all data points into a single visual mark, a supernode. From there, AQS provides six operations, abbreviated as \(\mathbb{P}^{6}\), to iteratively sculpt the data into a desired form. Based on the concept of AQS, we developed Dataopsy, a prototype tool for exploring multivariate data. Dataopsy is equipped to analyze multivariate data from diverse domains. We hope our work will motivate future research on designing visualizations that can manage large-scale data, yet remain easy to explore.
Figure 15: **Internal citations among the four tracks of IEEE VIS.** (a) On hover, light purple links highlight references originating at InfoVis and light green links show papers citing InfoVis papers. (b) A similar analysis for VAST.
## 8 Acknowledgments
This work was partially supported by the U.S. National Science Foundation grant 2211628. Any opinions, findings, and conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the views of the funding agencies.
|
2308.14106 | Diffusion Schrödinger Bridges for Bayesian Computation | Denoising diffusion models are a novel class of generative models that have
recently become extremely popular in machine learning. In this paper, we
describe how such ideas can also be used to sample from posterior distributions
and, more generally, any target distribution whose density is known up to a
normalizing constant. The key idea is to consider a forward ``noising''
diffusion initialized at the target distribution which ``transports'' this
latter to a normal distribution for long diffusion times. The time-reversal of
this process, the ``denoising'' diffusion, thus ``transports'' the normal
distribution to the target distribution and can be approximated so as to sample
from the target. To accelerate simulation, we show how one can introduce and
approximate a Schr\"{o}dinger bridge between these two distributions, i.e. a
diffusion which transports the normal to the target in finite time. | Jeremy Heng, Valentin De Bortoli, Arnaud Doucet | 2023-08-27T13:22:55Z | http://arxiv.org/abs/2308.14106v1 | # Diffusion Schrodinger Bridges for Bayesian Computation
###### Abstract
Denoising diffusion models are a novel class of generative models that have recently become extremely popular in machine learning. In this paper, we describe how such ideas can also be used to sample from posterior distributions and, more generally, any target distribution whose density is known up to a normalizing constant. The key idea is to consider a forward "noising" diffusion initialized at the target distribution which "transports" this latter to a normal distribution for long diffusion times. The time-reversal of this process, the "denoising" diffusion, thus "transports" the normal distribution to the target distribution and can be approximated so as to sample from the target. To accelerate simulation, we show how one can introduce and approximate a Schrodinger bridge between these two distributions, i.e. a diffusion which transports the normal to the target in finite time.
Optimal transport, Schrodinger bridge, Score matching, Stochastic differential equation, Time-reversal.
## 1 Introduction
The use of diffusion processes as a mathematical model is ubiquitous in many scientific disciplines. In differential form, such a process \((X_{t})_{t\in[0,T]}\) in \(\mathbb{R}^{d}\) is defined by a stochastic differential equation (SDE) (see e.g. Oksendal (2003) and Klebaner (2012) for textbooks on the subject)
\[\mathrm{d}X_{t}=f(t,X_{t})\mathrm{d}t+\sigma(t,X_{t})\mathrm{d}B_{t}. \tag{1}\]
The above describes infinitesimal changes in the process as a sum of deterministic changes driven by a drift function \(f:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d}\) over an infinitesimal time step, and random fluctuations given by a diffusion matrix \(\sigma:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\) and infinitesimal changes of a standard Brownian motion in \(\mathbb{R}^{d}\). The interface between statistics and diffusion models has involved two main threads: statistical inference for these models and their use for Monte Carlo simulation. In the first, one considers inference for the parameters of \(f\) and \(\sigma\) when observations of the process are collected at various time points. This has generated a large and comprehensive literature with frequentist and Bayesian estimators, under various types of observation regimes, and at various frequencies (see textbooks by Iacus (2008) and Kessler, Lindner and Sorensen (2012) and references therein). In the second, following ideas from statistical mechanics, certain diffusion processes and their long-time behaviour have been adopted to simulate from complex and high-dimensional distributions in computational statistics. For example, by simulating Langevin dynamics whose equilibrium distribution is the target distribution of interest, one progressively transforms an easy-to-sample reference distribution into the desired target distribution. Indeed, time-discretizations of such Langevin dynamics have led to efficient gradient-based Markov chain Monte Carlo (MCMC) algorithms for sampling problems in Bayesian statistics (Grenander and Miller, 1994; Roberts and Tweedie, 1996).
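As a point of reference for the simulations used throughout, here is a minimal Euler-Maruyama sketch for the SDE (1); the function is our own illustration rather than part of any method discussed below.

```python
import numpy as np

def euler_maruyama(f, sigma, x0, T=1.0, n_steps=1000, rng=None):
    """Simulate dX_t = f(t, X_t) dt + sigma(t, X_t) dB_t on [0, T]."""
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    x = np.asarray(x0, dtype=float).copy()
    path = [x.copy()]
    for k in range(n_steps):
        t = k * dt
        dB = rng.normal(scale=np.sqrt(dt), size=x.shape)  # Brownian increment
        x = x + f(t, x) * dt + sigma(t, x) @ dB
        path.append(x.copy())
    return np.stack(path)

# Example: the OU process of Eq. (3) below, with f(t, x) = -x/2 and sigma = I.
d = 2
path = euler_maruyama(lambda t, x: -0.5 * x, lambda t, x: np.eye(d),
                      x0=np.zeros(d), T=5.0)
```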
This article is based on denoising diffusions, which provide an alternative approach to transform a simple reference distribution to a target distribution. These diffusions were recently introduced in the machine learning literature to obtain generative models (Sohl-Dickstein et al., 2015; Ho, Jain and Abbeel, 2020; Song et al., 2021). In contrast to running a single realization of Langevin dynamics to obtain a large number of correlated samples for MCMC, a single run of a denoising diffusion is used to simulate one independent sample from the generative model. Given access to a large dataset, say of images or proteins, generative models produce new synthetic data with approximately the same distribution. Many generative modeling techniques have been proposed in the literature, but the introduction of denoising diffusions has
recently revolutionized this field as these models are easier to train than alternatives, and they provide state-of-the-art performance in many domains. The main idea of these techniques is to progressively transform the empirical data distribution into a normal distribution through a forward "noising" diffusion process. By simulating an approximation of the time-reversal of this process, a "de-noising" diffusion, we can generate new samples that closely follow the data distribution.
After introducing notation in Section 2, we review in Section 3 how one can extend these ideas to perform approximate posterior simulation in scenarios where one can only sample from the prior and simulate synthetic observations of the likelihood (Song et al., 2021). A limitation of this approach is that it requires running the forward noising diffusion long enough to "transport" the posterior distribution into an approximately normal distribution. Hence the corresponding approximate time-reversal also needs to be run for a long time to sample from the posterior. In Section 4, we show how a Schrodinger bridge formulation of this problem can be exploited to transport in finite time the posterior into a normal and vice-versa (Shi et al., 2022), hence accelerating posterior simulation.
In scenarios where one only has access to an unnormalized density of a target distribution, which might not correspond to the posterior distribution of a Bayesian model, we show in Section 5 that the ideas behind denoising diffusion models remain applicable. In this setting, one has to optimize an alternative criterion to approximate the time-reversal of the forward noising diffusion (Vargas, Grathwohl and Doucet, 2023). Finally we present a Schrodinger bridge extension of this algorithm to speed up simulation in Section 6. To aid the reading of this article, the features of algorithms covered are summarized in Table 1. We conclude with some discussions in Section 7.
## 2 Notation
We will use throughout this paper the following notation. We will write \(x\in\mathcal{X}\) to denote latent states or parameters of a statistical model, and \(y\in\mathcal{Y}\) to denote observations. For example in Euclidean spaces, we have \(\mathcal{X}=\mathbb{R}^{d}\) and \(\mathcal{Y}=\mathbb{R}^{p}\) for \(p,d\in\mathbb{N}\). The Euclidean norm of \(x\in\mathbb{R}^{d}\) is denoted by \(\|x\|\). For \(f:\mathbb{R}^{d}\to\mathbb{R}\), we denote its gradient with respect to \(x\) as \(\nabla_{x}f(x)\). We write \(\mathcal{N}(\mu,\Sigma)\) to denote the normal distribution with mean \(\mu\) and covariance \(\Sigma\), and \(x\mapsto\mathcal{N}(x;\mu,\Sigma)\) for its probability density function. Given two probability measures \(\mu\) and \(\nu\) defined on a measurable space \(\mathcal{Z}\), the Kullback-Leibler (KL) divergence from \(\mu\) to \(\nu\) is defined as \(\mathrm{KL}(\mu|\nu)=\int_{\mathcal{Z}}\log(\mathrm{d}\mu/\mathrm{d}\nu)(z) \mathrm{d}\mu(z)\) if \(\mu\ll\nu\) and \(+\infty\) otherwise. We denote the set of integers \([K]=\{1,\ldots,K\}\). A path measure on \(\mathcal{Z}\) is a probability measure on \(C([0,T],\mathcal{Z})\), where \(C([0,T],\mathcal{Z})\) is the space of continuous functions from \([0,T]\) to \(\mathcal{Z}\). If \(\mathbb{P}\) is a path measure then we denote its _time-reversal_ as \(\mathbb{P}^{R}\), defined such that if \((X_{t})_{t\in[0,T]}\sim\mathbb{P}\) then \((X_{T-t})_{t\in[0,T]}\sim\mathbb{P}^{R}\). We will also denote the time \(t\in[0,T]\) marginal distribution of \(\mathbb{P}\) by \(\mathbb{P}_{t}\), and write its probability density function as \(p_{t}\). Given a probability measure \(\mu\) on \(\mathcal{Z}_{1}\), and a Markov kernel \(\mathrm{K}\) from \(\mathcal{Z}_{1}\) to \(\mathcal{Z}_{2}\), we define the probability measure \(\mu\otimes\mathrm{K}\) on \(\mathcal{Z}_{1}\times\mathcal{Z}_{2}\) as \((\mu\otimes\mathrm{K})(\mathsf{A}_{1}\times\mathsf{A}_{2})=\int_{\mathsf{A}_{1 }}\mathrm{d}\mu(z_{1})\mathrm{K}(z_{1},\mathsf{A}_{2})\) for any Borel set \(\mathsf{A}_{1}\times\mathsf{A}_{2}\). Finally, for notational ease, we will use \((X_{t})_{t\in[0,T]}\) to denote _forward_ diffusion processes driven by Brownian motion \((B_{t})_{t\in[0,T]}\), and \((Z_{t})_{t\in[0,T]}\) to denote _backward_ diffusion processes driven by Brownian motion \((W_{t})_{t\in[0,T]}\).
## 3 Denoising Diffusions for Posterior Sampling
We recall that \(x\in\mathcal{X}\) denote latent states or parameters of a statistical model and \(y\in\mathcal{Y}\) denote observations. Given a prior distribution \(\mu(x)\) and likelihood function \(x\mapsto g(y|x)\), Bayesian inference is based on the posterior distribution
\[p(x|y)=\frac{p(x,y)}{p(y)},\quad p(x,y)=\mu(x)g(y|x), \tag{2}\]
where \(p(y)=\int_{\mathcal{X}}\mu(x)g(y|x)\mathrm{d}x\) is the marginal likelihood. We begin by describing an amortized variational inference procedure that allows us to sample approximately from \(p(x|y)\) for all possible observations \(y\). This method known as _denoising diffusions_ was introduced by Song et al. (2021) for \(\mathcal{X}=\mathbb{R}^{d}\). We will restrict ourselves here to this setup but it has been extended to general spaces and discrete spaces (Austin et al., 2021; Hoogeboom et al., 2021; Campbell et al., 2022; Benton et al., 2022; De Bortoli et al., 2022; Huang et al., 2022). In this Section and Section 4, we assume that we can obtain samples from the joint distribution \(p(x,y)\). Practically we sample first \(X\sim\mu(x)\) followed by \(Y\sim g(y|X)\).
We first define a forward "noising" process \((X_{t})_{t\in[0,T]}\) according to the Ornstein-Uhlenbeck (OU) process
\[\mathrm{d}X_{t}=-\tfrac{1}{2}X_{t}\mathrm{d}t+\mathrm{d}B_{t},\quad X_{0} \sim p(x|y), \tag{3}\]
where \((B_{t})_{t\in[0,T]}\) is a standard \(d\)-dimensional Brownian motion. Writing the transition density of (3) as \(p_{t|0}(x_{t}|x_{0})\), the distribution of \(X_{t}\) is given by \(p_{t}(x_{t}|y)=\int_{\mathcal{X}}p_{t|0}(x_{t}|x_{0})p(x_{0}|y)\mathrm{d}x_{0}\). Since Equation (3) can be seen as the SDE of a Langevin diffusion with the standard normal distribution \(\mathcal{N}(0,I)\) as its stationary distribution, by ergodicity properties of the process, \(p_{T}(x_{T}|y)\) for large time \(T>0\) approaches \(\mathcal{N}(0,I)\) for all \(y\). The corresponding time-reversed process \((Z_{t})_{t\in[0,T]}=(X_{T-t})_{t\in[0,T]}\) can be shown to satisfy (weakly) the SDE (Anderson, 1982; Haussmann and Pardoux, 1986)
\[\mathrm{d}Z_{t}=\tfrac{1}{2}Z_{t}\mathrm{d}t+\nabla_{z_{t}}\log p_{T-t}(Z_{t}| y)\mathrm{d}t+\mathrm{d}W_{t}, \tag{4}\]
with another standard \(d\)-dimensional Brownian motion \((W_{t})_{t\in[0,T]}\) and \(Z_{0}\sim p_{T}(x_{T}|y)\). If one could simulate the backward "denoising" process \((Z_{t})_{t\in[0,T]}\), then \(Z_{T}\) would be a sample from the posterior distribution \(p(x|y)\). Practically, we cannot sample from (4), so consider instead a diffusion approximating it of the form
\[\mathrm{d}Z_{t}=\tfrac{1}{2}Z_{t}\mathrm{d}t+s_{T-t}^{\theta}(Z_{t},y)\mathrm{ d}t+\mathrm{d}W_{t}, \tag{5}\]
with \(Z_{0}\sim\mathcal{N}(0,I)\). For the diffusion (5) to approximate (4), we need i) the diffusion time \(T\) to be large enough for \(p_{T}(x_{T}|y)\approx\mathcal{N}(x_{T};0,I)\); ii) an approximation of the Stein score function \(s_{t}^{\theta}(x_{t},y)\approx\nabla_{x_{t}}\log p_{t}(x_{t}|y)\) for all \((t,x_{t},y)\in[0,T]\times\mathcal{X}\times\mathcal{Y}\). We show next how such an approximation can be obtained.
Let \(\mathbb{P}_{y}\) denote the path measure on \(C([0,T],\mathcal{X})\) induced by (3) with observations \(y\in\mathcal{Y}\) and let \(\mathbb{Q}_{y}^{\theta}\) be the law on \(C([0,T],\mathcal{X})\) induced by the time-reversal of (5). We consider a parametric function class \(\{s_{t}^{\theta}(x_{t},y):\theta\in\Theta\}\) such as neural networks. We obtain \(\theta\) approximating the score by minimizing the expected KL divergence between \(\mathbb{P}_{y}\) and \(\mathbb{Q}_{y}^{\theta}\) over \(y\sim p(y)\), which satisfies (Song et al., 2021b, Theorem 1)
\[\mathcal{L}(\theta) =2\mathbb{E}_{y\sim p(y)}[\mathrm{KL}(\mathbb{P}_{y}|\mathbb{Q}_ {y}^{\theta})] \tag{6}\] \[\equiv\int_{0}^{T}\mathbb{E}_{\overline{p}}\big{[}\|s_{t}^{ \theta}(X_{t},Y)-\nabla_{x_{t}}\log p_{t|0}(X_{t}|X_{0})\|^{2}\big{]}\,\mathrm{ d}t,\]
where '\(\equiv\)' means equality up to an additive constant and the second expectation in (6) is w.r.t. \(\overline{p}(x_{0},x_{t},y)=p(x_{0},y)p_{t|0}(x_{t}|x_{0})\). This loss corresponds to a _denoising score matching_ (DSM) loss (Vincent, 2011). Under the transition density of the OU process in (3), the gradient appearing in (6) is \(\nabla_{x_{t}}\log p_{t|0}(x_{t}|x_{0})=\{x_{0}\exp(-t/2)-x_{t}\}/\{1-\exp(-t)\}\). The loss \(\mathcal{L}(\theta)\) can be minimized using stochastic gradient based algorithms such as Adam (Kingma and Ba, 2014) if one can simulate \((X_{0},Y)\sim p(x,y)\) from the model (2). Hence this approach is applicable to problems where both the prior and the likelihood are intractable but one can simulate parameters and synthetic data from them. From this perspective, it is an alternative to Approximate Bayesian Computation (ABC) methods (Marin et al., 2012; Sisson, Fan and Beaumont, 2018; Beaumont, 2019). Empirical comparisons between denoising diffusions, MCMC, and ABC methods can be found in (Benton et al., 2022; Sharrock et al., 2022; Shi et al., 2022).
Having found a minimizer \(\theta\) of (6), the resulting denoising diffusion posterior sampler (DDPS) described in Algorithm 1 involves simulating (5) using a suitable numerical integrator (see e.g. Karras et al. (2022)) and returning \(Z_{T}\) as an approximate sample from the posterior distribution \(p(x|y)\) in (2). Various techniques that have been proposed to accelerate denoising diffusion models are also applicable to DDPS (Dockhorn, Vahdat and Kreis, 2022; Salimans and Ho, 2022). Finally, we note that one can tailor the above procedure to specific observations \(y\) in the case where the likelihood function can be evaluated. This relies on the following alternative decomposition to estimate the score
\[\nabla_{x_{t}}\log p_{t}(x_{t}|y)=\nabla_{x_{t}}\log\mu_{t}(x_{t})+\nabla_{x_ {t}}\log g_{t}(y|x_{t}) \tag{7}\]
In this setting, we have that \(\mu_{t}(x_{t})=\int_{\mathcal{X}}\mu(x_{0})p_{t|0}(x_{t}|x_{0})\mathrm{d}x_{0}\) is the diffused prior and \(g_{t}(y|x_{t})=\int_{\mathcal{X}}g(y|x_{0})p_{t|t}(x_{0}|x_{t})\mathrm{d}x_{0}\) is the modified likelihood. The first term on the r.h.s. of (7) can be estimated by \(s_{t}^{\theta}(x_{t})\) with \(\theta\) obtained by minimizing \(\mathrm{KL}(\mathbb{P}|\mathbb{Q}^{\theta})\), where \(\mathbb{P}\) is induced by the noising diffusion (3) initialized using \(X_{0}\sim\mu(x)\) and \(\mathbb{Q}^{\theta}\) is induced by the reversal of the form (5) with \(s_{t}^{\theta}(x_{t},y)\) replaced by \(s_{t}^{\theta}(x_{t})\). Using evaluations of \(g(y|x)\), the term \(g_{t}(y|x_{t})\) can be approximated using regression, _conditional guidance_ techniques (Chung et al., 2023), or Monte Carlo methods (Song et al., 2023).
The next section presents a principled framework based on Schrodinger bridges to accelerate training and sampling in DDPS.
```
— Training procedure:
while not converged do
    Sample \((X_{t}^{k})_{t\in[0,T]}\) using the SDE (3) where \((X_{0}^{k},Y^{k})\sim p(x,y)\) for \(k\in[K]\).
    Approximate the loss (6) using \(((X_{t}^{k})_{t\in[0,T]},Y^{k})_{k\in[K]}\).
    Update \(\theta\) in \(s_{t}^{\theta}\) using the Adam optimizer.
end while
— Sampling procedure:
Sample \((Z_{t})_{t\in[0,T]}\) using the SDE (5) for the observation \(y\).
return: \(Z_{T}\) approximately distributed according to \(p(x|y)\).
```
**Algorithm 1** Denoising diffusion for posterior sampling
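The sampling step of Algorithm 1 is an SDE simulation; a minimal Euler-Maruyama sketch, with the same hypothetical `score_net` interface as above, is:

```python
import torch

@torch.no_grad()
def ddps_sample(score_net, y, d, T=5.0, n_steps=500):
    """Euler-Maruyama discretization of the denoising SDE (5)."""
    dt = T / n_steps
    z = torch.randn(y.shape[0], d)                   # Z_0 ~ N(0, I)
    for k in range(n_steps):
        t = torch.full((y.shape[0], 1), T - k * dt)  # score evaluated at time T - t
        drift = 0.5 * z + score_net(z, y, t)
        z = z + drift * dt + dt ** 0.5 * torch.randn_like(z)
    return z                                         # approximate sample from p(x | y)
```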
| Method name | Requires samples | Requires unnormalized density | Iterative | Section |
| --- | --- | --- | --- | --- |
| Denoising Diffusion for Posterior Sampling (DDPS) | ✓ | ✗ | No | Section 3 |
| Diffusion Schrödinger Bridge for Posterior Sampling (DSB-PS) | ✓ | ✗ | Yes | Section 4 |
| Denoising Diffusion for General Sampling (DDGS) | ✗ | ✓ | No | Section 5 |
| Diffusion Schrödinger Bridge for General Sampling (DSB-GS) | ✗ | ✓ | Yes | Section 6 |

Table 1: Summary of algorithms in this article.
## 4 Diffusion Schrodinger Bridges for Posterior Sampling
For DDPS to perform well, the diffusion time \(T\) has to be long enough so that \(p_{T}(x_{T}|y)\approx\mathcal{N}(x_{T};0,I)\). Shi et al. (2022) considered the _Schrodinger bridge_ (SB) formulation described in (De Bortoli et al., 2021; Vargas et al., 2021; Chen, Liu and Theodorou, 2022) to improve DDPS in this setting. Let \(\mathbb{D}\) denote the path measure on \(C([0,T],\mathcal{Y})\) induced by
\[\mathrm{d}Y_{t}=0,\quad Y_{0}\sim p(y), \tag{8}\]
i.e. \(Y_{t}=Y_{0}\) for all \(t\in[0,T]\). The SB is defined by the following constrained KL minimization problem over probability measures on the path space \(C([0,T],\mathcal{X}\times\mathcal{Y})\):
\[\Pi^{*}=\arg\min_{\Pi}\{\mathrm{KL}(\Pi|\mathbb{P}):\Pi_{0}(x_{0},y_{0})=p(x_{0},y_{0}), \tag{9}\] \[\Pi_{T}(x_{T},y_{T})=\mathcal{N}(x_{T};0,I)p(y_{T})\},\]
where \(\mathbb{P}=\mathbb{P}_{y_{0}}\otimes\mathbb{D}\) is the joint measure associated to (3) and (8), and \(\Pi_{t}\) denotes the time \(t\) marginal distribution under \(\Pi\). As the minimizer has the form \(\Pi^{*}=\mathbb{P}_{y_{0}}^{*}\otimes\mathbb{D}\), by simulating a backward process \((Z_{t})_{t\in[0,T]}\) with law \(\mathbb{P}_{y}^{*}\) and \(Z_{0}\) initialized from \(\Pi_{T}^{*}(x_{T}|y)=\mathcal{N}(x_{T};0,I)\), the terminal state \(Z_{T}\) returns a sample from the posterior distribution \(\Pi_{0}^{*}(x_{0}|y)=p(x_{0}|y)\), \(p(y)\)-almost surely.
To solve for the SB, we apply the _iterative proportional fitting_ (IPF) procedure (Fortet, 1940; Deming and Stephan, 1940; Kullback, 1968). This is defined by the following recursions for \(n\in\mathbb{N}_{0}\):
\[\Pi^{2n+1}=\arg\min_{\Pi}\{\mathrm{KL}(\Pi|\Pi^{2n}):\] \[\Pi_{T}(x_{T},y_{T})=\mathcal{N}(x_{T};0,I)p(y_{T})\},\] \[\Pi^{2n+2}=\arg\min_{\Pi}\{\mathrm{KL}(\Pi|\Pi^{2n+1}): \tag{10}\] \[\Pi_{0}(x_{0},y_{0})=p(x_{0},y_{0})\},\]
initialized at \(\Pi^{0}=\mathbb{P}\), which involves iterative KL-projections to impose the marginal distribution \(\Pi_{0}(x_{0},y_{0})=p(x_{0},y_{0})\) at time \(0\), and \(\Pi_{T}(x_{T},y_{T})=\mathcal{N}(x_{T};0,I)p(y_{T})\) at time \(T\). Convergence of the iterates \((\Pi^{n})_{n\in\mathbb{N}}\) to the SB \(\Pi^{*}\) has been studied under various sets of assumptions (Ruschendorf, 1995; Chen, Georgiou and Pavon, 2016; De Bortoli et al., 2021; Leger, 2021). It can be shown that \(\Pi^{2n+1}=\mathbb{P}_{y_{T}}^{2n+1}\otimes\mathbb{D}\) and \(\Pi^{2n+2}=\mathbb{P}_{y_{0}}^{2n+2}\otimes\mathbb{D}\), where \(\mathbb{P}_{y_{T}}^{2n+1}\) and \(\mathbb{P}_{y_{0}}^{2n+2}\) are path measures on \(C([0,T],\mathcal{X})\). The time-reversal path measure \((\mathbb{P}_{y_{T}}^{2n+1})^{R}\) is induced by (11) while \(\mathbb{P}_{y_{0}}^{2n+2}\) is induced by (12)
\[\mathrm{d}Z_{t}=f_{T-t}^{2n+1}(Z_{t},y_{T})\mathrm{d}t+\mathrm{d }W_{t},\quad Z_{0}\sim\mathcal{N}(0,I), \tag{11}\] \[\mathrm{d}X_{t}=f_{t}^{2n+2}(X_{t},y_{0})\mathrm{d}t+\mathrm{d}B_ {t},\quad X_{0}\sim p(x|y_{0}), \tag{12}\]
where \((B_{t})_{t\in[0,T]}\) and \((W_{t})_{t\in[0,T]}\) are \(d\)-dimensional standard Brownian motions. Hence \((Z_{t})_{t\in[0,T]}\) given by (11) corresponds to a backward process, while \((X_{t})_{t\in[0,T]}\) given by (12) corresponds to a forward process. Sampling from the time-reversal of \(\Pi^{2n+1}=\mathbb{P}_{y_{T}}^{2n+1}\otimes\mathbb{D}\) requires additionally sampling \(Y_{T}\sim p(y)\) to ensure \((Z_{0},Y)\sim\Pi_{T}(z,y)\) while sampling from \(\Pi^{2n+2}=\mathbb{P}_{y_{0}}^{2n+2}\otimes\mathbb{D}\) requires sampling \(Y_{0}\sim p(y)\) to ensure \((X_{0},Y_{0})\sim\Pi_{0}(x,y)\). The above drift functions satisfy the following recursions for \(n\in\mathbb{N}_{0}\):
\[f_{t}^{2n+1}(x_{t},y)=-f_{t}^{2n}(x_{t},y)+\nabla_{x_{t}}\log\Pi_{t }^{2n}(x_{t}|y), \tag{13}\] \[f_{t}^{2n+2}(x_{t},y)=-f_{t}^{2n+1}(x_{t},y)+\nabla_{x_{t}}\log \Pi_{t}^{2n+1}(x_{t}|y), \tag{14}\]
initialized at \(f_{t}^{0}(x_{t})=-x_{t}/2\). At iteration \(n=0\), under the law \(\mathbb{P}_{y}^{1}\), the process (11) is exactly the denoising process (4) with initialization \(Z_{0}\sim\mathcal{N}(0,I)\). Hence we can proceed with DSM to approximate \(\nabla_{x_{t}}\log\Pi_{t}^{0}(x_{t}|y)\). This recovers DDPS of Section 3; iterating further allows us to improve performance when \(T\) is not sufficiently large. The next iteration \(n=1\) then defines a forward process (12) under \(\mathbb{P}_{y}^{2}\) by time-reversing the backward process associated to \(\mathbb{P}_{y}^{1}\) and initializing \((X_{0},Y_{0})\sim p(x,y)\). Another application of DSM approximates the next score function \(\nabla_{x_{t}}\log\Pi_{t}^{1}(x_{t}|y)\) by simulating trajectories under \(\mathbb{P}_{y}^{1}\) using the previous score approximation. One can avoid storing all score approximations by adopting the _mean-matching_ approach in (De Bortoli et al., 2021; Shi et al., 2022). Iterating in this manner until convergence yields a backward process (11) that we refer to as the Diffusion Schrodinger Bridge Posterior Sampler (DSB-PS); see Algorithm 2 for an algorithmic outline, and Shi et al. (2022, Section 4) for more implementation details1. In practice, the drift functions \(f_{t}^{2n+1}\) and \(f_{t}^{2n+2}\) are approximated by \(f_{t}^{\theta,2n+1}\) and \(f_{t}^{\phi,2n+2}\).
Footnote 1: Code for DDPS and DSB-PS is available here.
## 5 Denoising Diffusions for General Sampling
As DDPS and DSB-PS are amortized variational inference procedures, we cannot expect good performance for observations \(y\in\mathcal{Y}\) that are very unlikely under the model distribution \(p(y)\), although this can be partially mitigated by designing mechanisms to sample synthetic observations closer to the available ones (Sharrock et al., 2022). These samplers are also not applicable to general distributions that cannot be written as the posterior distribution of a statistical model and for which we only have access to an unnormalized density instead of samples from the joint model distribution. We now present an algorithm proposed by Vargas, Grathwohl and Doucet (2023) that can deal with these two limitations. Let \(p(x)=\gamma(x)/\mathcal{Z}\) denote a target distribution on \(\mathcal{X}\), where
\(\gamma\) can be evaluated pointwise and the normalizing constant \(\mathcal{Z}=\int_{\mathcal{X}}\gamma(x)\mathrm{d}x\) is intractable.
Like before, we consider an OU process \((X_{t})_{t\in[0,T]}\) (3) with initialization \(X_{0}\sim p(x)\), and denote the induced path measure on \(C([0,T],\mathcal{X})\) as \(\mathbb{P}\). The time-reversal \((Z_{t})_{t\in[0,T]}=(X_{T-t})_{t\in[0,T]}\) satisfies (weakly) the SDE
\[\mathrm{d}Z_{t}=\tfrac{1}{2}Z_{t}\mathrm{d}t+\nabla_{x_{t}}\log p_{T-t}(Z_{t}) \mathrm{d}t+\mathrm{d}W_{t}, \tag{15}\]
with \(Z_{0}\sim p_{T}(x_{T})\), where \(p_{t}(x_{t})=\int_{\mathcal{X}}p_{t|0}(x_{t}|x_{0})p(x_{0})\mathrm{d}x_{0}\) denotes the marginal distribution of \(X_{t}\). In contrast to Section 3, we cannot approximate the score function \(\nabla_{x_{t}}\log p_{t}(x_{t})\) by minimizing a loss similar to (6), as it is infeasible to obtain or generate samples from \(p(x)\). Hence we shall seek an alternative representation of \(\mathbb{P}\). To do so, we introduce a reference path measure \(\mathbb{M}\) induced by a stationary OU process, i.e. (3) with \(X_{0}\sim\mathcal{N}(0,I)\). This process is reversible, and its time-reversal satisfies
\[\mathrm{d}Z_{t}=-\tfrac{1}{2}Z_{t}\mathrm{d}t+\mathrm{d}W_{t},\quad Z_{0}\sim \mathcal{N}(0,I). \tag{16}\]
For \(s<t\), we denote the transition density of (16) as \(m_{t|s}(z_{t}|z_{s})\). Writing \(\mathbb{P}=p\otimes\mathbb{M}_{|0}=\Phi\cdot\mathbb{M}\), where \(\mathbb{M}_{|0}\) denotes the law of \(\mathbb{M}\) conditioned on \(X_{0}\) and \(\Phi(x_{0})=p(x_{0})/\mathcal{N}(x_{0};0,I)\), we express \(\mathbb{P}^{R}\) as a _Doob's \(h\)-transform_(Rogers and Williams, 2000, p. 83) of (16) under \(\mathbb{M}\)
\[\mathrm{d}Z_{t}=-\tfrac{1}{2}Z_{t}\mathrm{d}t+\nabla_{x_{t}}\log h_{T-t}(Z_{t} )\mathrm{d}t+\mathrm{d}W_{t}, \tag{17}\]
with \(Z_{0}\sim p_{T}(x_{T})\) and where we have
\[h_{t}(x_{t})=\int_{\mathcal{X}}\Phi(x_{0})m_{T|T-t}(x_{0}|x_{t})\mathrm{d}x_{0},\]
which can be characterized by a backward Kolmogorov equation and satisfies \(\nabla_{x_{t}}\log h_{t}(x_{t})=\nabla_{x_{t}}\log p_{t}(x_{t})+x_{t}\) from (15). An implementation of (17) will involve setting \(T\) sufficiently large for \(p_{T}\approx\mathcal{N}(0,I)\), and approximating \(\nabla_{x_{t}}\log h_{t}(x_{t})\) for all \((t,x_{t})\in[0,T]\times\mathcal{X}\) with a parametric function class \(\{u^{\theta}_{t}(x_{t}):\theta\in\Theta\}\). We write \(\mathbb{Q}^{\theta}\) for the law on \(C([0,T],\mathcal{X})\) induced by the time-reversal of the diffusion defined by \(Z_{0}\sim\mathcal{N}(0,I)\) and
\[\mathrm{d}Z_{t}=-\tfrac{1}{2}Z_{t}\mathrm{d}t+u^{\theta}_{T-t}(Z_{t})\mathrm{ d}t+\mathrm{d}W_{t}. \tag{18}\]
As we cannot sample from \(\mathbb{P}\), we cannot easily obtain unbiased low-variance estimates of the gradient of the forward KL loss \(\mathrm{KL}(\mathbb{P}|\mathbb{Q}^{\theta})\) w.r.t. \(\theta\). So we instead estimate \(\theta\) by minimizing the reverse KL loss (Vargas, Grathwohl and Doucet, 2023, Proposition 1)
\[\mathcal{L}(\theta) =\mathrm{KL}(\mathbb{Q}^{\theta}|\mathbb{P}) \tag{19}\] \[=\mathbb{E}_{\mathbb{Q}^{\theta}}[\tfrac{1}{2}\int_{0}^{T}\|u^{ \theta}_{T-t}(Z_{t})\|^{2}\mathrm{d}t-\log\Phi(Z_{T})].\]
This is a specific instance of a more general class of KL _optimal control_ problems with many connections to sampling (Kappen, Gomez and Opper, 2012; Kappen and Ruiz, 2016; Heng et al., 2020; Zhang and Chen, 2022). We note that intractability of the normalizing constant \(\mathcal{Z}\) appearing in \(\Phi\) is not an issue as it does not affect the minimizers of \(\mathcal{L}(\theta)\).
With a minimizer \(\theta\) of (19), we have a denoising diffusion general sampler (DDGS, Algorithm 3) that gives approximate samples from \(p(x)\) by simulating (18) and returning \(Z_{T}\), and an unbiased estimator of \(\mathcal{Z}\) using importance sampling with proposal law \(\mathbb{Q}^{\theta}\) and target law \(\mathbb{P}\). An approximate sample from \(p(x)\) and an alternative estimator of \(\mathcal{Z}\) can be obtained using a _probability flow_, i.e. an ordinary differential equation (ODE) that has the same marginal distributions on \(\mathcal{X}\) as \(\mathbb{Q}^{\theta}\); see Song et al. (2021, Section 4.3), Vargas et al. (2023, Section 2.4), and Appendix A. In practice, we have to consider time-discretizations of these continuous-time processes and some care when choosing numerical integrators is necessary (Vargas, Grathwohl and Doucet, 2023, Section 3). Empirical comparisons between DDGS and Sequential Monte Carlo (SMC) methods can be found in Vargas, Grathwohl and Doucet (2023).
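As an illustration of how (19) can be estimated by differentiating through a time-discretized simulation of (18), consider the following sketch; `u_net` and `log_gamma` are user-supplied callables, and the careful integrators discussed above are omitted:

```python
import math
import torch

def ddgs_loss(u_net, log_gamma, batch=128, d=2, T=5.0, n_steps=200):
    """Pathwise estimate of the reverse KL loss (19) along an
    Euler-Maruyama discretization of (18)."""
    dt = T / n_steps
    z = torch.randn(batch, d)                    # Z_0 ~ N(0, I)
    kinetic = torch.zeros(batch)
    for k in range(n_steps):
        t = torch.full((batch, 1), T - k * dt)   # u is evaluated at time T - t
        u = u_net(z, t)
        kinetic = kinetic + 0.5 * (u ** 2).sum(-1) * dt
        z = z + (-0.5 * z + u) * dt + dt ** 0.5 * torch.randn(batch, d)
    log_ref = -0.5 * (z ** 2).sum(-1) - 0.5 * d * math.log(2 * math.pi)
    log_phi = log_gamma(z) - log_ref             # log Phi up to the constant log Z
    return (kinetic - log_phi).mean()
```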
## 6 Diffusion Schrodinger Bridges for General Sampling
Analogous to Section 4, we could also improve DDGS by adopting the following SB formulation over probability measures on \(C([0,T],\mathcal{X})\):
\[\Pi^{\star}=\arg\min_{\Pi}\{\mathrm{KL}(\Pi|\mathbb{M}): \tag{20}\] \[\Pi_{0}(x_{0})=p(x_{0}),\quad\Pi_{T}(x_{T})=\mathcal{N}(x_{T};0, I)\}.\]
We refer readers to Appendix B and De Bortoli et al. (2021, Section 3.1) for connections to entropy-regularized _optimal transport_ problems and a Monge-Kantorovich problem in the zero-noise limit (Mikami, 2004; Leonard, 2012, 2014). The IPF recursion solving (20) is
\[\Pi^{2n+1}= \arg\min_{\Pi}\{\mathrm{KL}(\Pi|\Pi^{2n}):\Pi_{0}(x_{0})=p(x_{0})\},\] \[\Pi^{2n+2}= \arg\min_{\Pi}\{\mathrm{KL}(\Pi|\Pi^{2n+1}):\Pi_{T}(x_{T})=\pi _{T}(x_{T})\},\]
for \(n\in\mathbb{N}_{0}\), with \(\pi_{T}(x_{T})=\mathcal{N}(x_{T};0,I)\) and \(\Pi^{0}=\mathbb{M}\). The path measure \(\Pi^{2n+1}\) is induced by the forward process (21) while \((\Pi^{2n+2})^{R}\) is induced by the backward process (22)
\[\mathrm{d}X_{t}=f_{t}^{2n+1}(X_{t})\mathrm{d}t+\mathrm{d}B_{t},\quad X_{0}\sim p(x), \tag{21}\] \[\mathrm{d}Z_{t}=f_{T-t}^{2n+2}(Z_{t})\mathrm{d}t+\mathrm{d}W_{t},\quad Z_{0}\sim\mathcal{N}(0,I), \tag{22}\]
with drift functions
\[f_{t}^{2n+1}(x_{t})=-f_{t}^{2n}(x_{t})+\nabla_{x_{t}}\log\Pi_{t}^{2n}(x_{t}), \tag{23}\] \[f_{t}^{2n+2}(x_{t})=f_{t}^{2n}(x_{t})+\nabla_{x_{t}}\log h_{t}^{2n}(x_{t}), \tag{24}\]
initialized at \(f_{t}^{0}(x_{t})=-x_{t}/2\) and \(\Pi_{t}^{0}(x_{t})=\mathcal{N}(x_{t};0,I)\). Here
\[h_{t}^{2n}(x_{t})=\int_{\mathcal{X}}\Phi^{2n}(x_{0})q_{T|T-t}^{2n}(x_{0}|x_{t} )\mathrm{d}x_{0}, \tag{25}\]
where \(\Phi^{2n}(x_{0})=p(x_{0})/\Pi_{0}^{2n}(x_{0})\) and \(q_{t|s}^{2n}(z_{t}|z_{s})\) for \(s<t\) denotes the transition density of \((Z_{t})_{t\in[0,T]}\) under \((\Pi^{2n})^{R}\).
Since the first two iterates are \(\Pi^{1}=\mathbb{P}\) defined in Section 5 and \(\Pi^{2}=\mathcal{N}(0,I)\otimes\mathbb{P}_{|T}\), we recover DDGS of Section 5 by noticing that \(\nabla_{x_{t}}\log h_{t}^{0}(x_{t})=\nabla_{x_{t}}\log h_{t}(x_{t})\). As \(p_{T}\to\mathcal{N}(0,I)\) when \(T\to+\infty\), DDGS provides an approximation of the SB when the time horizon \(T\) is sufficiently large. When the latter is not the case, further iterations can improve performance. For \(n\geq 1\), \(\Pi^{2n+1}=p\otimes\Pi_{0}^{2n}\) is the law of a forward process (21) with a drift function (23) that involves \(\nabla_{x_{t}}\log\Pi_{t}^{2n}(x_{t})\). Simulating from our approximation of \(\Pi^{2n}\), DSM allows us to construct an approximation \(s_{t}^{\phi,2n}(x_{t})\) of this score function. Next by rewriting \(\Pi^{2n+2}=\mathcal{N}(0,I)\otimes\Pi_{|T}^{2n+1}=(h_{0}^{2n}/h_{T}^{2n}) \cdot\Pi^{2n}\), we see that its reversal is the law of a backward process (22) that is given in terms of a Doob's \(h\)-transform of the backward process associated to \((\Pi^{2n})^{R}\); see Appendix C. To approximate \(\nabla_{x_{t}}\log h_{t}^{2n}(x_{t})\) with \(u_{t}^{\theta,2n}(x_{t})\), we consider another reverse KL minimization
\[\mathcal{L}^{2n+2}(\theta) =\mathrm{KL}(\mathbb{Q}^{\theta}|\Pi^{2n+1}) \tag{26}\] \[=\mathbb{E}_{\mathbb{Q}^{\theta}}[\tfrac{1}{2}\int_{0}^{T}\|u_{T -t}^{\theta,2n}(Z_{t})\|^{2}\mathrm{d}t-\log\Phi^{2n}(Z_{T})],\]
where \((\mathbb{Q}^{\theta})^{R}\) is induced by the diffusion defined by \(Z_{0}\sim\mathcal{N}(0,I)\) and
\[\mathrm{d}Z_{t}=f_{T-t}^{2n}(Z_{t})\mathrm{d}t+u_{T-t}^{\theta,2n}(Z_{t}) \mathrm{d}t+\mathrm{d}W_{t}. \tag{27}\]
In an implementation of (26) and (27), one has to replace \(f_{t}^{2n}(x_{t})\) with the drift approximation from the previous KL minimizations, and approximate \(\Pi_{0}^{2n}(x_{0})\) in \(\Phi^{2n}(x_{0})=p(x_{0})/\Pi_{0}^{2n}(x_{0})\) by numerically integrating the probability flow ODE using the score approximation \(s_{t}^{\phi,2n}(x_{t})\). One should iterate the above steps until convergence and avoid storing all approximations in memory. The final backward process (22) yields the diffusion Schrodinger bridge general sampler (DSB-GS), outlined in Algorithm 4. During the training part of Algorithm 4, we can skip the approximation of \(\nabla\log\Pi_{t}^{2n}\) when \(n=0\), since in that case \(\Pi_{t}^{0}=\mathcal{N}(0,I)\). Doing so, we see that the first iteration of Algorithm 4 recovers Algorithm 3.
An alternative Schrodinger bridge formulation for sampling was proposed in (Follmer, 1984; Dai Pra, 1991; Tzen and Raginsky, 2019). This corresponds to (20) with the reference measure \(\mathbb{M}\) defined using a pinned Brownian motion running backwards in time, and the terminal distribution \(\Pi_{T}(x_{T})\) given by a Dirac measure at the origin. The appealing feature of this simpler SB is that IPF converges in two iterations; various numerical approximations have been developed for this case (Barr, Gispen and Lamacraft, 2020; Zhang, Sahai and Marzouk, 2021; Zhang and Chen, 2022; Vargas et al., 2023). However, it was observed in Vargas, Grathwohl and Doucet (2023) that the resulting scores one needs to estimate are very steep for times close to \(T\) due to the degenerate terminal distribution. As a result, this approach can be numerically quite unstable.
## 7 Discussion
We have presented a concise overview of how diffusion processes can be used to sample approximately from posterior distributions and any general target distributions. These methods have been successfully used to solve a wide class of sampling problems, and are alternatives to standard MCMC, SMC, and ABC techniques. However, compared to these well-established methods, while there are convergence results available for such diffusion-based samplers (De Bortoli, 2022; Chen et al., 2023), these results are based on assumptions about
the score function estimation error which remain difficult to verify. Promising methodological alternatives to diffusion approaches have also been recently proposed where one constructs a process between two distributions one can sample from, but this process is built using an ordinary differential equation whose drift function can be approximated by solving a simple regression problem (Albergo and Vanden-Eijnden, 2023; Lipman et al., 2023; Liu, Gong and Liu, 2023). The techniques developed in these works have been further extended to compute the SB (Shi et al., 2023; Peluchetti, 2023) and provide an alternative to DSB-PS.
## Appendix A Probability flow ODE
Consider the following SDE on \(\mathcal{X}\)
\[\mathrm{d}X_{t}=f_{t}(X_{t})\mathrm{d}t+\mathrm{d}B_{t},\quad X_{0}\sim p_{0} (x), \tag{28}\]
where \((B_{t})_{t\in[0,T]}\) is a standard \(d\)-dimensional Brownian motion. The marginal density \(p_{t}\) of \(X_{t}\) satisfies the Fokker-Planck-Kolmogorov equation
\[\partial_{t}p_{t}(x) =-\mathrm{div}(f_{t}p_{t})(x)+\tfrac{1}{2}\Delta p_{t}(x) \tag{29}\] \[=-\mathrm{div}(\bar{f}_{t}p_{t})(x) \tag{30}\]
where \(\mathrm{div}(u)(x):=\sum_{i=1}^{d}\frac{\partial u_{i}(x)}{\partial x_{i}}\) for differentiable \(u:\mathcal{X}\to\mathcal{X}\), \(\Delta a(x):=\sum_{i=1}^{d}\frac{\partial^{2}a(x)}{\partial x_{i}^{2}}\) for twice differentiable \(a:\mathcal{X}\to\mathbb{R}\), and
\[\bar{f}_{t}(x)=f_{t}(x)-\tfrac{1}{2}\nabla\log p_{t}(x). \tag{31}\]
Equation (30) shows that the ODE
\[\mathrm{d}X_{t}=\bar{f}_{t}(X_{t})\mathrm{d}t,\quad X_{0}\sim p_{0}(x), \tag{32}\]
known as the _probability flow_ ODE, is such that \(X_{t}\sim p_{t}(x)\) for all \(t\in[0,T]\), i.e. it admits the same marginal distributions as the SDE in (28). By using the instantaneous change of variables formula (Chen et al., 2018), we obtain
\[\log p_{0}(x_{0})=\log p_{T}(x_{T})+\int_{0}^{T}\mathrm{div}(\bar{f}_{t})(x_{ t})\mathrm{d}t, \tag{33}\]
for \((x_{t})_{t\in[0,T]}\) obtained by solving (32) initialized at \(X_{0}=x_{0}\).
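As a concrete sanity check (our own illustrative example, not taken from the references), consider the Ornstein-Uhlenbeck SDE \(\mathrm{d}X_{t}=-\tfrac{1}{2}X_{t}\mathrm{d}t+\mathrm{d}B_{t}\) with \(X_{0}\sim\mathcal{N}(0,\sigma_{0}^{2}I)\). Then \(p_{t}=\mathcal{N}(0,\sigma_{t}^{2}I)\) with \(\sigma_{t}^{2}=\sigma_{0}^{2}e^{-t}+1-e^{-t}\) and \(\nabla\log p_{t}(x)=-x/\sigma_{t}^{2}\), so (31) gives

\[\bar{f}_{t}(x)=-\tfrac{1}{2}x+\tfrac{1}{2}\frac{x}{\sigma_{t}^{2}}=\tfrac{1}{2}\Big{(}\frac{1}{\sigma_{t}^{2}}-1\Big{)}x.\]

Since \(\mathrm{d}\log\sigma_{t}/\mathrm{d}t=\tfrac{1}{2}(1/\sigma_{t}^{2}-1)\), the flow (32) is the rescaling \(x_{t}=x_{0}\,\sigma_{t}/\sigma_{0}\), which indeed transports \(\mathcal{N}(0,\sigma_{0}^{2}I)\) to \(\mathcal{N}(0,\sigma_{t}^{2}I)\); in the stationary case \(\sigma_{0}^{2}=1\), the drift \(\bar{f}_{t}\) vanishes identically.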
## Appendix B Schrödinger bridge as entropy-regularized optimal transport
Consider the following generic SB problem over probability measures on \(C([0,T],\mathcal{X})\):
\[\Pi^{\star}=\arg\min_{\Pi}\{\mathrm{KL}(\Pi|\mathbb{S}): \tag{34}\] \[\Pi_{0}(x_{0})=\nu_{0}(x_{0}),\quad\Pi_{T}(x_{T})=\nu_{T}(x_{T})\},\]
where \(\mathbb{S}\) is a generic path measure on \(C([0,T],\mathcal{X})\). We note that the dynamic formulation (34) admits a static analogue. Let \(s_{0,T}\) denote the marginal distribution of \((X_{0},X_{T})\) under \(\mathbb{S}\), and \(\mathbb{S}_{|0,T}\) denote the law \(\mathbb{S}\) conditioned on \((X_{0},X_{T})\), i.e. we have \(\mathbb{S}=s_{0,T}\otimes\mathbb{S}_{|0,T}\). By decomposing the KL divergence in (34), we see that \(\Pi^{\star}=\pi_{0,T}^{\star}\otimes\mathbb{S}_{|0,T}\), where the static SB \(\pi_{0,T}^{\star}\) is defined by minimizing over probability measures on \(\mathcal{X}\times\mathcal{X}\) with the same marginal constraints
\[\pi_{0,T}^{\star}=\arg\min_{\pi_{0,T}}\{\mathrm{KL}(\pi_{0,T}|s_{0,T}): \tag{35}\] \[\pi_{0}(x_{0})=\nu_{0}(x_{0}),\quad\pi_{T}(x_{T})=\nu_{T}(x_{T})\}.\]
We can rewrite
\[\mathrm{KL}(\pi_{0,T}|s_{0,T})=-\mathbb{E}_{\pi_{0,T}}[\log s_{T|0}(X_{T}|X_{0 })]-\text{H}(\pi_{0,T}),\]
where \(s_{T|0}(x_{T}|x_{0})\) denotes the transition density under \(\mathbb{S}\) and \(\text{H}(\pi_{0,T})=-\mathbb{E}_{\pi_{0,T}}[\log\pi_{0,T}(X_{0},X_{T})]\) is an entropy term. In particular, if \(\mathbb{S}\) is the law of a standard Brownian motion on \(\mathcal{X}\), then \(s_{T|0}(x_{T}|x_{0})=\mathcal{N}(x_{T};x_{0},TI)\), so \(-\log s_{T|0}(x_{T}|x_{0})=\|x_{T}-x_{0}\|^{2}/(2T)\) up to an additive constant, and we have
\[\pi_{0,T}^{\star}=\arg\min_{\pi_{0,T}}\{\mathbb{E}_{\pi_{0,T}}[|| X_{T}-X_{0}||^{2}]-2T\text{H}(\pi_{0,T}):\] \[\pi_{0}(x_{0})=\nu_{0}(x_{0}),\quad\pi_{T}(x_{T})=\nu_{T}(x_{T})\}.\]
The above can be seen as an entropy-regularized _optimal transport_ problem that converges to a Monge-Kantorovich problem as \(T\to 0\) (Mikami, 2004; Léonard, 2012, 2014).
## Appendix C Iterative Proportional Fitting as Doob's \(h\)-transforms
Recall from Section 6 that the IPF iterates satisfy
\[\Pi^{2n+1}=p\otimes\Pi^{2n}_{|0},\quad\Pi^{2n+2}=\mathcal{N}(0,I)\otimes\Pi^{2n +1}_{|T} \tag{36}\]
for \(n\in\mathbb{N}_{0}\), which implies
\[\Pi^{2n+2}=\frac{p(x_{0})\mathcal{N}(x_{T};0,I)}{\Pi^{2n}_{0}(x_{0})\Pi^{2n+1} _{T}(x_{T})}\Pi^{2n}. \tag{37}\]
From the definition of \(h^{2n}_{t}(x_{t})\) in (25), we note that
\[h^{2n}_{0}(x_{0})=\Phi^{2n}(x_{0})=p(x_{0})/\Pi^{2n}_{0}(x_{0}), \tag{38}\]
and
\[h^{2n}_{T}(x_{T})=\int_{\mathcal{X}}\Phi^{2n}(x_{0})q^{2n}_{T|0}(x_{0}|x_{T}) \mathrm{d}x_{0}. \tag{39}\]
Denoting the conditional distribution of \(X_{T}\) given \(X_{0}\) under \(\Pi^{2n}\) as \(\pi^{2n}(x_{T}|x_{0})\), we have
\[\Pi^{2n+1}_{T}(x_{T}) =\int_{\mathcal{X}}p(x_{0})\pi^{2n}(x_{T}|x_{0})\mathrm{d}x_{0} \tag{40}\] \[=\mathcal{N}(x_{T};0,I)\int_{\mathcal{X}}\Phi^{2n}(x_{0})q^{2n}_ {T|0}(x_{0}|x_{T})\mathrm{d}x_{0},\]
which gives
\[\frac{\mathcal{N}(x_{T};0,I)}{\Pi^{2n+1}_{T}(x_{T})}=h^{2n}_{T}(x_{T})^{-1}. \tag{41}\]
Combining (37), (38), and (41) gives
\[\Pi^{2n+2}=\frac{h^{2n}_{0}(x_{0})}{h^{2n}_{T}(x_{T})}\Pi^{2n}. \tag{42}\]
## Funding
A.D. is partially supported by EPSRC grants EP/R034710/1 (CoSinES) and EP/R018561/1 (Bayes4Health). J.H. was funded by CY Initiative of Excellence (grant "Investissements d'Avenir" ANR-16-IDEX-0008).
|
2303.16463 | Remote attestation of SEV-SNP confidential VMs using e-vTPMs | Trying to address the security challenges of a cloud-centric software
deployment paradigm, silicon and cloud vendors are introducing confidential
computing - an umbrella term aimed at providing hardware and software
mechanisms for protecting cloud workloads from the cloud provider and its
software stack. Today, Intel SGX, AMD SEV, Intel TDX, etc., provide a way to
shield cloud applications from the cloud provider through encryption of the
application's memory below the hardware boundary of the CPU, hence requiring
trust only in the CPU vendor. Unfortunately, existing hardware mechanisms do
not automatically enable the guarantee that a protected system was not tampered
with during configuration and boot time. Such a guarantee relies on a hardware
RoT, i.e., an integrity-protected location that can store measurements in a
trustworthy manner, extend them, and authenticate the measurement logs to the
user.
In this work, we design and implement a virtual TPM that virtualizes the
hardware RoT without requiring trust in the cloud provider. To ensure the
security of a vTPM in a provider-controlled environment, we leverage unique
isolation properties of the SEV-SNP hardware that allows us to execute secure
services as part of the enclave environment protected from the cloud provider.
We further develop a novel approach to vTPM state management where the vTPM
state is not preserved across reboots. Specifically, we develop a stateless
ephemeral vTPM that supports remote attestation without any persistent state on
the host. This allows us to pair each confidential VM with a private instance
of a vTPM completely isolated from the provider-controlled environment and
other VMs. We built our prototype entirely on open-source components. Though
our work is AMD-specific, a similar approach could be used to build remote
attestation protocols on other trusted execution environments. | Vikram Narayanan, Claudio Carvalho, Angelo Ruocco, Gheorghe Almási, James Bottomley, Mengmei Ye, Tobin Feldman-Fitzthum, Daniele Buono, Hubertus Franke, Anton Burtsev | 2023-03-29T05:33:11Z | http://arxiv.org/abs/2303.16463v2 | # Remote attestation of SEV-SNP confidential VMs using e-vTPMs
###### Abstract
Departing from the _your data is safe with us_ model, where the cloud infrastructure is trusted, cloud tenants are shifting towards a model in which the cloud provider is not part of the trust domain. Both silicon and cloud vendors are trying to address this shift by introducing _confidential computing_ - an umbrella term for mechanisms that protect data in use through encryption below the hardware boundary of the CPU, e.g., Intel Software Guard Extensions (SGX), AMD secure encrypted virtualization (SEV), Intel trust domain extensions (TDX), etc.
In this work, we design and implement a virtual trusted platform module (vTPM) that virtualizes the hardware root-of-trust without requiring trust in the cloud provider. To ensure the security of a vTPM in a provider-controlled environment, we leverage unique isolation properties of the SEV-SNP hardware and a novel approach to ephemeral TPM state management. Specifically, we develop a stateless _ephemeral_ vTPM that supports remote attestation without persistent state. This allows us to pair each confidential VM with a private instance of a vTPM that is completely isolated from the provider-controlled environment and other VMs. We built our prototype entirely on open-source components - Qemu, Linux, and Keylime. Though our work is AMD-specific, a similar approach could be used to build remote attestation protocols on other trusted execution environments (TEEs).
## 1 Introduction
Over the last two decades, public clouds have become de facto standard execution environments for deploying a broad range of modern software. The move to the cloud, however, created a unique security challenge - cloud tenants no longer own or control the environment in which their software is deployed. Tenants are required to trust not only the provider itself, but also a complex software stack virtualizing the physical hardware across multiple users. In the last decade, the three most widely deployed virtual machine monitors (VMMs) - Xen, KVM, and VMware - suffered from 428 [30], 111 [11], and 154 [28] vulnerabilities, respectively. Moreover, physical access to the system opens the door for a range of hardware attacks, e.g., memory-extraction attacks such as cold boot [76] and RAMBleed [53, 70].
Recent CPU architectures have introduced support for hardware-protected trusted execution environments (TEEs) as a way to minimize the trusted computing base (TCB) of a cloud application [41, 48, 2, 8, 9]. By using a TEE, the cloud provider infrastructure (including the hypervisor) is removed from the TCB, since the application is isolated from it.
One such TEE solution is SEV-SNP, a variant of the AMD secure encrypted virtualization (SEV) technology available in the EPYC 7003 series of processors [4]. SEV-SNP launches a guest virtual machine in a hardware context that is isolated from the rest of the system by means of memory encryption. Hence, even an attacker who has full access to both hardware and privileged cloud software (i.e., platform firmware, hypervisor and operating system) cannot access the encrypted state of the protected guest VM (also referred to as a confidential VM).
Thus, hardware memory encryption provides confidentiality of the application's code and data. Unfortunately, confidentiality without integrity does not provide strong security guarantees. A range of attacks allow a malicious entity on the host to attack the execution of the confidential VM by modifying the code, data, or configuration parameters during the boot sequence or later, as long as the attacker possesses administrative privileges (e.g., enabling debug flags that could leak secrets, loading malicious kernel extensions, downgrading security-critical subsystems to vulnerable versions, etc.).
To ensure integrity, i.e., the guarantee that the system is not tampered with, modern systems rely on a combination of _measured boot_ [77, 61] and _runtime attestation_ [38, 47]. Measured boot consists of measuring all boot-time binaries, i.e., platform firmware, bootloader(s), and the operating system. Runtime attestation combines measured boot with the integrity measurement architecture (IMA), which applies the same principle - measuring all executables loaded into memory - after the system has booted. Attestation works by comparing entries in the measured boot and IMA logs with a pre-defined set of acceptable values (called an _attestation policy_) and exposing any measurements that do not conform to policy expectations.
Support for attestation requires a _root-of-trust device_, i.e., an integrity-protected location that can store measurements in a trustworthy manner, extend them, and authenticate the measurement logs to the user (remote attestation). This device is typically a trusted platform module (TPM) chip. TEE technologies typically offer mechanisms to support _pre-attestation_, i.e., measuring all the memory initialized in the confidential VM before boot, but lack mechanisms that allow the software stack to extend and log subsequent measurements.
Thus, to perform continuous runtime attestation and integrity monitoring of a confidential VM, the VM needs a different root-of-trust. Given the existing software stack for TPMs, it is natural to think of a virtual TPM (vTPM). Providing vTPMs with strong security guarantees in an environment in which neither the guest system nor the provider is trusted is challenging. Several design constraints are critical for the security of the system:
* **Isolation**: To ensure security of an encrypted VM, the vTPM must be protected from the host controlled by the provider (an obvious TEE requirement). In addition, the vTPM also needs isolation from the guest operating system, since it acts as a root-of-trust device for attestation. Existing vTPM designs trust the host OS and/or the hypervisor to run a vTPM manager and vTPM instances [1, 6, 13, 26], isolating the vTPM from the guest but leaving it open to attack from the provider side.
* **Secure communication**: In a typical vTPM, the TPM commands and responses are transmitted through the untrusted hypervisor [32, 34, 49, 57, 66]. An attacker can interpose on the channel and alter the request or response, defeating the security guarantees offered by a TPM [36]. To provide confidentiality and integrity of messages exchanged between the VM and the vTPM in an untrusted host environment, we require a secure communication channel.
* **Persistent state**: A physical TPM guards its internal state inside a mechanical chip wrapper and as such, the TPM state cannot be easily exfiltrated. By contrast, a vTPM instance requires the injection of its state every time it is created. The persistent state needs to be properly secured both at rest and while in use. At-rest protection can be achieved by encrypting the persistent state file. In-use protection is more difficult, since it requires both protecting the vTPM memory from improper access, and ensuring that the state cannot be exfiltrated with legitimate operations coming from a malicious actor.
In this work, we propose SVSM-vTPM, a new vTPM architecture that solves the above security challenges. Our work leverages unique architectural properties of the AMD SEV-SNP execution environment; however, we will discuss how to generalize this solution at the end of the paper. Specifically, we rely on the VM privilege level (VMPL) feature of SEV-SNP, exposed through the secure VM service module (SVSM) specification, to isolate the vTPM from both the host and the guest system. With VMPLs, we implement a vTPM running inside a privilege-isolated memory region inside the guest VM address space. Since the vTPM is still running inside the confidential VM, it is also isolated from the host. Additionally, being inside the confidential VM allows us to automatically protect the request and response messages to and from the TPM by leveraging the confidential VM memory encryption. This allows us, by construction, to eliminate the need for establishing a secure communication channel between the VM and the vTPM.
Finally, we address the problems of persistent vTPM state by proposing a stateless, ephemeral vTPM. Specifically, we pair each confidential VM with a private instance of a vTPM that manufactures fresh state and keys every time it boots. This means that there is no shared state managed by a TPM manager and no persistent vTPM state to protect. The vTPM state lives entirely within the encrypted memory of the virtual machine, with no mechanism for exporting it, allowing us to address a whole class of attacks based on exfiltration of vTPM state. The cost of this ephemeral design is that the TPM itself can retain no persistent keys or non-volatile indexes. While this makes our vTPM implementation not fully compliant with the TPM specification, we note that persistent state is not required for the attestation and integrity monitoring procedures we describe in this paper. Our contributions are as follows:
* We propose the use of an ephemeral vTPM to remove attacks to the vTPM state.
* We are the first to leverage the new features of AMD SEV to provide a secure implementation of a vTPM.
* We demonstrate a full remote attestation workflow for our SVSM-vTPM solution, implicitly proving that remote attestation frameworks can provide measured boot and remote attestation with an ephemeral vTPM.
## 2 Background
### AMD secure encrypted virtualization
In 2016, AMD introduced secure encrypted virtualization (SEV), in which the entire virtual machine is encrypted with an ephemeral key that is managed by a dedicated co-processor, the AMD secure processor (AMD-SP). The AMD-SP takes care of lifecycle management of the SEV VMs [31] and serves as the integrated root-of-trust for the AMD processor [55]. By using a unique key per VM, SEV isolates the guest VMs from the rest of the host operating system and from other guests. Since 2016, AMD has incrementally added protection features to SEV. AMD introduced SEV-ES (SEV encrypted state) to protect the register state in the virtual machine control block (VMCB) with encryption and integrity protection [17]. To communicate and share data with the hypervisor during hypercalls, a new structure called the guest hypervisor communication block (GHCB) was introduced [21] that remains unencrypted. In their next version, SEV-SNP (secure nested paging), AMD introduced a reverse mapping table (RMP) which performs page validation and keeps track of page ownership to prevent replay attacks [4].
**Virtual machine privilege levels.** AMD also introduced virtual machine privilege levels (VMPLs) in SEV-SNP. Similar to protection rings in the x86 architecture, VMPLs allow a guest VM address space to be subdivided into four levels with different privileges (with VMPL0 being the highest privilege level). These levels can be used to implement privilege-isolated abstraction layers within a confidential guest virtual machine [4].
The introduction of VMPLs allows the design and deployment of secure services that are completely isolated from the untrusted host operating system and the guest VM. The secure VM service module (SVSM) specification [19] defines a standard interface for communication between the services offered by the software running at VMPL0 and the guest operating system. The protocol uses registers to pass the arguments and return values, as sketched below. In the absence of SVSM firmware, the entire guest VM can execute under VMPL0 unmodified. With SVSM, however, the guest runs at a lower privilege level, corresponding to a higher VMPL (i.e., 1-3), and requires interaction with the SVSM for some privileged operations.
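To make the register-based calling convention concrete, the following is a heavily simplified sketch of how a guest might issue an SVSM call. Per the SVSM specification, the protocol number travels in the upper 32 bits of RAX and the call number in the lower 32 bits, with the result returned in RAX; the argument register used below, and the omission of the SVSM calling area and GHCB setup, are simplifications on our part.

```rust
use core::arch::asm;

// Issue an SVSM protocol call from the guest OS (running above VMPL0).
// RAX = (protocol << 32) | call_id on entry; result code in RAX on exit.
#[cfg(target_arch = "x86_64")]
unsafe fn svsm_call(protocol: u32, call_id: u32, arg: u64) -> u64 {
    let mut rax: u64 = ((protocol as u64) << 32) | (call_id as u64);
    let mut rcx: u64 = arg; // illustrative argument register
    asm!(
        // VMGEXIT (encoded as REP VMMCALL): exit to the host, which then
        // resumes the vCPU at VMPL0, where the SVSM serves the request.
        ".byte 0xf3, 0x0f, 0x01, 0xd9",
        inout("rax") rax,
        inout("rcx") rcx,
        options(nostack),
    );
    let _ = rcx;
    rax // 0 indicates success
}
```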
### Virtual trusted platform module (vTPM)
A vTPM is a pure software implementation of a TPM module as defined by the TPM 2.0 specification [24]. A vTPM enables virtualization of a hardware root of trust across multiple entities, i.e., virtual machines, and aims to provide functionality identical to a hardware TPM. Berger et al. [34] proposed the first design for virtualizing a TPM that can be used for providing TPM functionalities to virtual machines. Their design consists of a vTPM manager and a set of vTPM instances, where the vTPM manager executes as part of the VMM and takes care of multiplexing the physical hardware across multiple VMs. Berger et al. extend the TPM command specification to include support for creating virtual instances and rely on a hardware TPM for establishing trust.
Stumpf et al. [68] proposed a virtual TPM design by applying hardware virtualization techniques from Intel VT-x technology. Their multi-context TPM contains different modes of execution and has a dedicated TPM control structure (TPMCS) for every VM, which would be loaded by the VMM before invoking the TPM commands. Several vTPM architectures were proposed over the years: from a generalized vTPM [66] to separating vTPM functionalities across Xen domains with different privileges [49, 32, 57]. However, they were either placing trust in the host environment (VMM, host OS) or relying on a hardware TPM for establishing trust. Unfortunately, none of those designs satisfies the security and confidentiality requirements of confidential computing. Recent vTPM designs move their implementation inside a TEE such as Intel SGX [69, 74, 75, 62]. Though this design offers protection from the cloud provider, the state of the TPM must be securely stored and protected against rollback attacks.
### Runtime integrity
Modern kernels deploy a number of security mechanisms to prevent runtime exploitation of low-level vulnerabilities [37], e.g., stack canaries [42], address space randomization (ASLR) [67], data execution prevention (DEP) [73], superuser-mode execution and access prevention [39, 43], and even control-flow [40] and code-pointer integrity [52]. These mechanisms, however, are not designed to stop an attacker who has control over the boot sequence of a system, e.g., an untrusted cloud provider, or who gains administrative privileges and can load malicious kernel extensions or downgrade security-critical subsystems to exploitable versions. To prevent such attacks, modern systems rely on a combination of measured boot and runtime integrity monitoring.
Measured boot is the process of recording the measurements of all boot components during the system initialization process. The hashes of the components are recorded in a log file that is authenticated using the trusted platform module (TPM). This authentication works by extending TPM platform configuration registers (PCRs) with digests of individual events in the boot log. A TPM-signed _quote_ can be used to vouch for the accuracy of the log.
The integrity measurement architecture (IMA) is a Linux subsystem that collects the hashes of files when they are opened, before they are read or executed [65]. To ensure the integrity of these measurements, they are extended into the TPM PCRs, similar to the measured boot log. Together with measured boot, IMA enables remote attestation to ensure the runtime integrity of the system.
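Both measured boot and IMA rest on the same extend-and-replay rule: \(PCR_{new}=H(PCR_{old}\,\|\,digest)\). The following is a minimal sketch of that fold (our own illustration, assuming the Rust `sha2` crate); a verifier replays a log this way and compares the result against the quoted PCR.

```rust
use sha2::{Digest, Sha256};

// One PCR-extend step: PCR_new = SHA-256(PCR_old || event_digest).
fn pcr_extend(pcr: &[u8; 32], event_digest: &[u8; 32]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(pcr);          // old PCR value first
    h.update(event_digest); // then the digest of the measured event
    h.finalize().into()
}

fn main() {
    // Replay a (toy) boot log from the all-zero reset value; the result
    // must match the PCR value reported in the TPM quote.
    let mut pcr = [0u8; 32];
    let events = [b"firmware".as_slice(), b"bootloader".as_slice(), b"kernel".as_slice()];
    for event in events {
        let d: [u8; 32] = Sha256::digest(event).into();
        pcr = pcr_extend(&pcr, &d);
    }
    println!("replayed PCR = {:02x?}", pcr);
}
```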
Remote attestation is a process that proves or ascertains properties of a set of devices or machines to an outside observer. For example, one might want to ascertain which kernel was booted on a set of machines in a data center. These properties of interest are cumulatively called an attestation policy.
## 3 Threat model
We assume that an attacker has physical access to the machine and unrestricted privileges on the software and firmware executing on the host machine, i.e., firmware, hypervisor and virtualization stack, and the host operating system. However, the memory of the confidential VM is protected by the AMD SEV technology, i.e., encrypted with a key only known to the AMD secure processor (AMD-SP). We trust the AMD
hardware and the implementation of SEV-SNP and SVSM.
Ciphertext side-channel attacks [54, 56] on the SEV encrypted VM (by building a dictionary of plaintext-ciphertext pairs) are out of scope. Attacks against the integrity measurement architecture (IMA) such as TOCTOU [35], and other measurement gaps such as code injected via the extended Berkeley packet filter (eBPF), are out of scope. Also, runtime attacks exploiting stack or heap overflows, such as return-oriented programming, on the guest VM are out of scope, as IMA measures only persistent files.
## 4 TPM virtualization with SVSM
SVSM-vTPM is a secure virtual TPM designed to enable remote attestation and runtime integrity measurement in a provider-controlled confidential computing environment backed by AMD SEV hardware. Specifically, we do not trust any software on the host machine. To achieve strong isolation from the host, we leverage unique capabilities of the AMD SEV environment and execute a virtual TPM instance along with the guest system inside a hardware-protected TEE enclave (Figure 1). The entire SEV-SNP confidential VM memory is encrypted by the AMD-SP. SVSM-vTPM runs inside VM privilege level 0 (VMPL0), which allows us to both isolate it from the rest of the guest system and provide secure communication between the guest and the TPM. Specifically, we load a minimal bare-metal execution environment into VMPL0 when a new confidential VM is created. Finally, with the novel idea of an ephemeral TPM, we completely eliminate the burden of TPM state management - preserving the state and injecting it into the correct confidential VM during boot-up - and prevent a whole class of attacks based on exfiltration of the TPM state: our TPM instances have no persistent state to save or guard.
### Isolation
As the vTPM offers a virtual root-of-trust for the virtual machine, it has to be hosted in an environment that provides strong isolation of its state and is designed to minimize the attack surface for a potential attacker. Arguably, two design flaws undermine the suitability of existing vTPMs for a confidential computing environment.
First, until recently, the cloud provider was a de facto part of the trust domain. vTPMs were often managed and implemented as a component inside the hypervisor [34] or as a part of the virtualization stack [32, 49, 57]. To reduce the attack surface on the component hosting the vTPM, several alternative vTPM architectures were proposed. Triglav vTPM utilized dynamic root of trust (DRTM) as a mechanism to ensure the integrity of the hypervisor [60]. Another vTPM solution utilized x86 system management mode (SMM) for isolation and protection of the TPM [58]. Though such designs offer some form of protection against a non-malicious cloud environment, they do not satisfy the requirements of confidential computing where the entire host environment is untrusted. Recent TEE-based vTPMs run the vTPM manager and several instances in a hardware isolated TEE such as SGX [69, 74, 75], AMD SEV confidential VM [62] or in ARM Trustzone [63].
Second, historically, virtualization of the TPM relied on a centralized architecture. The core part of the vTPM, a vTPM manager, responsible for instantiating a TPM, multiplexing the communication between multiple VMs and vTPMs, and saving the TPM state in a secure location, was shared across all vTPM instances [32, 34, 49, 57, 62]. As the manager handles the lifecycle of all vTPMs on a machine and has access to the physical TPM hardware, it naturally becomes a central point of attack. A malicious VM can launch attacks ranging from simple denial-of-service to sophisticated attacks trying to exfiltrate secrets by exploiting vulnerabilities in the centralized software component of the vTPM manager. If exploited, the security of all the vTPMs handled by the manager is compromised.
**Private, isolated TPMs.** Instead of relying on a central vTPM manager that manages several instances of vTPM in an untrusted environment, we base our design on two insights. First, to provide strong isolation of the vTPM code, we leverage the architectural support offered by AMD SEV. Second, to avoid centralized management, we rely on the SVSM specification that offers a way to implement secure services inside the guest VM.
Specifically, to ensure isolation, we leverage the VM privilege levels inside the confidential VM address space, provided by the SVSM specification as part of the SEV-SNP architecture. In our architecture, every confidential VM has its own private vTPM that runs at a higher privilege level (i.e., VMPL0), is encrypted by the AMD-SP, and enjoys the same isolation guarantees as the rest of the encrypted VM.

Figure 1: SVSM-vTPM architecture and its components
By running our vTPM within an isolated privilege level inside the guest address space, we eliminate all the attacks that could be mounted on the component that runs the vTPM. Additionally, operating at VMPL0 ensures that the vTPM cannot be interfered with by the guest or the host OS.
We use the Qemu/KVM environment for running the confidential VM. Figure 2 shows how a confidential VM is launched. A user provides the boot-time binaries (typically SVSM and OVMF) to be loaded as part of the guest image (1). Qemu communicates with KVM, which communicates with the SEV firmware running inside the AMD-SP through an API interface to create a confidential VM (2). The SVSM firmware is placed in VMPL0, and the OVMF firmware and the rest of the guest environment (i.e., the kernel and initrd in case of direct boot) are placed at VMPL1.
Unlike a regular programming environment that provides operating system abstractions (e.g., syscalls, timers, etc.) and feature-rich libraries, the SVSM firmware runs in a restrictive bare-metal environment without access to such features. Enclave environments often come with such restrictions; for instance, one would need a sophisticated library OS [71] to run unmodified applications inside SGX. In a bare-metal environment such as SVSM, one does not have operating system abstractions such as timers/clocks or readily available crypto libraries. However, a vTPM needs access to timers, random numbers, and cryptographic libraries to realize a software TPM module. We manually port the necessary libraries to satisfy the dependencies of the TPM module. Due to the encrypted code pages and the lack of interfaces between the debugger and Qemu to install breakpoints inside the encrypted pages, we had to rely on print statements during development for debugging.
### Secure communication between VM and vTPM
The communication channel between a VM and the corresponding vTPM is a potential target for a range of security attacks, e.g., by altering the TPM command request and response buffers, it is possible to subvert measured boot and runtime attestation protocols [36]. One way to mitigate such attacks is to secure the communication channel by implementing standards such as TPM HMAC and encryption [24] or the DMTF secure protocol and data model (SPDM) specification [20]. Though the TPM specification describes encryption and HMAC security layers, very few TPM implementations support them. Developing a complex secure communication protocol such as SPDM requires a large engineering effort. Recent vTPM designs that rely on hardware-protected TEEs implement a secure communication channel using the transport layer security (TLS) protocol [16]. Unfortunately, even a standard TLS protocol noticeably increases the TCB size of the TPM.
**Secure communication.** Instead of implementing a secure communication protocol, we rely on the mechanism provided by AMD SEV and its ability to pass execution between virtual machine privilege levels. While the transition between VMPL1 and VMPL0 triggers an exit into the untrusted hypervisor controlled by the cloud provider, the contents of the message remain protected inside the hardware-encrypted memory. Moreover, the AMD SEV specification ensures that the hypervisor can only resume execution of the VM at the corresponding privilege level, i.e., VMPL0, when the guest system triggers a transition to the TPM. Hence, the hypervisor is unable to suppress messages unless the whole VM is halted.
To implement the communication protocol, we developed a modified version of the Qemu emulator and the guest Linux kernel to interact with our SVSM-vTPM operating at VMPL0. Qemu exports the TPM space as a memory-mapped IO (MMIO) region and traps all guest accesses to this region to forward the TPM commands to the appropriate backend implementation - i.e., a passthrough device or an emulator (swtpm). We cannot utilize the existing Qemu TPM interfaces for communicating with the SVSM-vTPM, as the MMIO region of the TPM remains unencrypted. A malicious hypervisor could alter the commands and perform a man-in-the-middle (MITM) attack [36]. One could possibly implement a secure communication channel using the secure protocol and data model (SPDM) protocol to prevent such attacks. However, it is much simpler to implement a new TPM interface for Qemu.
We created a new Qemu TPM interface based on the command response buffer (CRB) specification [23] in which the exported MMIO region is part of the guest memory, encrypted, and accessible only to the confidential VM. However, with this mechanism the accesses to this MMIO region are no longer trapped, and the hypervisor is no longer aware of when to invoke the TPM backend interface. Moreover, with SEV-SNP, the TPM region is marked as private to the guest confidential VM in the reverse mapping table, so a hypervisor cannot perform a page-remapping attack.

Figure 2: Generating SEV-SNP attestation report inside SVSM-vTPM
We modify the Linux CRB driver in the guest kernel to trigger an exit into the hypervisor after every write to the TPM MMIO region. Upon re-entry, the hypervisor puts the vCPU in VMPL0, where the SVSM-vTPM handler checks for the TPM command-ready flag and in turn invokes the appropriate TPM command API to formulate the response buffer (a simplified sketch of this dispatch follows). The vCPU then exits into VMPL1 and continues with the execution of the guest VM. We also make modifications to the TPM driver in OVMF to interact with our SVSM-vTPM.
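The following sketch illustrates the VMPL0-side dispatch just described, assuming a simplified CRB-style register block; the field layout and `tpm_execute_command` are illustrative placeholders, not the actual SVSM-vTPM symbols.

```rust
const BUF_SIZE: usize = 0x1000;

#[repr(C)]
struct CrbRegion {
    ctrl_start: u32,     // guest driver sets this to 1 after writing a command
    cmd_size: u32,
    cmd: [u8; BUF_SIZE], // command buffer, inside encrypted guest memory
    rsp_size: u32,
    rsp: [u8; BUF_SIZE], // response buffer, also encrypted
}

/// Invoked when the hypervisor resumes the vCPU at VMPL0 after the guest
/// CRB driver triggered an exit.
fn handle_guest_request(crb: &mut CrbRegion) {
    if crb.ctrl_start == 1 {
        let len = (crb.cmd_size as usize).min(BUF_SIZE);
        let rsp = tpm_execute_command(&crb.cmd[..len]);
        let n = rsp.len().min(BUF_SIZE);
        crb.rsp[..n].copy_from_slice(&rsp[..n]);
        crb.rsp_size = n as u32;
        crb.ctrl_start = 0; // completion signal; the vCPU then returns to VMPL1
    }
}

/// Stand-in for the ported reference TPM command dispatcher.
fn tpm_execute_command(_cmd: &[u8]) -> Vec<u8> {
    vec![0x80, 0x01, 0, 0, 0, 10, 0, 0, 0, 0] // canned success response header
}
```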
### vTPM state
A discrete physical TPM stores all the persistent state of the module inside the chip's non-volatile (NV) store, which holds the seeds for generating the endorsement key (EK) and storage root key (SRK), and also retains other values such as NV index values, objects made persistent by the TPM user, and state saved when the TPM is shut down. The TCG specification requires a TPM implementation to have some amount of non-volatile storage for the operation of the TPM [24].
As opposed to a physical TPM, where the state of the TPM is securely stored inside non-volatile RAM (NVRAM) in the TPM hardware chip, a vTPM must manage its state in software. Software vTPMs typically implement the NV store in a disk-backed file [32, 34, 49, 57, 62, 63, 74]. Along with the software that implements the vTPM, this NVRAM file is part of the trusted computing base. When a vTPM is first initialized, the state file has to be created on the fly or loaded from a pre-created file.
However, the state stored in the file needs to be secured against tampering and rollback attacks [44]. This could be achieved by encrypting the NV store file such that it could be decrypted only by the vTPM module. This design calls for securely storing the secret key used to encrypt/decrypt the NV state and injecting it as a secret during the boot-up of the vTPM module. This brings in several complexities in the context of confidential computing, as the secret could only be injected during the launch phase. First, the user has to verify the pre-attestation report of the load-time components (i.e., firmware, OVMF, etc.) before delivering the encrypted TPM state along with the key to decrypt it. The booting of the platform is blocked, waiting for the user to inject the secret. Additional care has to be taken to not give up the state to a confidential VM that is under the control of an attacker.
**Ephemeral vTPM.** Instead, our design choice of using an ephemeral vTPM is simpler and more pragmatic. The vTPM goes through the manufacturing process to generate a fresh set of seeds and keys on every boot. We avoid all the problems of handling persistent state, injecting it on every boot, and guarding the encrypted state file by designing an ephemeral vTPM with no state. First, an ephemeral vTPM is simple to implement: the NV storage becomes volatile and does not preserve any values across power cycles (a minimal sketch follows). Second, it does not require any form of secrets to boot up the vTPM and the platform. Though this design has downsides - e.g., secrets cannot be preserved across reboots - it offers much more flexibility, as there is no secret to guard against the aforementioned attacks. Moreover, the programming environment for SVSM is extremely constrained in terms of capabilities: to save the TPM state on shutdown and load it on a reboot, the SVSM would have to implement additional software to encrypt and decrypt the state file.
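The sketch below captures the essence of the ephemeral design: the reference implementation's file-backed NV chip is replaced by a plain in-memory buffer, so no state can survive a power cycle. The type and method names are our own illustration, not the actual symbols in the ported TPM code.

```rust
/// Volatile stand-in for the TPM's NV chip: state lives only in the
/// (encrypted) guest memory and is re-manufactured on every boot.
struct EphemeralNv {
    mem: Vec<u8>,
}

impl EphemeralNv {
    fn new(size: usize) -> Self {
        Self { mem: vec![0; size] }
    }
    fn read(&self, off: usize, buf: &mut [u8]) {
        buf.copy_from_slice(&self.mem[off..off + buf.len()]);
    }
    fn write(&mut self, off: usize, data: &[u8]) {
        self.mem[off..off + data.len()].copy_from_slice(data);
    }
    // Deliberately no load-from-file/save-to-file: losing the contents on
    // shutdown is the point, and there is no state file to encrypt or guard.
}
```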
### SVSM-vTPM provisioning
After launching the confidential VM, the hypervisor first loads and executes the SVSM binary in VMPL0. Our modified SVSM follows the standard manufacturing process of instantiating a vTPM instance as specified by the TPM2.0 specification [24]. First, we create a new endorsement key (EK) pair \(EK_{pub}\) and \(EK_{priv}\) from random seeds. However, we do not create an endorsement key certificate (\(EK_{cert}\)) or a platform certificate, as there is no entity to sign these certificates.
A significant and much under-discussed problem in confidential computing is seeding the random number generator. A VM, when it boots, has no natural sources of entropy that are not under the control of the untrusted host. In an ordinary VM, the x86 instructions RDRAND and RDSEED cause VMEXITs; however, in confidential VMs, these instructions are guaranteed to provide direct access to the CPU's hardware random number generator in a way that the host cannot influence. We use these instructions as the initial entropy source for generating the random seeds.
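A minimal sketch of gathering such boot-time entropy is shown below (our own illustration; it assumes a CPU exposing the RDSEED/RDRAND features, as EPYC parts do, and uses the standard Rust intrinsics).

```rust
#[cfg(target_arch = "x86_64")]
fn hw_random_u64() -> Option<u64> {
    use core::arch::x86_64::{_rdrand64_step, _rdseed64_step};
    let mut v = 0u64;
    // Prefer RDSEED (conditioned true entropy); it may transiently fail
    // (carry flag clear), so retry a bounded number of times.
    for _ in 0..16 {
        if unsafe { _rdseed64_step(&mut v) } == 1 {
            return Some(v);
        }
    }
    // Fall back to RDRAND (a DRBG reseeded from the same hardware source).
    for _ in 0..16 {
        if unsafe { _rdrand64_step(&mut v) } == 1 {
            return Some(v);
        }
    }
    None // never silently substitute a guessable value
}

#[cfg(target_arch = "x86_64")]
fn main() {
    // Fill a 32-byte buffer, e.g., a candidate TPM seed.
    let mut seed = [0u8; 32];
    for chunk in seed.chunks_mut(8) {
        let r = hw_random_u64().expect("no hardware entropy available");
        chunk.copy_from_slice(&r.to_le_bytes());
    }
    println!("seed = {:02x?}", seed);
}
```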
### Adding vTPM to the trust chain
Since our SVSM-vTPM module is instantiated with random seeds and does not come with a manufacturer's certificate to verify the identity of the TPM, we need to ensure the following security properties:
* **S1**: Certify that the SVSM-vTPM is running in a real confidential VM on genuine AMD hardware.
* **S2**: Certify that the vTPM module has not been tampered with.
* **S3**: Communicate \(EK_{pub}\) in a secure, tamper-proof way.
To ensure these security properties, we rely on the attestation report from the AMD-SP hardware.
**SEV-SNP attestation report.** Software running at any VMPL level can request an attestation report by sending a message to the SEV firmware running inside the AMD-SP. The request structure contains the VMPL level and 512 bits of space for user-provided data, which is included as part of the attestation report signed by the AMD hardware.
Figure 2 shows the steps involved in getting an attestation report. On receiving a request to launch a VM, the platform loads the image and cryptographically measures the contents of the image ( 1 ). Once the guest image is launched, the hypervisor puts the vCPU in VMPL0 mode passing control to the SVSM firmware (after 2 ). The SVSM firmware initializes the guest CPU, memory and sets up a pagetable for execution and finally instantiates a vTPM. The vTPM is provisioned as described in section 4.4. Then, the vTPM module requests an attestation report by sending a SWP_REPORT_REQ message to the AMD-SP hardware ( 3 ). We place the digest of the public part of the generated endorsement key (i.e., \(EK_{pub}\)) in the user-data field of the request to communicate the identity of the TPM to the guest VM. The request message is encrypted with the appropriate VM platform communication key (VMPCK) for that VMPL level and prepended with a message header which is integrity protected with authenticated encryption (AEAD). The AMD-SP hardware decrypts the message, verifies the integrity and responds with an attestation report( 4 ) that contains the _pre-attestation_ measurements, vmpl level and the user-data (i.e., \(digest\left(EK_{pub}\right)\)). We write this report into the NVIndex where the TPM would normally place its EK certificate. We can retrieve the saved attestation report at any point in time ( 5 ) as long as the guest VM is operational. If needed, the guest VM can also place a report request to the AMD-SP hardware from other VMPL levels to generate a new attestation report.
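To make the binding between the vTPM identity and the report concrete, here is a sketch of composing the request (our own illustration; the field names follow the spirit of the SEV-SNP ABI's report request - 64 bytes of user data plus the requester VMPL - but are not the exact definitions, and the code assumes the Rust `sha2` crate).

```rust
use sha2::{Digest, Sha512};

#[repr(C)]
struct SnpReportReq {
    report_data: [u8; 64], // we place SHA-512(EK_pub) here (512 bits)
    vmpl: u32,             // 0: the request is issued from VMPL0
    _rsvd: [u8; 28],       // reserved, must be zero
}

fn build_report_request(ek_pub_der: &[u8]) -> SnpReportReq {
    let mut report_data = [0u8; 64];
    report_data.copy_from_slice(Sha512::digest(ek_pub_der).as_slice());
    SnpReportReq { report_data, vmpl: 0, _rsvd: [0u8; 28] }
    // The struct is then encrypted with the VMPL0 VMPCK and sent to the
    // AMD-SP; the signed report echoes report_data and the requester VMPL.
}
```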
**Ensuring S1.** We can easily verify **S1** because the attestation report is generated by the AMD-SP processor and signed using AMD's private keys. Verifying that the attestation report is genuine implicitly guarantees that we obtained it from a genuine AMD processor, within a confidential VM.
**Ensuring S2.** Before launching the confidential VM, the AMD-SP hardware measures all the load-time binaries as part of the _pre-attestation_. This includes the SVSM and our SVSM-vTPM code. By verifying these measurements, which are included as part of the attestation report, we can ensure that our SVSM-vTPM binary, and anything else running in VMPL0, has not been tampered with.
**Ensuring S3.** By verifying that the report request originated from VMPL0, we can confirm that the report was requested by a legitimate SVSM-vTPM, based on **S2**. By including the \(digest(EK_{pub})\) as part of the attestation report (via the user-data field), we offer a tamper-proof way to communicate the identity of the TPM (\(EK_{pub}\)) to the entities interacting with this specific vTPM. Since \(EK_{pub}\) and \(EK_{priv}\) are generated from random seeds provided by the hardware (i.e., RDRAND and RDSEED), as long as the generator is tamper-proof, no entity can recreate \(EK_{priv}\) and impersonate this vTPM.
## 5 Implementation
We base our implementation on the software stack recommended by AMD, which is publicly available on GitHub [3]. It consists of Qemu, the open virtual machine firmware (OVMF), and Linux kernels for both the host and the guest, all of which are modified to support the AMD SEV-SNP architecture and will eventually be upstreamed. We make minor modifications to the open-source framework Keylime [10] to perform remote attestation of VMs with the SVSM-vTPM as the root of trust.
To implement SVSM-vTPM, we extend the open-source SVSM implementation [12] with a minimal C library (a stripped-down version of Musl [14]), the WolfSSL library [29] for cryptographic primitives, and Microsoft's reference TPM, a software implementation of TCG's TPM 2.0 specification [15].
### Software TCB
We add 1500 lines of code1 to the existing SVSM implementation in Rust. To implement the vTPM, we utilize third-party libraries: a minimal C library (4100 LOC), the WolfSSL crypto library (970 kLOC), and Microsoft's reference TPM implementation (32 kLOC)2 [15].
Footnote 1: measured using scc[22]
Footnote 2: measured using David Wheeler’s sloccount
We trust the SVSM implementation and all the third-party crates that are part of the dependency chain (i.e., recursive dependencies). We also assume that WolfSSL and Microsoft's TPM implementations are bug-free.
### Remote attestation with Keylime
We use the Keylime package for remote attestation. Keylime is designed to perform both boot-time and runtime attestation on a fleet of systems, using the attested nodes' TPM devices as the root of trust [10]. The Keylime architecture comprises three major components. A Keylime _agent_ is installed on every attested node; the agent announces itself to a Keylime _registrar_ when it starts up. The Keylime _verifier_ is in charge of performing attestations on every node.
**Registration protocol.** The purpose of Keylime registration is to record the availability of the registering agent for attestation and to establish mutual trust between the agent and the registrar. To this end, the agent's credentials are checked and an attestation key is negotiated between the agent and the registrar for use in subsequent attestation challenges. As shown in Figure 3, the agent initiates the enrollment process by sending its TPM credentials - i.e., the public parts of its endorsement key (EK) and attestation identity key (AIK), as well as the EK certificate - and the node's UUID to the registrar. The registrar verifies the TPM's identity and authenticity
using the public EK and the EK certificate. Next, the validity of the AIK is established through the _MakeCredential/ActivateCredential_ function pair by using a carefully constructed secret that can only survive the registrar-to-agent roundtrip when the TPM, AIK, and UUID are all authentic. Identity verification of a normal TPM device involves checking that the EK certificate correctly signs the public EK, and furthermore that the EK certificate (\(EK_{cert}\)) is signed by a trusted root (such as a manufacturer key or an intermediary key).
**Attestation protocol.** Having successfully registered with the _registrar_, the _agent_ is now ready to service attestation challenges. The Keylime _verifier_ initiates the attestation protocol by sending a TPM quote request to the _agent_, containing a nonce (to guard against replay attacks) and a PCR mask (a list of PCRs). The _agent_ sends back the requested quote signed by the TPM, using the AIK associated during the registration phase. In addition, a number of logs (e.g., the measured boot log and IMA log) are sent back with the quote. The _verifier_ validates the TPM quote by verifying its signature with the registered AIK; validates the logs by testing them against the PCRs contained in the quote; and finally checks the contents of the logs against the attestation policy to render a trustworthy/untrustworthy verdict.
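The verifier-side checks can be summarized by the following sketch (our own illustration with stand-in types; Keylime itself implements this logic in Python, and real quotes cover selectable PCR banks rather than the single PCR used here).

```rust
use sha2::{Digest, Sha256};

struct Quote {
    nonce: Vec<u8>,
    pcr: [u8; 32],      // quoted PCR value (simplified to a single PCR)
    signature: Vec<u8>, // produced with the AIK negotiated at registration
}

// One PCR-extend step, as in the earlier sketch.
fn pcr_extend(pcr: &[u8; 32], event: &[u8; 32]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(pcr);
    h.update(event);
    h.finalize().into()
}

/// Replay a measurement log (measured boot or IMA) into a PCR value.
fn replay_log(events: &[[u8; 32]]) -> [u8; 32] {
    events.iter().fold([0u8; 32], |pcr, ev| pcr_extend(&pcr, ev))
}

fn validate(quote: &Quote, aik_pub: &[u8], nonce: &[u8], log: &[[u8; 32]]) -> bool {
    verify_aik_signature(quote, aik_pub) // quote signed by the registered AIK
        && quote.nonce == nonce          // freshness: guards against replays
        && replay_log(log) == quote.pcr  // logs must reproduce the quoted PCR
    // A policy check on the log contents would follow here.
}

fn verify_aik_signature(_q: &Quote, _aik_pub: &[u8]) -> bool {
    true // placeholder for the actual quote signature verification
}
```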
**Protocol changes to handle SVSM-vTPMs.** Since Keylime is built around interaction with TPM devices, we needed to make only minor modifications to the code to handle SVSM-vTPMs. Essentially, we only had to modify how the Keylime verifier checks the authenticity of a TPM device (function check_ek). As mentioned above, a "normal" TPM device is authenticated through its EK certificate, which signs the public EK and is in turn verified by a manufacturer certificate. Keylime carries a list of acceptable manufacturer certificates, and any TPM in use by Keylime has to be signed by one of these. Our ephemeral SVSM-vTPM, by its very nature, is not provisioned with an EK certificate. However, the (ephemeral) public EK is signed by the SEV attestation report, which we validate by checking it against the platform manufacturer's signature (i.e., AMD's). In order to minimize the required changes in Keylime, we decided to simply replace the EK certificate with an SEV attestation report (\(Att_{report}\)) in our SVSM-vTPM (that is, we reuse the NVIndex in the TPM where the EK certificate normally resides). The _agent_ reads and submits the attestation report instead of the EK certificate during registration. The modified _registrar_ validates the attestation report (ensuring that it is signed by an authentic AMD platform) instead of validating the EK certificate (marked by a different color in Figure 3). No other parts of the registration/attestation protocols require changes for correct Keylime operation.
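The modified check boils down to the four conditions below (a sketch with illustrative types and a stubbed AMD signature check; Keylime's actual check_ek is written in Python).

```rust
use sha2::{Digest, Sha512};

struct AttestationReport {
    vmpl: u32,             // privilege level of the requester
    report_data: [u8; 64], // digest of EK_pub placed there by SVSM-vTPM
    measurement: [u8; 48], // launch measurement (SVSM + OVMF + ...)
    signature: Vec<u8>,    // signed by the AMD-SP
}

fn check_ek(
    report: &AttestationReport,
    ek_pub_der: &[u8],
    expected_measurement: &[u8; 48],
) -> Result<(), &'static str> {
    // S1: report must be signed by a genuine AMD part (VCEK chain).
    if !verify_amd_signature(report) {
        return Err("not signed by AMD hardware");
    }
    // S2: launch measurement must match the user's SVSM/OVMF build.
    if &report.measurement != expected_measurement {
        return Err("unexpected launch measurement");
    }
    // The report must originate from VMPL0, i.e., from the SVSM-vTPM.
    if report.vmpl != 0 {
        return Err("report not requested from VMPL0");
    }
    // S3: bind the report to this vTPM's identity (EK_pub).
    if report.report_data.as_slice() != Sha512::digest(ek_pub_der).as_slice() {
        return Err("EK does not match report_data");
    }
    Ok(())
}

fn verify_amd_signature(_r: &AttestationReport) -> bool {
    true // placeholder for certificate-chain verification against AMD's keys
}
```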
## 6 Evaluation
Figure 3: Remote attestation of a confidential VM using Keylime and SVSM-vTPM

We ran all our experiments on the publicly available CloudLab infrastructure [64]. We utilize a Dell PowerEdge R6525 server equipped with two AMD EPYC 7543 32-core processors and 256 GiB RAM. The host machine runs 64-bit Ubuntu 20.04 Linux with a v5.19 kernel and Qemu v6.1.50, whereas the confidential guest VM runs 64-bit Ubuntu 22.04 Linux with a v5.17 kernel and open virtual machine firmware (OVMF) version edk2-stable202208, all of which are modified to enable SEV-SNP [3]. We have also evaluated our software stack on a Lenovo ThinkServer equipped with an AMD EPYC 7763 64-core processor and 128 GiB RAM.
**Performance overhead.** To understand the overheads of commonly used TPM functionalities, we study the performance of several TPM commands on SVSM-vTPM and compare it with a vanilla virtual machine that utilizes a vTPM hosted by Qemu. We rely on the Qemu/KVM environment to launch both the regular and the confidential VM. The Qemu-vTPM setup uses the native TPM CRB interface as its frontend with an swtpm backend, where the backend communicates with the vTPM running in host userspace via a UNIX socket interface. The SVSM-vTPM setup uses a modified TPM CRB interface as its frontend and an SVSM backend (i.e., calling into VMPL0 to utilize the TPM hosted by SVSM) running inside the confidential VM environment.
Specifically, we compare the performance of four TPM commands which are essential for remote attestation, i.e., PCRREAD, PCREXTEND, TPM2_QUOTE, and CREATEPRIMARY. We do not compare with a physical TPM, as it would be unfair: physical TPMs are an order of magnitude slower than virtual TPMs [63]. These TPM commands briefly do the following:
* **PCR read** This command reads the platform configuration registers of the TPM. A TPM may maintain multiple banks of PCR, where each bank is a collection of PCRs extended with a specific hashing algorithm (e.g., sha1, sha256). In our benchmark, we read all the PCR values from all the banks (i.e., sha1, sha256, sha384).
* **PCR extend** performs an extend operation on a specific PCR from a bank, i.e., it computes the hash of the old PCR value concatenated with the input data, i.e., \(PCR_{new}=hash(PCR_{old}|input\_data)\). We extend a single PCR register from a sha256 bank.
* **Quote** A TPM quote contains a subset of PCRs from a bank and a nonce (to prevent replay attacks) signed by the attestation key (AIK) of the TPM. We request a quote of three PCRs (16-18) from two different banks (sha1 and sha256).
* **Create primary** This TPM command creates a primary object under the chosen hierarchy (Endorsement, Platform, Owner, or NULL) and loads it into the TPM. The TPM returns only a context with which one can interact with this object; the public and private portions of the key are not returned. We create an ECC keypair with the default curve (ecc256).
We perform all the experiments by booting the confidential VM with the corresponding setup (Qemu or SVSM) and invoking the TPM commands from the guest user space using the tpm2-tools package [25]. For each TPM command, we ran the benchmark for 3000 iterations. We ran these experiments three times to measure the average latency (Figure 4). SVSM-vTPM incurs 5x lower latency than Qemu-vTPM on PCR reads and TPM quotes. We incur 1.8x and 3.5x lower latency on the PCREXTEND and CREATEPRIMARY TPM operations, respectively. Both the Qemu-hosted vTPM and SVSM-vTPM incur an exit into the hypervisor to communicate with the TPM. However, our SVSM-vTPM incurs much less overhead than the Qemu-hosted vTPM, as the latter involves communicating with the TPM emulator backend (i.e., swtpm) through the socket interface. Physical TPMs, on the other hand, are at least X times slower than the emulated ones, as they are often connected to the mainboard via a low-bandwidth bus such as the serial peripheral interface (SPI).
## 7 Security Analysis
The gist of our security argument is that we are tying an ephemeral vTPM to the AMD-SP hardware's root of trust to perform runtime attestation. In this section, we examine a number of potential security attacks and explain how our design prevents them. Our hypothetical attacker's goal is to infiltrate and alter a guest confidential VM without being detected by the remote attestation system (Keylime).
**Fake vTPM.** The guest confidential VM boots with the SVSM firmware containing our SVSM-vTPM as part of the VM launch process. The essence of this attack is that, after the system is booted and the Keylime _agent_ is registered, an attacker could spawn a new software vTPM in the guest userspace to hijack all the vTPM commands and redirect them to the newly spawned vTPM. The new fake software vTPM is no longer running at a higher privilege level and can be controlled by the attacker to forge TPM quotes in an attempt to authenticate fake boot and IMA logs, and therefore hide unauthorized software alterations from Keylime.
Figure 4: Performance overhead of SVSM-vTPM vs Qemu-vTPM

However, once the registration protocol is complete, the Keylime registrar has associated the \(EK\) of our ephemeral vTPM with the \(AIK\) that is used for signing the TPM quote. With the above redirection of TPM commands to a fake vTPM, an attacker would not be able to forge the TPM quote, as the fake vTPM has no access to the private \(AIK\) of the original vTPM, which is safely hidden by VMPL0 in SVSM.
An attacker could also force the registration protocol to restart and feed it the TPM credentials from the newly created vTPM. Again, Keylime would detect this because of the mismatch between the fake TPM's \(EK_{pub}\) and its digest in the attestation report. A fake attestation report cannot be generated because the report contains the VMPL of the entity that requested it, and the guest is not running at VMPL0.
**Fake SEV-SNP attestation report.** We save the attestation report requested by SVSM-vTPM at the same NVIndex as \(EK_{cert}\) to make it available to the Keylime agent. The essence of this attack is that the attacker could overwrite this NVIndex with either garbage data or another attestation report after compromising the guest. Garbage data would be detected by the Keylime _registrar_, resulting in attestation failure. When overwriting with a genuine attestation report, an attacker can potentially change the identity of the vTPM, i.e., create another vTPM (similar to the fake vTPM attack) with a new set of keys and record the new \(EK_{pub}\) in the user-data field of the attestation report. If successful, they could perform all the attacks mentioned under the "Fake vTPM" attack (i.e., spoof PCRs, forge quotes, etc.).
Even though one could retrieve an attestation report from a different VM privilege level, the platform guarantees that no one can spoof the VMPL level in the attestation report, as a VMPL0 report can be generated only by the software running inside VMPL0 (i.e., the keys for encrypting the request message are available only at the corresponding level). Thus, the replaced attestation report, if valid, would contain a VMPL level greater than 0. To prevent this attack, we check the VMPL level while validating the attestation report to ensure the requester's VMPL level is set to zero.
An attacker can overwrite the attestation report NVIndex with a genuine attestation report from another confidential VM or from a previous boot of this confidential VM. Though the attestation report is signed by the AMD hardware, the user-data will not match the digest of the \(EK_{pub}\) we have inside the SVSM-vTPM, making the attack detectable.
**Confidential VMs with no SVSM.** Though VMPL levels are supported in the SEV-SNP specification, their use cannot be enforced by the end-user in a provider-controlled environment. A malicious cloud provider could host a regular SEV VM and pretend that it is running with SEV-SNP firmware. In this scenario, the confidential VM would run without the SVSM firmware, and the entire guest operating system would run under VMPL0. This makes it possible for a guest VM to generate its own attestation report where the requester VMPL level is set to 0. To prevent this attack, we need to verify that our confidential VM is booted with the SVSM firmware running at VMPL0. The user can compute the measurement of their boot-time binaries, which includes the SVSM firmware running at VMPL0, and validate it against the measurements reported in the attestation report provided by the cloud provider. If the measurements do not match, the confidential VM is likely booted without the SVSM firmware.
**Weaknesses in the random number generator.** A weak HWRNG poses a threat not only to the vTPM implementation, but also to the software running inside the confidential VM. Failing to seed the random number generator of a confidential VM correctly can result in cryptographic key leakage [51], particularly in well-documented random-input signature algorithms like ECDSA [50]. Furthermore, all vTPMs require a secure random number generator to operate correctly because of their reliance on it for the generation of ephemeral keys and nonces for secure functions. The problem is particularly acute for an ephemeral vTPM because the TPM manufacturing stage requires the generation of unguessable seeds, which can only be achieved if they are based on an entropy source that cannot be influenced in any way by the host.
However, AMD hardware has suffered from a buggy HWRNG in the past, where the RDRAND instruction always returned a constant value instead of a random number [7]. An attacker could exploit a weak or buggy HWRNG implementation to guess the initial seeds of the vTPM and create the same secret keys as the vTPM. For example, by guessing the attestation key, one could forge TPM quotes and break the guarantees of remote attestation. To be resilient to such hardware bugs, we can seed the random number generator with additional sources of entropy, such as the hash of a key derived by the AMD-SP upon the user's request, along with the RDSEED instruction.
## 8 Discussion
**Full disk encryption.** Full disk encryption (FDE) protects the confidentiality and integrity of data at rest. To prevent accidental disclosure of the secret key (e.g., the disk encryption key), it is standard practice to encrypt the secret key (a _wrap_ operation) such that it can be decrypted only by the TPM (_unwrapping_). The wrapping key (i.e., the key which wraps the secret) is often the storage root key (SRK) present in the TPM.
However, in our ephemeral vTPM, there are no persistent storage keys in the TPM to support unwrapping of keys. To support FDE on such systems, we create an intermediary storage key \(K_{iSK}\) (a restricted decryption key with sensitiveDataOrigin [45]). We then perform a TPM _seal_ operation on the disk encryption key by parenting it to the storage key (\(K_{iSK}\)) we just created, outputting a sealed blob which can be unsealed only by a TPM holding the same key. On platform boot, the vTPM generates an ephemeral endorsement key and an ephemeral storage root key (eSRK). By retrieving the public part of the eSRK, we can wrap the intermediary key \(K_{iSK}\) with \(eSRK_{pub}\) to create a wrapped key that can be decrypted only by our vTPM. It has to be noted that all the above operations can be performed on any TPM, i.e., the user need not necessarily perform them on the vTPM of the confidential VM. Now the disk encryption key is wrapped to the parent key, and the parent is in turn wrapped to the eSRK, forming a hierarchy under the ephemeral storage root key. It is also possible to wrap the parent key with \(EK_{pub}\) instead, to create a hierarchy under the ephemeral endorsement key.
As both the disk encryption key and its parent key are wrapped for our specific vTPM, they are no longer secrets and can be delivered to the confidential VM in the clear. Since the sealed disk encryption key is invariant, we can embed it into the initrd. Finally, we can deliver the wrapped parent key to the confidential VM once we have performed the initial attestation of the platform to ensure its trustworthiness.
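Conceptually, the wrapping step reduces to encrypting the intermediary key under the public part of the ephemeral SRK. The sketch below illustrates this shape with plain RSA-OAEP from the Python `cryptography` package; an actual TPM2 key import additionally involves the TPM duplication structures and an inner symmetric wrapping, so this is an illustration of the idea rather than the wire format:

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def wrap_for_vtpm(esrk_pub_pem: bytes, k_isk: bytes) -> bytes:
    """Wrap the intermediary storage key K_iSK under eSRK_pub so that only
    the vTPM holding the ephemeral SRK private key can recover it."""
    esrk_pub = serialization.load_pem_public_key(esrk_pub_pem)
    return esrk_pub.encrypt(
        k_isk,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
```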
**Storing secrets.** We cannot store secrets directly by wrapping keys to our ephemeral SVSM-vTPM, as the EK and SRK are newly generated on every boot. One could instead use a technique similar to the one we used for FDE, forming a hierarchy of keys under an intermediary storage key. Once the system is booted, we can parent the intermediary key to the ephemeral SRK or EK, forming a hierarchy under the chosen key. Using this technique, one can store a hierarchy of keys, as on a regular persistent TPM.
## 9 Related work
**Cloud vTPMs.** Cloud providers offering confidential VMs typically provide a virtual TPM device that serves as a root-of-trust and can also be used for remote attestation. Google Cloud only offers plain SEV confidential VMs and provides measured boot attestation via a vTPM managed by the hypervisor [27]. Microsoft Azure relies on the Azure Attestation service for attesting confidential VMs [13], which generates a token to decrypt the vTPM state and the disk, hinting that Microsoft may have custom firmware based on the SVSM specification (i.e., running inside VMPL0) with a persistent vTPM for attesting SEV-SNP VMs. Alibaba Cloud offers vTPM support on their elastic compute service VMs [1]. Amazon AWS provides Nitro TPM, a virtual TPM implementation conforming to the TPM 2.0 specification, as part of their EC2 offering [6]. Some of these providers use a qemu-backed vTPM that runs on the host, which requires trusting the cloud provider. Also, there is very limited public knowledge of how these cloud vTPMs are designed and of their security guarantees. In contrast, we plan to publish the source code of our SVSM-vTPM implementation, which is built on top of other standard open-source components (i.e., Qemu, Linux, and Keylime). As our SVSM-vTPM relies only on the hardware-protected isolation environment offered by the AMD-SP hardware, a user can bring their own SVSM firmware and completely eliminate the need to trust the cloud provider.
**TEE-based vTPMs.** CoCoTPM proposes a unified architecture for the attestation of confidential VMs, where the hypervisor launches a confidential VM that acts as a vTPM manager and handles all the vTPM instances [62]. It requires TLS for securing the communication channel between a confidential VM and its vTPM. Though the vTPM runs inside a TEE, a central vTPM manager is exposed to several attacks, ranging from denial of service to collusion with other confidential VMs. On the other hand, launching a dedicated CoCoTPM for every confidential VM wastes architectural resources, as the number of address space identifiers (ASIDs) is limited.
Several projects rely on running the vTPM under the isolation provided by other hardware TEE mechanisms such as Intel SGX [69, 74, 75] and ARM TrustZone [63]. SvTPM aims to protect against NVRAM replacement and rollback attacks [74] by running the vTPM inside an SGX enclave for KVM-based VMs, whereas eTPM manages several enclave vTPMs in a Xen environment and relies on a physical TPM to provide the root-of-trust [69], similar to Berger et al. [34]. In contrast, our SVSM-vTPM architecture equips each confidential VM with its own private vTPM instance by leveraging the SVSM architecture that implements VM privilege levels. Also, by implementing an ephemeral vTPM, we completely eliminate the classes of attacks that come with state protection.
**Trusted execution environments.** Arm introduced the confidential compute architecture (CCA) with their Armv9-A architecture, where the processor provides an isolated hardware execution environment called _Realms_ for hosting entire VMs in a secure space [9]. Similar to other TEEs [2, 8], they offer pre-attestation of realms and can do measured boot with their hardware enforced security (HES) module specification [5], which serves as the root-of-trust [18, 33].
Intel, with their trust domain extensions (TDX), introduced their own version of hardware-isolated encrypted virtual machines called trusted domains (TDs). Intel TDX relies on an SGX-based quoting enclave, called the TD-quoting enclave, to perform remote attestation of trusted domains [8]. However, SGX has suffered from numerous vulnerabilities in the past [59], and researchers were able to extract the SGX quoting enclave's attestation keys through micro-architectural side-channel attacks to forge attestation reports [72]. The attestation keys used by these quoting enclaves are long-lived and, when leaked, affect millions of devices. In our design, we do not have any secrets to guard, as the attestation keys are ephemeral.
## 10 Conclusions
The landscape of cloud security is changing with the growing desire to remove the cloud provider from the trust domain. Hardware vendors lay the foundation for implementing this vision |
2307.04590 | Physical properties of circumnuclear ionising clusters. I. NGC 7742 | This work aims to derive the physical properties of the CNSFRs in the ring of
the face-on spiral NGC 7742 using IFS observations. We have selected 88
individual ionising clusters that power HII regions populating the ring of the
galaxy that may have originated in a minor merger event. For the HII regions
the rate of Lyman continuum photon emission is between 0.025 and 1.5 $\times$ 10$^{51}$ photons s$^{-1}$,
which points to these regions being ionised by star clusters. Their electron
density, ionisation parameter, filling factor and ionised hydrogen mass show
values consistent with those found in other studies of similar regions and
their metal abundances as traced by sulphur have been found to be between 0.25
and 2.4 times solar, with most regions showing values slightly below solar. The
equivalent temperature of the ionising clusters is relatively low, below 40000
K which is consistent with the high elemental abundances derived. The young
stellar population of the clusters has contributions of ionising and
non-ionising populations with ages around 5 Ma and 300 Ma respectively. The
masses of ionising clusters once corrected for the contribution of underlying
non-ionising populations were found to have a mean value of 3.5 $\times$ 10$^4$
M$_{\odot}$, comparable to the mass of ionised gas and about 20 \% of the
corrected photometric mass. | S. Zamora, A. I. Díaz | 2023-07-10T14:31:22Z | http://arxiv.org/abs/2307.04590v1 | # Physical properties of circumnuclear ionising clusters. I. NGC 7742
###### Abstract
This work aims to derive the physical properties of the CNSFRs in the ring of the face-on spiral NGC 7742 using IFS observations. We have selected 88 individual ionising clusters that power HII regions populating the ring of the galaxy that may have originated in a minor merger event. For the HII regions the rate of Lyman continuum photon emission is between 0.025 and 1.5 \(\times\) 10\({}^{51}\) photons s\({}^{-1}\), which points to these regions being ionised by star clusters. Their electron density, ionisation parameter, filling factor and ionised hydrogen mass show values consistent with those found in other studies of similar regions and their metal abundances as traced by sulphur have been found to be between 0.25 and 2.4 times solar, with most regions showing values slightly below solar. The equivalent temperature of the ionising clusters is relatively low, below 40000 K, which is consistent with the high elemental abundances derived. The young stellar population of the clusters has contributions of ionising and non-ionising populations with ages around 5 Ma and 300 Ma respectively. The masses of the ionising clusters, once corrected for the contribution of underlying non-ionising populations, were found to have a mean value of 3.5 \(\times\) 10\({}^{4}\) M\({}_{\odot}\), comparable to the mass of ionised gas and about 20 % of the corrected photometric mass.
keywords: galaxies: abundances - galaxies: ISM - galaxies: star clusters: general - galaxies: starburst - ISM: abundances - ISM: H II regions
## 1 Introduction
Careful determinations of abundance distributions over galaxies at the early stages of their evolution could provide important pieces of information about their formation processes, since gas accretion or gas ejection episodes will leave an imprint on these abundance distributions. However, several important caveats exist: (a) the assumption that the ionisation of the gas in the inner regions of galaxies is due only to star formation processes; (b) the assumption that the star formation modes dominating at high redshifts are similar to those encountered in the local universe. Regarding the first, it is nowadays generally accepted that some connection exists between star formation and activity in galactic nuclei, and young stars appear as one component of the unified model of AGN, giving rise to the blue featureless continuum which is observed in Seyfert 2 galaxies where the broad line region is obscured. Regarding the second, this might not be the case. Recently, large and massive clumps of star formation have been detected in more than half of the resolved z \(>\) 1 galaxies in the Hubble UDF (see Elmegreen & Elmegreen 2005). These star-forming entities are found in galaxies at all distances covered by the ACS (0.07 \(<\) z \(<\) 5). They have sizes of about 2 kpc, estimated ages of 10 Ma and masses often larger than 10\({}^{8}\) M\({}_{\odot}\). They are so luminous that they dominate the appearance of their host galaxies. Massive clumps like these are found in galaxies with a variety of morphologies, from somewhat normal ellipticals, spirals, and irregulars, to types not observed locally, including chain galaxies and their face-on counterparts, clump-cluster galaxies.
Interestingly enough, these star-forming entities, which seem to constitute the dominant star formation mode in galaxies at high redshifts, resemble the well known circumnuclear star-forming regions (CNSFRs), a common mode of star formation found close to galactic nuclei. These regions, many of them a few hundred pc in size and showing integrated H\(\alpha\) luminosities which overlap with those of HII galaxies (typically higher than 10\({}^{39}\) erg s\({}^{-1}\)), seem to be composed of several HII regions ionised by luminous compact stellar clusters whose sizes, as measured from high spatial resolution Hubble Space Telescope (HST) images, are seen to be of only a few pc. These regions are young (age \(<\) 10 Ma) and massive (up to 2 \(\times\) 10\({}^{8}\) M\({}_{\odot}\)) (Hagele et al. 2007, 2013). In the UV-B wavebands, they contribute substantially to the emission of the entire nuclear region, even in the presence of an active nucleus (see e.g. Colina et al. 2002). In a galaxy like NGC 3310 the starburst "ring" is the strongest organized source of far-UV (FUV) emission and 30% of the total observed FUV emission is produced within a radius of 10". At redshifts of z \(\sim\)
2-3, this structure would be confined to a region 0.2" in diameter for \(\Omega=1\) and would appear point-like in low-resolution observations. Consequently, in the absence of diagnostic spectroscopy, a high-redshift NGC 3310-like object could be mistaken for an active galactic nucleus (AGN).
CNSFRs in nearby galaxies, being close to the galactic nuclei, are expected to be of high metal abundance. However, detailed long-slit spectroscopic analyses show that most of them have abundances consistent with solar values (Diaz et al., 2007). Also, their ionisation structure as mapped by suitable emission line ratios is more similar to that of HII galaxies than to galactic disc GEHR, pointing to relatively hard ionising sources, not expected at high metallicities. As mentioned above, similar effects have been found for a considerable sample of star-forming galaxies at 1.0 \(<\) z \(<\) 1.5 (see e.g. Liu et al., 2008). The answer might be related to the influence of a hidden low luminosity AGN, the presence of shocks in zones of high specific star formation rates, or harder ionising continuum sources, among other possibilities.
The above mentioned work of Diaz et al. (2007) was based on long-slit spectroscopy, which is very time consuming, and involved a few CNSFRs in three selected galaxies, with the total number of regions studied amounting to a dozen. Obviously, the best strategy to study the complex star forming regions in circumnuclear rings is the use of Integral Field Spectroscopy (IFS). The Multi-Unit Spectroscopic Explorer (MUSE) available at the VLT offers the opportunity to carry out this detailed study program. Typical circumnuclear rings have sizes of less than 1 kpc (20 arcsec at a distance of 10 Mpc), hence are easily accommodated in the large field of view of MUSE (1 arcmin\({}^{2}\)), which also provides the necessary combination of high spatial (0.3 - 0.4 arcsec) and spectral resolution (R \(\simeq\) 2000 - 4000). The use of this technique greatly reduces the observing time and can increase the number of analysed clusters by an order of magnitude. On the other hand, the usually high abundances of the objects involved and their low excitation produce very weak [OIII] lines, difficult to measure with confidence, precluding the use of these lines for the analysis. The extended wavelength range to the red provided by MUSE allows the use of sulphur as an alternative abundance and excitation tracer for the characterisation of the HII regions and ionising clusters (see Diaz & Zamora, 2022).
In this first paper we present the study of the physical properties of the CNSFRs in the ring of the face-on galaxy NGC 7742 using publicly available MUSE observations and the full spectral region observed, from 4800 to 9300 Å. NGC 7742 is classified as an SA(r)b galaxy. It has a weakly active nucleus classified as T2/L2 in Ho et al. (1997), which corresponds to a transition object whose spectrum is dominated by emission lines characteristic of both LINER and HII regions. Its morphology is dominated by a nuclear ring which is easily identified by prominent bumps in the luminosity profiles in different photometric bands at galactocentric distances between 9 and 11 arcsec, corresponding to around 1 kpc at the assumed distance of 22 Mpc (Tully & Fisher, 1988). The feature shows up most prominently in the U band, thus pointing to a star formation origin (Wakamatsu et al., 1996). Apart from the ring, the surface brightness profile can be represented by the combination of two exponential discs and one central bulge (Sil'chenko & Moiseev, 2006). The galaxy shows a high degree of circular symmetry at different spatial levels: core, ring, and main body, and hence constitutes a very good case for the study of the formation mechanisms of nuclear rings in non-barred galaxies. It is also one of the approximately 10% of spirals showing a gaseous counter-rotating disc (Pizzella et al., 2004), first reported by de Zeeuw et al. (2002).
The kinematics of the gas and stars in the central part of this galaxy have been studied in detail by Martinsson et al. (2018), also using MUSE data. In their work they have mapped the ring counter-rotation and have found evidence for two distinct stellar populations: the older one counter-rotates with the gas, while the younger one, concentrated in the ring, co-rotates with the gas. They conclude that the ring originated in a minor merger event that probably took place 2-3 Ga ago.
The present work is centred on the study of the individual ionising clusters that power the HII regions populating the ring of NGC 7742. The observations on which the work is based are presented in section 2 together with the description of the data reduction; section 3 is devoted to the description of the measurement methods and the data analysis; section 4 presents the results; the discussion is given in section 5 and our final conclusions are in Section 6.
## 2 Observations and data reduction
In this work we analyse the almost face-on galaxy NGC 7742, which shows a prominent circumnuclear star-forming ring, using publicly available observations obtained with the IFS MUSE. Some characteristics of this galaxy are given in Table 1.
The Multi-Unit Spectroscopic Explorer, MUSE (Bacon et al., 2010), is an integral-field spectrograph (IFS) located at the Nasmyth focus of Unit Telescope 4 (UT4) of the Very Large Telescope (VLT) of the European Southern Observatory (ESO) at Cerro Paranal, Chile. It operates in the visible wavelength range, covering from 4800 Å to 9300 Å with a nominal dispersion of 1.25 Å/pixel and a spectral resolving power from 1770 (at 4800 Å) to 3590 (at 9300 Å) at the blue and red ends respectively. It is composed of 24 integral field units (IFUs) which, in the Wide Field Mode (WFM), provide a field of view (FoV) of 1 \(\times\) 1 arcmin\({}^{2}\) with a spatial sampling of 0.2 arcsec.
NGC 7742 was observed as part of the first MUSE Science Verification run on 2014 June 22 under ESO Programme 60.A-9301(A) (PI: M. Sarzi). The observing time was split into two exposures of 1800 s each, with an offset of 1 arcsec in declination and a rotation of 90\({}^{\rm o}\) between observations, and with a median seeing of 0.63 arcsec. Offset sky observations were taken before or after the target observations for adequate sky subtraction.
We have used the ESO Phase 3 Data Release. The reduction of the data was performed by the Quality Control Group at ESO in an automated process applying version 0.18.5 of the MUSE pipeline (Weilbacher et al., 2014). They used calibration images taken as part of the standard MUSE calibration plan using different pointings of the object.
\begin{table}
\begin{tabular}{c c} \hline Galaxy & NGC 7742 \\ \hline RA J2000 (deg)\({}^{a}\) & 356.065542 \\ Dec J2000 (deg)\({}^{a}\) & 10.767083 \\ Morphological type & SA(r)b \\ Luminosity Class & LC II \\ Nuclear type & LINER/HII \\ z & 0.00555 \\ Distance (Mpc)\({}^{b}\) & 22.2 \\ Scale (pc/arcsec)\({}^{c}\) & 92 \\ \hline \end{tabular} \({}^{a}\) Skrutskie et al. (2006).
\({}^{b}\) Tully & Fisher (1988).
\({}^{c}\) Cosmology corrected scale.
\end{table}
Table 1: NGC 7742 global properties.
The corrections applied to each exposure were: subtraction of the master-bias, division by a master flat-field and illumination correction between all slices of the IFU. Corrections for twilight and differential atmospheric refraction were also made. Data were wavelength calibrated, corrected for telluric absorption and sky subtracted using a dedicated offset exposure of 300 s. Finally, the data were flux calibrated. The astrometric solution was provided and the data were resampled into a datacube. The individual datacubes thus produced were weighted by their respective exposure times and resampled into a single combined one.
We have also used additional data from the Hubble Space Telescope (HST) that were acquired on 1995 July 9 with the Wide Field and Planetary Camera 2 (WFPC2) as part of the program GTO/wfc 6276 (PI: J. Westphal), providing high-resolution images with a spatial resolution of \(\simeq\) 0.1 arcsec pixel\({}^{-1}\) and a FoV of 150 \(\times\) 150 arcsec\({}^{2}\). The data have been obtained from the Hubble Legacy Archive and are organised in 3 exposures of 700 s each in the U broad band obtained with the F336W filter. The reduction of these data has been performed by the Space Telescope Science Institute (STScI) using the available calibration files taken for this observation and taking into account the different dithering positions. The pipeline provides standard calibrations such as correction for permanent camera defects, the temperature dependence of the WF4 detector gain, bias, dark current and flat-field corrections, the position-dependent exposure time and the absolute detector efficiency. Additionally, STScI has reprocessed all WFPC2 data including improvements to the time-dependent UV contamination, the variation in the bias level of WF4 and other subtle details.
## 3 Results and Analysis
### Emission line and continuum maps
From the observed data cubes we have constructed 2D maps which are presented in Figure 1. For the different emission lines we have assumed a linear behaviour of the continuum emission in the region of interest, choosing side-bands of a given width around each line. Table 2 gives the identification of each line in column 1, its centre wavelength, \(\lambda_{c}\) in Å, in column 2, its width, \(\Delta\lambda\), in Å, in column 3, and the limits of the two continuum side-bands, in Å, in columns 4 and 5. The H\(\alpha\) and H\(\beta\) maps have been combined to produce an extinction map. We have also produced maps in two continuum bands of 100 Å width centred at 5400 Å (blue) and 8150 Å (red). All wavelengths are in rest frame.
The two top left panels of Fig. 1 show the spatial distribution of the observed H\(\alpha\) and [OIII] fluxes in logarithmic scale. Superimposed on these maps we have represented the contours of the HST-WFPC2 data in the F336W filter, where young star clusters are most conspicuous. This provides a good comparison between the spatial resolutions provided by the two instrumental configurations. The coincidence of the young clusters identified by the HST contours with the MUSE maps, especially the H\(\alpha\) one, ensures that we are actually detecting ionising star clusters. In this H\(\alpha\) map the diffuse gas of the circumnuclear ring is also clearly seen. On the other hand, the [OIII]\(\lambda\)5007 Å emission traces the different excitation conditions of the ionised gas across the ring. The extinction \(A_{V}\), in magnitudes, is shown in the top right panel of the figure and has been calculated by adopting the Galactic extinction law of Miller & Mathews (1972), with a specific attenuation of R\({}_{v}\) = 2.97, and the theoretical ratio H\(\alpha\)/H\(\beta\) = 2.87 from Osterbrock & Ferland (2006) (n\({}_{e}\) = 100 cm\({}^{-3}\), T\({}_{e}\) = 10\({}^{4}\) K, case B recombination) (see also Section 3.3). The distribution of the gas extinction seems to be smooth inside clumps and typically low, with a median value of 1.053 mag in the pixel-by-pixel analysis and peaking around 1.83 mag. The apparently higher extinction values at the edges of the HII regions could be due to the low S/N ratio of the H\(\beta\) emission line, which produces an artificially high H\(\alpha\)/H\(\beta\) ratio with a large uncertainty. Alternatively, it could be the result of dust created by intense star formation and accumulated by the gas expansion. The two bottom left panels of Fig. 1 show maps of the observed continuum fluxes at blue and red wavelengths, 5400 Å and 8150 Å respectively, on top of which the contours of the HST-WFPC2 data in the F336W filter are overlaid. Exponential fits to the stellar surface brightness show two different components with different scale-lengths (Sil'chenko & Moiseev 2006). In both maps, HII regions contrast over the galaxy profile and seem to follow the clusters identified in UV emission. Finally, in the bottom right panel we can see the map of the equivalent width (EW) of H\(\alpha\) (in Å). All circumnuclear regions present in the ring have EW(H\(\alpha\)) > 20 Å. This value is consistent with star formation having occurred less than 10 Ma ago.
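For illustration, the construction of the reddening map from the H\(\alpha\) and H\(\beta\) maps can be sketched as follows (a minimal sketch, assuming f(H\(\alpha\)) \(-\) f(H\(\beta\)) = \(-\)0.313 as listed in Table 4; the conversion from A(H\(\beta\)) to A\({}_{V}\) depends on the adopted Miller & Mathews (1972) curve and is not reproduced here):

```python
import numpy as np

def reddening_maps(f_halpha, f_hbeta, df=-0.313):
    """c(Hbeta) and A(Hbeta) maps from observed Halpha and Hbeta flux maps,
    assuming the theoretical case-B ratio Halpha/Hbeta = 2.87."""
    ratio = f_halpha / f_hbeta
    c_hb = np.log10(ratio / 2.87) / (-df)   # df = f(Halpha) - f(Hbeta)
    a_hb = 2.5 * c_hb                       # extinction at Hbeta, in mag
    return c_hb, a_hb
```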
### HII region selection
The spaxel-by-spaxel reddening analysis has a high uncertainty due to the low S/N of the H\(\beta\) emission (see the top right panel of Fig. 1). However, we can observe that the reddening is very similar for all ring regions. Thus, we have decided to use the observed H\(\alpha\) flux map, i.e. without correcting for reddening, to select ionised regions. In this case, binning is not necessary since a higher S/N is not required and we can preserve the spatial resolution. Furthermore, we do not introduce additional errors in our subsequent analysis. We have selected the spatial extension of the ring on the basis of the spaxel-by-spaxel radial distribution of the observed H\(\alpha\) flux shown in the top panel of Figure 2. The area that belongs to the circumnuclear ring can be seen as a bump, in dark-red colour, with H\(\alpha\) emission in excess over the adjacent continuum, in grey. The limits of the ring are marked on the figure by vertical lines. It has an inner radius of 6 arcsec, 0.75 kpc at the adopted distance of 22.2 Mpc, and an outer radius of 13 arcsec, 1.63 kpc (see Tab. 1). Previous studies report similar radial ring limits (\(\sim\) 1 kpc; Comeron et al. 2010; Martinsson et al. 2018).
Our selection method for the HII regions is based on the existing HII\({}_{EXPLORER}\) package proposed by Sanchez et al. (2012). This program works on a line emission map, usually H\(\alpha\), and detects high intensity clumps, starting with the brightest pixel and adding adjacent ones following specific criteria. It requires several input parameters: the maximum size of the regions, the absolute flux intensity background for all of them (the diffuse gas emission level) and the relative flux intensity at maximum with respect to the background for each of the regions. This procedure has already been used in a series of IFS studies related to CALIFA and MANGA data, but it tends to select regions with similar sizes (Galbany et al. 2016b). However, HII regions exhibit different sizes, something that
\begin{table}
\begin{tabular}{l c c c c} \hline Line & \(\lambda_{c}\) (Å) & \(\Delta\lambda\) (Å) & \(\Delta\lambda_{left}\) (Å) & \(\Delta\lambda_{right}\) (Å) \\ \hline H\(\alpha\) & 6563 & 8 & 6531.5 - 6539.5 & 6597.0 - 6605.0 \\ H\(\beta\) & 4861 & 8 & 4811.0 - 4819.0 & 4901.0 - 4909.0 \\ \([OIII]\) & 5007 & 15 & 4902.5 - 4917.5 & 5092.5 - 5107.5 \\ \([NII]\) & 6583 & 15 & 6527.5 - 6542.5 & 6593.5 - 6608.5 \\ \hline \end{tabular}
All wavelengths are in rest frame.
\end{table}
Table 2: Extraction parameters for emission line maps.
shows up in the higher spatial resolution MUSE data, and hence the HII\({}_{EXPLORER}\) package is not appropriate. This fact has already been remarked upon by Galbany et al. (2016a), and below we describe the modifications we have made to the original package in order to tackle this problem.
We have developed specific software following the same iterative procedure as HII\({}_{EXPLORER}\) but with some additional requirements. We have selected the maximum extent of the regions according to their typical projected size at z \(\sim\) 0.016 (\(\sim\) 500 pc; Gonzalez Delgado & Perez 1997; Lopez et al. 2011), setting a minimum for it according to the point spread function (PSF) of the input map. We have assumed spherical symmetry and we have modulated the radius of the different regions according to their brightness. We have set an individual flux intensity threshold for each region, setting a limit of 10% with respect to the emission of its centre, since an asymptotic behaviour of the H\(\alpha\) intensity was found by Diaz et al. (2000a) in their study of CNSFRs. Finally, we have tried different values for the absolute flux intensity background of the complete map, adopting the one that yields the best fit, i.e. the one that minimises the dispersion of the spatial residuals in the map.
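A simplified sketch of this segmentation loop is shown below; it implements only the core iteration (peak finding, a circular aperture bounded by minimum and maximum radii, and the 10% relative plus absolute background cuts), while the parameter values and the pixel-accretion details of our actual implementation differ:

```python
import numpy as np

def segment_regions(ha_map, background, r_min, r_max):
    """Iterative clump detection on an Halpha map, in the spirit of
    HIIexplorer with the modifications described in the text."""
    labels = np.zeros(ha_map.shape, dtype=int)
    work = np.where(ha_map > background, ha_map.astype(float), np.nan)
    region_id = 0
    while not np.all(np.isnan(work)):
        region_id += 1
        cy, cx = np.unravel_index(np.nanargmax(work), work.shape)
        peak = work[cy, cx]
        yy, xx = np.indices(work.shape)
        r = np.hypot(yy - cy, xx - cx)
        # keep pixels above 10% of the peak within r_max, and always keep
        # the PSF-sized core so every region is at least resolved
        member = ~np.isnan(work) & (
            ((r <= r_max) & (work >= 0.1 * peak)) | (r <= r_min))
        labels[member] = region_id
        work[member] = np.nan   # remove assigned pixels and iterate
    return labels
```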
Apart from the study of the ring HII regions as described above, we have also analysed HII regions external to the ring for comparison purposes. In order to do that, we have run our segmentation twice, first for the circumnuclear regions and then for those just outside the ring. The two procedures are slightly different, since the absolute flux intensity background is much larger in the former (the diffuse gas emission in the ring is higher). The second panel of Fig. 2 shows the HII regions selected with the methodology described above. Two supernovae, SN 1993R and SN 2014cy, have also been plotted at their respective positions (Treffers et al. 1993; Nishimura 2014, respectively). SN 1993R is a peculiar supernova, similar to the SN Ia 1991bg, but with stronger CaII triplet lines and weak [OI]\(\lambda\) 6300 Å emission, and it was detected in X-rays (Filippenko & Matheson 1993; Bregman et al. 2003). Also, it is superimposed on a very bright HII region which has the highest values of H\(\alpha\) emission, [OIII]\(\lambda\)5007 emission, A\({}_{v}\) and EW(H\(\alpha\)) (see Fig. 1). SN 2014cy was classified as a SN II and, in the figure, lies on top of a region with characteristics similar to the rest of the sample.
Finally, in order to further select high quality data and also discard failures of the method, we have imposed the following requirements on the integrated spectrum extracted from each selected region: (i) to be certain that the emission has a star formation origin, EW(H\(\alpha\)) must be higher than 6 Å (Cid Fernandes et al. 2010; Sanchez et al. 2015); (ii) to ensure that the extracted spectrum has physical meaning, the ratio H\(\alpha\)/H\(\beta\) must be between 2.7 and 6.0, which correspond to the theoretical value from Osterbrock & Ferland (2006) (assuming an electron density of n\({}_{e}\) = 100 cm\({}^{-3}\) and an electron temperature of T\({}_{e}\) = 10\({}^{4}\) K) and to an extinction of up to 2.3 mag, respectively.
At the end of the entire procedure, we have obtained a total of 88 HII regions in the ring and 158 regions outside it. Table 3 shows the position of each HII region in the ring, with respect to that of the galaxy centre, its size and its integrated observed H\(\alpha\) emission
Figure 1: From left to right and top to bottom: maps of the observed H\(\alpha\) and [OIII]\(\lambda\)5007 Å fluxes (in units of 10\({}^{-20}\) erg/s/cm\({}^{2}\) and logarithmic scale); A\({}_{v}\) extinction (in magnitudes); maps of the observed continuum in the blue and red spectral bands (5400 Å and 8150 Å respectively, in units of 10\({}^{-17}\) erg/s/cm\({}^{2}\) and linear scale); and EW(H\(\alpha\)) in Å. The upper and bottom left and centre images show superimposed contours of the HST-UV image described in the text. Orientation is north up, east to the left.
flux. The identification of each region is given in column 1 of the table. SN 1993R lies close to R1.
### Emission line measurements and uncertainties
We have extracted each region spectrum by integrating its corresponding flux within the aperture produced by the segregation, except in the case of the weak [SIII]\(\lambda\) 6312 Å line, for which we have integrated only pixels with S/N \(>\) 1.0, as described below. Fig. 3 shows one of these spectra. An underlying stellar population is slightly noticeable in some of our spectra. In order to correct for this effect, we have fitted a Gaussian to the underlying absorption in both the H\(\beta\) and H\(\alpha\) lines and subtracted it from the extracted spectra. For the brightest region, this correction has been found to be less than 3% of the observed flux, which translates into a contribution to the measured H\(\beta\) flux within the observational errors.
For each region spectrum, a global continuum has been estimated by fitting a second order polynomial, \(F_{c}(\lambda)=a\lambda^{2}+b\lambda+c\), after masking nebular and stellar features. The masks have been built assuming a width of 8 Å at each side of the central wavelength of the lines involved. To obtain an accurate measure of the line fluxes, we have estimated the standard deviation of the residuals of the global continuum fit (\(\sigma_{c}\)). After subtracting the global continuum, the measurement of fluxes is performed using a single Gaussian fit plus a linear term:
\[f(\lambda)=A_{g}\cdot e^{-\frac{(\lambda-\lambda_{g})^{2}}{2\sigma_{g}^{2}}}+A_{c} \tag{1}\]
where A\({}_{g}\), \(\lambda_{g}\) and \(\sigma_{g}\) are the amplitude, central wavelength and width of the fitted Gaussian. The term A\({}_{c}\) appears as a correction to the global continuum value close to each line. It can take values in the interval [\(-\sigma_{c}\), \(+\sigma_{c}\)], with a specific value for each measured line. To determine the error of this measurement, and to impose quality conditions on it, we have calculated the local standard deviation of the residuals of the Gaussian fit (\(\sigma_{l}\)) within 30 Å around each line. The wavelength window selected for this modelling is adjusted for each line to the first point compatible with \(\sigma_{c}\) at each side of its central wavelength.
Using this procedure, we have measured the most prominent emission lines in our spectra: the H\(\beta\) and H\(\alpha\) Balmer lines, and the [OIII]\(\lambda\lambda\) 4959,5007 Å, [NII]\(\lambda\lambda\) 6548,84 Å, [SII]\(\lambda\lambda\) 6716,31 Å, [ArIII]\(\lambda\) 7136 Å and [SIII]\(\lambda\) 9069 Å forbidden lines. We have taken into account only fluxes of the lines that meet the requirement \(A_{g}>3\sigma_{l}\), thus
\begin{table}
\begin{tabular}{l c c c} \hline Region ID & Area & Offsets from galaxy center \({}^{a}\) & F(H\(\alpha\)) \\ & (arcsec\({}^{2}\)) & (arcsec) & (\(10^{-15}\) erg\(\cdot s^{-1}\cdot cm^{-2}\)) \\ \hline R1* & 1.48 & -8.4, 6.0 & 17.621 \(\pm\) 0.023 \\ R2 & 1.40 & -4.6, 8.2 & 10.138 \(\pm\) 0.019 \\ R3 & 4.28 & -8.9, 2.1 & 19.753 \(\pm\) 0.053 \\ R4 & 2.28 & -5.8, -9.4 & 10.306 \(\pm\) 0.024 \\ R5 & 1.44 & -1.6, -9.6 & 8.012 \(\pm\) 0.017 \\ R6 & 0.20 & -2.3, -9.8 & 1.426 \(\pm\) 0.003 \\ R7 & 2.72 & 7.0, -7.0 & 11.355 \(\pm\) 0.026 \\ R8 & 4.12 & 8.4, -3.8 & 15.943 \(\pm\) 0.044 \\ R9 & 4.32 & 2.6, -9.8 & 16.294 \(\pm\) 0.040 \\ R10 & 2.88 & 3.6, -8.8 & 14.017 \(\pm\) 0.030 \\ \hline \end{tabular} \({}^{a}\) Offsets from centre of the galaxy to the centre of each individual region.
* Region near SN explosion.
\end{table}
Table 3: Selection characteristics for observed CNSFRs. The complete table is available online; here only a part is shown as an example.
Figure 2: Upper panel: flux in individual spaxels as a function of radius: continuum near H\(\alpha\) with grey dots, integrated flux of this line with coloured dots and limits of the ring marked with blue vertical lines. Lower panel: HII regions selected with our segregation program. SN 2014cy and SN 1993R are also plotted (see text). The logarithmic colour scale corresponds to the colours presented in the top panel. Orientation is north up, east to the left. The physical scale is represented at the bottom left corner of the map. The limits of the ring are marked with blue circles.
discarding the most uncertain values. Additionally, we have measured the weak HeI\(\lambda\) 6678 Å line with \(A_{g}>1\sigma_{l}\). In the case of [SIII]\(\lambda\) 6312 Å and [OII]\(\lambda\lambda\) 7320,30 Å, their fluxes have been measured on the spectrum extracted by integration of the pixels where these lines are detected with a tolerance larger than 1\(\sigma_{l}\). The [SIII] line has been finally measured with sufficient accuracy in 40 ring HII regions, while only 13 regions allowed accurate measurements of the [OII] lines.
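A minimal sketch of the line-fitting step of Eq. (1) is shown below; the initial guesses, bounds and fixed 30 Å window are illustrative, since in our actual procedure the window is adjusted line by line as described above:

```python
import numpy as np
from scipy.optimize import curve_fit

def line_model(wl, a_g, wl_g, sig_g, a_c):
    # Eq. (1): single Gaussian plus the A_c correction to the continuum
    return a_g * np.exp(-0.5 * ((wl - wl_g) / sig_g) ** 2) + a_c

def fit_line(wl, flux, wl0, sigma_c):
    win = np.abs(wl - wl0) < 15.0                        # ~30 A window
    p0 = [flux[win].max(), wl0, 2.0, 0.0]                # illustrative guesses
    bnds = ([0.0, wl0 - 5.0, 0.5, -sigma_c],
            [np.inf, wl0 + 5.0, 10.0, sigma_c])          # A_c in [-sigma_c, sigma_c]
    popt, _ = curve_fit(line_model, wl[win], flux[win], p0=p0, bounds=bnds)
    a_g, wl_g, sig_g, a_c = popt
    flux_line = a_g * sig_g * np.sqrt(2.0 * np.pi)       # integrated Gaussian flux
    sigma_l = np.std(flux[win] - line_model(wl[win], *popt))
    return flux_line, a_g, sigma_l                       # keep only if a_g > 3*sigma_l
```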
The errors in the observed fluxes have been calculated from the expression given in Gonzalez-Delgado et al. (1994):
\[\Delta[F_{\lambda}]=\sigma_{l}\cdot N^{1/2}[1+EW/(N\Delta)]^{1/2} \tag{2}\]
where \(\Delta[F]\) is the error in the line flux, \(\sigma_{l}\) represents the standard deviation of the local continuum, N is the number of pixels used in the Gaussian fit, \(\Delta\) is the wavelength dispersion (1.25 Å/pix) and EW is the line equivalent width. The mean value of the continuum flux, \(F_{c}(\lambda)\) + \(A_{c}\), in the wavelength range [\(\lambda_{line}-\sigma_{g}\), \(\lambda_{line}+\sigma_{g}\)] has been used to compute the latter value.
Regarding the effects of reddening, we have assumed a simple screen distribution of the dust and the same extinction for the emission lines and the stellar continuum. The measured line intensities have been corrected using a reddening constant c(H\(\beta\)), derived from the observed H\(\alpha\)/H\(\beta\) ratio, adopting the Galactic extinction law of Miller & Mathews (1972) with a specific attenuation of R\({}_{v}\) = 2.97. A theoretical value of 2.87 for the H\(\alpha\)/H\(\beta\) ratio has been assumed (Osterbrock & Ferland 2006, for n\({}_{e}\) = 100 cm\({}^{-3}\) and T\({}_{e}\) = 10\({}^{4}\) K). Given the wavelengths of the lines involved in our study, we have measured their intensities with respect to H\(\alpha\), which has a high S/N and is less affected by underlying absorption and reddening effects, thus providing a more precise measurement. Therefore the extinction correction has been applied using the following equation:
\[log\left(\frac{I(\lambda)}{I(H\alpha)}\right)=log\left(\frac{F(\lambda)}{F(H \alpha)}\right)+c(H\beta)\cdot(f(\lambda)-f(H\alpha)) \tag{3}\]
where c(H\(\beta\)) is the reddening constant, f(\(\lambda\)) gives the value of the logarithmic extinction normalised to H\(\beta\), and F\({}_{\lambda}\) and I\({}_{\lambda}\) are the observed and corrected emission line fluxes at wavelength \(\lambda\), respectively. The I(\(\lambda\))/I(H\(\alpha\)) ratios have then been converted to I(\(\lambda\))/I(H\(\beta\)) assuming the aforementioned value of 2.87 for the theoretical H\(\alpha\)/H\(\beta\) Balmer decrement. The corresponding errors have been propagated in quadrature.
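In practice, Eqs. (2) and (3) amount to the two short functions sketched below (argument names are illustrative; the f-values are those listed in Table 4):

```python
import numpy as np

def line_flux_error(sigma_l, n_pix, ew, disp=1.25):
    """Eq. (2): flux error from the local continuum rms (Gonzalez-Delgado
    et al. 1994); disp is the wavelength dispersion in A/pix."""
    return sigma_l * np.sqrt(n_pix) * np.sqrt(1.0 + ew / (n_pix * disp))

def deredden(f_line, f_halpha, c_hb, f_lam, f_ha=-0.313):
    """Eq. (3): reddening-corrected intensity relative to Hbeta, measuring
    lines against Halpha and using the theoretical decrement of 2.87."""
    log_i_ha = np.log10(f_line / f_halpha) + c_hb * (f_lam - f_ha)
    return 2.87 * 10.0 ** log_i_ha        # I(lambda)/I(Hbeta)
```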
Table 4 shows, for each selected ring HII region, the reddening
Figure 3: Extracted and reddening corrected spectrum of region R1. Flux is expressed in units of \(10^{-15}\) erg/s/cm\({}^{2}\)/Å.
corrected emission line intensities of the strong lines relative to H\(\beta\), and the corresponding reddening constant.
### Integrated magnitudes
We have calculated integrated fluxes inside the Sloan Digital Sky Survey (SDSS) filters for each region using the following expression (Fukugita et al., 1995):
\[f_{\lambda}=\int_{\Delta\lambda}F(\lambda)\cdot\lambda\cdot T_{\lambda}\cdot d\lambda \tag{4}\]
where T\({}_{\lambda}\) denotes the response curves of the r and i bands and \(F(\lambda)\) denotes the extracted spectrum of each region. The origin of the additional \(\lambda\) term in this expression lies in the assumption of a photon-counting detector whose response is proportional to the photon-count rate, \(\lambda/(hc)\). Before computation, all nebular emission lines present in the spectra have been masked assuming a width of \(\pm 8\) Å around the line central wavelength. The spectra have also been corrected for reddening using the c(H\(\beta\)) values calculated from the H\(\alpha\)/H\(\beta\) Balmer decrement (see Sec. 3.3).
The apparent magnitudes of the selected HII regions have been calculated from their integrated fluxes using:
\[m_{\lambda}=-2.5\cdot log\left(\frac{f_{\lambda}}{F_{0}^{\lambda}\cdot\int_{ \Delta\lambda}\lambda\cdot T_{\lambda}\cdot d\lambda}\right) \tag{5}\]
where \(F_{0}^{\lambda}\) is the constant flux density per unit wavelength in the AB System, \(2.88637\times 10^{-9}\) erg/cm\({}^{2}\)/s/A for r band (Zero Point = 21.35 mag) and \(1.95711\times 10^{-9}\) erg/cm\({}^{2}\)/s/A for i band (Zero Point = 21.77 mag). We have assumed a distance of 22.2 Mpc (Tully & Fisher, 1988, see Tab. 1) to translate these values into absolute magnitudes. Finally, we have calculated the r-i colours of the regions.
Integrated flux errors have been calculated from the global continuum dispersion and the width of each filter as:
\[\Delta[f_{\lambda}]=\sigma_{c}\cdot W_{eff}=\sigma_{c}\cdot\frac{\int T_{ \lambda}\cdot d\lambda}{T_{\lambda}^{max}} \tag{6}\]
where \(\Delta[f]\) is the error in the integrated flux in one band, \(\sigma_{c}\) represents the standard deviation of the global continuum flux and T\({}_{\lambda}\) is the response curve of each filter. W\({}_{eff}\) represents the width of a rectangle with the same area as that covered by the filter and a height equal to its maximum transmission. The calculated errors have been propagated in quadrature for the rest of the derived quantities (apparent magnitudes, absolute magnitudes and r-i colours).
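The synthetic photometry of Eqs. (4)-(6) can be sketched as follows (a minimal sketch; `wl`, `f_lam` and `t_resp` stand for the wavelength grid, the masked and dereddened spectrum and the filter response, and the distance modulus for the adopted 22.2 Mpc is \(\simeq\) 31.73 mag):

```python
import numpy as np

def band_flux_and_mag(wl, f_lam, t_resp, f0):
    """Eqs. (4)-(5): photon-weighted band flux and its AB magnitude;
    f0 is the AB flux density per unit wavelength quoted in the text."""
    f_band = np.trapz(f_lam * wl * t_resp, wl)                 # Eq. (4)
    m_ab = -2.5 * np.log10(f_band / (f0 * np.trapz(wl * t_resp, wl)))
    return f_band, m_ab                                        # Eq. (5)

def band_flux_error(sigma_c, wl, t_resp):
    """Eq. (6): error from the global continuum rms times the effective
    width of the filter."""
    return sigma_c * np.trapz(t_resp, wl) / t_resp.max()

# Absolute magnitudes: M = m - 5*log10(D / 10 pc) = m - 31.73 for 22.2 Mpc
```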
Table 5 shows the integrated magnitudes and derived quantities for each HII region within the ring and lists in columns 1 to 6: (1) the region ID, (2) the apparent magnitude for the i band, (3) the apparent magnitude for the r band, (4) the absolute magnitude for the i band, (5) the absolute magnitude for the r band and (6) the r-i colour.
Figure 4 shows the colour-magnitude diagram of the studied ionised regions, which provides a first look at the properties of their underlying stellar populations. Ring HII regions show r-band luminosities larger than the rest. In regions outside the ring there is a trend of redder r-i colours for lower r-band luminosities, which seems to be real given the small size of the observational errors involved (smaller than the symbols in the graph).
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline Line & H\(\beta\) & [OIII] & [OIII] & [NII] & H\(\alpha\) & [NII] & HeI & [SII] & [SII] & [SIII] \\ & \(\lambda\) & 4861 & 4959 & 5007 & 6548 & 6563 & 6584 & 6678 & 6717 & 6731 & 9069 \\ & f(.t) & 0.000 & -0.024 & -0.035 & -0.311 & -0.313 & -0.316 & -0.329 & -0.334 & -0.336 & -0.561 \\ \hline Region ID & c(H\(\beta\)) & l(H\(\beta\))\({}^{a}\) & & & & & & & l(\(\lambda\))\({}^{b}\) & & & \\ \hline R1* & 0.44 \(\pm\) 0.01 & 12.33 \(\pm\) 0.24 & 158 \(\pm\) 3 & 444 \(\pm\) 4 & 438 \(\pm\) 2 & 2870 \(\pm\) 24 & 1342 \(\pm\) 3 & 22 \(\pm\) 0 & 334 \(\pm\) 2 & 284 \(\pm\) 2 & 246 \(\pm\) 3 \\ R2 & 0.58 \(\pm\) 0.01 & 8.85 \(\pm\) 0.24 & 147 \(\pm\) 4 & 422 \(\pm\) 5 & 461 \(\pm\) 3 & 2870 \(\pm\) 33 & 1423 \(\pm\) 5 & 23 \(\pm\) 1 & 367 \(\pm\) 2 & 289 \(\pm\) 2 & 210 \(\pm\) 4 \\ R3 & 0.47 \(\pm\) 0.02 & 14.53 \(\pm\) 0.57 & 85 \(\pm\) 6 & 224 \(\pm\) 7 & 421 \(\pm\) 4 & 2870 \(\pm\) 48 & 1295 \(\pm\) 6 & 16 \(\pm\) 1 & 460 \(\pm\) 4 & 339 \(\pm\) 3 & 113 \(\pm\) 4 \\ R4 & 0.39 \(\pm\) 0.01 & 6.62 \(\pm\) 0.22 & 101 \(\pm\) 5 & 283 \(\pm\) 6 & 442 \(\pm\) 3 & 2870 \(\pm\) 41 & 1372 \(\pm\) 6 & 18 \(\pm\) 1 & 525 \(\pm\) 4 & 389 \(\pm\) 3 & 119 \(\pm\) 5 \\ R5 & 0.46 \(\pm\) 0.01 & 5.79 \(\pm\) 0.19 & 103 \(\pm\) 5 & 270 \(\pm\) 6 & 439 \(\pm\) 2 & 2870 \(\pm\) 40 & 1359 \(\pm\) 5 & 19 \(\pm\) 1 & 367 \(\pm\) 3 & 274 \(\pm\) 3 & 151 \(\pm\) 4 \\ R6 & 0.66 \(\pm\) 0.02 & 1.42 \(\pm\) 0.06 & 91 \(\pm\) 7 & 248 \(\pm\) 8 & 442 \(\pm\) 3 & 2870 \(\pm\) 52 & 1373 \(\pm\) 6 & 21 \(\pm\) 1 & 364 \(\pm\) 3 & 259 \(\pm\) 2 & 150 \(\pm\) 3 \\ R7 & 0.44 \(\pm\) 0.02 & 7.97 \(\pm\) 0.29 & 81 \(\pm\) 6 & 231 \(\pm\) 7 & 387 \(\pm\) 3 & 2870 \(\pm\) 45 & 1182 \(\pm\) 5 & 18 \(\pm\) 1 & 415 \(\pm\) 3 & 299 \(\pm\) 3 & 141 \(\pm\) 5 \\ R8 & 0.42 \(\pm\) 0.02 & 10.72 \(\pm\) 0.44 & 82 \(\pm\) 7 & 227 \(\pm\) 7 & 411 \(\pm\) 4 & 2870 \(\pm\) 49 & 1263 \(\pm\) 7 & 18 \(\pm\) 1 & 435 \(\pm\) 4 & 323 \(\pm\) 4 & 121 \(\pm\) 5 \\ R9 & 0.56 \(\pm\) 0.02 & 13.73 \(\pm\) 0.6 & 102 \(\pm\) 8 & 266 \(\pm\) 8 & 419 \(\pm\) 4 & 2870 \(\pm\) 53 & 1292 \(\pm\) 6 & 15 \(\pm\) 1 & 445 \(\pm\) 4 & 323 \(\pm\) 3 & 114 \(\pm\) 5 \\ R10 & 0.41 \(\pm\) 0.01 & 9.37 \(\pm\) 0.34 & 58 \(\pm\) 5 & 170 \(\pm\) 6 & 367 \(\pm\) 3 & 2870 \(\pm\) 44 & 1124 \(\pm\) 5 & 14 \(\pm\) 1 & 399 \(\pm\) 3 & 291 \(\pm\) 3 & 129 \(\pm\) 4 \\ \hline \end{tabular} \({}^{a}\) In units of \(10^{-15}\) erg/s/cm\({}^{2}\).
\({}^{b}\) Values relative to I(H\(\beta\)), in units of \(10^{-3}\).
* Region near SN explosion.
\end{table}
Table 4: Reddening corrected emission line intensities. The complete table is available online; here only a part is shown as an example.
Figure 4: Colour-magnitude diagram for HII regions inside (blue dots) and outside (purple squares) the galaxy ring.
### Chemical abundance determinations
HII region metallicities have been traced by their sulphur abundances following the methodology described in Diaz & Zamora (2022), based on red-to-near-infrared spectroscopy. The wavelength range used includes the [SIII]\(\lambda\) 6312 Å and [SII]\(\lambda\lambda\) 6717,31 Å lines in the red part of the spectrum and the [SIII]\(\lambda\lambda\) 9069,9532 Å lines in the far-red. These lines are analogous to the [OII] and [OIII] lines commonly used to derive oxygen abundances in nebulae. Since sulphur and oxygen are both produced in massive stars, their abundances are expected to be proportional to each other. Due to the longer wavelengths of the sulphur lines, reddening effects are less important, which is particularly relevant for the study of our observed regions since they are located in the central part of the galaxy. Also, the [SII] and [SIII] lines can be measured relative to nearby hydrogen recombination lines (H\(\alpha\), P\({}_{9}\)) in order to minimise the uncertainties. In addition, sulphur, contrary to oxygen, does not seem to be depleted in diffuse clouds (Jenkins, 2009) and, due to the lower energies of the transitions involved, the electron temperature sensitive line of [SIII] at \(\lambda\) 6312 Å can be detected and measured up to, at least, solar abundances (Diaz et al., 2007). This methodology is ideal for MUSE data, which do not include the [OII] lines at \(\lambda\lambda\) 3727,29 Å. Using this approach, the [SIII] electron temperature, \(T_{e}([SIII])\), can be derived using the ratio of the nebular to auroral lines of [SIII], which originate from different upper levels with different excitation energies and hence depend strongly on temperature:
\[R_{S3}=\frac{I(\lambda 9069,9532)}{I(\lambda 6312)} \tag{7}\]
where I(\(\lambda\) 9069,9532 Å) denotes the sum of the two near-infrared [SIII] lines. MUSE data cover only from 4800 to 9300 Å and therefore do not include the [SIII]\(\lambda\) 9532 Å line; we have used the theoretical relation [SIII]\(\lambda\) 9532 Å / [SIII]\(\lambda\) 9069 Å = 2.44 in order to account for this fact. The following expression has been used (see Diaz & Zamora, 2022):
\[t_{e}([SIII])=0.5597-1.808\cdot 10^{-4}R_{S3}+\frac{22.66}{R_{S3}} \tag{8}\]
where \(t_{e}([SIII])=10^{-4}\cdot T_{e}([SIII])\,[K]\). This expression has been calculated using PyNeb (Luridiana et al., 2015) for values of the electron temperature, T\({}_{e}\)([SIII]), between 5000 K and 25000 K, an electron density of n\({}_{e}\) = 100 cm\({}^{-3}\) and the atomic data references listed in Table 6. This equation has a very weak dependence on electron density, increasing by about 3% for values of n\({}_{e}\) between 100 and 1000 cm\({}^{-3}\) (Perez-Montero, 2017). We have used PyNeb (Luridiana et al., 2015) to calculate the HII region electron densities, \(n_{e}\), from the [SII]\(\lambda\) 6717 Å / [SII]\(\lambda\) 6731 Å ratio using the atomic coefficients listed in Tab. 6. This ratio has a slight temperature dependence; a value of T\({}_{e}\) = 10000 K has been assumed. This value is close to the mean value of 9197 K derived from the relation between T\({}_{e}\)([SIII]) and the sulphur abundance given below (see Diaz & Zamora, 2022):
\[t_{e}([SIII])=(19.226\pm 0.028)+(-4.7274\pm 0.0081)\cdot(12+log(S/H))+(0.29879\pm 0.00058)\cdot(12+log(S/H))^{2} \tag{9}\]
The derived electron densities of the HII regions within the ring are found to be low and within a narrow range of values centred around 55 cm\({}^{-3}\), with a median value of 61 cm\({}^{-3}\) and a standard deviation of 37 cm\({}^{-3}\), at the lower limit of densities measurable with these lines. Only eight regions (R1, R2, R16, R19, R51, R54, R66 and R70) have values higher than 100 cm\({}^{-3}\), and only three of them (R1, R75 and R66) differ significantly from the median value (>3\(\sigma\)). About 50% of the regions show electron density values lower than 50 cm\({}^{-3}\), and hence undetermined. Mazzuca et al. (2006) also found similar results, concluding that the ring is predominantly populated by clouds of very low electron density, which are typical of extragalactic HII regions, but lower than those derived for CNSFRs (Diaz et al., 2007).
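For reference, the temperature and density determinations reduce to the short sketch below (the density call assumes the `Atom.getTemDen` interface of PyNeb, Luridiana et al. 2015):

```python
import pyneb as pn

def te_siii(i9069, i6312):
    """Eqs. (7)-(8): t_e([SIII]) in units of 1e4 K; the unobserved 9532 A
    line is included through I(9069+9532) = 3.44 * I(9069)."""
    r_s3 = 3.44 * i9069 / i6312
    return 0.5597 - 1.808e-4 * r_s3 + 22.66 / r_s3

def ne_sii(i6716, i6731, tem=1.0e4):
    """Electron density from the [SII] doublet ratio at T_e = 10000 K,
    using the atomic data configured in PyNeb (Table 6)."""
    s2 = pn.Atom('S', 2)
    return s2.getTemDen(int_ratio=i6716 / i6731, tem=tem,
                        wave1=6716, wave2=6731)
```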
The weak [SIII]\(\lambda\) 6312 A line has been measured with a S/N higher than 1 in \(\sim\)45 % (40 out of 88) of the HII regions within the ring. For these regions total sulphur abundances have been derived directly using the described method.
In the ring HII regions of our sample, located in the central part of the galaxy and relatively close to its nucleus, most of the sulphur
\begin{table}
\begin{tabular}{c c c} \hline Ionisation state & Collisional strengths & Transition probabilities \\ \hline \(S^{+}\) & Tayal \& Zatsarinny (2010) & Podobedova et al. (2009) \\ \(S^{2+}\) & Hudson et al. (2012) & Podobedova et al. (2009) \\ \(O^{+}\) & Pradhan et al. (2006) & Zeippen (1982) \\ & Tayal (2007) & Wiese et al. (1996) \\ \(O^{2+}\) & Aggarwal \& Keenan (1999) & Storey \& Zeippen (2000) \\ & & Wiese et al. (1996) \\ \hline Ion & \multicolumn{2}{c}{Atomic data} \\ \hline H & \multicolumn{2}{c}{Storey \& Hummer (1995)} \\ \hline \end{tabular}
\end{table}
Table 6: Atomic data sources.
\begin{table}
\begin{tabular}{c c c c c c} \hline Region ID & m\({}_{i}\) (mag) & m\({}_{r}\) (mag) & M\({}_{i}\) (mag) & M\({}_{r}\) (mag) & r-i (mag) \\ \hline R1* & \(17.96\pm(0.23\times 10^{-4})\) & \(18.01\pm(0.15\times 10^{-4})\) & \(-13.77\pm(0.23\times 10^{-4})\) & \(-13.72\pm(0.28\times 10^{-4})\) & \(0.049\pm(0.275\times 10^{-4})\) \\ R2 & \(17.92\pm(0.21\times 10^{-4})\) & \(17.97\pm(0.14\times 10^{-4})\) & \(-13.81\pm(0.21\times 10^{-4})\) & \(-13.76\pm(0.25\times 10^{-4})\) & \(0.050\pm(0.247\times 10^{-4})\) \\ R3 & \(16.76\pm(0.21\times 10^{-4})\) & \(16.85\pm(0.14\times 10^{-4})\) & \(-14.97\pm(0.21\times 10^{-4})\) & \(-14.89\pm(0.25\times 10^{-4})\) & \(0.085\pm(0.254\times 10^{-4})\) \\ R4 & \(17.69\pm(0.26\times 10^{-4})\) & \(17.77\pm(0.18\times 10^{-4})\) & \(-14.04\pm(0.26\times 10^{-4})\) & \(-13.96\pm(0.32\times 10^{-4})\) & \(0.081\pm(0.319\times 10^{-4})\) \\ R5 & \(18.00\pm(0.22\times 10^{-4})\) & \(18.08\pm(0.15\times 10^{-4})\) & \(-13.73\pm(0.22\times 10^{-4})\) & \(-13.65\pm(0.26\times 10^{-4})\) & \(0.085\pm(0.263\times 10^{-4})\) \\ R6 & \(19.90\pm(0.18\times 10^{-4})\) & \(19.90\pm(0.12\times 10^{-4})\) & \(-11.83\pm(0.18\times 10^{-4})\) & \(-11.83\pm(0.22\times 10^{-4})\) & \(0.003\pm(0.215\times 10^{-4})\) \\ R7 & \(17.37\pm(0.24\times 10^{-4})\) & \(17.42\pm(0.16\times 10^{-4})\) & \(-14.36\pm(0.24\times 10^{-4})\) & \(-14.31\pm(0.29\times 10^{-4})\) & \(0.049\pm(0.285\times 10^{-4})\) \\ R8 & \(16.88\pm(0.23\times 10^{-4})\) & \(16.97\pm(0.16\times 10^{-4})\) & \(-14.85\pm(0.23\times 10^{-4})\) & \(-14.76\pm(0.27\times 10^{-4})\) & \(0.093\pm(0.275\times 10^{-4})\) \\ R9 & \(16.76\pm(0.21\times 10^{-4})\) & \(16.79\pm(0.14\times 10^{-4})\) & \(-14.97\pm(0.21\times 10^{-4})\) & \(-14.95\pm(0.25\times 10^{-4})\) & \(0.024\pm(0.249\times 10^{-4})\) \\ R10 & \(17.27\pm(0.22\times 10^{-4})\) & ... \\ \hline \end{tabular} \end{table} Table 5: Integrated magnitudes and r-i colours for the observed CNSFRs. The complete table is available online; here only a part is shown as an example.
is expected to be in the form of S\({}^{+}\) and S\({}^{++}\). Given the low ionisation potential of sulphur, the contribution by S\({}^{0}\) can be neglected, and no contribution from S\({}^{3+}\) is expected in HII regions of moderate to high metallicity (Diaz & Zamora, 2022). Therefore, we have assumed a single zone in which \(T_{e}(S^{+})\sim T_{e}(S^{++})=T_{e}([SIII])\), the characteristic electron temperature of the region where both the S\({}^{+}\) and S\({}^{++}\) ions overlap, which encompasses almost the whole nebula. Abundances have been calculated using the expressions below, derived with the PyNeb package using the atomic coefficients listed in Tab. 6:
\[\begin{split} 12+log\left(\frac{S^{+}}{H^{+}}\right)=& log \left(\frac{I(\lambda 6717,31)}{I(H_{\beta})}\right)+5.516+\\ &+\frac{0.884}{t_{e}([SIII])}-0.480\cdot log(t_{e}([SIII]))\end{split} \tag{10}\]
\[\begin{split} 12+log\left(\frac{S^{2+}}{H^{+}}\right)=& log \left(\frac{I(\lambda 9069,9532)}{I(H_{\beta})}\right)+6.059+\\ &+\frac{0.608}{t_{e}([SIII])}-0.706\cdot log(t_{e}([SIII]))\end{split} \tag{11}\]
where I(\(\lambda\) 6717,31 Å) denotes the sum of the intensities of the two red [SII] lines, I(\(\lambda\) 9069,9532 Å) denotes the sum of the intensities of the two near-infrared [SIII] lines (i.e. 3.44 times the intensity of the [SIII]\(\lambda\) 9069 Å line), I(H\(\beta\)) denotes the H\(\beta\) intensity and t\({}_{e}\)([SIII]) denotes the electron temperature in units of 10\({}^{4}\) K. The errors involved in the fitting procedures used for the derivation of the above expressions are lower than the observational errors, thus we have propagated the latter to calculate the final emission line intensity uncertainties.
The total abundance of sulphur has then been calculated as:
\[12+log\left(\frac{S}{H}\right)=12+log\left(\frac{S^{+}}{H^{+}}+\frac{S^{++}}{ H^{+}}\right) \tag{12}\]
and are given in Table 7 for the 39 HII regions with measurements of the [SIII]\(\lambda\) 6312 Å line. The table lists in columns 1 to 7: (1) the region ID; (2) the measured [SIII]\(\lambda\) 6312 Å emission line intensity; (3) the R\({}_{S3}\) line ratio; (4) the [SIII] electron temperature; (5) and (6) the ionic abundances of S\({}^{+}\) and S\({}^{++}\) relative to H\({}^{+}\); and (7) the total S/H abundance. These regions show sulphur abundance values that range from 6.52 to 7.49 in units of 12+log(S/H) (12+log(S/H)\({}_{\odot}\) = 7.12; Asplund et al., 2009a), with errors between 0.007 and 0.049 dex.
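The abundance computation of Eqs. (10)-(12) then takes the following form (a minimal sketch; line intensities are relative to I(H\(\beta\)) = 1 and `te4` is t\({}_{e}\)([SIII]) in units of 10\({}^{4}\) K):

```python
import numpy as np

def sulphur_abundances(i_sii, i9069, te4):
    """Eqs. (10)-(12): ionic and total sulphur abundances.
    i_sii : I([SII] 6717+6731) / I(Hbeta)
    i9069 : I([SIII] 9069) / I(Hbeta); I(9069+9532) = 3.44 * I(9069)."""
    s_plus = (np.log10(i_sii) + 5.516
              + 0.884 / te4 - 0.480 * np.log10(te4))       # 12+log(S+/H+)
    s_2plus = (np.log10(3.44 * i9069) + 6.059
               + 0.608 / te4 - 0.706 * np.log10(te4))      # 12+log(S++/H+)
    s_total = np.log10(10.0 ** s_plus + 10.0 ** s_2plus)   # 12+log(S/H)
    return s_plus, s_2plus, s_total
```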
For the rest of the regions no reliable detection of the [SIII]\(\lambda\) 6312 Å line could be made, and hence we had to rely on empirical calibrations to derive their sulphur abundances. This has been done through the use of the S\({}_{23}\) parameter, which is analogous to the R\({}_{23}\) parameter commonly used for oxygen and can be defined as:
\[S23=\frac{([SII]\lambda 6717,6731+[SIII]\lambda 9069,9532)}{H\alpha}\cdot\frac{ H\alpha}{H\beta} \tag{13}\]
This parameter has little dependence on reddening effects or calibration uncertainties, since the lines involved can be measured relative to nearby hydrogen recombination lines. Also, the lines are observable even at over-solar abundances given their weaker dependence on electron temperature.
A recent calibration of the S\({}_{23}\) parameter has been presented in Diaz & Zamora (2022) and has the following expression:
\[\begin{split} 12+log\left(\frac{S}{H}\right)=(6.636\pm 0.011)+\\ +(2.202\pm 0.050)\cdot logS_{23}+(1.060\pm 0.098)\cdot(logS_{23})^{2 }\end{split} \tag{14}\]
The sulphur abundances derived from this calibration for all the objects in our sample are given in Table 8.
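The empirical route of Eqs. (13)-(14) can be sketched as follows (inputs are assumed to be measured relative to H\(\alpha\) and converted with the theoretical Balmer decrement, as in Eq. 13):

```python
import numpy as np

def s23_abundance(i_sii_ha, i_siii_ha, balmer=2.87):
    """Eqs. (13)-(14): sulphur abundance from the S23 parameter with the
    Diaz & Zamora (2022) calibration; inputs are relative to Halpha."""
    log_s23 = np.log10((i_sii_ha + i_siii_ha) * balmer)
    return 6.636 + 2.202 * log_s23 + 1.060 * log_s23 ** 2
```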
## 4 Discussion
### Ionisation nature
According to Mazzuca et al. (2006), the emission line ratios within the ring are consistent with the predictions of star forming models, although regions near the inner edge of the ring are compatible with the ionisation being produced by shocks or an AGN component, something that could be related to the LINER nature of the galaxy nucleus. The upper panel of Figure 6 shows the classical BPT (Baldwin et al., 1981) diagnostic diagram involving the [NII]\(\lambda\) 6584 / H\(\alpha\) and [OIII]\(\lambda\) 5007 / H\(\beta\) emission line intensity ratios for the observed regions. The star-forming regions within the ring are shown to lie on the moderate to high metallicity end of the empirical star forming sequence defined by Kauffmann et al. (2003) from observations of a large sample of SDSS emission line galaxies. On the other hand, the data analysed here allow us to gain some insight into the nature of the nuclear ionised regions. The upper panels of Figure 5 show maps of the central part of NGC 7742 in the [OIII]\(\lambda\) 5007 Å (left) and [NII]\(\lambda\) 6583 Å (right) emission lines, where a small circumnuclear ring at about 200 pc from the galaxy nucleus is clearly seen. We have extracted the spectra corresponding to the galaxy nucleus and three of the circumnuclear regions, (a), (b) and (c), and measured their [OIII]/H\(\beta\) and [NII]/H\(\alpha\) ratios. The lower panel of the figure shows the spectrum of region (a), where the large [NII]/H\(\alpha\) ratio characteristic of LINER-type spectra can be seen. Below the optical spectrum we show the Gaussian fits to the H\(\beta\), [OIII], H\(\alpha\) and [NII] lines.
The galaxy nucleus and the three selected circumnuclear regions, marked in the BPT diagram with a star and purple inverted triangles respectively, lie below and to the right of the yellow line that marks the division between Seyfert and LINER spectra (Schawinski et al., 2007), thus showing that the emission is probably dominated by shocks or a non-thermal AGN component of low activity. Three of our segmented ring regions to the South-East, R85, R86 and R87 (see Fig. 5), lie on the BPT diagram in the zone between the Kauffmann et al. (2003) empirical sequence and the one derived by Kewley et al. (2001) from theoretical models of starburst galaxies, and may be somewhat affected by the radiation from the galaxy nucleus; consequently, they have not been considered in our analysis.
The BPT diagram shown is the most commonly used diagnostic, mostly because it is almost insensitive to reddening effects; however, it is sensitive to the N/O ratio, which is difficult to estimate for nuclear and circumnuclear ionised regions. On the other hand, the near-infrared sulphur emission lines constitute a powerful diagnostic to distinguish between shock and photo-ionisation mechanisms (see Diaz et al., 1985); they are independent of relative abundances and only slightly sensitive to reddening. The two lower panels of Figure 6 show the location on this diagram of the observed ring HII regions, colour coded for sulphur abundance (left) and ionisation parameter (right), where a segregation in these two parameters can clearly be seen. This can be further interpreted with the help
of photo-ionisation models. Using the Cloudy code (Ferland et al., 2013) we have computed models for ionisation-bounded nebulae assuming a plane-parallel geometry. The computed models have ionisation parameter values log(u) = -4.09, -3.46 and -2.76 and metallicity values 12+log(S/H) = 6.52, 6.89 and 7.06, with the rest of the elements in solar proportions. These parameters cover the range of values derived for the regions. A constant electron density of 100 cm\({}^{-3}\) has been assumed. The nebula is ionised by a young star cluster synthesised using the PopStar code (Molla et al., 2009) with a Salpeter (1955) IMF with lower and upper mass limits of 0.85 and 120 M\({}_{\odot}\) respectively, including the nebular continuum in a self-consistent way. We have selected an age of 4.7 Myr to represent the simulated clusters. In the right panel we can see that, within the range of our derived values, regions with high ionisation parameters tend to occupy the lower right zone of the diagram, while in the left panel regions with low metallicities lie in the upper left zone. No correlation is apparent between these two parameters, ionisation parameter and metallicity.
### Characteristics of the observed CNSFRs
The measured H\(\alpha\) fluxes for the selected ring HII regions, prior to extinction correction, are between \((226.4\pm 2.7)\times 10^{-18}\) erg/s/cm\({}^{2}\) and \((1762.1\pm 2.3)\times 10^{-17}\) erg/s/cm\({}^{2}\).
We can compare these results with those obtained by Mazzuca et al. (2008) for this galaxy using narrow band photometry data obtained with the Auxiliary Port camera of the William Herschel Telescope (WHT) at a spatial resolution comparable to that of the MUSE data. For this comparison we have identified 12 matching regions from Fig. 3 of Mazzuca et al. (2008) and converted our measured fluxes to their tabulated values by adding to them the continuum fluxes in a wavelength band 50 Å wide centred at H\(\alpha\) and the fluxes of the [NII]\(\lambda\lambda\)6548,84 Å emission lines also included within this band. Although a linear correlation seems to exist between both sets of measurements, Mazzuca et al.'s H\(\alpha\) fluxes are found to be between \(\sim 6.8\) and \(\sim 37.6\) times our flux values. Only 38 HII regions within the ring were selected by these authors; hence their regions could be larger than those selected in this work, which would explain at least part of the difference.
In our analysis, we have used a simple screen distribution for the dust, assumed that extinction affects emission lines and stellar continuum in a similar way, and derived the extinction from the observed Balmer H\(\alpha\)/H\(\beta\) ratio as compared with the theoretical one. Figure 7 shows that no relation is apparent between the derived extinction values, A\({}_{V}\), in magnitudes and the derived sulphur abundances tracing the global metal content; hence we can conclude that dust is distributed along the line of sight and is not an intrinsic characteristic of each particular region. The fact that the extinction values are very similar both in the regions within the ring and outside it supports our assumption.
The number of hydrogen ionising photons, Q(H\({}_{0}\)), in each of the HII regions can be calculated from their extinction-corrected H\(\alpha\) fluxes, F(H\(\alpha\)), once translated into luminosities. Normalising to a typical local-universe galaxy distance of 10 Mpc, we can write L(H\(\alpha\)) in erg s\({}^{-1}\) as:
\[L\ (H\alpha)=1.2\cdot 10^{38}\left(\frac{F(H\alpha)}{10^{-14}}\right)\left( \frac{D}{10}\right)^{2} \tag{15}\]
where F(H\(\alpha\)) is expressed in erg s\({}^{-1}\) cm\({}^{-2}\) and D is the distance to NGC 7742 which has been taken as 22.2 Mpc (see Tab. 1). The corresponding number of hydrogen ionising photons per second is:
\[Q\ (H_{0})=7.31\cdot 10^{11}L(H\alpha)s^{-1} \tag{16}\]
where L(H\(\alpha\)) is expressed in erg s\({}^{-1}\) (see, for example, Gonzalez-Delgado et al., 1995). This equation has been derived using the recombination coefficient of the H\(\alpha\) line assuming a constant value of the electron density of 100 cm\({}^{-3}\), a temperature of \(10^{4}\) K and case B recombination (Osterbrock & Ferland, 2006).
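The two relations above chain together directly; the following minimal sketch (with a hypothetical input flux, chosen only for illustration) reproduces the unit conventions of Eqs. 15 and 16.

```python
def halpha_luminosity(f_halpha, d_mpc):
    """L(Halpha) in erg/s (Eq. 15); f_halpha in erg/s/cm2, d_mpc in Mpc."""
    return 1.2e38 * (f_halpha / 1e-14) * (d_mpc / 10.0) ** 2

def ionising_photons(l_halpha):
    """Q(H0) in photons/s (Eq. 16; case B, n_e = 100 cm-3, T = 1e4 K)."""
    return 7.31e11 * l_halpha

# Hypothetical extinction-corrected flux of 1e-14 erg/s/cm2 at D = 22.2 Mpc
l_ha = halpha_luminosity(1e-14, 22.2)
print(l_ha, ionising_photons(l_ha))  # ~5.9e38 erg/s and ~4.3e50 photons/s
```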
For the ring regions this number is between \(1.7\times 10^{49}\) and 1.8
Table 7: Ionic and total sulphur abundances derived by the direct method for the CNSFRs with measured [SIII]\(\lambda\) 6312 Å line intensities. The complete table is available online; here only a part is shown as an example.

| Region ID | I([SIII]\(\lambda\)6312)\({}^{a}\) | R\({}_{\rm S3}\) | t\({}_{\rm e}\)([SIII])\({}^{b}\) | 12+log(S\({}^{+}\)/H\({}^{+}\)) | 12+log(S\({}^{++}\)/H\({}^{+}\)) | 12+log(S/H) |
|---|---|---|---|---|---|---|
| R1\({}^{*}\) | 59.2 ± 1.3 | 176.2 ± 1.6 | 0.6564 ± 0.0014 | 6.7413 ± 0.0034 | 7.0415 ± 0.0098 | 7.218 ± 0.007 |
| R2 | 39.6 ± 1.4 | 161.1 ± 2.8 | 0.6712 ± 0.0030 | 6.7331 ± 0.0062 | 6.9454 ± 0.0180 | 7.153 ± 0.011 |
| R3 | 29.1 ± 1.5 | 193.6 ± 2.9 | 0.6417 ± 0.0023 | 6.8886 ± 0.0057 | 6.7310 ± 0.0122 | 7.118 ± 0.009 |
| R4 | 13.6 ± 0.6 | 199.3 ± 2.4 | 0.6374 ± 0.0018 | 6.9577 ± 0.0045 | 6.7637 ± 0.0196 | 7.172 ± 0.008 |
| R5 | 21.4 ± 0.8 | 140.1 ± 2.2 | 0.6961 ± 0.0029 | 6.6678 ± 0.0060 | 6.7580 ± 0.0189 | 7.016 ± 0.011 |
| R6 | 6.5 ± 0.3 | 113.5 ± 1.5 | 0.7388 ± 0.0029 | 6.5699 ± 0.0055 | 6.6885 ± 0.0166 | 6.934 ± 0.010 |
| R9 | 30.1 ± 1.6 | 178.6 ± 2.4 | 0.6543 ± 0.0021 | 6.8409 ± 0.0053 | 6.7115 ± 0.0212 | 7.082 ± 0.010 |
| R11 | 10.4 ± 0.6 | 236.4 ± 2.7 | 0.6128 ± 0.0016 | 6.8517 ± 0.0053 | 6.8636 ± 0.0203 | 7.159 ± 0.011 |
| R13 | 14.8 ± 1.0 | 320.8 ± 3.0 | 0.5723 ± 0.0012 | 7.0735 ± 0.0050 | 7.1241 ± 0.0181 | 7.401 ± 0.010 |
| R14 | 11.9 ± 0.8 | 172.0 ± 2.7 | 0.6603 ± 0.0025 | 6.7984 ± 0.0070 | 6.7218 ± 0.0247 | 7.063 ± 0.012 |

\({}^{a}\) In units of \(10^{-18}\) erg/s/cm\({}^{2}\). \({}^{b}\) In units of \(10^{4}\) K. \({}^{*}\) Region near SN explosion.
Table 8: Sulphur abundances of the observed CNSFRs derived by empirical methods. The complete table is available online; here only a part is shown as an example.

| Region ID | S\({}_{23}\) | 12+log(S/H) |
|---|---|---|
| R1 | 1.463 ± 0.011 | 7.029 ± 0.016 |
| R2 | 1.378 ± 0.014 | 6.963 ± 0.017 |
| R3 | 1.187 ± 0.015 | 6.806 ± 0.018 |
| R4 | 1.324 ± 0.016 | 6.920 ± 0.018 |
| R5 | 1.158 ± 0.015 | 6.781 ± 0.017 |
| R6 | 1.140 ± 0.013 | 6.765 ± 0.016 |
| R7 | 1.199 ± 0.017 | 6.817 ± 0.019 |
| R8 | 1.173 ± 0.018 | 6.794 ± 0.020 |
| R9 | 1.160 ± 0.016 | 6.782 ± 0.018 |
| R10 | 1.135 ± 0.015 | 6.761 ± 0.018 |
\(\times 10^{51}\) photons s\({}^{-1}\), corresponding to logarithmic H\(\alpha\) luminosities between 37.36 and 39.40, on the lower side of the distribution found by Alvarez-Alvarez et al. (2015) for a large sample of CNSFRs, but similar to those of the disc HII regions analysed in Castellanos et al. (2002a). According to Mazzuca et al. (2008), disc HII regions on the outer side of the ring have H\(\alpha\) luminosities, and therefore numbers of ionising photons, lower than the ring regions by about 2 orders of magnitude, implying a higher star formation rate (SFR) in the galaxy ring as compared to the disc.
On the other hand, the dimensionless ionisation parameter, u, as estimated from the [SII]/[SIII] ratio (Diaz et al., 1991), ranges from \(5.5\times 10^{-5}\) to \(1.1\times 10^{-3}\) for the regions within the ring, with a median value of \(2.2\times 10^{-4}\). This procedure could be used for only 39 out of the 158 regions outside the ring. For the rest, the [SIII]\(\lambda\) 9069 Å line could only be measured with large errors due to poor signal to noise; hence we have calculated their u values from the parameter's definition (see Eq. 17 below).
The upper left panel of Figure 8 shows these results: HII regions within the ring, with values of Q(H\({}_{0}\)) larger than those on the outer side of it, show ionisation parameters centred at about log(u) = -3.5 with a relatively narrow distribution. For the regions outside the ring, the distribution is also centred at the same u value but looks wider, extending to lower values. In a previous work analysing a large sample of disc HII regions in more than 200 nearby galaxies from the CALIFA sample, Rodriguez-Baras et al. (2019) found that inner disc regions, despite showing a larger number of Lyman continuum photons, had ionisation parameters lower than their outer disc counterparts. They tentatively attributed this to a selection effect
Figure 5: Upper panels: Maps of the observed [OIII]\(\lambda\) 5007 Å (left) and [NII]\(\lambda\) 6583 Å (right) fluxes (in units of 10\({}^{-20}\) erg/s/cm\({}^{2}\)) showing the nuclear environment of NGC 7742. Selected regions are labelled. Bottom panel: Spectrum of region (a) and Gaussian fits to the H\(\beta\), [OIII], H\(\alpha\) and [NII] emission lines.
due to the lack of spatial resolution close to the galaxy bulges. However, this was possibly due to the fact that they were missing the population of low H\(\alpha\) luminosity regions with very noisy spectra.
Some light can be shed on this issue by estimating the filling factors of the observed regions, which can be done by comparing their sizes, as estimated using the definition of the ionisation parameter, with the actually measured ones. According to this definition:
\[u=\frac{Q(H_{0})}{4\pi cn_{e}R^{2}} \tag{17}\]
where R stands for the radius of the ionised nebula, provided it has reached its maximum expansion (see Martin-Manjon et al. 2010).
Using the expressions from Castellanos et al. (2002a) we have estimated the angular radii of the observed ring HII regions that have derived electron densities larger than 50 cm\({}^{-3}\) as:
\[\phi=0.51\left(\frac{F(H\alpha)}{10^{-14}}\right)^{1/2}\left(\frac{u}{10^{-3}} \right)^{-1/2}\left(\frac{n_{e}}{40}\right)^{-1/2} \tag{18}\]
where \(\phi\) is the angular radius in arcsec. The radii predicted from the ionisation parameter, together with the measured ones, are given in Table 9 and compared in Figure 9. As can be seen from the figure, predicted and measured radii are in very good agreement within the errors and are found to be between 0.39 arcsec (corresponding to the resolution element) and 1.5 arcsec, which corresponds to linear values between 34 and 130 pc. This range of values is similar to those found by Diaz et al. (2000b) and Castellanos et al. (2002a) for disc HII regions and is also consistent with the ones calculated by Garcia-Vargas et al. (2013) from PopStar models.
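As a consistency check, Eq. 18 can be evaluated directly; the sketch below is a minimal illustration using the region R1 entries of Table 9, with the H\(\alpha\) flux back-computed from L(H\(\alpha\)) through Eq. 15 (an assumption of this example, not a tabulated quantity).

```python
def predicted_angular_radius(f_halpha, u, n_e):
    """Angular radius in arcsec (Eq. 18); f_halpha in erg/s/cm2,
    u dimensionless, n_e in cm-3. Applicable for n_e > 50 cm-3."""
    return (0.51 * (f_halpha / 1e-14) ** 0.5
            * (u / 1e-3) ** -0.5 * (n_e / 40.0) ** -0.5)

# Region R1 (Table 9): L(Halpha) = 2.09e39 erg/s at 22.2 Mpc gives
# F(Halpha) ~ 3.53e-14 erg/s/cm2; log(u) = -2.761, n_e = 224 cm-3
print(predicted_angular_radius(3.53e-14, 10 ** -2.761, 224.0))  # ~0.31 arcsec
```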
The electron density can be calculated from the [SII]\(\lambda\) 6717 A / [SII]\(\lambda\) 6731 A ratio only for \(n_{e}\) > 50 cm\({}^{-3}\). For the regions within
Figure 6: Upper panel: the [OIII]/H\(\beta\) vs [NII]/H\(\alpha\) diagnostic diagram for the selected ring HII regions, colour coded according to their distance to the galaxy nucleus. Lower panel, left: the [SII]/H\(\alpha\) - [SIII]/H\(\alpha\) diagnostic diagram, colour coded for metallicity. Lower panel, right: the [SII]/H\(\alpha\) - [SIII]/H\(\alpha\) diagnostic diagram, colour coded for ionisation parameter. Mean error bars are shown in the upper right corner of the panel. Over-plotted are the derived separations between LINER/Seyfert (S+07, Schawinski et al. 2007) and HII regions (K+01 and K+03, Kewley et al. 2001; Kauffmann et al. 2003).
the ring, for which only upper limits could be estimated, the electron density has been derived from the observed region sizes as:
\[\frac{n_{e}}{10}=\left(\frac{F(H\alpha)}{10^{-14}}\right)\left(\frac{10^{-3}}{u} \right)\left(\frac{1}{\phi}\right)^{2} \tag{19}\]
Once the ionisation parameter and the angular radius of each observed HII region have been estimated, the filling factor can be derived using the expression:
\[\epsilon=0.165\left(\frac{10^{-14}}{F(H\alpha)}\right)\left(\frac{u}{10^{-3}} \right)^{2}\left(\frac{1}{\phi}\right)\left(\frac{10}{D}\right) \tag{20}\]
(see Diaz et al. 1991). The filling factors for the ring HII regions are low, ranging from \((7.7\pm 1.5)\times 10^{-4}\) to \(0.45\pm 0.15\), with a mean value of 0.043. These values are similar to those estimated for high metallicity disc HII regions (between 0.008 and 0.52; Diaz et al. 1991; Diaz et al. 2000b; Castellanos et al. 2002a) and CNSFRs (\(1\times 10^{-3}\) to \(6\times 10^{-4}\); Diaz et al. 2007).
Finally, the mass of ionised hydrogen, in solar masses, can be derived as (see Diaz et al. 1991):
\[M(HII)=2.69\times 10^{4}\left(\frac{u}{10^{-3}}\right)\phi^{2}\left(\frac{D}{ 10}\right)^{2} \tag{21}\]
These values range from \((1.77\pm 0.55)\times 10^{3}\) M\({}_{\odot}\) to \((1.08\pm 0.16)\times 10^{5}\) M\({}_{\odot}\), with a mean value of 3.07\(\times 10^{4}\) M\({}_{\odot}\) for the HII regions within the ring and are similar to what is found in disc HII regions.
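A compact sketch of Eqs. 19-21 follows; it is illustrative only, and the region R1 inputs (back-computed flux, ionisation parameter, measured radius) are the same assumptions as in the previous example.

```python
def electron_density(f_halpha, u, phi):
    """n_e in cm-3 (Eq. 19), for regions with only upper limits on n_e."""
    return 10.0 * (f_halpha / 1e-14) * (1e-3 / u) / phi ** 2

def filling_factor(f_halpha, u, phi, d_mpc):
    """Filling factor (Eq. 20)."""
    return (0.165 * (1e-14 / f_halpha) * (u / 1e-3) ** 2
            * (1.0 / phi) * (10.0 / d_mpc))

def ionised_hydrogen_mass(u, phi, d_mpc):
    """M(HII) in solar masses (Eq. 21)."""
    return 2.69e4 * (u / 1e-3) * phi ** 2 * (d_mpc / 10.0) ** 2

# Region R1: F(Halpha) ~ 3.53e-14 erg/s/cm2, log(u) = -2.761,
# measured radius 0.69 arcsec, D = 22.2 Mpc
u, phi = 10 ** -2.761, 0.69
print(filling_factor(3.53e-14, u, phi, 22.2))  # ~0.09 -> log ~ -1.04 (Table 9)
print(ionised_hydrogen_mass(u, phi, 22.2))     # ~1.1e5 Msun (Table 9)
```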
The bottom panels of Figure 8 show, from left to right, the distribution of electron density, filling factor and ionised hydrogen mass for the ring HII regions as compared with the ones outside the ring. In general, although the electron density shows similar distributions in the two HII region populations, regions within the ring seem to be more diffuse, showing lower filling factors than the regions outside the ring. Given that the size distribution of the latter regions is concentrated around a smaller mean value, this result is expected (Cedres et al. 2013). An exception is a population of outside-ring HII regions with low H\(\alpha\) luminosity and high filling factor that may correspond to regions ionised by a single star (Vacca et al. 1996).
Tab. 9 shows the characteristics of each HII region within the ring and lists in columns 1 to 9: (1) the region ID; (2) the extinction-corrected H\(\alpha\) luminosity; (3) the number of hydrogen ionising photons; (4) the ionisation parameter; (5) the estimated angular radius; (6) the measured radius; (7) the electron density; (8) the filling factor; and (9) the mass of ionised hydrogen.
### Chemical abundances
#### 4.3.1 Sulphur abundance determinations
Reliable measurements of the weak, electron temperature sensitive [SIII] line at \(\lambda\) 6312 Å have been obtained for 38 ring HII regions out of the 88 originally selected; for these regions sulphur abundances have been derived by the direct method described in Section 3.5 and their distribution can be seen in Figure 11 as the histogram filled with oblique lines. Total sulphur abundances are between \(6.525\pm 0.007\) and \(7.50\pm 0.01\) in units of 12+log(S/H), that is, between 0.25 and 2.40 times the solar value, with a median value of 7.01, slightly below solar. The two ionic species present, S\({}^{+}\) and S\({}^{++}\), contribute approximately 50% each to the total abundance.
For the rest of the regions, we have resorted to the empirical calibration by the S\({}_{23}\) parameter as described in Section 3.5, which is single valued up to, at least, solar metallicity. This calibration is shown in Figure 10, where red and blue contours correspond to data for disc HII regions and HII galaxies respectively. Superimposed are the directly derived abundances for the 38 analysed regions with measurements of the [SIII]\(\lambda\) 6312 Å line, shown by green stars, together with data on regions classified as circumnuclear in the literature, as labelled in the figure (Pastoriza et al. 1993; Gonzalez-Delgado et al. 1995, from NGC 3310 and NGC 7714 respectively), all of which are found to lie at the tip of the curve. The yellow triangle shows the position of HII region 11 from M83 observed by Bresolin et al. (2005). This region is also part of the sample analysed by Diaz & Zamora (2022) and its derived sulphur abundance is in full agreement with the results of the former authors, although the ionisation structure is slightly different.
We have therefore considered jointly the sulphur abundances derived by the two methods. Their distribution can be seen in Figure 11 as the histogram filled with vertical lines. We can see that sulphur abundances derived by the direct method look higher than those calculated from the S\({}_{23}\) parameter calibration. This might reflect the fact that, as in the case of the R\({}_{23}\) parameter, along the low branch of the calibration the intensity of the [SIII] nebular lines increases with metallicity, reaching its maximum at the point where the cooling starts to be dominated by sulphur (about two times the solar value), where the calibration bends towards lower values of S\({}_{23}\), starting to show its bi-valued nature (see Perez-Montero & Diaz 2005). In those cases our empirically derived sulphur abundances could be somewhat underestimated.
#### 4.3.2 The S/O abundance
For 13 regions out of the 88 selected within the ring the [OII]\(\lambda\lambda\) 7320,30 Å emission lines have been detected and measured, and 10 of them also show the [SIII]\(\lambda\) 6312 Å line. For those 10 regions it was possible to derive the O\({}^{+}\)/H\({}^{+}\) ionic abundance. We have assumed a single zone in which \(T_{e}(O^{+})\approx T_{e}(S^{++})=T_{e}(\left[SIII\right])\) and n\({}_{e}=100\) cm\({}^{-3}\), and we have used the following equation, derived using the PyNeb package (Luridiana et al. 2015) and the atomic coefficients listed in Tab. 6:
Figure 7: Visual extinction in magnitudes, A\({}_{V}\) as a function of sulphur abundance. Average error bars for CNSFRs (right) and the rest of studied regions (left) are shown at the bottom right corner of the panel.
\[12+log\left(\frac{O^{+}}{H^{+}}\right)=log\left(\frac{I(\lambda 7320,30)}{I(H_{\beta})}\right)+6.952+\frac{2.433}{t_{e}([SIII])}-0.571\cdot log(t_{e}([SIII])) \tag{22}\]
The [OII] auroral line intensities might present some contribution by recombination emission that can be estimated as shown in Liu et al. (2001):
\[\left[\frac{I(\lambda 7320+\lambda 7330)}{H\beta}\right]_{R}=9.36\cdot t_{e}^{0.44} \frac{O^{++}}{H^{+}} \tag{23}\]
where \(t_{e}\) is the temperature of the O\({}^{+}\) ion in units of \(10^{4}\) K and takes values between 0.5 and 1.0 (T\({}_{e}\) = 5000 - 10000 K).
For our ring regions this contribution takes values from 0.00016 to 0.0011 and represents between 1.5% and 4.5 % of the emission line intensity, within measurement uncertainties.
The \(O^{++}/H^{+}\) abundance ratios have been calculated using the expression:
\[12+log\left(\frac{O^{++}}{H^{+}}\right)=log\left(\frac{I(\lambda 4959+\lambda 5007)}{I(H_{\beta})}\right)+6.249+\frac{1.184}{t_{e}([OIII])}-0.708\cdot log(t_{e}([OIII])) \tag{24}\]
also derived using PyNeb (Luridiana et al., 2015) and the atomic coefficients listed in Tab. 6.
We have assumed that the temperature of the region where the \(O^{++}\) ion is originating, T\({}_{e}([OIII])\), can be derived from
Figure 8: The different histograms in the figure show for the ring HII regions, in green, and regions outside, in purple, the distributions of: the number of hydrogen ionising photons (upper left), the ionisation parameter (upper right), the electron density (bottom left), the filling factor (bottom centre) and the mass of ionised hydrogen (bottom right).
Table 9: Characteristics of the observed CNSFRs. The complete table is available online; here only a part is shown as an example.

| Region ID | L(H\(\alpha\)) (erg s\({}^{-1}\)) | Q(H\({}_{0}\)) (photons s\({}^{-1}\)) | log(u) | \(\phi\) (arcsec) | R (arcsec) | n\({}_{\rm e}\) (cm\({}^{-3}\)) | log(\(\epsilon\)) | M(HII) (M\({}_{\odot}\)) |
|---|---|---|---|---|---|---|---|---|
| R1 | (208.8 ± 4.1)×10\({}^{37}\) | (152.8 ± 3.0)×10\({}^{49}\) | -2.761 ± 0.009 | 0.31 ± 0.03 | 0.69 ± 0.05 | 224 ± 48 | -1.04 ± 0.04 | (10.8 ± 1.6)×10\({}^{4}\) |
| R2 | (15.0 ± 1.2)×10\({}^{38}\) | (109.7 ± 3.0)×10\({}^{49}\) | -2.920 ± 0.014 | 0.41 ± 0.05 | 0.67 ± 0.05 | 130 ± 29 | -1.20 ± 0.04 | (7.1 ± 1.1)×10\({}^{4}\) |
| R3 | (24.6 ± 2.0)×10\({}^{38}\) | (180.0 ± 7.1)×10\({}^{49}\) | -3.518 ± 0.028 | 1.52 ± 0.20 | 1.17 ± 0.05 | 62 ± 16 | -2.85 ± 0.06 | (54.8 ± 5.8)×10\({}^{3}\) |
| R4 | (112.0 ± 9.0)×10\({}^{37}\) | (82.0 ± 2.8)×10\({}^{49}\) | -3.575 ± 0.028 | 1.06 ± 0.12 | 0.85 ± 0.05 | 66 ± 15 | -2.49 ± 0.06 | (25.6 ± 3.4)×10\({}^{3}\) |
| R5 | (98.1 ± 7.8)×10\({}^{37}\) | (71.8 ± 2.4)×10\({}^{49}\) | -3.144 ± 0.020 | 0.57 ± 0.07 | 0.68 ± 0.05 | 75 ± 18 | -1.47 ± 0.05 | (43.6 ± 6.8)×10\({}^{3}\) |
| R6 | (24.0 ± 2.0)×10\({}^{37}\) | (175.5 ± 7.5)×10\({}^{48}\) | -3.125 ± 0.017 | - | 0.25 ± 0.05 | 89 ± 36 | -0.39 ± 0.09 | (6.3 ± 2.5)×10\({}^{3}\) |
| R7 | (13.5 ± 1.1)×10\({}^{38}\) | (98.7 ± 3.6)×10\({}^{49}\) | -3.272 ± 0.025 | - | 0.93 ± 0.05 | 39 ± 14 | -2.00 ± 0.06 | (61.4 ± 7.5)×10\({}^{3}\) |
| R8 | (18.1 ± 1.5)×10\({}^{38}\) | (132.8 ± 5.4)×10\({}^{49}\) | -3.429 ± 0.031 | 1.11 ± 0.16 | 1.15 ± 0.05 | 70 ± 19 | -2.53 ± 0.07 | (64.7 ± 7.3)×10\({}^{3}\) |
| R9 | (23.2 ± 2.0)×10\({}^{38}\) | (170.1 ± 7.4)×10\({}^{49}\) | -3.481 ± 0.030 | - | 1.17 ± 0.05 | 48 ± 13 | -2.76 ± 0.06 | (60.2 ± 6.6)×10\({}^{3}\) |
| R10 | (15.9 ± 1.3)×10\({}^{38}\) | (116.1 ± 4.2)×10\({}^{49}\) | -3.311 ± 0.024 | 1.06 ± 0.13 | 0.96 ± 0.05 | 51 ± 12 | -2.16 ± 0.06 | (59.4 ± 7.0)×10\({}^{3}\) |
\(T_{e}\left(\left[SIII\right]\right)\). Figure 12 shows data from Diaz & Zamora (2022) for disc HII regions and HII galaxies (contours in red and blue respectively). Superimposed are different \(T_{e}\left(\left[OIII\right]\right)\) - \(T_{e}\left(\left[SIII\right]\right)\) relations proposed in the literature (see Garnett, 1992; Perez-Montero & Diaz, 2005; Hagele et al., 2006) as well as other circumnuclear regions with direct determination of these two temperatures (Pastoriza et al., 1993; Gonzalez-Delgado et al., 1995). For high abundances the difference between the plotted relations increases and for T\({}_{e}\)([OIII]) = 5000 K, T\({}_{e}\)([SIII]) varies from 2740 K to 5850 K for those given by Hagele et al. (2006) and Garnett (1992) respectively. Due to these differences and for consistency with the present work, we have decided to fit the data from Diaz & Zamora (2022) and we have obtained and used the following equation:
\[t_{e}(\left[SIII\right])=(0.928\pm 0.053)\cdot t_{e}(\left[OIII\right])+(0.04 2\pm 0.057) \tag{25}\]
The total abundance of oxygen has then been calculated as:
\[12+log\left(\frac{O}{H}\right)=12+log\left(\frac{O^{+}}{H^{+}}+\frac{O^{++}}{H ^{+}}\right) \tag{26}\]
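The oxygen abundance chain of Eqs. 22, 24, 25 and 26 can be sketched as below; note that Eq. 25 is inverted here, since t\({}_{e}\)([SIII]) is the measured quantity. This is a minimal illustration, with region R1 values from Tables 7 and 10 used as example input.

```python
import numpy as np

def te_oIII_from_te_sIII(te_s3):
    """Invert Eq. 25 to get te([OIII]) from te([SIII]) (both in 1e4 K)."""
    return (te_s3 - 0.042) / 0.928

def o_plus(i7320_30_over_hbeta, te_s3):
    """12+log(O+/H+) from Eq. 22 (single zone, Te(O+) ~ Te([SIII]))."""
    return (np.log10(i7320_30_over_hbeta) + 6.952
            + 2.433 / te_s3 - 0.571 * np.log10(te_s3))

def o_plusplus(i4959_5007_over_hbeta, te_o3):
    """12+log(O++/H+) from Eq. 24 (line ratio not tabulated here)."""
    return (np.log10(i4959_5007_over_hbeta) + 6.249
            + 1.184 / te_o3 - 0.708 * np.log10(te_o3))

def total_oxygen(log_op, log_opp):
    """12+log(O/H) from Eq. 26."""
    return 12.0 + np.log10(10.0 ** (log_op - 12.0) + 10.0 ** (log_opp - 12.0))

# Region R1: te([SIII]) = 0.656 -> te([OIII]) ~ 0.661 (Table 10);
# I([OII]7320,30)/Hbeta = 23.9e-3 gives 12+log(O+/H+) ~ 9.14
print(te_oIII_from_te_sIII(0.656), o_plus(23.9e-3, 0.656))
print(total_oxygen(9.145, 7.946))  # ~9.17, as in Table 10
```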
Table 10 shows the oxygen ionic and total abundances and the relative sulphur to oxygen abundance for the 10 regions referred to above and lists in columns 1 to 8: (1) the region ID; (2,3) the [OII]\(\lambda\lambda\) 7320,30 Å auroral line fluxes and their recombination emission correction; (4) the temperature of the O\({}^{++}\) ion; (5,6) the ionic oxygen abundances; (7) the total oxygen abundance in units of 12+log(O/H); and (8) the relative sulphur to oxygen abundance in units of log(S/O). The derived total oxygen abundances, 12+log(O/H), are found to be between 8.16 and 9.5 (mean value of 8.84), corresponding to 0.30
Figure 11: Distribution of the total sulphur abundances for the ring HII regions. The dashed line corresponds to the solar value (12+log(S/H)\({}_{0}\)= 7.12, Asplund et al., 2009a).
Figure 12: The T\({}_{e}\)([SIII]) - T\({}_{e}\)([OIII]) relation derived by different authors (Garnett, 1992; Pérez-Montero & Díaz, 2005; Hagele et al., 2006, (G92), (PM&D05) and (H+06) respectively). Red and blue contours are from disc HII regions and HII galaxies data from Diaz & Zamora (2022) respectively. Triangles are electron temperatures measured for CNSFRs from Pastoriza et al. (1993, P+93) and Gonzalez-Delgado et al. (1995, GD+95).
Figure 10: The S\({}_{23}\) abundance calibration from Díaz & Zamora (2022). Red contours correspond to disc HII regions while blue contours correspond to HII galaxies. Green stars show the values found for the 38 HII regions within the galaxy ring with the [SIII]\(\lambda\) 6312 Å line measured. Observational errors for these data are inside the symbols in the graph. Circumnuclear regions from the works by Pastoriza et al. (1993, P+93), Gonzalez-Delgado et al. (1995, GD+95) and Bresolin et al. (2005, B+05) are also shown as labelled in the figure.
Figure 9: The angular radius derived from the ionisation parameter against the angular radius measured from the HII region segmentation (see Sec. 3.2).
and 6.46 times solar (12+log(O/H)\({}_{\odot}\)= 8.69, Asplund et al. 2009b), this latter value having a rather high error (\(\pm\) 0.15). For region R42, which shows the highest directly derived sulphur abundance, the [OII] auroral lines are not detected. The second highest sulphur abundance, 12+log(S/H) = 7.40 (\(\sim\) 2 times the solar value), is found for region R13, which also shows the highest value of the oxygen abundance. For all these regions very high values of the O\({}^{+}\)/O ionic fraction have been found, between 87% and 95% with a mean value of 92%. Similar ratios have been reported for other high metallicity objects: circumnuclear regions NGC 7714-A and NGC 7714-N110 with ratios \(\sim\) 92% (Gonzalez-Delgado et al. 1995) and region NGC 5236-R11 with a value of 92% (Bresolin et al. 2005). This could be partly due to the highest metallicity regions being ionised by relatively cool, metal rich stars, implying a certain lack of O\({}^{+}\)-ionising photons, which decreases the [OIII] emission line intensity and increases the [OII] one (e.g. Shields & Searle 1978).
In order to compare our results with other circumnuclear regions from the literature (Pastoriza et al. 1993; Gonzalez-Delgado et al. 1995; Diaz et al. 2007; Bresolin et al. 2005) we have used published emission line measurements and calculated their abundances following the analysis proposed in this work. The \(O^{+}/H^{+}\) ratio has been determined from the [OII]\(\lambda\lambda\) 3727,29 Å lines and the equation given in Diaz & Zamora (2022). For a few objects we could verify that O\({}^{+}\)/H\({}^{+}\) ratios calculated using the blue and red lines of [OII] are compatible within the errors.
Finally, we have calculated the S/O ratios that are plotted in Figure 13 as a function of both sulphur and oxygen abundance in the upper and lower panel respectively. Both graphs show a clear tendency: lower S/O ratios for higher metallicities, for abundances larger than solar. This effect has already been noticed in other works (Diaz et al. 1991; Christensen et al. 1997; Garnett 2002; Vermeij & van der Hulst 2002; Pilyugin et al. 2006; Diaz & Zamora 2022). This could be due to an overestimation of the derived oxygen total abundances. In fact, as shown by Stasinska (2005) for metallicities larger than solar, directly derived oxygen abundances using photoionisation models deviate greatly from input abundances. However, if this is not the case, the observed S/O lower values for high metallicity regions should be explained almost only by stellar nucleosynthesis (Tosi 1988).
### Ionising cluster properties
The temperature of the ionising stars can be traced using the \(\eta\) parameter defined by Vilchez & Pagel (1988) as:
\[\eta=\frac{O^{+}/O^{++}}{S^{+}/S^{++}} \tag{27}\]
We can calculate this parameter for only 10 ring HII regions (see Sec. 4.3) and their logarithmic values are between 0.78 \(\pm\) 0.12 and 1.498 \(\pm\) 0.043, on the higher side of the distribution found by Gonzalez-Delgado et al. (1995). A direct relation seems to exist between this parameter and the metallicity for a given region: \(\eta\) is greater (and therefore the temperature of the ionising stars is lower) for regions with higher abundances. This behaviour has already been reported for a sample of HII galaxies by Kehrig et al. (2006, and references therein) and for a large sample of HII galaxies and HII regions of different metallicity by Diaz & Zamora (2022). On the theoretical side it was already introduced by McGaugh (1991) as a moderator in the empirical calibration of the R\({}_{23}\) parameter.
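For completeness, a short numerical sketch of Eq. 27 in logarithmic form, with the region R1 ionic abundances from Tables 7 and 10 as example input:

```python
def log_eta(log_op, log_opp, log_sp, log_spp):
    """log(eta) (Eq. 27) from the logarithmic ionic abundances."""
    return (log_op - log_opp) - (log_sp - log_spp)

# Region R1: O+/O++ from Table 10, S+/S++ from Table 7 -> ~1.50 (Table 12)
print(log_eta(9.145, 7.946, 6.7413, 7.0415))
```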
Fig. 14 shows the relationship between the ionic ratios S\({}^{+}\)/S\({}^{++}\) and O\({}^{+}\)/O\({}^{++}\) for our ring regions and for other circumnuclear regions
Figure 14: Relation between the ionic ratios of oxygen and sulphur for the objects included in Table 10 (green stars) and data from the literature. Red and blue contours correspond to disc HII regions and HII galaxies respectively from Díaz & Zamora (2022). Circumnuclear regions are from Pastoriza et al. (1993) (P+93), Gonzalez-Delgado et al. (1995) (GD+95), Bresolin et al. (2005) (B+05) and Díaz et al. (2007) (D+07).
Figure 13: S/O relation against the total abundances of oxygen (upper panel) and sulphur (lower panel) for regions included in Table 10 (green stars) and data from the literature as labelled (Pastoriza et al. 1993; Gonzalez-Delgado et al. 1995; Bresolin et al. 2005, (P+93), (GD+95) and (B+05) respectively). The black dashed line in each panel marks the solar S/O ratio.
from the literature (Pastoriza et al., 1993; Gonzalez-Delgado et al., 1995; Diaz et al., 2007; Bresolin et al., 2005). Superimposed are dotted diagonal lines which show the location of ionised regions with constant values of \(\eta\). These values have been correlated with stellar effective temperatures using the Cloudy code (Ferland et al., 2013; log(u) = -4.0 to -2.5, Z\({}_{\odot}\), n\({}_{e}\) = 100 cm\({}^{-3}\)) with stellar atmospheres from Mihalas (1978, non-LTE models for B and O stars, log(g) = 4 and T\({}_{eff}\) from 30000 K to 55000 K). Our circumnuclear regions are located at the high end of the S\({}^{+}\)/S\({}^{++}\) ratio distribution and show high values of the \(\eta\) parameter, corresponding to relatively low stellar temperatures similar to those of high metallicity disc HII regions. This location corresponds to effective temperatures between 34700 K and 40000 K.
The spectral energy distribution of the ionising radiation implied by the \(\eta\) parameter would correspond to an ionising star cluster equivalent temperature that can be estimated from the quotient between the number of helium and hydrogen ionising photons, Q(He\({}_{0}\))/Q(H\({}_{0}\)). We have calculated the number of ionising He\({}_{0}\) photons from the observed luminosity in the HeI\(\lambda\) 6678 A emission line using the expression:
\[Q(He_{0})=1.21\cdot 10^{49}\left(\frac{F(HeI6678)}{10^{-14}}\right)\left( \frac{D}{10}\right)^{2} \tag{28}\]
where F(HeI6678), the flux of the HeI\(\lambda\) 6678 Å line, is expressed in erg s\({}^{-1}\) cm\({}^{-2}\) and D is the distance to NGC 7742, which has been taken as 22.2 Mpc (see Tab. 1). This equation has been derived using the recombination coefficient of the HeI\(\lambda\) 6678 Å line assuming a constant value of the electron density of 100 cm\({}^{-3}\), a temperature of 10\({}^{4}\) K and case B recombination (Osterbrock & Ferland, 2006).
We have detected and measured the HeI\(\lambda\) 6678 A line in 63 ring regions. The corresponding values range from 2.8 \(\times\) 10\({}^{46}\) to 1.7 \(\times\) 10\({}^{48}\) with a mean value of 4.5 \(\times\) 10\({}^{47}\) photons s\({}^{-1}\).
Using the Cloudy (Ferland et al., 2013) code we have computed models with ionisation parameter values from -4.0 to -2.5, solar metallicity and a constant value of the electron density of 100 cm\({}^{-3}\). The nebula is ionised by stellar atmospheres from Mihalas (1978, non-LTE models for B and O stars, log(g) = 4 and T\({}_{eff}\) from 30000 K to 55000 K). For temperatures lower than 40000 K, the nebular zone of He\({}^{+}\) is smaller than that of H\({}^{+}\) and hence we can use the number of ionising hydrogen and helium photons to estimate the effective temperature of our ionising star clusters for which we have derived the following equation:
\[log\left(\frac{Q(H_{0})}{Q(He_{0})}\right)=(1.944\pm 0.284)\cdot 10^{-8}\cdot T_{eff}^{2}-(1.527\pm 0.199)\cdot 10^{-3}\cdot T_{eff}+(32.77\pm 3.46) \tag{29}\]
This relation can be used only for logarithmic values of Q(H\({}_{0}\))/Q(He\({}_{0}\)) greater than 2.8 since, for higher temperatures, the ionisation zones of helium and hydrogen coincide and the relationship between them remains constant. Fig. 15 shows the number of He\({}_{0}\) ionising photons as a function of the number of H\({}_{0}\) ionising photons. Superimposed are the lines corresponding to different temperatures as obtained with the last equation. According to these models, we can deduce that the He\({}^{+}\) nebular zone is much smaller than that of H\({}^{+}\) in all these star clusters, approximately in a ratio r\({}_{He}\)/r\({}_{H}\)\(\sim\) 0.73, and that they have similar effective temperatures, around 34600 K, a result similar to that obtained with the \(\eta\) parameter.
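As an illustrative sketch of Eqs. 28 and 29, the quadratic can be inverted for T\({}_{eff}\), keeping the physical root below the \(\sim\)40000 K turnover; the HeI flux used in the example is back-computed from the Q(He\({}_{0}\)) of region R1 in Table 12 and is therefore an assumption, not a tabulated quantity.

```python
import numpy as np

def q_he0(f_hei6678, d_mpc):
    """Q(He0) in photons/s (Eq. 28); f_hei6678 in erg/s/cm2."""
    return 1.21e49 * (f_hei6678 / 1e-14) * (d_mpc / 10.0) ** 2

def effective_temperature(log_qh_over_qhe):
    """Invert Eq. 29 for T_eff in K; valid for log ratios > 2.8,
    i.e. T_eff below ~40000 K (the lower quadratic root)."""
    a, b = 1.944e-8, -1.527e-3
    c = 32.77 - log_qh_over_qhe
    return (-b - np.sqrt(b ** 2 - 4.0 * a * c)) / (2.0 * a)

# Region R1: F(HeI 6678) ~ 2.9e-16 erg/s/cm2 at 22.2 Mpc -> Q(He0) ~ 1.7e48
print(q_he0(2.92e-16, 22.2))
# A logarithmic Q(H0)/Q(He0) ratio of ~3.2 corresponds to T_eff ~ 34600 K
print(effective_temperature(3.21))
```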
The equivalent width (EW) of Balmer lines can be understood as an estimator of the age of a young cluster single stellar population (Dottori, 1981), reflecting the ratio between the present and past star formation rates. Following the notation used in Sec. 3.3, the equivalent width has been calculated as: \(EW(\lambda)=F_{\lambda}/(F_{\rm c}(\lambda)+A_{\rm c}(\lambda))\). The equivalent widths of the H\(\beta\) emission line for the selected ring HII regions are between 2.5 and 44.0 Å with a mean
Table 10: Oxygen abundances and sulphur to oxygen ratios for the observed CNSFRs.

| Region ID | [OII]\(\lambda\lambda\) 7320,30/H\(\beta\)\({}^{a}\) | [I(\(\lambda\) 7320,30)/H\(\beta\)]\({}_{R}\)\({}^{a}\) | t\({}_{\rm e}\)([OIII])\({}^{b}\) | 12+log(O\({}^{+}\)/H\({}^{+}\)) | 12+log(O\({}^{++}\)/H\({}^{+}\)) | 12+log(O/H) | log(S/O) |
|---|---|---|---|---|---|---|---|
| R1\({}^{*}\) | 23.9 ± 2.2 | 0.69 | 0.661 ± 0.072 | 9.145 ± 0.041 | 7.946 ± 0.009 | 9.171 ± 0.039 | -1.952 ± 0.039 |
| R2 | 23.1 ± 3.0 | 0.58 | 0.677 ± 0.073 | 9.043 ± 0.059 | 7.872 ± 0.017 | 9.072 ± 0.055 | -1.917 ± 0.056 |
| R3 | 10.0 ± 2.9 | 0.39 | 0.646 ± 0.072 | 8.858 ± 0.126 | 7.708 ± 0.019 | 8.888 ± 0.118 | -1.768 ± 0.118 |
| R4 | 11.1 ± 2.5 | 0.50 | 0.641 ± 0.072 | 8.931 ± 0.097 | 7.818 ± 0.014 | 8.963 ± 0.090 | -1.789 ± 0.090 |
| R5 | 11.3 ± 2.3 | 0.32 | 0.704 ± 0.073 | 8.592 ± 0.090 | 7.610 ± 0.018 | 8.635 ± 0.081 | -1.617 ± 0.082 |
| R6 | 9.3 ± 2.0 | 0.23 | 0.750 ± 0.075 | 8.290 ± 0.095 | 7.447 ± 0.019 | 8.348 ± 0.084 | -1.412 ± 0.084 |
| R13 | 14.0 ± 5.0 | 1.14 | 0.571 ± 0.070 | 9.493 ± 0.156 | 8.194 ± 0.015 | 9.514 ± 0.148 | -2.112 ± 0.149 |
| R14 | 11.9 ± 4.6 | 0.42 | 0.666 ± 0.072 | 8.819 ± 0.167 | 7.728 ± 0.022 | 8.853 ± 0.154 | -1.789 ± 0.155 |
| R17 | 11.0 ± 2.8 | 0.17 | 0.813 ± 0.077 | 8.107 ± 0.114 | 7.294 ± 0.024 | 8.169 ± 0.099 | -1.327 ± 0.100 |
| R26 | 11.0 ± 4.5 | 0.37 | 0.660 ± 0.072 | 8.818 ± 0.178 | 7.678 ± 0.024 | 8.849 ± 0.166 | -1.720 ± 0.166 |

\({}^{a}\) In units of \(10^{-3}\), normalised to H\(\beta\). \({}^{b}\) In units of 10\({}^{4}\) K. \({}^{*}\) Region near SN explosion.
Figure 15: Relation between the logarithmic numbers of HeI and HI ionising photons (see text for details).
value of 10.9 Å, corresponding to regions of active star formation. Although the Balmer emission line luminosities are higher for ring regions on average, the H\(\beta\) equivalent widths for regions outside the ring are comparable within the errors, with a mean value of 12.4 Å and covering values from 2.03 to 140.5 Å. This might seem to point to the HII regions outside the ring being at the same evolutionary stage, with similar percentages of young populations; however, it should be noted that the H\(\beta\) equivalent width also depends on the underlying continuum, and the area covered by the ring shows an additional blue population that could be affecting the results (see lower left panel of Fig. 1).
In principle, we would expect the regions inside the ring, being closer to the galactic nucleus, to be redder, with a higher metal content and older ages (Rodriguez-Baras et al., 2018) than the regions outside the ring; however, their r-i colours are comparable (see Sec. 3.4, Fig. 4). Thus, all the regions analysed in our sample seem to have similar ages and metallicities, regardless of their distance to the centre of the galaxy. In fact, this can be seen by looking at the radial r and i magnitude profiles shown by Wakamatsu et al. (1996), which follow each other.
A linear regression can be performed between the EW(H\(\beta\)) and the number of ionising photons in order to estimate the ionising masses of our circumnuclear HII regions. We have used single stellar population (SSP) PopStar models (Molla et al., 2009) to fit the following equation:
\[log\left[Q(H)/M_{\odot}\right]=a+b\cdot log\left[EW(H\beta)\right] \tag{30}\]
where \(Q(H)/M_{\odot}\) is the total number of ionising photons per unit mass. This relation is well established for ages under 10 Ma and metallicities between 0.004 and 0.02. Its linearity can be lost: (i) for higher metallicities, because the effective temperature decreases and hence stars of the same spectral type with more metals have fewer ionising photons; (ii) for lower metallicities, because there are more massive stars and clusters are hotter, showing a significant nebular continuum; and (iii) due to the presence of Wolf Rayet (WR) stars. The slope of the initial mass function (IMF) and the lower mass limit also affect the numbers of stars of different types (and therefore their number of ionising photons). We have used the different IMFs listed in Table 11. Finally, we have selected ages lower than 7 Ma since, as explained above, star clusters older than that do not produce significant ionising radiation.
The different relations defined by equation 30 with the coefficients of Tab. 11 are shown in Fig. 16. The linear regression slopes are very similar among them and also compatible with previous results obtained by Diaz (1998) using stellar population synthesis models from Cervino & Mas-Hesse (1994), Garcia-Vargas et al. (1995) and Leitherer & Heckman (1995). However, we can see that the linear regression intercepts differ by up to a factor of 5 (\(\sim\) 0.7 dex) depending on the chosen IMF.
We have used the Salpeter IMF with \(\phi(m)=m^{-\alpha}\), \(\alpha=2.35\), \(m_{low}(M_{\odot})=0.85\) and \(m_{up}(M_{\odot})=120\), which seems the most suitable for our young regions. For regions within the ring we have obtained values between \(1.22\times 10^{4}\) (R76) and \(5.93\times 10^{5}\)\(M_{\odot}\) (R41). These results are only lower limits to the ionising masses since we are assuming that: (i) there is no dust absorption and re-emission at infrared wavelengths; and (ii) there is no photon escape from the HII regions (but see Castellanos et al., 2002). Our derived values are lower than those obtained by Diaz et al. (2007) for CNSFRs and slightly higher than those for HII regions (Diaz et al., 2000), both based on lower spatial resolution data (0.4" and 0.7" respectively).
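In practice the ionising mass estimate amounts to dividing Q(H\({}_{0}\)) by the EW(H\(\beta\))-dependent photon rate per unit mass of Eq. 30; a minimal sketch with the Salpeter (0.85-120 M\({}_{\odot}\)) coefficients of Table 11 and region R1 as example input:

```python
import numpy as np

def ionising_mass(q_h0, ew_hbeta, a=44.561, b=0.865):
    """Ionising cluster mass in Msun: M = Q(H0) / 10**(a + b*log EW(Hbeta)),
    i.e. Eq. 30 with the Salpeter (0.85-120 Msun) fit of Table 11."""
    return q_h0 / 10.0 ** (a + b * np.log10(ew_hbeta))

# Region R1: Q(H0) = 1.53e51 photons/s (Table 9), EW(Hbeta) = 44.0 A (Table 12)
print(ionising_mass(1.53e51, 44.0))  # ~1.6e5 Msun, matching Table 12
```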
The upper panel of Fig. 17 shows the distribution of ionising masses for the ring HII regions as compared with the ones outside the ring. We can see that masses for the latter are an order of magnitude lower, with \(1.38\times 10^{4}\) and \(1.22\times 10^{5}\) M\({}_{\odot}\) as median values respectively. Around 50% of the regions outside the ring have masses lower than \(10^{4}\) M\({}_{\odot}\); hence the IMF might not be fully sampled (Garcia Vargas & Diaz, 1994; Villaverde et al., 2010). For
Figure 16: Linear regression between the EW(H\(\beta\)) and the number of ionising photons for different IMFs according to Tab. 11 as labelled.
Figure 17: Histograms of the distributions of ionising masses (upper) and the photometric masses (bottom) for the ring HII regions, in green, and outside regions, in purple. The dashed line corresponds to \(10^{4}\) M\({}_{\odot}\)(Garcia Vargas & Diaz, 1994, see text for details).
the stellar population models used and the chosen IMF, the ratio of ionising stellar masses to ionised hydrogen masses, M\({}_{ion}\)/M(HII), takes a value of 28.
Using the same models, we have derived the photometric masses of our CNSFRs from their r-magnitudes. In this case, we cannot establish an analytical equation due to the non-linearity between log[EW(H\(\beta\))] and M\({}_{r}\)+2.5\(\cdot\)log(M/M\({}_{\odot}\)). The r magnitude for 1 M\({}_{\odot}\) seems to be constant for a chosen IMF, although there are variations at ages between 3.5 and 6.5 Ma induced by the presence of WR and red supergiant (RSG) stars. For the regions within the ring we have obtained values between \(2.90\times 10^{4}\) and \(1.10\times 10^{6}\)\(M_{\odot}\), corresponding to the two regions mentioned above at the extremes of ionising star masses. As expected, the regions outside the ring show photometric masses an order of magnitude lower than the ring ones, since photometric masses follow ionising star masses in a roughly constant proportion of about 3. An exception to this is the case of R40, which shows the highest value of the photometric mass (7.77 in the log). The physical properties of its ionising cluster are fully compatible with those of the rest of the observed ring regions; however, the extracted aperture encloses a non-ionising star cluster. The bottom panel of Fig. 17 shows the distribution of the photometric masses for the ring HII regions as compared with the ones outside the ring.
Table 12 shows the ionising cluster properties for each HII region within the ring and lists in columns 1 to 6: (1) the region ID, (2) the logarithmic \(\eta\) parameter, (3) the number of helium ionising photons, (4) the measured equivalent width of H\(\beta\) line, (5) the ionising mass and (6) the photometric mass.
### CNSFR evolutionary stage
Up to now we have assumed in our analysis the presence of a single population. We have checked this hypothesis by looking for the presence of a non-ionising population. In order to do that, we have constructed a pixel-to-pixel intensity profile from the 5400 and 8150 Å continuum maps (see Fig. 1), as can be seen in Fig. 18. Two populations can be clearly identified in the ring region enclosed by the blue vertical lines. One of them shows up very prominently at the blue continuum wavelength; we identify this flux excess with a young non-ionising population. This is accompanied by a moderate excess of continuum flux at the redder wavelengths that might correspond to the presence of red supergiant stars. To isolate the ionising clusters' contribution, we have corrected the integrated extracted spectra for the presence of the underlying non-ionising cluster population and the galaxy disc underneath, which can be fitted by a Sersic light profile. Once this has been accomplished, we have recalculated the H\(\beta\) equivalent width and the r and i magnitudes, assuming for the stellar population the same extinction as that of the gas.
Fig. 19 shows the relation between the logarithm of the equivalent width of the H\(\beta\) line, logEW(H\(\beta\)), and the r-i colour. Superimposed are single stellar population models from PopStar (Molla et al., 2009, Salpeter's IMF, m\({}_{low}\) = 0.15 M\({}_{\odot}\), m\({}_{up}\) = 100 M\({}_{\odot}\)). EW(H\(\beta\)) can be related to the time scale of the evolution of ionising star clusters, i.e. up to 10 Ma, and, for a single stellar population, it decreases with age. On the other hand, the r-i colour samples a
Figure 19: The relation between the equivalent width of the H\(\beta\) emission line and the r-i colour. The solid line has been calculated with PopStar models (Molla et al., 2009). The beginning and end of the line correspond to ages of 0.1 and 8.5 Ma. Observational errors are inside the symbols in the graph.
Figure 18: Observed continuum fluxes in individual spaxels as a function of radius in the blue (5400 Å) and red (8150 Å) spectral ranges. The ring limits are marked with blue vertical lines.
Table 11: Coefficients a and b of the ionising mass fits (Eq. 30) for the different IMFs considered.

| IMF | Reference | m\({}_{low}\) (M\({}_{\odot}\)) | m\({}_{up}\) (M\({}_{\odot}\)) | a | b |
|---|---|---|---|---|---|
| Salpeter1 | Salpeter (1955) | 0.85 | 120 | 44.561 ± 0.025 | 0.865 ± 0.012 |
| Salpeter2 | Salpeter (1955) | 0.15 | 100 | 44.296 ± 0.024 | 0.833 ± 0.012 |
| Ferrini | Ferrini et al. (1990) | 0.15 | 100 | 43.887 ± 0.017 | 0.762 ± 0.009 |
| Kroupa | Kroupa & Boily (2002) | 0.15 | 100 | 44.092 ± 0.021 | 0.781 ± 0.010 |
| Chabrier | Chabrier (2003) | 0.15 | 100 | 44.461 ± 0.024 | 0.835 ± 0.012 |
longer time scale (\(\geq 300\) Ma), becoming redder with age (see Diaz et al. 2000c). For this reason, this graph can be understood as an age balance between the old and young populations present in our clusters.
Our observed ring regions, taken at face value (square symbols in the graph), lie redward of the line defined by single population stellar evolution models and show rather low values of logEW(H\(\beta\)). However, relaxing the assumption of a single stellar population and correcting for the underlying disc population and the young non-ionising population identified above, the data points move up and to the left in the diagram, indicating younger ages for the ionising clusters. Regarding the colour correction, we can see that the regions in the galaxy disc (square symbols in the graph) have the reddest colours, while both the ring non-ionising population (diamond symbols) and the isolated young ionising clusters (solid circle symbols) show similar r-i values. According to the PopStar models we are using, this colour corresponds to stellar populations of about 300 Ma. The red colours shown by the youngest stellar populations are due to the contribution of a nebular continuum of up to 50%. On the other hand, the EW(H\(\beta\)) of our ring regions, taken at face value, indicates mean ages of 6.2 Ma, while the isolated young ionising population indicates younger ages with a mean value of 4.7 Ma. These latter ages are more consistent with model results than the former ones, since star clusters older than 5.2 Ma do not produce a detectable emission-line spectrum (Martin-Manjon et al. 2010). Composite young stellar populations have also been derived for CNSFRs in selected galaxies using FUV observations and a methodology different from the one described in our work (Siressi et al. 2022).
Additionally, we have detected carbon Wolf-Rayet (WRC) star features in the spectra of some of the analysed ring HII regions. The upper panel of Fig. 20 shows the location of these regions, 15 in total (regions R3, R4, R21, R26, R35, R36, R40, R50, R55, R56, R58, R70, R72, R78 and R85; this last one was removed due to its position on the BPT diagram). The lower panel of Fig. 20 shows an example, the spectrum of R3, showing the CIV\(\lambda\lambda\) 5801,12 Å and CIII\(\lambda\) 5696 Å lines (see Massey et al. 1992). The presence of these features places the age of the regions between 3.2 and 5.25 Ma according to the PopStar models described above.
In order to better constrain the evolutionary stage of our CNSFRs, we have studied the evolution of the [SII]\(\lambda\lambda\) 6717,31 Å / [SIII]\(\lambda\lambda\) 9069,9532 Å ratio with the age of our ionising regions. The [SII]/[SIII] ratio is a good indicator of the ionisation parameter; it depends on the ionising mass for zero-age stellar populations and
Figure 20: Upper panel: Map of the observed H\(\alpha\) flux. HII regions and WR candidates are plotted with black circles and stars respectively. Orientation is North up, East to the left. The limits of the ring are marked by blue circles. Lower panel: Emission lines of WR stars in region R3 (see Massey et al. 1992). Dashed and solid lines correspond to the integrated spectrum and the spectrum of areas showing the WR features respectively.
Table 12: Ionising cluster properties. The complete table is available online; here only a part is shown as an example.

| Region ID | log(\(\eta\)) | Q(He\({}_{0}\)) (photons s\({}^{-1}\)) | EW(H\(\beta\)) (Å) | M\({}_{ion}\) (M\({}_{\odot}\)) | M\({}_{phot}\) (M\({}_{\odot}\)) |
|---|---|---|---|---|---|
| R1\({}^{*}\) | 1.498 ± 0.043 | (174.0 ± 6.9)×10\({}^{46}\) | 43.99 ± 1.60 | (15.9 ± 1.3)×10\({}^{4}\) | (26.3 ± 2.2)×10\({}^{4}\) |
| R2 | 1.383 ± 0.064 | (132.7 ± 7.7)×10\({}^{46}\) | 31.80 ± 1.31 | (15.1 ± 1.3)×10\({}^{4}\) | (25.9 ± 2.5)×10\({}^{4}\) |
| R3 | 0.992 ± 0.130 | (14.7 ± 2.1)×10\({}^{47}\) | 19.30 ± 0.83 | (38.2 ± 3.3)×10\({}^{4}\) | (66.1 ± 8.6)×10\({}^{4}\) |
| R4 | 0.918 ± 0.100 | (75.4 ± 8.2)×10\({}^{46}\) | 20.07 ± 0.76 | (16.8 ± 1.4)×10\({}^{4}\) | (28.4 ± 3.6)×10\({}^{4}\) |
| R5 | 1.072 ± 0.093 | (71.3 ± 6.2)×10\({}^{46}\) | 23.64 ± 0.99 | (12.8 ± 1.1)×10\({}^{4}\) | (22.8 ± 2.4)×10\({}^{4}\) |
| R6 | 0.961 ± 0.099 | (19.6 ± 1.6)×10\({}^{46}\) | 28.25 ± 1.72 | (26.8 ± 2.6)×10\({}^{3}\) | (43.2 ± 4.4)×10\({}^{3}\) |
| R7 | - | (9.0 ± 1.1)×10\({}^{47}\) | 17.01 ± 0.63 | (23.4 ± 1.9)×10\({}^{4}\) | (38.5 ± 5.5)×10\({}^{4}\) |
| R8 | - | (12.7 ± 1.7)×10\({}^{47}\) | 16.04 ± 0.63 | (33.1 ± 2.8)×10\({}^{4}\) | (52.4 ± 8.6)×10\({}^{4}\) |
| R9 | - | (13.4 ± 2.0)×10\({}^{47}\) | 15.97 ± 0.67 | (42.6 ± 3.7)×10\({}^{4}\) | (6.2 ± 1.0)×10\({}^{5}\) |
| R10 | - | (8.7 ± 1.1)×10\({}^{47}\) | 18.21 ± 0.69 | (25.9 ± 2.2)×10\({}^{4}\) | (42.4 ± 5.8)×10\({}^{4}\) |

\({}^{*}\) Region near SN explosion.
then decreases as the cluster evolves due to the increasing loss of ionising photons.
Fig. 21 shows the relation between the equivalent width of the H\(\beta\) emission line and the [SII]/[SIII] ratio. Superimposed are the same solar metallicity PopStar models used before. A trend between the degree of evolution and the degree of ionisation of the nebula seems to exist in the galaxy disc population (magenta squares in the graph). This effect was already noticed by Hoyos & Diaz (2006), being explained by the different contributions of continuum light from underlying populations. The IFU data analysed here allow the subtraction of the disc and the young non-ionising stellar populations. Once this has been done, this trend is lost and the isolated young ionising clusters appear to cover the area occupied by the models. Furthermore, the CNSFRs with WR features are concentrated in a narrow range of ages around 3.5 Ma, in agreement with the single stellar population models used.
This correction also affects our initially derived ionising cluster masses and the photometric masses quoted in Section 4.4, through the number of ionising photons per unit solar mass which depends on EW(H\(\beta\)) for the former and the r-magnitude for the latter. The corrected values of the ionising cluster masses have a mean of \(3.5\times 10^{4}\) M\({}_{\odot}\), a factor of about 4.5 smaller than uncorrected ones; in the case of the photometric masses the corrected mean value is \(1.8\times 10^{5}\) M\({}_{\odot}\). This gives a ratio of ionising cluster mass to photometric mass of about 19 %.
## 5 Conclusions
In this work we present a study of the physical properties of the CNSFRs in the ring of the face-on spiral NGC 7742 using publicly available MUSE observations and the full spectral region observed, from 4800 to 9300 Å. The work is centred on the study of the individual ionising clusters that power the HII regions populating the ring of the galaxy. We have used the data cubes from the ESO Science Archive to produce 2D maps in the H\(\alpha\) and H\(\beta\) emission lines, obtaining the spatial distribution of the visual extinction necessary for the nebular analysis. Additionally, two continuum maps at central wavelengths 5400 and 8150 Å and the line maps of [OIII] and [NII] are also presented. A map of the EW(H\(\alpha\)) emission shows the circumnuclear regions within the ring, object of this study, having EW(H\(\alpha\)) \(>\) 20 Å, consistent with the presence of star formation having occurred less than 10 Ma ago. We have delimited the ring from the radial distribution of the observed H\(\alpha\) flux, as having an inner radius of 6 arcsec (0.75 kpc) and an outer radius of 13 arcsec (1.63 kpc). The observed H\(\alpha\) flux map has been used to select the ring ionised regions and also a set of HII regions external to it for comparison purposes. At the end of the procedure, we have obtained a total of 88 HII regions in the ring and 158 regions outside it. The emission line ratios of the HII regions within the ring are consistent with the predictions of star forming models. However, maps of the central part of NGC 7742 in the [OIII] and [NII] emission lines allow the identification of a small circumnuclear ring at about 200 pc from the galaxy nucleus that seems to be dominated by shocks or a non-thermal AGN component of low activity. Three of our segmented ring regions to the South-East, R85, R86 and R87, may be somewhat affected by the radiation from the galaxy nucleus and consequently they have not been considered in our analysis.
In order to study the properties of the selected CNSFRs, we have measured the most prominent emission lines in their spectra: the H\(\beta\) and H\(\alpha\) Balmer lines; the [OIII]\(\lambda\lambda\) 4959,5007 Å, [NII]\(\lambda\lambda\) 6548,84 Å, [SII]\(\lambda\lambda\) 6717,31 Å, [ArIII]\(\lambda\) 7136 Å and [SIII]\(\lambda\) 9069 Å forbidden lines; and also the weaker lines of [SIII]\(\lambda\) 6312 Å, HeI\(\lambda\) 6678 Å and [OII]\(\lambda\lambda\) 7320,30 Å. We have also calculated integrated fluxes inside the Sloan Digital Sky Survey (SDSS) filters for each selected region by convolving the appropriate filter transmission with their spectral energy distributions. A colour-magnitude diagram, r-i vs M\({}_{r}\), shows the CNSFRs to have a rather constant r-i colour.
For our observed ring HII regions we have derived: (1) the number of hydrogen ionising photons per second, Q(H\({}_{0}\)); (2) the electron density of the emitting gas in cm\({}^{-3}\), n\({}_{e}\); (3) the ionisation parameter, u; (4) the corresponding angular radius in arcsec; (5) the filling factor; and (6) the mass of ionised hydrogen in solar masses, M(HII). All these values are consistent with those found in other studies of similar regions. Q(H\({}_{0}\)) is between \(2.57\times 10^{49}\) and \(1.51\times 10^{51}\) photons s\({}^{-1}\), which points to these regions being ionised by star clusters; the electron density of the ionised gas is well below the critical one for collisional de-excitation; the ionisation parameter is inside a narrow range centred around log(u) \(\simeq\) -3.5; the estimated angular radii are in very good agreement with the measured ones, all of them spatially resolved, and show linear values between 34 and 130 pc; filling factors are low, with a mean value of 0.043, similar to the ones estimated for high metallicity disc HII regions; and, finally, the ionised hydrogen mass has a mean value of \(3.07\times 10^{4}\) M\({}_{\odot}\).
We have used sulphur as a tracer for chemical abundances of the selected HII regions, using the methodology developed in Diaz & Zamora (2022), which is well suited to the spectral characteristics of the MUSE spectroscopic data (4800-9300 Å wavelength range and spectral dispersion of 1.25 Å/pix) and to the expected abundances of the studied regions, in the high metallicity range. The weak, temperature-sensitive [SIII]\(\lambda\) 6312 Å line has been measured with a S/N higher than 1 in \(\sim\)45 % (38 out of 88) of the HII regions within the ring. For these regions, total sulphur abundances have been derived by the so-called "direct method". For the rest of the regions we had to rely on empirical calibrations to derive their sulphur abundances, which has been done through the use of the S\({}_{23}\) parameter. This parameter has little dependence on reddening effects or calibration uncertainties, since the lines involved can be measured relative to nearby hydrogen recombination lines, and these lines remain observable even at over-solar abundances given their lower dependence on electron temperature. Derived sulphur abundances are between \(6.53\leq 12\)+log(S/H) \(\leq 7.50\), that is, between 0.25 and 2.4 times the solar value, with most regions showing values slightly below solar. For a few ring HII regions we derived the total oxygen abundances using the [OII]\(\lambda\lambda\) 7320,30 Å lines to calculate the O\({}^{+}\) contribution. For three of the analysed regions the oxygen abundances are found to be high, of the order of 12+log(O/H) around 9.0 (2 times solar), with a contribution by O\({}^{+}\) to the total abundance as high as 90 %. These values result in very low S/O ratios. Similar values have been found for other high-metallicity regions by different authors.

Figure 21: The relation between the equivalent width of the H\(\beta\) emission line and the [SII]\(\lambda\lambda\) 6717,31 Å / [SIII]\(\lambda\lambda\) 9069,9532 Å ratio. Solid and dashed lines are from PopStar models with Cloudy and metallicity z = 0.02 (Martin-Manjón et al., 2010; García-Vargas et al., 2013). Mean error bars for CNSFRs are shown at the bottom right corner of the panel.
The final part of this work concerns the properties of the CNSFR ionising clusters. For the regions presenting the [OII]\(\lambda\lambda\) 7320,30 Å lines we derived the \(\eta\) parameter, which can be related to the effective temperature of the ionising radiation, finding values of log(\(\eta\)) close to 1.0, which implies low effective temperatures. An equivalent temperature of the ionising clusters can be estimated from the ratio of helium to hydrogen ionising photons, Q(He\({}_{0}\))/Q(H\({}_{0}\)). For 63 regions we could derive this ratio using the HeI\(\lambda\) 6678 Å line, finding a rather constant value of around \(10^{-3}\), corresponding to an equivalent temperature below 40000 K. The masses of the ionising clusters, once corrected for the contribution of underlying non-ionising populations, were derived using PopStar models and are found to have a mean value of \(3.5\times 10^{4}\) M\({}_{\odot}\), comparable to the mass of ionised gas and about 19 % of the corrected photometric mass. The young stellar population of the CNSFRs has contributions from ionising and non-ionising populations in a ratio of 0.24, with ages around 5 Ma and 300 Ma respectively.
The homogeneity of abundances and continuum colours, together with the kinematics and the counter-rotating nature of the ring, fits the minor-merger scenario proposed by previous works. This merger would have triggered the star formation in the ring, producing massive star clusters that at present show a 300 Ma old young stellar population accompanied by a subsequent young ionising population involving around 20 % of the integrated cluster masses. Satellite accretion and major or minor mergers have also been suggested as the origin of the galaxy clumps observed at intermediate redshift (see e.g. Elmegreen & Elmegreen 2005). However, a recent study of clumps and accreted satellites in 53 star-forming galaxies at z \(\sim\) 1-3 (Zanella et al. 2019) shows that, although the more extended clumps are probably formed in merger processes, the identified compact clumps formed _in situ_ show physical properties (sizes of \(\sim\) 1-2 kpc, masses of \(\sim\) 10\({}^{7}\)-10\({}^{8}\) M\({}_{\odot}\), ages \(\leq\) 10 Ma and metallicities of 12+log(O/H) \(\simeq\) 8.56) that are more compatible with the total values found for the ensemble of ionising clusters studied here, as it would be observed at the quoted redshift. Obviously, more data, preferably at low to intermediate redshift, are needed in order to distinguish between these two hypotheses.
## Acknowledgements
This research has made use of the services of the ESO Science Archive Facility and NASA's Astrophysics Data System Abstract Service. It is based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme 60.A-9301(A) and data products created thereof. We have also used observations obtained with the NASA/ESA HST, obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA), and the Canadian Astronomy Data Centre (CADC/NRC/CSA).
This work has been supported by Spanish grants from the former Ministry of Economy, Industry and Competitiveness through the MINECO-FEDER research grant AYA2016-79724-C4-1-P, the present Ministry of Science and Innovation through research grant PID2019-107408GB-C42 and the National Research Agency through research grant AEI/10.13039/501100011033.
S.Z. acknowledges the support from contract: BES-2017-080509 associated to the first of these grants.
## Data Availability
The original data on which this article is based can be found in the ESO Science Archive Facility from ESO telescopes at La Silla Paranal Observatory.
|
2307.14783 | Emotion4MIDI: a Lyrics-based Emotion-Labeled Symbolic Music Dataset | We present a new large-scale emotion-labeled symbolic music dataset
consisting of 12k MIDI songs. To create this dataset, we first trained emotion
classification models on the GoEmotions dataset, achieving state-of-the-art
results with a model half the size of the baseline. We then applied these
models to lyrics from two large-scale MIDI datasets. Our dataset covers a wide
range of fine-grained emotions, providing a valuable resource to explore the
connection between music and emotions and, especially, to develop models that
can generate music based on specific emotions. Our code for inference, trained
models, and datasets are available online. | Serkan Sulun, Pedro Oliveira, Paula Viana | 2023-07-27T11:24:47Z | http://arxiv.org/abs/2307.14783v1 | # Emotion4MIDI: a Lyrics-based Emotion-Labeled Symbolic Music Dataset
###### Abstract
We present a new large-scale emotion-labeled symbolic music dataset consisting of \(12k\) MIDI songs. To create this dataset, we first trained emotion classification models on the GoEmotions dataset, achieving state-of-the-art results with a model half the size of the baseline. We then applied these models to lyrics from two large-scale MIDI datasets. Our dataset covers a wide range of fine-grained emotions, providing a valuable resource to explore the connection between music and emotions and, especially, to develop models that can generate music based on specific emotions. Our code for inference, trained models, and datasets are available online.
Keywords: Sentiment analysis · Symbolic music · Emotion classification · Music dataset
## 1 Introduction
Music has long been a powerful medium for emotional expression and communication [16]. The emotional response that music elicits has been studied by scholars from various fields such as psychology [19], musicology [15], and neuroscience [17]. Especially with the advent of deep learning, there has been an increasing interest in developing machine learning algorithms to automatically analyze and generate music that can evoke specific emotions in listeners [3].
Symbolic music - or MIDI (Musical Instrument Digital Interface) as it is used interchangeably - is represented as a sequence of notes and is a popular choice for machine learning models due to its compact and structured representation. Large raw MIDI datasets [30, 31] enable unsupervised training of deep neural networks to automatically generate symbolic music. Similar to language modeling, these networks learn to predict the next token i.e. the next note, and at inference time, generate output autoregressively, one token at a time.
However, a human composer's creative process does not simply involve mechanically writing one note after another; it often includes high-level concepts
such as motifs, themes and, ultimately, emotions [24]. To train deep neural networks to generate music based on emotions, large datasets of symbolic music annotated with emotional labels are required. Although there are some publicly available datasets with emotional labels, they are relatively small and do not cover a wide range of emotional states [33].
To address this issue, we present a new large-scale emotion-labeled symbolic music dataset created by analyzing the lyrics of the songs. Our approach leverages the natural connection between lyrics and music, established through emotions. To this end, we first trained models for emotion classification from text on GoEmotions [5], one of the largest text datasets with 28 fine-grained emotion labels. Using a model that is half the size of the baseline model, we obtained state-of-the-art results on this dataset. Later, we applied this model to the lyrics of songs from two of the biggest available MIDI datasets, namely Lakh MIDI dataset [30] and Reddit MIDI dataset [31]. Ultimately, we created a symbolic music dataset consisting of \(12k\) MIDI songs labeled with fine-grained emotions. We hope that this dataset will encourage further research in the field of affective algorithmic composition and contribute to the development of intelligent music systems that can understand and evoke specific emotions in listeners.
The remainder of this paper is structured as follows: after having introduced our aim and the overall results in Section 1, Section 2 presents the current state of the art on the most relevant topics for this work, namely text emotion classification and the existing emotion-labeled symbolic music datasets. Section 3 delves into the proposed solution, describing all the implemented steps, while results are presented and discussed in Section 4. Finally, we conclude by pointing out some possible future work in Section 5.
## 2 Related work
### Text emotion classification
Emotion classification from text - or sentiment analysis, as used interchangeably in the machine learning literature - allows us to automatically identify and/or quantify the emotion expressed in a piece of text, such as a review, social media post, or customer feedback [23]. Identifying the underlying emotion in text is useful in various fields such as customer service [10], finance [25], politics [14], and entertainment [1].
Machine learning methods have significantly advanced the state of the art in text emotion classification for the past two decades. However, the earliest works in this field relied on hand-crafted features, such as frequently used n-grams [27], or adjectives and adverbs that are associated with particular emotions [35]. Nonetheless, the advent of deep learning has made it computationally feasible to process raw inputs without extracting features manually, leading to better performance [18]. Recurrent Neural Networks and their improved variants such as Long Short-Term Memory were initially used [22] but were later replaced by the transformer model [34], which is the current state of the art in natural language processing (NLP) tasks.
Fine-tuning pretrained models on specific tasks has been shown to produce better performance. The GPT (generative pretraining) model is a large transformer that was pretrained on the task of next token prediction and then was fine-tuned on specific NLP tasks, resulting in state-of-the-art performance [29]. The BERT (Bidirectional Encoder Representations from Transformers) model improved upon these results by employing masked token prediction as its pretraining task [6].
### Emotion-labeled symbolic music datasets
MIDI (Musical Instrument Digital Interface) is a symbolic music format widely used to represent musical performances and compositions in the digital domain. MIDI files contain only the musical information, such as the notes, tempo, and dynamics, without the sound itself, like a "digital music sheet". Compared to audio formats, MIDI files have a smaller size and dimensionality, which makes them more manageable and suitable for modeling with deep neural networks [3].
The majority of existing literature on symbolic music generation relies on a non-conditional approach. In other words, these methods are trained on raw MIDI data without any explicit labels, allowing them to generate new music that is similar to the examples in the training dataset [12]. Some approaches, however, leverage low-level features within the data to create music in a conditional way [11]. For instance, they might use short melodies, chords, or single-instrument tracks as a basis for generating corresponding melodies. While such methods could be considered as "conditional", they do not make use of specific labels and are thus unable to capture high-level factors such as emotions or genres.
Using emotion as the specific high-level condition gives rise to the field of "affective algorithmic composition" (AAC) [36]. However, the development of machine learning AAC models is currently limited by the lack of large-scale symbolic music datasets with emotion labels. Some existing datasets include VGMIDI, which contains 204 piano-based video game soundtracks with continuous valence and arousal labels [8], Panda et al., which includes 193 samples with discrete emotion labels [26], and EMOPIA, which consists of 387 piano-based pop songs with four emotion labels [13]. Unfortunately, due to their small sizes, these datasets are insufficient for training deep neural networks with millions of parameters. Sulun et al. addressed this issue by labeling \(34k\) samples with continuous valence and arousal labels [33]. Though initially designed for audio samples, these labels were matched to their corresponding MIDI files to train emotion-based symbolic music generators that produced output music with emotional coherence. While this study exploited the correspondence between audio and symbolic music, there has been no utilization of the correspondence between lyrics and symbolic music to acquire high-level semantic labels.
## 3 Methodology
This section outlines the steps we followed to achieve our goal of creating a symbolic music dataset with emotion labels. Specifically, we begin by describing
the model utilized for emotion classification, followed by a discussion of the training process, and conclude with an overview of how the model was applied to song lyrics to extract the corresponding emotion labels.
### Model
We employ DistilBERT as the backbone of our model [32], which is a condensed and compressed variant of the BERT (Bidirectional Encoder Representations from Transformers) model [6], achieved through knowledge distillation [4, 9]. DistilBERT utilizes fewer layers than BERT and learns from BERT's outputs to mimic its behavior. Our model consists of 6 layers, with each layer containing 12 attention heads and a dimensionality of 768, yielding a total of \(67M\) parameters. To facilitate multi-label classification, we have customized the output layer while adding a sigmoid activation layer at the end. The output layer's size is determined by the number of labels present in the training dataset, which can be either 7 or 28.
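To make the setup above concrete, the following is a minimal sketch of a multi-label DistilBERT classifier using the Hugging Face transformers API; the `problem_type` flag and the 0.3 cutoff mirror the description above, but the exact head construction used in the paper may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_LABELS = 28  # or 7 for the grouped Ekman categories

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=NUM_LABELS,
    # uses a BCE-with-logits loss, i.e. an independent sigmoid per label
    problem_type="multi_label_classification",
)

batch = tokenizer(["Imagine all the people livin' for today"],
                  truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
probs = torch.sigmoid(logits)      # per-label probabilities
preds = (probs >= 0.3).int()       # the 0.3 decision cutoff used here
```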
### Training
The first step towards our aim of building an emotion-labeled symbolic music dataset is to train the model to perform multi-label emotion classification based on text input.
#### 3.2.1 Dataset
We trained our model using the GoEmotions dataset [5]. This dataset consists of English comments from the website _reddit.com_, which were manually annotated to identify the underlying emotions. It is a multi-label dataset, which means that each comment can have more than one emotion label. The dataset comprises 27 emotions and a "neutral" label. The labels are further grouped into 7 categories, including the six basic emotions identified by Ekman (joy, anger, fear, sadness, disgust, and surprise) as well as the "neutral" label [7]. The dataset has a total of \(58k\) samples, which were split into training, validation, and testing sets in the ratio of 80%, 10%, and 10%, respectively. Given the number of labels and its size, GoEmotions is one of the largest emotion classification datasets and has the highest number of discrete emotion labels [20].
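For reference, GoEmotions can be loaded from the Hugging Face hub; the snippet below assumes the public `go_emotions` dataset card, whose `simplified` configuration matches the 27-emotions-plus-neutral setup and the 80/10/10 split described here.

```python
from datasets import load_dataset

# The "simplified" configuration provides the 27 emotions + neutral and
# the 80/10/10 train/validation/test split described above.
ds = load_dataset("go_emotions", "simplified")
print(ds)                                  # DatasetDict with three splits
print(ds["train"].features["labels"])      # multi-label emotion classes
```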
#### 3.2.2 Training and evaluation metrics
We trained our models using binary cross-entropy loss. For evaluation, we used precision, recall, and F1-score, with macro averaging. The decision cutoff was set at 0.3, meaning that predictions with a value of 0.3 or greater are considered positive predictions and others negative.
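As a sketch of the evaluation protocol, the macro-averaged metrics with the 0.3 decision cutoff can be computed with scikit-learn as follows (function and variable names are illustrative):

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def evaluate(probs: np.ndarray, targets: np.ndarray, cutoff: float = 0.3):
    """Macro-averaged precision, recall and F1 at a fixed decision cutoff."""
    preds = (probs >= cutoff).astype(int)
    precision, recall, f1, _ = precision_recall_fscore_support(
        targets, preds, average="macro", zero_division=0)
    return precision, recall, f1
```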
#### 3.2.3 Implementation details
We trained two models to classify a given text into 7 and 28 labels. We used a dropout rate of 0.1 and a gradient clipping norm of 1. The batch size was set to 16 for the model with 7 output labels and to 32 for the model with 28 output labels. We applied a learning rate of \(5e-5\) for
the former and \(3e-5\) for the latter. We used early stopping considering the F1-score on the validation dataset, which corresponded to training for 10 epochs for both models. We implemented the models using Huggingface library [37] with Pytorch backend [28] and trained them using a single Nvidia GeForce GTX 1080 Ti GPU.
### Inference
After training the model for text-based emotion classification, we used it in inference mode, using the song lyrics from the MIDI files as inputs. This allowed us to create a MIDI dataset labeled with emotions.
#### 3.3.1 Datasets
We used two MIDI datasets that are publicly available and were created by gathering MIDI files from various online sources: the Lakh MIDI dataset consisting of \(176k\) samples [30] and the Reddit MIDI dataset containing \(130k\) samples [31]. We filtered the datasets by selecting MIDI files that contain lyrics in the English language with at least 50 words. This filtering process resulted in a total of 12509 files, consisting of 8386 files from the Lakh MIDI dataset and 4123 files from the Reddit MIDI dataset. During inference, we utilized the two pretrained models, feeding the entire song's lyrics, using a truncation length of 512.
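The paper does not state which tools were used for lyric extraction and language detection; the sketch below illustrates the filtering step under the assumption that `pretty_midi` and `langdetect` are used.

```python
import pretty_midi
from langdetect import detect

def extract_lyrics(path: str) -> str:
    """Concatenate the lyric events of a MIDI file into one string."""
    midi = pretty_midi.PrettyMIDI(path)
    return " ".join(event.text for event in midi.lyrics)

def usable_lyrics(path: str, min_words: int = 50):
    """Return the lyrics if they are English and long enough, else None."""
    try:
        text = extract_lyrics(path)
    except Exception:          # skip unreadable or corrupt MIDI files
        return None
    if len(text.split()) < min_words or detect(text) != "en":
        return None
    return text
```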
## 4 Results
In this section, we will first present the emotion classification performance of our trained models. Then, we will introduce the emotion-labeled MIDI dataset, which we created by analyzing the sentiment of the song lyrics using our trained models.
### Emotion classification on the GoEmotions dataset
We evaluated the performance of our trained models on the test split of the GoEmotions dataset and compared our results with the baseline presented in the original paper [5]. Similar to the original paper, we report our results for scenarios using two sets of labels, with 7 and 28 emotions. For each label, we reported the precision, recall, and F1-scores along with the macro-averages. It is important to mention that, as the dataset is imbalanced, macro-averaging is more appropriate than micro-averaging, as it was also used in the original paper. We note that the baseline model is BERT and has twice the size of our model [6].
The trade-off between precision and recall is determined by the cutoff value. Therefore, we emphasize higher F1-scores because they provide a more balanced perspective by taking the harmonic mean of precision and recall, and are much less sensitive to the cutoff value. Although the original paper did not state the
cutoff value, we achieved the best F1-score and similar performance to the original paper on the 7-label dataset using a cutoff value of 0.3. For consistency, we used the same value for the 28-label dataset. We present our results on the dataset with 7 and 28 labels in Tables 1 and 2, respectively.
On the 7-label dataset, our model performs better than the baseline on 2 labels, worse on 2 labels, and the same on the remaining 3 labels, as well as for the macro-average. On the 28-label dataset, our model surpasses the baseline, with a lower performance on only 2 labels, equal performance on 4 labels, and better performance on the remaining 22 labels. Furthermore, our model demonstrates an improvement of 0.04 in terms of the macro-average.
We hypothesize that a smaller model, such as ours (DistilBERT), may perform better than a larger baseline model (BERT) in certain settings, such as when there are a limited number of training samples or a high output/target dimensionality, as in the case of the 28-label dataset. In these scenarios, models are more prone to overfitting, as has been previously observed [38]. Additionally, the original paper [32] demonstrates that the DistilBERT model outperforms BERT on the Winograd Natural Language Inference (WNLI) dataset [21].
### Labeled MIDI dataset
We used our trained models to analyze the song lyrics of the Lakh and Reddit MIDI datasets, resulting in an augmented dataset that contains the file paths to 12509 MIDI files and their corresponding predicted probabilities for emotion labels. To provide more flexibility to the users, we did not apply a threshold to the predicted probabilities, allowing the entire dataset to be used as is. We generated two CSV (comma-separated values) files containing the 7 and 28 emotion labels as columns, with the 12509 MIDI file paths as rows. Our code for inference, trained models, and datasets are available online.4
Footnote 4: [https://github.com/serkansulun/lyricsemotions](https://github.com/serkansulun/lyricsemotions)
For demonstration purposes, we provide transposed versions of the tables, using only 3 samples, shown in Tables 3 and 4. We note that the values do not necessarily add up to one, due to the nature of multi-label classification.
Listing 1.1 shows sample entries with excerpts from the lyrics, together with the emotions having predicted probabilities higher than 0.1, in descending order. It is noteworthy that having a dataset with 28 emotion labels allows for a more nuanced representation of emotions. For instance, when we examine this dataset, the song "Imagine" is predicted to have "optimism" as its top emotion, whereas "Take a Chance on Me" is predicted to have "caring" as its top emotion. However, both songs are predicted to have "joy" as their top emotion in the dataset with only seven labels.
We also present the number of samples containing each emotion in our datasets in Figure 1. In these figures, we excluded the "neutral" label and considered emotions with a prediction value higher than 0.1 as positive labels.
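As a usage sketch, the released CSV files can be thresholded in the same way as done for Figure 1; the file and column names below are hypothetical.

```python
import pandas as pd

df = pd.read_csv("emotions_28_labels.csv")   # hypothetical file name
emotion_cols = [c for c in df.columns if c not in ("file_path", "neutral")]
positive = df[emotion_cols].ge(0.1)          # the 0.1 threshold used above
counts = positive.sum().sort_values(ascending=False)
print(counts.head(10))                       # most frequent emotions
```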
| Emotion | John Lennon - Imagine | ABBA - Take a Chance on Me | Elvis Presley - Are You Lonesome Tonight |
| --- | --- | --- | --- |
| admiration | 0.0021 | 0.0091 | 0.0048 |
| amusement | 0.0051 | 0.0012 | 0.0027 |
| anger | 0.0025 | 0.0018 | 0.0053 |
| annoyance | 0.0024 | 0.0020 | 0.0075 |
| approval | 0.0026 | 0.0809 | 0.0072 |
| caring | 0.0067 | **0.6169** | 0.0601 |
| confusion | 0.0070 | 0.0035 | 0.1029 |
| curiosity | 0.0332 | 0.0141 | **0.6502** |
| desire | 0.0482 | 0.0472 | 0.0055 |
| disappointment | 0.0044 | 0.0016 | 0.0199 |
| disapproval | 0.0019 | 0.0030 | 0.0048 |
| disgust | 0.0007 | 0.0003 | 0.0009 |
| embarrassment | 0.0006 | 0.0002 | 0.0045 |
| excitement | 0.0130 | 0.0049 | 0.0011 |
| fear | 0.0026 | 0.0026 | 0.0035 |
| gratitude | 0.0007 | 0.0017 | 0.0059 |
| grief | 0.0008 | 0.0016 | 0.0085 |
| joy | 0.0025 | 0.0040 | 0.0018 |
| love | 0.0021 | 0.1079 | 0.0193 |
| nervousness | 0.0007 | 0.0017 | 0.0094 |
| neutral | 0.2954 | 0.4288 | 0.0757 |
| optimism | **0.7554** | 0.1423 | 0.0060 |
| pride | 0.0010 | 0.0013 | 0.0006 |
| realization | 0.0023 | 0.0040 | 0.0045 |
| relief | 0.0004 | 0.0033 | 0.0011 |
| remorse | 0.0005 | 0.0012 | 0.1491 |
| sadness | 0.0011 | 0.0027 | 0.1767 |
| surprise | 0.0107 | 0.0005 | 0.0020 |

Table 4: Sample entries from the 28-label dataset.
Listing 1.1: Sample entries with excerpts from lyrics, and emotions with a predicted value higher than 0.1.

File path: lakh/5/58c076b72d5115486c09a7d9e6df1029.mid
Artist - Title: John Lennon - Imagine
Lyrics: Imagine there's no heaven. It's easy if you try. No hell below us. Above us, only sky. Imagine all the people. Livin' for today.
7-label predictions: joy: 0.8072, neutral: 0.1953
28-label predictions: optimism: 0.7554, neutral: 0.2954

File path: reddit/A/ABBA.Take a chance on me K.mid
Artist - Title: ABBA - Take a Chance on Me
Lyrics: If you change your mind, I'm the first in line. Honey, I'm still free. Take a chance on me. If you need me, let me know, gonna be around. If you've got no place to go, if you're feeling down.
7-label predictions: joy: 0.8948, neutral: 0.1420
28-label predictions: caring: 0.6169, neutral: 0.4288, optimism: 0.1423, love: 0.1079

File path: reddit/P/PRESLEY.Are you lonesome tonight K.mid
Artist - Title: Elvis Presley - Are You Lonesome Tonight
Lyrics: Are you lonesome tonight? Do you miss me tonight? Are you sorry we drifted apart? Does your memory stray to a bright summer day, When I kissed you and called you sweetheart?
7-label predictions: sadness: 0.7372, surprise: 0.5465
28-label predictions: curiosity: 0.6502, sadness: 0.1767, remorse: 0.1491, confusion: 0.1029
## 5 Conclusion and future work

In this work, we first trained models on the largest text-based emotion classification dataset, GoEmotions, in both 7-label and 28-label variants [5]. We achieved state-of-the-art results using a model half the size of the baseline. We then used these trained models to analyze the emotions of the song lyrics from the two largest MIDI datasets, the Lakh MIDI dataset [30] and the Reddit MIDI dataset [31]. This analysis resulted in an augmented dataset of 12509 MIDI files with emotion labels in a multi-label format, using either 7 basic-level or 28 fine-grained emotions. We made the datasets, inference code, and trained models available for researchers to use in various tasks, including symbolic music processing, natural language processing, and sentiment analysis.
In our future work, we plan to further narrow the considerable gap between symbolic music and emotion. In particular, we aim to create superior models that can automatically compose music based on emotions or user-provided input. We believe that incorporating emotions is vital in composing music; it can therefore help push the boundaries of computational creativity, bringing it one step closer to human-like performance.
|
2302.11025 | Asteroseismology of $δ$ Scuti stars: emulating model grids using a
neural network | Young $\delta$ Scuti stars have proven to be valuable asteroseismic targets
but obtaining robust uncertainties on their inferred properties is challenging.
We aim to quantify the random uncertainties in grid-based modelling of $\delta$
Sct stars. We apply Bayesian inference using nested sampling and a neural
network emulator of stellar models, testing our method on both simulated and
real stars. Based on results from simulated stars we demonstrate that our
method can recover plausible posterior probability density estimates while
accounting for both the random uncertainty from the observations and neural
network emulation. We find that the posterior distributions of the fundamental
parameters can be significantly non-Gaussian, multi-modal, and have strong
covariance. We conclude that our method reliably estimates the random
uncertainty in the modelling of $\delta$ Sct stars and paves the way for the
investigation and quantification of the systematic uncertainty. | Owen J. Scutt, Simon J. Murphy, Martin B. Nielsen, Guy R. Davies, Timothy R. Bedding, Alexander J. Lyttle | 2023-02-21T21:47:34Z | http://arxiv.org/abs/2302.11025v2 | # Asteroseismology of \(\delta\) Scuti stars: emulating model grids using a neural network
###### Abstract
Young \(\delta\) Scuti stars have proven to be valuable asteroseismic targets but obtaining robust uncertainties on their inferred properties is challenging. We aim to quantify the random uncertainties in grid-based modelling of \(\delta\) Sct stars. We apply Bayesian inference using nested sampling and a neural network emulator of stellar models, testing our method on both simulated and real stars. Based on results from simulated stars we demonstrate that our method can recover plausible posterior probability density estimates while accounting for both the random uncertainty from the observations and neural network emulation. We find that the posterior distributions of the fundamental parameters can be significantly non-Gaussian, multi-modal, and have strong covariance. We conclude that our method reliably estimates the random uncertainty in the modelling of \(\delta\) Sct stars and paves the way for the investigation and quantification of the systematic uncertainty.
keywords: asteroseismology - stars: variables: \(\delta\) Scuti - stars: fundamental parameters - methods: data analysis - methods: statistical
## 1 Introduction
Stellar ages for individual stars are notoriously difficult to measure (Soderblom, 2010). One method is to model a cluster with isochrones, which is particularly sensitive to high-mass stars at the main-sequence (MS) turn-off (Lipatov et al., 2022). Other techniques, such as the lithium depletion boundary (e.g. Galindo-Guil et al., 2022) or kinematics (Miret-Roig et al., 2022; Zerjal et al., 2023), are able to use low-mass stars, which are much more abundant. However, methods that utilize intermediate-mass stars for measuring stellar ages have been lacking.
Asteroseismology - the study of stellar oscillations - is highly sensitive to age and has long held promise as an independent method for age determination (e.g., Aerts, 2015). Like other techniques, asteroseismology is model-dependent, but the physics of those models is generally different from that of the high- and low-mass stars (Soderblom, 2010), hence the techniques are highly complementary (Kerr et al., 2022a,b). Until recently, however, asteroseismology of intermediate-mass stars (the so-called \(\delta\) Scuti variables) has been hampered by the difficulties in identifying which modes are excited. The discovery of regular patterns in the pulsation mode frequencies of some \(\delta\) Sct stars (Bedding et al., 2020) has opened up a pathway to determine their masses, ages, and metallicities, without the requirement that the star resides in a cluster or association.
In recent years, oscillations in large numbers of \(\delta\) Sct stars have been measured using white-light photometry from space telescopes such as _CoRoT_(e.g., Paparo et al., 2013; Michel et al., 2017; Barcelo Forteza et al., 2018), _Kepler_(e.g., Uytterhoeven et al., 2011; Balona et al., 2015; Garcia Hernandez et al., 2017; Bowman and Kurtz, 2018; Guzik, 2021) and _TESS_(e.g., Antoci et al., 2019; Hasanzadeh et al., 2021; Barac et al., 2022; Chen et al., 2022). Observed oscillation frequencies can be compared against grids of model frequencies to find a best-fitting set of parameters (Murphy et al., 2021, 2022). It is somewhat more challenging to understand the resulting uncertainties, which are not uniquely determined by the spacing of the model grid (Pedersen, 2020), and instead depend more strongly on the underlying physics (Steindl et al., 2021). Part of the challenge is that models can be computationally expensive and calculating new evolutionary tracks on-the-fly for Monte Carlo sampling is prohibitive.
In order to treat the uncertainties more robustly, we aim to convert a discrete grid of stellar models into a continuous function. We use a neural network to emulate a grid of stellar models that has been pre-computed over the range of expected stellar parameters. We combine the trained neural network with a Bayesian sampler to formally treat random uncertainties in the observables. This yields estimates for the posterior probability density of the fundamental properties which quantifies their uncertainties. It also allows us to infer viable frequencies for modes that were not detected, but which might exist in the data at low signal-to-noise.
In the following section we describe the grid of stellar models on which the neural network is trained, and in Sec. 3 we discuss the details of the network architecture and training method. In Sec. 4 we present the method used to perform the Bayesian inference, and show results for a selection of simulated and real sets of observations (Sec. 5).
## 2 The Stellar Model Grid
We used the model grid of Murphy et al. (in prep), consisting of evolutionary tracks computed with MESA (r15140; Paxton et al., 2011, 2013, 2015, 2018, 2019) and pulsation models calculated with GYRE (v6.0.1; Townsend & Teitler, 2013). Provisional versions of this grid have already been used to model the pulsations of \(\delta\) Sct stars (Murphy et al., 2022; Kerr et al., 2022a,b; Currie et al., 2022), and the physics of the models are described in Murphy et al. (2022).
A well-sampled grid was needed to train the neural network emulator. Here, evolutionary tracks were spaced by \(0.02\,\mathrm{M}_{\odot}\) in mass \(M\) and \(0.001\) in initial metallicity \(Z_{\mathrm{in}}\). For \(Z_{\mathrm{in}}>0.010\), the spacing was increased to \(0.002\). The grid is shown in mass-metallicity space in Fig. 1. A common problem in MESA is that pre-MS models sometimes fail to converge and the evolution is terminated (see, e.g., Steindl et al., 2021). In such cases, we attempted to re-calculate the track with a slightly increased mass (\(M\)+= 0.001) up to five times before abandoning that track. Abandoned tracks appear as gaps in the grid in Fig. 1.
It is also important to ensure the tracks are sampled well in age. Computational errors are minimised by keeping the time interval small throughout the evolution, even if not all time steps are saved as outputs. The internal sampling is described in Murphy et al. (in prep). For outputs, we saved evolutionary and pulsation models every \(0.05\,\mathrm{Myr}\) from \(2\,\mathrm{Myr}\) until \(10.5\,\mathrm{Myr}\), in order to adequately sample the rapid evolutionary changes that occur on the pre-MS. After this the evolution is somewhat slower, and sampling of \(3\,\mathrm{Myr}\) was deemed adequate up to ages of \(40\,\mathrm{Myr}\). Beyond this, the tracks were instead sampled according to changes in position on the HR diagram (limits of \(\Delta\log T_{\mathrm{eff}}=0.0006\) and \(\Delta\log L=0.002\)), with an upper limit of \(100\,\mathrm{Myr}\) between samples. Where large gaps occurred in the grid, or when the specific \(M\)-\(Z_{\mathrm{in}}\) combination demanded it, we manually recalculated tracks with finer sampling. This explains the variations in the number of samples per track in Fig. 1.
For each pulsation model, we computed the frequencies of radial modes (spherical degree \(\ell=0\)) having radial orders \(n\) from \(1\) to \(11\), and dipole (\(\ell=1\)) modes having \(n\sim 1\)-\(10\). This encompasses the range of radial orders observed for real stars (e.g. Bedding et al., 2022). We calculated the mean frequency separation between radial orders (\(\Delta\nu\)) using the radial modes having \(n=5\)-\(9\), by fitting a straight line to the mode frequencies as a function of \(n\)(see White et al., 2011), using
\[\nu=\Delta\nu(n+\ell/2+\epsilon). \tag{1}\]
The variable \(\epsilon\) is the intercept of that line with the y-axis, and describes the distance of the radial mode ridge from the y-axis in an echelle diagram. In addition to the individual mode frequencies, we stored the values of \(\Delta\nu\) and \(\epsilon\) for each model in the grid, since these asteroseismic quantities relate to astrophysical quantities (Murphy et al. in prep.).
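A minimal sketch of this fit for the radial (\(\ell=0\)) modes, where the slope of the straight line gives \(\Delta\nu\) and the intercept gives \(\Delta\nu\,\epsilon\) (variable names are illustrative):

```python
import numpy as np

def dnu_and_epsilon(radial_freqs, radial_orders):
    """Fit nu = Dnu * (n + eps) to the l = 0 modes with n = 5..9.

    The slope of the straight line gives Dnu; the intercept gives Dnu*eps.
    """
    slope, intercept = np.polyfit(radial_orders, radial_freqs, 1)
    return slope, intercept / slope   # (Dnu, eps)

# e.g. dnu, eps = dnu_and_epsilon(nu_l0, np.arange(5, 10))
```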
To reduce the effect of the strong covariance between stellar age \(\tau\) and mass \(M\), and ease the training of the neural network, we used the assumption that the MS lifetime is approximately proportional to \(M^{-3.2}\) and defined the scaled age (e.g., Davies & Miglio, 2016)
\[\mathcal{K}=10^{-4}\,\tau\,(M/\mathrm{M}_{\odot})^{3.2}. \tag{2}\]
This scaled age serves as an estimate of the fractional MS age of our models.
## 3 Constructing the Neural Network
To overcome the discretely sampled nature of the model grid, we used a neural network consisting of a series of fully connected dense layers in place of standard interpolation for continuous stellar model emulation. The network was trained on the model grid, learning to predict observable parameters given stellar model input parameters. This way, the network learned the map from observables to model parameters and could be used for likelihood estimation during inference. To this end, we used the fundamental parameters \(M\), \(Z_{\mathrm{in}}\) and scaled age (\(\mathcal{K}\)) as inputs for parameter augmentation and network training. Outputs consist of the classical observables (\(L\) and \(T_{\mathrm{eff}}\)); asteroseismic quantities (\(\Delta\nu\) and \(\epsilon\)); and \(11\) radial and \(10\) dipole mode frequencies. We refer to these \(25\) outputs collectively as the 'observable parameters'.
Once the input and output parameters were defined, we carried out dataset-wide parameter augmentation to improve the training of the network. We converted all parameters (excluding \(\epsilon\)) to the decimal logarithm and applied a Z-score standardisation to all parameters (including \(\epsilon\)). Both of these operations restricted all parameters to similar ranges, to avoid the neural network assigning erroneously high importance to parameters spanning several orders of magnitude during training. We found the combination of the two operations to be optimal for this investigation.
To further simplify the training process, we performed principal component analysis on the observable parameters in the model grid, as follows. For all models, we calculated the covariance matrix of all \(25\) observable parameters. The eigenvectors of the resulting covariance matrix, or 'principal components', were ranked in order of descending eigenvalue, returning a list of principal components explaining the most to the least variance in the observable parameters. We determined how many principal components to include using the explained variance ratio, which describes the percentage of the variance of the observable space present in just the chosen principal components. We found that \(9\) principal components explained all but \(10^{-4}\) per cent of the total variance. This sufficiently explained the covariance of the \(25\) parameters in the full observable space.
The use of principal components presents the neural network with a simpler map to learn--replacing the fundamental parameters by the reduced dimensions of the 'latent parameters'--and also removes covariance information from the observables, which is redundant for the neural network. We then added a custom non-trainable layer to the
Figure 1: The model grid used in this work, where each symbol represents an evolutionary track with a particular metallicity (\(Z_{\mathrm{in}}\)) and mass. Colour-coding indicates the number of models along each track for which pulsation frequencies were calculated (see Sec. 2).
neural network, which projects the latent parameters back into the full observable parameter space before the network outputs predictions.
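A possible implementation of the custom non-trainable projection layer, assuming the PCA components and mean are taken from the fitted decomposition (an inverse PCA transform expressed as a Keras layer):

```python
import tensorflow as tf

class LatentToObservable(tf.keras.layers.Layer):
    """Fixed (non-trainable) inverse-PCA map from the 9 latent outputs
    back to the 25 standardised observable parameters."""

    def __init__(self, components, mean, **kwargs):
        super().__init__(trainable=False, **kwargs)
        self.components = tf.constant(components, dtype=tf.float32)  # (9, 25)
        self.mean = tf.constant(mean, dtype=tf.float32)              # (25,)

    def call(self, latent):
        # inverse PCA: X = latent @ components + mean
        return tf.matmul(latent, self.components) + self.mean
```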
Finally, we split the model grid into a training and a testing set for the neural network. The training set was randomly selected to comprise 80 per cent of the model grid, to be seen by the network during training. The test set, composed of the remaining 20 per cent of the grid, was unseen by the network during the training process and was used solely for evaluation of network prediction performance. This served as a check that the network is capable of model grid interpolation -- the training set became a sparser representation of the original model grid, with the test set providing models guaranteed to hold combinations of parameters previously unknown to the network.
In addition to the data augmentation prior to training, the hyperparameters of a neural network can be tuned to promote faster and more stable learning. To quantify network performance for comparison between different hyperparameter permutations, we compared their validation loss profiles over multiple network training sessions. We adopted a 'grid search' method for testing potential combinations of network hyperparameters. This involved creating a grid of potential values for the number of fully connected dense layers (ranging from 3 to 8 in steps of 1), activation functions, optimizers, learning rates, loss functions, and batch sizes. We populated a grid with these hyperparameters, and then tested the resulting network at each position in the hyperparameter grid for successful validation loss minimisation.
We found the optimal network consisted of 6 fully-connected dense layers of 64 neurons, each using an exponential linear unit activation function (Clevert et al., 2015), followed by the custom layer for projection from latent to observable parameters, and a final dense output layer with linear activation function. We used the Adam optimizer (Kingma and Ba, 2014) with a learning rate of \(10^{-4}\), and the mean-squared-error loss function. A batch size of \(6\times 10^{4}\) models provided a good compromise between speed and training stability. We used a validation split of 25 per cent of the training set -- where the test set is used to evaluate neural network success after training, a 'validation set' is used for evaluation of neural network success during training. Once primary training was complete with the learning rate above, we saved and recompiled the network with a slower but less volatile learning rate of \(7\times 10^{-5}\), and restarted training until no validation loss reduction was observed for \(10^{4}\) training epochs. The network and custom latent-to-observable projection layer were constructed using the TensorFlow sequential API (Abadi et al., 2015).
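The optimal architecture and training schedule described above might be sketched as follows; the explicit latent-width layer before the projection is an assumption, since the text does not state how the latent outputs are produced.

```python
import tensorflow as tf

def build_emulator(projection_layer, n_inputs=3, n_latent=9, n_obs=25):
    model = tf.keras.Sequential(
        [tf.keras.layers.InputLayer(input_shape=(n_inputs,))]
        + [tf.keras.layers.Dense(64, activation="elu") for _ in range(6)]
        + [tf.keras.layers.Dense(n_latent),   # latent outputs (assumed width)
           projection_layer,                  # fixed latent-to-observable map
           tf.keras.layers.Dense(n_obs, activation="linear")]
    )
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
    return model

# Primary training, then a slower stage with early stopping on val_loss:
# model.fit(x, y, batch_size=60_000, validation_split=0.25, epochs=...)
# model.compile(optimizer=tf.keras.optimizers.Adam(7e-5), loss="mse")
# stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10_000)
# model.fit(x, y, batch_size=60_000, validation_split=0.25,
#           epochs=10**6, callbacks=[stop])
```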
Once the optimal network from the grid search was trained, we evaluated the network performance across the full observable parameter space. Using the test set previously removed from the model grid, we plotted distributions of the decimal logarithm prediction residuals for each parameter. This allowed us to visualize any bias and uncertainty inherent in the neural network predictions. We used the median absolute deviation of these prediction residual distributions, shown in Fig 2, to quantify network prediction uncertainty for observable parameters. We found an uncertainty in network predictions of \(8\times 10^{-4}\) dex and \(2\times 10^{-4}\) dex for log \(L\) and \(\log T_{\text{eff}}\), respectively, and a mean uncertainty of \(\sim 3\times 10^{-4}\) dex for the mode frequencies.
## 4 Inferring the fundamental stellar parameters
To perform the Bayesian inference on the input model parameters, \(\theta=(M,\log Z_{\text{in}},\log\mathcal{K})\), for a given observed set of mode frequencies, we sampled the posterior distribution
\[P(\theta|D)=\frac{\mathcal{P}(\theta)\mathcal{L}(D|\theta)}{\mathcal{E}(D)}. \tag{3}\]
Here, \(\mathcal{P}(\theta)\) is the prior on the input model parameters, and \(\mathcal{L}(D|\theta)\) is the likelihood of observing a set of parameters (\(D\)) given the model parameters. The evidence, \(\mathcal{E}(D)\), is calculated at each step during the sampling.
In addition to the input model parameters, we also included a variable offset term, \(\Delta n\), as input to account for the possible ambiguity in assigning the radial orders of the observed modes. This ambiguity arises because the radial orders of a set of modes cannot always be determined from the observed mode frequencies alone, and are typically decided by comparison to stellar models. Including \(\Delta n\) in the sampling allows us to marginalize over this uncertainty when estimating the posterior distribution of the input model parameters. We expect this uncertainty to only lead to an error of \(\pm 1\) radial order, and so we chose the prior on \(\Delta n\) to be a set of \(\delta\)-functions at \(\Delta n=-1,0\) and \(1\).
Table 1 lists a summary of the prior functions. For the priors on \(M\), log \(Z_{\text{in}}\) and log \(\mathcal{K}\), we chose to use \(\beta\) distributions, since they can be bounded to match the limits of the stellar model grid. In addition, the shape parameters of the \(\beta\) distributions may be chosen such that the priors reflect our expectation of the distribution of real observations of \(M\), \(Z_{\text{in}}\) and \(\mathcal{K}\).
Our choice of range and shape of the prior on \(\mathcal{K}\) is motivated by the age range and distribution we expect to target, and also be able to observe. Mode identification for \(\delta\) Sct stars is currently possible up to approximately one third of the MS age, after which the coupling between the buoyancy dominated and acoustic modes spoils the regular mode frequency patterns. Furthermore, at older ages the physics of mixing and overshooting become more important, and those were not treated as variables in the model grid. Hence, the prior on age extends to approximately one third of the expected MS lifetime. The lower limit on the prior on the scaled age was chosen because stars in our mass range of interest (see below) do not evolve to cross the \(\delta\) Sct instability strip until ages \(\geq 2\) Myr. Due to the motion of stars through the \(\delta\) Scitnstability strip, the age prior is biased toward lower ages, with a fall-off in the age prior distribution toward older stars.
The prior on \(Z_{\text{in}}\) ranges from approximately 0.07 to 1.5 times the solar metal mass fraction of 1.42 per cent used in the models (Asplund et al., 2009), which covers the metallicity distribution of stars forming within approximately 1 kpc of the Sun at the current age of the Galaxy (Hayden et al., 2020). The existence of metal-poor \(\delta\) Sct stars in modern star-forming regions (e.g. HD 139614 in Upper Centaurus Lupus; Murphy et al., 2021) suggests that slightly sub-solar metallicities are more common than slightly super-solar metallicities in young \(\delta\) Sct stars. We therefore skewed the prior probability density toward sub-solar values.
Finally, the mass range was chosen to ensure that models exist both within and on either side of the instability strip (Dupret et al., 2004; Murphy et al., 2019). Our slight skew towards lower masses accounts for the similar skew present in the stellar initial mass function (Krumholz, 2014).
Figure 3 shows samples drawn from these prior density distributions, both in terms of the sampled variables and those transformed to \(M\), \(Z_{\text{in}}\) and age. These priors are applied to the inference performed for all targets (see below). Additional priors may be applied on a target-by-target basis if, for example, the mass can be constrained by other sources such as orbiting companions, or limits can be placed on the metallicity by spectroscopy.
For each of the samples drawn from the prior distributions, the
neural network produces the following outputs: a set of mode frequencies, the effective temperature, and the luminosity. Given a set of outputs we then evaluated the likelihood of the observations by
\[\log\mathcal{L}\left(D|\theta\right)=\log\mathcal{L}(D_{\mathrm{S}}|\theta)+\log \mathcal{L}(D_{\mathrm{C}}|\theta). \tag{4}\]
We separate the log-likelihood into the seismic variables, \(D_{\mathrm{S}}\), and the classical (non-seismic) variables, \(D_{\mathrm{C}}\). The contribution to the likelihood of the mode frequencies is given by
\[\log\mathcal{L}(D_{\mathrm{S}}|\theta)=\sum_{i}\log\mathcal{N}\left(\nu_{i}^{\mathrm{obs}},\sqrt{\sigma_{\nu_{i}^{\mathrm{obs}}}^{2}+\sigma_{\nu_{i}^{\mathrm{NN}}}^{2}}\right), \tag{5}\]
and that of the classical observables is given by
\[\log\mathcal{L}(D_{\mathrm{C}}|\theta)=\log\mathcal{N}\left(\log L^{\mathrm{obs}},\sqrt{\sigma_{L^{\mathrm{obs}}}^{2}+\sigma_{L^{\mathrm{NN}}}^{2}}\right)+\log\mathcal{N}\left(T_{\mathrm{eff}}^{\mathrm{obs}},\sqrt{\sigma_{T^{\mathrm{obs}}}^{2}+\sigma_{T^{\mathrm{NN}}}^{2}}\right). \tag{6}\]
The width of the probability densities used in the inference is given by two terms that specify the observational uncertainty (superscript 'obs'), and the noise due to the precision of the neural network's ability to emulate the model grid (superscript 'NN'). Based on the spread of the residuals presented in Fig. 2, this emulation uncertainty is approximately \(4\times 10^{-4}\) dex, which equates to a relative uncertainty of \(\approx 0.1\) per cent on the output parameters. This additional uncertainty was added in quadrature to the uncertainty of the observed mode frequencies.
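A sketch of the likelihood of Eqs. 4-6, with the observational and emulator uncertainties added in quadrature (the dictionary keys are illustrative):

```python
import numpy as np

def log_normal(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi) - np.log(sigma) \
           - 0.5 * ((x - mu) / sigma) ** 2

def log_likelihood(pred, obs):
    """Eqs. 4-6: observed and emulator uncertainties in quadrature."""
    sig_nu = np.hypot(obs["nu_err"], pred["nu_nn_err"])
    ll = np.sum(log_normal(obs["nu"], pred["nu"], sig_nu))
    ll += log_normal(obs["logL"], pred["logL"],
                     np.hypot(obs["logL_err"], pred["logL_nn_err"]))
    ll += log_normal(obs["teff"], pred["teff"],
                     np.hypot(obs["teff_err"], pred["teff_nn_err"]))
    return ll
```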
In the following we will use simulated frequencies corresponding to those obtained from 2 sectors of data from the _TESS_ mission (Ricker et al., 2015). We therefore adopted an uncertainty on the mode frequencies of \(0.02\mathrm{d}^{-1}\), which is the frequency resolution of the resulting power spectra. The uncertainties on \(\log L\) and \(\log T_{\mathrm{eff}}\) depend on the target in question, but for the simulations shown below these were fixed to \(\sigma_{L}=0.05\) dex and \(\sigma_{T}=200\)K.
The neural network residuals also showed a bias of \(\sim 10^{-5}\) dex, which equates to an offset of \(0.01\) per cent on each of the output parameters. This offset is small compared to the combination of the observed and neural network uncertainties, and so we did not consider it in the analysis. However, if either of these sources of uncertainty were decreased by, for example, improving the estimates of the observed mode frequencies, the importance of this bias would need to be re-evaluated.
We performed the sampling using the nested sampling method from the Dynesty Python package (Skilling, 2004; Speagle, 2020). Nested sampling determines iso-likelihood contours in the input parameter space, which were iteratively redefined until samples were consistently drawn around the global likelihood maximum. In the Dynesty package, this process is terminated when the change in the model log-evidence, \(\Delta\log\mathcal{E}\), falls below a predefined value chosen according to the Dynesty documentation.
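A minimal Dynesty setup consistent with the description above; the \(\beta\) shape parameters in the prior transform are placeholders rather than the values used in the paper, and `emulator` and `obs` are assumed to be defined elsewhere (e.g. as in the likelihood sketch above).

```python
import numpy as np
from scipy import stats
import dynesty

def scaled_beta(u, a, b, lo, hi):
    return lo + (hi - lo) * stats.beta.ppf(u, a, b)

def prior_transform(u):
    m = scaled_beta(u[0], 2.0, 5.0, 1.3, 2.3)      # placeholder shapes
    logz = scaled_beta(u[1], 2.0, 5.0, -3.1, -1.6)
    logk = scaled_beta(u[2], 2.0, 2.0, -3.0, -0.3)
    dn = (-1.0, 0.0, 1.0)[min(int(u[3] * 3), 2)]   # discrete Delta-n prior
    return np.array([m, logz, logk, dn])

def loglike(theta):
    pred = emulator(theta)        # trained network wrapper, assumed defined
    return log_likelihood(pred, obs)

sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=4)
sampler.run_nested(dlogz=0.01)    # stop once the evidence change is small
samples = sampler.results.samples
```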
| Parameter | Prior function |
| --- | --- |
| \(M\) [M\(_{\odot}\)] | \(\beta_{b}^{a}(1.3, 2.3)\) |
| \(\log Z_{\mathrm{in}}\) | \(\beta_{b}^{a}(-3.1, -1.6)\) |
| \(\log\mathcal{K}\) | \(\beta_{b}^{a}(-3, -0.3)\) |
| \(\Delta n\) | \(\delta\{-1, 0, 1\}\) |

Table 1: Prior density functions used in Eq. 3. The priors on \(M\), \(\log Z_{\mathrm{in}}\) and \(\mathcal{K}\) are given by \(\beta_{b}^{a}\), where \(a\) and \(b\) are the shape parameters of the \(\beta\)-distributions, and the prior on \(\Delta n\) is a series of \(\delta\)-functions at integer values. In all cases the arguments to the distribution functions denote lower and upper limits.
Figure 2: Residual distributions of predictions from the neural network. The residuals are that of the decimal logarithms of \(T_{\mathrm{eff}}\), \(L\) (blue), and mode frequencies (\(n\), \(\ell\)) (orange). The boxes show a central line at the median value of the distribution, with edges at the lower and upper quartiles. Whiskers extend to the 5th and 95th percentile range. The dashed line indicates complete agreement between the network predictions and model grid values.
The method presented above is not restricted to using Dynesty; other sampling methods may be used, such as MultiNest (Buchner et al., 2014) or EMCEE (Foreman-Mackey et al., 2013).
## 5 Results
### Simulated stars
In order to test our methodology and validate the accuracy of the neural network emulator, we have performed tests based on 25 simulated stars in a 'hare-and-hounds' exercise. To produce these simulated stars, we proceeded as follows. Values of stellar mass and initial metallicity were selected to lie in between values in the grid, but still within the defined parameter range of the grid. We calculated stellar models and pulsation frequencies using MESA and GYRE, using the same settings as for the grid. We selected ages from the newly calculated tracks, which then defined the truth values for mass, metallicity and age for our simulated stars and their associated 'true' observables. We only selected a subset of the calculated modes, to better reflect typical observations of \(\delta\) Sct stars. We selected modes at four consecutive radial orders for each degree, within the bounds of \(n=1\)-\(8\).
To simulate noisy observations, we added noise to the observable parameters of the simulated stars. These random offsets were drawn from a normal distribution, with mean of zero and a standard deviation of: \(0.02\,\mathrm{d}^{-1}\) for the mode frequencies, \(200\,\mathrm{K}\) for the effective temperatures, and \(0.05\,\mathrm{dex}\) for the log-luminosity.
#### 5.1.1 Exemplar simulated star
Figure 4 shows the posterior probability estimates for a simulated star with one of the best results. The figure demonstrates our ability to quantify the random uncertainty on the inferred properties from the posterior, and shows that the true properties of this simulated star lie comfortably within the posterior distribution. In this case our method is performing as required, but it is worth noting that, even for this exemplar, the posterior still contains significant correlation between the parameters.
We see that the posterior distribution is not well described by a series of separable 1-D normal distributions. Instead, there are strong covariances between inferred parameters, which are to be expected from stellar evolution theory. However, the 1-D marginal posterior distributions show evidence of not being normally distributed and, in the case of the stellar age, even somewhat multi-modal. To examine the degree of accuracy and precision of the results we will study the summary statistics of the posterior distribution. This does not capture all the detail that is of value, but is nonetheless useful as a test of our methods.
#### 5.1.2 Results for 25 simulated stars
While examining a single simulated star is useful, it is hard to draw conclusions on the validity of our approach because we are looking at a single realisation of noise on the observables. We now consider all 25 simulated stars, including the exemplar above (simulated star index 5), to look at the statistics of our posterior probability distributions when compared to the truth values of the input properties. As part of our method, we fitted a parameter to account for our uncertainty in our assumption of the radial order label \(n\). In our tests on simulated stars, we recovered the correct radial order in all cases with no meaningful uncertainty on the posterior of the radial order labels.
For each parameter of each star, we computed the difference between the truth value and the inferred value (the mean of the posterior samples for that parameter) divided by the uncertainty (the standard deviation of the posterior samples for that parameter). If our inference is perfect, and our posterior distributions are well behaved, then this metric should be drawn from a normal distribution with zero mean and unit variance. However, as observed above in our exemplar, multi modality, non-Gaussianity, and other pathological behaviour in the posterior can bias our metrics away from our assumed normal distribution.
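This metric can be computed directly from the posterior samples (a sketch):

```python
import numpy as np

def normalised_residual(truth, samples):
    """(truth - posterior mean) / posterior std; approximately N(0, 1)
    for a well-calibrated, unimodal posterior."""
    return (truth - np.mean(samples)) / np.std(samples)
```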
Figure 5 shows the metric for each simulated star and each input property of the star. It is clear that the majority of our simulated-star results are consistent with the truth value given the uncertainty, and, broadly, the numbers of metrics at the 1 and 2 sigma levels are consistent with expectations. There are, however, some outliers and noteworthy results, which we will discuss. The most significant outlier in terms of metallicity is simulated star index 14, and an interesting behaviour in the age distribution is observed for simulated star index 21.
#### 5.1.3 Further tests of simulated star 14
Simulated star 14 appears as an outlier in metallicity by \(\sim 2.5\sigma\). To examine this behaviour we have produced 10 more realisations of this simulated star. That is, we have taken the same truth values as inputs, but redrawn the simulated noise on each observable parameter using the same noise distributions.
Figure 6 shows the posterior samples for the original and subsequent runs for simulated star 14. Firstly, it is clear that the posterior for \(Z_{\mathrm{in}}\) is multi-modal and contains significant covariance. Secondly, there is a bias of the posterior distributions away from the truth value in both metallicity and age that cannot be explained simply as a result of the realisation noise on the observables.
We have checked for the possible origins of this bias. We examined the prior probability distribution, but found it to be smooth and nearly flat over the region of the posterior. We have performed multiple realisations of the noise and still observed this bias and therefore also exclude the noise or likelihood as the source of the bias.
A possible source of error is the neural network emulation producing differences in the predicted mode frequencies. While these errors are typically small, of order \(3\,\times\,10^{-4}\) dex, the noise from the neural network is not random noise that would be expected to reduce with more realisations of the observables. Instead, the error is systematic and will produce a bias. Such a systematic error will always be present in emulation; what matters is the magnitude of the resulting bias. For this simulated star, the bias is similar to the reported uncertainty, which is about \(1.5\,\mathrm{Myr}\). However, this error can be reduced by extending the training time of the neural network, or by increasing the grid search density around the optimal neural network architecture.
#### 5.1.4 Further tests of simulated star 21
Simulated star 21 shows an interesting behaviour in the age posterior. Figure 7 shows the posteriors for the original simulated star 21 and for 10 more realisations, as we did for simulated star 14 above. No meaningful bias is observed in the posteriors for mass or metallicity, given the priors we apply.
The true age of the simulated star was \(12.96\,\mathrm{Myr}\), which corresponds to the pre-MS evolution stage. The age posterior is clearly bimodal, with solutions around \(10\,\mathrm{Myr}\) and \(160\,\mathrm{Myr}\). This behaviour is observed in all the realisations, lending confidence that it is not a result of the added noise. In fact, this bi-modality is consistent with our understanding of the evolution of these stars and illustrates the difficulty of distinguishing the phase of the MS evolution where the track crosses its pre-MS evolution in the HR diagram.
### Application to HD 99506
We applied our methods to HD 99506, which is one of the high-frequency \(\delta\) Sct stars discussed by Bedding et al. (2020). We used the following inputs: \(T_{\rm eff}=7970\pm 250\) K and \(L\)/L\({}_{\odot}=7.58\pm 0.37\) (taken from Table 1 of Bedding et al., 2020), and the mode frequencies that we have measured and listed in Table 2. We chose only the mode frequencies that were obvious, leaving out tentatively identified modes such as the \(n=4\) and \(n=9\) radial modes, and the \(n=1\) and \(n=8\) dipole modes. The identified modes span two radial orders more than any of the simulated stars and so, despite the gaps at some orders, they provide tighter constraints. The resulting posteriors on \(M\), \(Z_{\rm in}\) and age are unimodal and indicate a percent-level random uncertainty (Fig. 8). The inferred age (\(9.71\pm 0.31\) Myr) corresponds to the pre-MS phase, before the onset of pp-chain H-burning but after the temporary pre-MS CNO burning phase.
The calculation of well-sampled posteriors for uncertainty estimates is a marked improvement on what is possible using discrete grid points and \(\chi^{2}\) minimisation (e.g. Kerr et al., 2022), where an arbitrary threshold in \(\chi^{2}\) needs to be adopted. It is especially useful that the posteriors are marginalized, given the aforementioned correlation in astrophysical parameters demonstrated with the simulated stars.
The neural network is also able to generate posterior predictions for each mode frequency, using the posterior samples as inputs. This can be useful for estimating the validity of uncertain mode identifications. In Fig. 9, we see that the leftmost (lower frequency) of two close peaks at the \(n=4\) radial mode is a good match and could perhaps be identified. The weak peak at \(n=9\) would also have fitted well. Inclusion of these would have resulted in tighter posteriors. On the other hand, none of the missing dipole mode frequencies, nor the \(n=1\) or 2 radial modes, would have been good additions. If we had supplied those modes as input, the posteriors would have broadened markedly.

Figure 3: Left: Samples drawn from the one-dimensional prior distributions of the sampled parameters. The diagonal frames show the marginalized distributions (black) and the functions used to draw the samples (orange). The priors used for \(\Delta n\) are \(\delta\)-functions at \(\Delta n=-1,0\), and \(1\). The off-diagonal frames show the two-dimensional distributions of the input parameters. Right: The samples from the left frames (black) transformed to the same units as the output from the stellar model grid (blue).

Figure 4: Samples drawn from the posterior distribution of simulated star 5. For clarity we transform the initial metallicity to % and \(\mathcal{K}\) to stellar age \(\tau\). The model input values used to generate the simulated star 5 data are shown in blue.
## 6 Conclusions
We have presented a method for performing Bayesian inference on fundamental stellar properties of \(\delta\) Sct stars using a neural network. This method emulates the stellar model and oscillation codes, MESA and GYRE, by learning from a grid of models that encompasses the physical properties of stars in or near the \(\delta\) Sct instability strip. We used a nested sampling method to estimate the posterior distribution of the fundamental stellar properties, given a set of mode frequencies and classical observables. The resulting posterior distribution reflects the random observational uncertainty as well as the uncertainty of the neural network. By providing samples from the posterior probability density, which might be multimodal, non-Gaussian, and strongly correlated, we formally quantified the statistical uncertainty in the fundamental stellar properties. This improves our ability to investigate the systematic uncertainty in the stellar models.
We used a test set that was initially unseen by the training algorithm to evaluate the performance of the trained neural network. We found that the neural network is capable of reaching an average frequency precision of \(\approx 3\times 10^{-4}\) dex, with an offset of \(\approx 5\times 10^{-5}\) dex. These performance metrics may improve if the network were retrained with, for example, additional grid points or with the aim of reaching a lower target loss. However, the flexibility of neural networks allows for the extension of the model grid to include additional variables, such as initial helium abundance or convective overshoot, by increasing the number of neurons in the network architecture (see, e.g., Hendriks & Aerts, 2019; Lyttle et al., 2021).

Figure 5: Difference between the inferred and model values of \(\mathbf{M}\), \(Z_{\rm in}\) and age relative to the uncertainty of the inference, for a set of 25 simulated stars. The uncertainty is taken as the standard deviation of the marginalized posterior distributions of each of the parameters (see Sec. 5.1.2 for details).

Figure 6: Samples drawn from the posterior distribution of simulated star 14 are shown in red. Subsequent runs using the same truth values (M=1.696 M\({}_{\odot}\), \(Z_{\rm in}\)=0.0136, \(\tau\)=8.76 Myr) but different noise realisations are shown in black. The truth values are shown in blue (see Sec. 5.1.3 for details).

Figure 7: Samples drawn from the posterior distribution of simulated star 21 are shown in red. Subsequent runs using the same truth values (M=1.777 M\({}_{\odot}\), \(Z_{\rm in}\)=0.0182, \(\tau\)=12.97 Myr) but different noise realisations are shown in black. The truth values are shown in blue (see Sec. 5.1.4 for details).
We applied our method to 25 simulated stars to quantify our accuracy and precision in the recovery of our input stellar properties. We showed the method to be capable of faithfully reproducing the true input parameters with the exception of simulated star 14. On investigation, we found a bias in the reported metallicity and age of this simulated star to be a result of the error in prediction by the neural network. The bias is of order 1.5 Myr in age for this star, but was confirmed to be systematic in nature. Further improvements in the accuracy of the neural network emulation would reduce the size of this effect.
Finally, we have applied our method to observations of a real \(\delta\) Sct star, HD 99506. We found this star to be in the pre-MS stage of evolution and report a random uncertainty of only 3 per cent in age. Systematic uncertainty, such as that arising from missing or imperfect physics, has not been accounted for in this number but our methods pave the way for the quantification of the systematic uncertainty in future work. Similarly, the inclusion of additional physics such as rotation, and additional modes such as those of higher degree or different azimuthal order, would be trivial extensions to this framework in future.
## Acknowledgements
SJM was supported by the Australian Research Council (ARC) through Future Fellowship FT210100485. MBN and GRD acknowledge support from the UK Space Agency. TRB acknowledges support from the Australian Research Council through Laureate Fellowship FL220100117. OJS and AJL acknowledge the support of the Science and Technology Facilities Council. This paper has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (CartographY GA. 804752). This paper includes data collected by the _TESS_ mission. Funding for the _TESS_ mission is provided by the NASA Science Mission Directorate.
## Data Availability
The data will be supplied upon reasonable request.
\begin{table}
\begin{tabular}{c c c}
\hline
frequency \([\mathrm{d}^{-1}]\) & \(n\) & \(\ell\) \\
\hline
33.48997 & 3 & 0 \\
46.46549 & 5 & 0 \\
53.51968 & 6 & 0 \\
60.60165 & 7 & 0 \\
67.65639 & 8 & 0 \\
35.99870 & 3 & 1 \\
50.04504 & 5 & 1 \\
57.18150 & 6 & 1 \\
64.19957 & 7 & 1 \\
\hline
\end{tabular}
\end{table}
Table 2: Identified modes for HD 99506 used as inputs in the modelling.
Figure 8: The posterior distributions in \(M\), \(Z_{\mathrm{in}}\) and age for HD 99506, based on _TESS_ observations.
Figure 9: The posterior predicted frequencies overlaid on observed mode frequencies for HD 99506. The greyscale is the observed amplitude spectrum smoothed by a Gaussian of width 4 times the frequency resolution.
## Software
Below we include additional software used in this work which has not explicitly been mentioned above.
* Python Van Rossum and Drake Jr (1995)
* matplotlib Hunter (2007)
* Numpy Harris et al. (2020)
* Scipy Virtanen et al. (2020)
* Pandas Reback et al. (2020)
* corner Foreman-Mackey (2016)
* lightkurve Lightkurve Collaboration et al. (2018)
* echelle Hey and Ball (2020)
|
2303.09543 | Existence and Uniqueness Theorems for Differential Equations with
Proportional Delay | The differential equation (DE) with proportional delay is a particular case
of the time-dependent delay differential equation (DDE). In this paper, we
solve non-linear DEs with proportional delay using the successive approximation
method (SAM). We prove existence and uniqueness theorems and stability results for
DEs with proportional delay using SAM. We derive convergence results for these
equations by using the Lipschitz condition. We generalize these results to the
fractional differential equations (FDEs) and system of FDEs containing Caputo
fractional derivative. Further, we obtain the series solution of the pantograph
equation and Ambartsumian equation in the form of a power series which are
convergent for all reals. Finally, we illustrate the efficacy of the SAM by
example. The results obtained by SAM are compared with exact solutions and
other iterative methods. It is observed that SAM is simpler compared to other
methods and the solutions obtained using SAM are consistent with the exact
solution. | Prajakta Rajmane, Jayvant Patade, M. T. Gophane | 2023-03-16T17:55:17Z | http://arxiv.org/abs/2303.09543v1 | # Existence and Uniqueness Theorems for Differential Equations with Proportional Delay
###### Abstract
The differential equation (DE) with proportional delay is a particular case of the time-dependent delay differential equation (DDE). In this paper, we solve non-linear DEs with proportional delay using the successive approximation method (SAM). We prove existence and uniqueness theorems and stability results for DEs with proportional delay using SAM. We derive convergence results for these equations by using the Lipschitz condition. We generalize these results to fractional differential equations (FDEs) and systems of FDEs containing the Caputo fractional derivative. Further, we obtain the series solutions of the pantograph equation and the Ambartsumian equation in the form of power series which are convergent for all reals. Finally, we illustrate the efficacy of SAM by an example. The results obtained by SAM are compared with exact solutions and other iterative methods. It is observed that SAM is simpler compared to other methods and the solutions obtained using SAM are consistent with the exact solution.
keywords: Successive approximation method; Lipschitz condition; Caputo derivative; Existence-uniqueness; proportional delay; pantograph equation; Ambartsumian equation. Msc: [2020] 26A33; 34A08; 34K06; 34K20. +
Footnote †: journal: Journal of LaTeX Templates
## 1 Introduction
A delay differential equation (DDE) contains the state variable evaluated at a past time \(t-\tau\). The inclusion of the delay \(\tau\) makes the DDE an infinite dimensional dynamical system. Even though it is very difficult to analyze and solve such equations, this branch is popular among applied scientists due to its applications in various fields.
On the other hand, if the order of the derivative in a differential equation is an arbitrary positive number (instead of a positive integer) then the equation is called a fractional differential equation (FDE). Even though there are several inequivalent definitions of the fractional derivative
operator, one can select the derivative which is appropriate for the model under consideration. This flexibility is a key feature behind the popularity of fractional calculus.
Daftardar-Gejji and coworkers proposed numerical schemes [1; 2] for solving fractional order delay differential equations (FDDE). The modified Laguerre wavelets method [3], the spectral collocation method [4], and fractional-order Fibonacci-hybrid functions [5] are a few other methods for solving FDDEs. Stability analysis of FDDEs is proposed in [6; 7; 8; 9]. Applications of FDDE are presented in [10; 11; 12; 13].
In general, the delay \(\tau\) in the DDE \(x^{\prime}(t)=f(t,x(t),x(t-\tau))\) is not constant. The analysis becomes more difficult when \(\tau\) depends on time or state. The proportional delay differential equation \(x^{\prime}(t)=f(t,x(t),x(qt))\), or pantograph equation, is a particular case of a time-dependent DDE with \(\tau(t)=(1-q)t\). These equations were proposed by Ockendon and Tayler in the seminal work [14] to model the motion of an overhead trolley wire. A few other applications of these equations are discussed in [15; 16]. The Daftardar-Gejji and Jafari method (DJM) is applied in [17] to find analytical solutions of the pantograph equation. Further, the authors presented various relations of the solution series with existing special functions. Patade and Bhalekar proposed a power series solution of the Ambartsumian equation [18] by using DJM. Analytical solutions of the pantograph equation are discussed in [19]. In this paper, we generalize the results in [20] on the existence-uniqueness of ordinary differential equations (ODE) to differential equations with proportional delay, FDEs with proportional delay and systems of FDEs with proportional delay. We use the successive approximation method (SAM) to prove our results.
The paper is organized as follows. In Section 2, we give definitions and notations of fractional derivatives and integrals. Differential equations with proportional delay are described in Section 3. Successive approximate solutions and the existence theorem are discussed in Section 4. The stability analysis is presented in Section 5. The series solutions of the pantograph equation and the Ambartsumian equation are described in Section 6. The generalization of these results to FDEs and systems of FDEs is derived in Section 7 and Section 8. Section 9 deals with an illustrative example and the conclusions are summarized in Section 10.
## 2 Preliminaries and Notations
**Definition 2.1**.: _[_21_]_ _The Riemann-Liouville fractional integral of order \(\alpha>0\) of \(f\in C[0,\infty)\) is defined as_
\[I^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-\zeta)^{\alpha-1}f( \zeta)d\zeta,\quad t>0. \tag{2.1}\]
**Definition 2.2**.: _[_21_]_ _The (left sided) Caputo fractional derivative of \(f,f\in C_{-1}^{m},m\in\mathbb{N}\cup\{0\}\), is defined as:_
\[D^{\alpha}f(t) = \frac{d^{m}}{dt^{m}}f(t),\quad\alpha=m \tag{2.2}\] \[= I^{m-\alpha}\frac{d^{m}}{dt^{m}}f(t),\quad m-1<\alpha<m,\quad m \in\mathbb{N}.\]
Note that for \(0\leq m-1<\alpha\leq m\) and \(\beta>-1\)
\[I^{\alpha}x^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta+\alpha+1)}x^{\beta+\alpha},\] \[\left(I^{\alpha}D^{\alpha}f\right)(t) = f(t)-\sum_{k=0}^{m-1}f^{(k)}(0)\frac{t^{k}}{k!}. \tag{2.3}\]
**Definition 2.3**.: _[_21_]_ _The Mittag-Leffler function is defined as_
\[E_{\alpha}(t)=\sum_{n=0}^{\infty}\frac{t^{n}}{\Gamma(\alpha n+1)},\quad\alpha>0. \tag{2.4}\]
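For numerical work, \(E_{\alpha}(t)\) can be approximated by truncating the series (2.4). The sketch below is ours, for illustration only; dedicated algorithms are preferable for large arguments, where the partial sums suffer cancellation:

```python
import math

def mittag_leffler(alpha, t, terms=50):
    """Truncated series for E_alpha(t) = sum_n t^n / Gamma(alpha*n + 1)."""
    return sum(t ** n / math.gamma(alpha * n + 1) for n in range(terms))

# Sanity check: E_1(t) reduces to exp(t)
assert abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-9
```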
**Definition 2.4**.: _[_21_]_ _The multi-parameter Mittag-Leffler function is defined as:_
\[E_{(\alpha_{1},\cdots,\alpha_{n}),\beta}(z_{1},z_{2},\cdots,z_{n})=\sum_{k=0}^ {\infty}\sum_{\begin{subarray}{c}l_{1}+\cdots+l_{n}=k\\ l_{j}\geq 0\end{subarray}}(k;l_{1},\cdots,l_{n})\left[\frac{\prod_{j=1}^{n}z_{ j}^{l_{j}}}{\Gamma(\beta+\sum_{j=1}^{n}\alpha_{j}l_{j})}\right].\]
_where, \((k;l_{1},l_{2},\cdots,l_{n})\) is the multinomial coefficient defined as_
\[(k;l_{1},l_{2},\cdots,l_{n})=\frac{k!}{l_{1}!l_{2}!\cdots l_{n}!}. \tag{2.5}\]
## 3 Differential Equations with Proportional delay
Consider the differential equations with proportional delay
\[y^{\prime}(t)=f(t,y(t),y(qt)),y(0)=y_{0},0<q<1, \tag{3.1}\]
where \(f\) is a continuous function defined on some rectangle
\[R=\{|t|\leq a,|y(t)-y_{0}|\leq b,|y(qt)-y_{0}|\leq b,a>0,\,b>0\}.\]
**Theorem 1**.: _A function \(\phi\) is a solution of the IVP (3.1) on an interval \(I\) if and only if it is a solution of the integral equation_
\[y(t)=y_{0}+\int_{0}^{t}f(x,y(x),y(qx))dx\quad\text{on}\quad I \tag{3.2}\]
Proof.: Let \(\phi\) be a solution of the IVP (3.1) on an interval \(I\). Then
\[\phi^{\prime}(t)=f(t,\phi(t),\phi(qt)),\phi(0)=y_{0},0<q<1 \tag{3.3}\]
The integral equation equivalent to (3.3) is
\[\phi(t)=\phi(0)+\int_{0}^{t}f(x,\phi(x),\phi(qx))dx. \tag{3.4}\]
and \(\phi(0)=y_{0}\). Thus \(\phi\) is a solution of the integral equation (3.2).
Conversely, suppose equation (3.2) holds. Differentiating equation (3.2) with respect to \(t\), we get
\[\phi^{\prime}(t)=f(t,\phi(t),\phi(qt)),0<q<1\quad\forall t\in I\]
From equation (3.2) \(\phi(0)=y_{0}\).
Hence \(\phi\) is a solution of the IVP (3.1).
## 4 Successive Approximate Solution for Differential Equations with Proportional Delay
Let \(\phi_{0}(t)=y_{0}\) be the first approximate solution of the IVP (3.1). Then
\[\phi_{1}(t) = y_{0}+\int_{0}^{t}f(x,\phi_{0}(x),\phi_{0}(qx))dx.\] \[\phi_{2}(t) = y_{0}+\int_{0}^{t}f(x,\phi_{1}(x),\phi_{1}(qx))dx.\]
Continuing in this way, we obtain
\[\phi_{k+1}(t)=y_{0}+\int_{0}^{t}f(x,\phi_{k}(x),\phi_{k}(qx))dx.\quad k=0,1,2, \cdots. \tag{4.1}\]
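The recurrence (4.1) is straightforward to realise numerically. The sketch below (ours, an illustration rather than part of the method's statement) represents each \(\phi_{k}\) on a uniform grid and uses linear interpolation to evaluate \(\phi_{k}(qx)\); since \(0<q<1\), the delayed argument always stays inside the grid:

```python
import numpy as np

def sam_proportional_delay(f, y0, q, T, n_grid=2001, n_iter=25):
    """Successive approximations (4.1) for y'(t) = f(t, y(t), y(qt))."""
    t = np.linspace(0.0, T, n_grid)
    phi = np.full_like(t, float(y0))            # phi_0(t) = y0
    for _ in range(n_iter):
        # evaluate the delayed term phi_k(q t) by linear interpolation
        integrand = f(t, phi, np.interp(q * t, t, phi))
        # cumulative trapezoidal rule approximates the integral in (4.1)
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        phi = y0 + integral
    return t, phi
```

For instance, with `f = lambda t, y, yq: 1 - 2 * yq**2`, `y0 = 0` and `q = 0.5`, the iterates approach \(\sin t\) on moderate intervals (cf. Example 1 below).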
**Theorem 2**.: _Let \(f\) be continuous and \(|f|\leq M\) on \(R\). The successive approximations (4.1) exist and are continuous on the interval \(I=[-\zeta,\zeta]\), where \(\zeta=\text{min}\left\{a,\frac{b}{M}\right\}\). If \(t\in I\) then \((t,\phi_{k}(t),\phi_{k}(qt))\in R\) and \(|\phi_{k}(t)-y_{0}|\leq M|t|\), \(|\phi_{k}(qt)-y_{0}|\leq M|t|\)._
Proof.: We prove the result by mathematical induction.
(i) Clearly \(\phi_{0}(t)=y_{0}\) is continuous on \(I\). Thus, the theorem is true for \(k=0\).
(ii) For \(k=1\), we have
\[\phi_{1}(t) = y_{0}+\int_{0}^{t}f(x,\phi_{0}(x),\phi_{0}(qx))dx.\] \[\phi_{1}(t) = y_{0}+\int_{0}^{t}f(x,y_{0},y_{0})dx.\]
Since \(f\) is continuous, \(\phi_{1}(t)\) exists.
\[|\phi_{1}(t)-y_{0}| = |\int_{0}^{t}f(x,\phi_{0}(x),\phi_{0}(qx))dx|.\] \[\leq \int_{0}^{t}|f(x,\phi_{0}(x),\phi_{0}(qx))|dx.\] \[\leq M|t|\] \[\leq b,\quad t\in I\] \[\mbox{and}\quad|\phi_{1}(qt)-y_{0}| \leq M|qt|\] \[\leq M|t|,\quad 0<q<1\] \[\leq b,\quad t\in I\]
Thus, for \(t\in I\), \((t,\phi_{1}(t),\phi_{1}(qt))\in R\) and \(|\phi_{1}(t)-y_{0}|\leq M|t|\), \(|\phi_{1}(qt)-y_{0}|\leq M|t|\).
Hence the theorem is true for \(k=1\).
(iii) Assume that the theorem is true for \(k=n\), i.e., for \(t\in I\), \((t,\phi_{n}(t),\phi_{n}(qt))\in R\) and \(|\phi_{n}(t)-y_{0}|\leq M|t|\), \(|\phi_{n}(qt)-y_{0}|\leq M|t|\).
(iv) To prove the theorem for \(k=n+1\).
If \(t\in I\), then
\[\phi_{n+1}(t) = y_{0}+\int_{0}^{t}f(x,\phi_{n}(x),\phi_{n}(qx))dx.\]
Since \(f\) is continuous, \(\phi_{n+1}(t)\) exists on \(I\).
\[|\phi_{n+1}(t)-y_{0}| \leq M|t|\] \[\leq b,\quad t\in I\] \[\mbox{and}\quad|\phi_{n+1}(qt)-y_{0}| \leq M|qt|\] \[\leq M|t|,\quad 0<q<1\] \[\leq b,\quad t\in I\]
Thus, if \(t\in I\), \((t,\phi_{n+1}(t),\phi_{n+1}(qt))\in R\) and \(|\phi_{n+1}(t)-y_{0}|\leq M|t|\), \(|\phi_{n+1}(qt)-y_{0}|\leq M|t|\).
Hence, by mathematical induction, the result is true for every positive integer \(n\).
**Theorem 3**.: _(Existence Theorem) Let \(f\) be continuous and \(|f|\leq M\) on the rectangle_
\[R=\{|t|\leq a,|y(t)-y_{0}|\leq b,|y(qt)-y_{0}|\leq b,a>0,\,b>0\}.\]
_Suppose \(f\) satisfies Lipschitz condition in second and third variable with Lipschitz constants \(L_{1}\) and \(L_{2}\) such that_
\[|f(t,y_{1}(t),y_{1}(qt))-f(t,y_{2}(t),y_{2}(qt))|\leq L_{1}|y_{1}(t)-y_{2}(t)|+L_{2}|y_{1}(qt)-y_{2}(qt)|.\]
_Then the successive approximations (4.1) converge on the interval \(I=[-\zeta,\zeta]\), where \(\zeta=\mbox{min}\left\{a,\frac{b}{M}\right\}\), to a solution \(\phi\) of the IVP (3.1) on \(I\)._
Proof.: We have
\[\phi_{k}(t)=\phi_{0}(t)+\sum_{n=1}^{k}[\phi_{n}(t)-\phi_{n-1}(t)].\]
To prove that the sequence \(\{\phi_{k}\}\) converges, it is enough to prove that the series
\[\phi_{0}(t)+\sum_{n=1}^{\infty}[\phi_{n}(t)-\phi_{n-1}(t)] \tag{4.2}\]
is convergent.
By Theorem 2, the functions \(\phi_{k}\) all exist and are continuous on \(I\).
Also, \(|\phi_{1}(t)-\phi_{0}(t)|\leq M|t|\) and \(|\phi_{1}(qt)-\phi_{0}(qt)|\leq M|t|\) for \(t\in I\).
Now,
\[\phi_{2}(t)-\phi_{1}(t) = \int_{0}^{t}[f(x,\phi_{1}(x),\phi_{1}(qx))-f(x,\phi_{0}(x),\phi_{ 0}(qx))]dx\] \[\therefore|\phi_{2}(t)-\phi_{1}(t)| \leq \int_{0}^{t}|f(x,\phi_{1}(x),\phi_{1}(qx))-f(x,\phi_{0}(x),\phi_{ 0}(qx))|dx\] \[\leq \int_{0}^{t}[L_{1}|\phi_{1}(x)-\phi_{0}(x)|+L_{2}|\phi_{1}(qx)- \phi_{0}(qx)|]dx\] \[\leq M(L_{1}+L_{2})\frac{|t|^{2}}{2}.\]
We shall prove by mathematical induction
\[|\phi_{n}(t)-\phi_{n-1}(t)|\leq M(L_{1}+L_{2})^{n-1}\frac{|t|^{n}}{n!} \tag{4.3}\]
We have proved that equation (4.3) is true for \(n=1,2\).
Assume that (4.3) is true for \(n=m\).
We have
\[\phi_{m+1}(t)-\phi_{m}(t) = \int_{0}^{t}[f(x,\phi_{m}(x),\phi_{m}(qx))-f(x,\phi_{m-1}(x),\phi_ {m-1}(qx))]dx\] \[\therefore|\phi_{m+1}(t)-\phi_{m}(t)| \leq \int_{0}^{t}|f(x,\phi_{m}(x),\phi_{m}(qx))-f(x,\phi_{m-1}(x),\phi_ {m-1}(qx)|dx\] \[\leq \int_{0}^{t}[L_{1}|\phi_{m}(x)-\phi_{m-1}(x)|+L_{2}|\phi_{m}(qx)- \phi_{m-1}(qx)|]dx\] \[\leq M(L_{1}+L_{2})^{m}\frac{|t|^{m+1}}{(m+1)!}.\]
Thus, the result is true for \(n=m+1\).
Hence, by mathematical induction, (4.3) is true for all \(n=1,2,\cdots\).
Therefore, the infinite series (4.2) is absolutely convergent on \(I\): the \(n^{th}\) term of the series \(|\phi_{0}(t)|+\sum_{n=1}^{\infty}|\phi_{n}(t)-\phi_{n-1}(t)|\) is at most \(\frac{M}{(L_{1}+L_{2})}\) times the \(n^{th}\) term of the power series of \(e^{(L_{1}+L_{2})|t|}\). Hence the series (4.2) is convergent, and the sequence \(\{\phi_{k}\}\) converges on \(I\) to a solution \(\phi\) of the IVP (3.1).
## 5 Stability Analysis
The differential equation with proportional delay
\[y^{\prime}(t)=f\left(t,y(t),y(qt)\right), \tag{5.1}\]
is a special case of the time-dependent delay differential equation (DDE)
\[y^{\prime}(t)=f\left(t,y(t),y\left(t-\tau(t)\right)\right)\quad\text{with}\quad\tau(t)=(1-q)t.\]
**Definition 5.1**.: _[_22_]_ _Consider the DDE,_
\[y^{\prime}(t)=f(y(t),y(t-\tau(t))), \tag{5.2}\]
_where \(f:R\times R\to R\). The flow \(\phi_{t}(t_{0})\) is a solution \(y(t)\) of (5.2) with initial condition \(y(t)=t_{0},\,t\leq 0\). The point \(y^{*}\) is called an equilibrium solution of (5.2) if \(f(y^{*},y^{*})=0\). **(a)** If, for any \(\epsilon>0\), there exists \(\delta>0\) such that \(|t_{0}-y^{*}|<\delta\Rightarrow|\phi_{t}(t_{0})-y^{*}|<\epsilon,\) then the system (5.2) is stable (in the Lyapunov sense) at the equilibrium \(y^{*}\). **(b)** If the system (5.2) is stable at \(y^{*}\) and, moreover, \(\lim_{t\to\infty}|\phi_{t}(t_{0})-y^{*}|=0\), then the system (5.2) is said to be asymptotically stable at \(y^{*}\)._
The following results are similar to those in [22].
**Theorem 4**.: _Suppose that the equilibrium solution \(y^{*}\) of the equation_
\[y^{\prime}=f(y(t),y(t-\tau^{*})),\quad\tau^{*}=\tau(t_{0}) \tag{5.3}\]
_is stable and \(\|f(y(t),y(t-\tau(t)))-f(y(t),y(t-\tau(t_{1})))\|<\epsilon_{1}|t-t_{1}|\), for some \(\epsilon_{1}>0\) and \(t,t_{1}\in[t_{0},t_{0}+c),\) c is a positive constant, then there exists \(\bar{t}>0\) such that the equilibrium solution \(y^{*}\) of Eq. (5.2) is stable on finite time interval \([t_{0},\bar{t})\)._
**Corollary 5**.: _If the real parts of all roots of \(\lambda-a-be^{-\lambda\tau^{*}}=0\) are negative, where \(a=\partial_{1}f\) and \(b=\partial_{2}f\) are evaluated at the equilibrium, then there exist \(\epsilon_{c}\) and \(\bar{t}(>t_{0})\) such that, when \(\epsilon_{1}<\epsilon_{c}\), the solution \(y^{*}=0\) of Eq. (5.2) is stable on the finite time interval \([t_{0},\bar{t})\)._
## 6 Series Solution of Pantograph Equation
A pantograph is a device used in electric trains to collect current from overhead lines. The pantograph equation was formulated by Ockendon and Tayler in 1971 and originates in electrodynamics.
Consider the pantograph equation,
\[y^{\prime}(t)=ay(t)+by(qt),\quad y(0)=1, \tag{6.1}\]
where \(0<q<1\), \(a,b\in R\).
Integrating (6.1), we get
\[y(t)=1+\int_{0}^{t}\left(ay(x)+by(qx)\right)dx \tag{6.2}\]
Let \(\phi_{k}(t)\) denote the \(k^{th}\) approximate solution, where the initial approximate solution is taken as
\[\phi_{0}(t)=1. \tag{6.3}\]
For \(k\geq 1\), the recurrence formula is as follows:
\[\phi_{k}(t)=1+\int_{0}^{t}\left(a\phi_{k-1}(x)+b\phi_{k-1}(qx)\right)dx. \tag{6.4}\]
From the recurrent formula, we have
\[\phi_{1}(t) = 1+\int_{0}^{t}\left(a\phi_{0}(x)+b\phi_{0}(qx)\right)dx = 1+(a+b)\frac{t}{1!},\]
\[\phi_{2}(t) = 1+\int_{0}^{t}\left(a\phi_{1}(x)+b\phi_{1}(qx)\right)dx = 1+(a+b)\frac{t}{1!}+(a+b)(a+bq)\frac{t^{2}}{2!},\]
\[\phi_{3}(t) = 1+\int_{0}^{t}\left(a\phi_{2}(x)+b\phi_{2}(qx)\right)dx = 1+(a+b)\frac{t}{1!}+(a+b)(a+bq)\frac{t^{2}}{2!}+(a+b)(a+bq)(a+bq^{2})\frac{t^{3}}{3!},\]
\[\vdots\]
\[\phi_{k}(t) = 1+\sum_{m=1}^{k}\frac{t^{m}}{m!}\prod_{j=0}^{m-1}\left(a+bq^{j}\right),\quad k=1,2,3,\cdots.\]
As \(k\to\infty\), \(\phi_{k}(t)\to y(t)\), so
\[y(t) = 1+\sum_{m=1}^{\infty}\frac{t^{m}}{m!}\prod_{j=0}^{m-1}\left(a+bq^{j}\right).\]
If we define \(\prod_{j=0}^{m-1}\left(a+bq^{j}\right)=1\), for \(m=0\), then
\[y(t)=\sum_{m=0}^{\infty}\frac{t^{m}}{m!}\prod_{j=0}^{m-1}\left(a+bq^{j}\right). \tag{6.5}\]
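A short numerical sketch (ours) of the partial sums of (6.5); the product \(\prod_{j=0}^{m-1}\left(a+bq^{j}\right)\) is accumulated incrementally so each new term costs \(O(1)\):

```python
import math

def pantograph_series(a, b, q, t, terms=60):
    """Partial sum of (6.5): y(t) = sum_m t^m/m! * prod_{j<m}(a + b q^j)."""
    total, coeff = 1.0, 1.0          # m = 0 term; the empty product is 1
    for m in range(1, terms):
        coeff *= a + b * q ** (m - 1)
        total += coeff * t ** m / math.factorial(m)
    return total

# With b = 0 the series (6.5) reduces to the exponential e^{a t}
assert abs(pantograph_series(1.0, 0.0, 0.5, 1.0) - math.e) < 1e-9
```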
**Theorem 6**.: _For \(0<q<1\), the power series (6.5) is convergent for \(t\in R\)._
**Corollary 7**.: _The power series (6.5) is absolutely convergent for all \(t\) and hence it is uniformly convergent on any compact interval on \(R\)._
**Theorem 8**.: _If \(0<q<1\), \(a,b\geq 0\), then_
\[e^{at}\leq y(t)=\sum_{m=0}^{\infty}\frac{t^{m}}{m!}\prod_{j=0}^{m-1}\left(a+bq ^{j}\right)\leq e^{(a+b)t},\quad 0\leq t<\infty.\]
**Theorem 9**.: _If \((a+b)<0\) then zero solution of (6.1) is asymptotically stable._
Proof.: Define
\[u(t) = \max_{0\leq x\leq t}y^{2}(x).\]
Then
\[\frac{1}{2}u^{\prime}(t) = \frac{1}{2}\frac{d}{dt}(y^{2}(t)) = y(t)y^{\prime}(t) = y(t)\left(ay(t)+by(qt)\right) = ay^{2}(t)+by(t)y(qt) \leq (a+b)u(t),\]
which gives
\[u(t) \leq u(0)e^{2(a+b)t}.\]
Therefore
\[\lim_{t\rightarrow\infty}y(t) = 0,\quad\text{if}\quad(a+b)<0.\]
### Series Solution of Ambartsumian Equation
In [23] Ambartsumian derived a delay differential equation describing the fluctuations of the surface brightness in the Milky Way. The equation is described as:
\[y^{\prime}(t)=-y(t)+\frac{1}{q}y\left(\frac{t}{q}\right) \tag{6.6}\]
where \(q>1\) and is constant for the given model.
The Eq.(6.6) with initial condition \(y(0)=\lambda\) can be written equivalently as
\[y(t)=\lambda+\int_{0}^{t}\left(\frac{1}{q}y\left(\frac{x}{q}\right)-y(x) \right)dx. \tag{6.7}\]
Let \(\phi_{k}(t)\) denote the \(k^{th}\) approximate solution, where the initial approximate solution is taken as
\[\phi_{0}(t)=\lambda. \tag{6.8}\]
For \(k\geq 1\), the recurrence formula is as follows:
\[\phi_{k}(t)=\lambda+\int_{0}^{t}\left(\frac{1}{q}\phi_{k-1}\left(\frac{x}{q} \right)-\phi_{k-1}(x)\right)dx. \tag{6.9}\]
From the recurrent formula, we have
\[\phi_{1}(t) = \lambda+\int_{0}^{t}\left(\frac{1}{q}\phi_{0}\left(\frac{x}{q} \right)-\phi_{0}(x)\right)dx\] \[= \lambda+\int_{0}^{t}\left(\frac{\lambda}{q}-\lambda\right)dx\] \[= \lambda+\left(\frac{\lambda}{q}-\lambda\right)\frac{t}{1!}\] \[= \left(1+\left(\frac{1}{q}-1\right)\frac{t}{1!}\right)\lambda,\] \[\phi_{2}(t) = \lambda+\int_{0}^{t}\left(\frac{1}{q}\phi_{1}\left(\frac{x}{q} \right)-\phi_{1}(x)\right)dx\] \[= \left(1+\left(\frac{1}{q}-1\right)\frac{t}{1!}+\left(\frac{1}{q }-1\right)\left(\frac{1}{q^{2}}-1\right)\frac{t^{2}}{2!}\right)\lambda,\] \[\vdots\] \[\phi_{k}(t) = \left(1+\sum_{m=1}^{k}\frac{t^{m}}{m!}\prod_{j=1}^{m}\left(\frac {1}{q^{j}}-1\right)\right)\lambda.\] \[\mbox{As}\quad k\to\infty,\quad\phi_{k}(t)\to y(t)\] \[y(t) = \left(1+\sum_{m=1}^{\infty}\frac{t^{m}}{m!}\prod_{j=1}^{m}\left( \frac{1}{q^{j}}-1\right)\right)\lambda.\]
If we define \(\prod_{j=1}^{m}\left(\frac{1}{q^{j}}-1\right)=1\), for \(m=0\), then
\[y(t)=\left(\sum_{m=0}^{\infty}\frac{t^{m}}{m!}\prod_{j=1}^{m}\left(\frac{1}{q^ {j}}-1\right)\right)\lambda. \tag{6.10}\]
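The analogous partial-sum evaluation of (6.10) is sketched below (ours). Each factor \(\frac{1}{q^{j}}-1\) lies in \((-1,0)\) for \(q>1\), so the coefficients decay rapidly, consistent with Theorem 10:

```python
import math

def ambartsumian_series(q, lam, t, terms=60):
    """Partial sum of (6.10) for q > 1 and y(0) = lam."""
    total, coeff = 1.0, 1.0          # m = 0 term; the empty product is 1
    for m in range(1, terms):
        coeff *= q ** (-m) - 1.0     # factor for j = m
        total += coeff * t ** m / math.factorial(m)
    return lam * total
```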
**Theorem 10**.: _For \(q>1\), the power series (6.10) is convergent for \(t\in R\)._
**Corollary 11**.: _The power series (6.10) is absolutely convergent for all \(t\) and hence it is uniformly convergent on any compact interval on \(R\)._
**Theorem 12**.: _The zero solution of (6.6) is asymptotically stable._
## 7 Fractional order differential equations with proportional delay
Consider the initial value problem (IVP)
\[D^{\alpha}y(t) = f(t,y(t),y(qt)),0<\alpha\leq 1,0<q<1\] \[y(0) = y_{0}, \tag{7.1}\]
where \(D^{\alpha}\) denotes Caputo fractional derivative and \(f\) is a continuous function defined on the rectangle
\[R=\{|t|\leq a,|y(t)-y_{0}|\leq b,|y(qt)-y_{0}|\leq b,a>0,\,b>0\}.\]
**Theorem 13**.: _A function \(\phi\) is a solution of the IVP (7.1) on an interval \(I\) if and only if it is a solution of the integral equation_
\[y(t)=y_{0}+\int_{0}^{t}\frac{(t-x)^{\alpha-1}}{\Gamma(\alpha)}f(x,y(x),y(qx)) dx\quad\text{on}\quad I. \tag{7.2}\]
**Theorem 14**.: _Let \(f\) be continuous and \(|f|\leq M\) on \(R\). The successive approximations_
\[\phi_{0}(t) = y_{0},\]
\[\phi_{k+1}(t) = y_{0}+\int_{0}^{t}\frac{(t-x)^{\alpha-1}}{\Gamma(\alpha)}f(x, \phi_{k}(x),\phi_{k}(qx))dx,\quad k=0,1,2,\cdots. \tag{7.3}\]
_exist and are continuous on the interval \(I=[-\zeta,\zeta]\), where \(\zeta=\text{min}\left\{a,(\frac{\Gamma(\alpha+1)b}{M})^{\frac{1}{\alpha}}\right\}\). If \(t\in I\) then \((t,\phi_{k}(t),\phi_{k}(qt))\in R\) and \(|\phi_{k}(t)-y_{0}|\leq M\frac{|t|^{\alpha}}{\Gamma(\alpha+1)}\), \(|\phi_{k}(qt)-y_{0}|\leq M\frac{|t|^{\alpha}}{\Gamma(\alpha+1)}\)._
**Theorem 15**.: _(Existence Theorem) Let \(f\) be continuous and \(|f|\leq M\) on the rectangle_
\[R=\{|t|\leq a,|y(t)-y_{0}|\leq b,|y(qt)-y_{0}|\leq b,a>0,\,b>0\}.\]
_Suppose \(f\) satisfies Lipschitz condition in second and third variable with Lipschitz constants \(L_{1}\) and \(L_{2}\) such that_
\[|f(t,y_{1}(t),y_{1}(qt))-f(t,y_{2}(t),y_{2}(qt))|\leq L_{1}|y_{1}(t)-y_{2}(t)|+L_{2}|y_{1}(qt)-y_{2}(qt)|.\]
_Then the successive approximations (7.3) converge on the interval \(I=[-\zeta,\zeta]\), where \(\zeta=\text{min}\left\{a,(\frac{\Gamma(\alpha+1)b}{M})^{\frac{1}{\alpha}}\right\}\), to a solution \(\phi\) of the IVP (7.1) on \(I\)._
### Series Solution of Fractional Order Pantograph Equation
Consider the fractional order pantograph equation as :
\[D^{\alpha}y(t)=ay(t)+by(qt),\quad y(0)=1, \tag{7.4}\]
where \(0<\alpha\leq 1\), \(0<q<1\), \(a,b\in R\).
The solution of (7.4) using successive approximation is
\[y(t)=\sum_{m=0}^{\infty}\frac{t^{\alpha m}}{\Gamma(\alpha m+1)}\prod_{j=0}^{m-1 }\left(a+bq^{\alpha j}\right). \tag{7.5}\]
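As a quick consistency check (ours, not from the source), setting \(\alpha=1\) in (7.5) recovers the classical series (6.5), since \(\Gamma(m+1)=m!\):

\[y(t)\Big|_{\alpha=1}=\sum_{m=0}^{\infty}\frac{t^{m}}{\Gamma(m+1)}\prod_{j=0}^{m-1}\left(a+bq^{j}\right)=\sum_{m=0}^{\infty}\frac{t^{m}}{m!}\prod_{j=0}^{m-1}\left(a+bq^{j}\right).\]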
**Theorem 16**.: _If \(0<q<1\), then the power series (7.5) is convergent for all finite values of \(t\)._
**Theorem 17**.: _If \(0<q<1\), \(a,b\geq 0\), then_
\[E_{\alpha}(at^{\alpha})\leq y(t)=\sum_{m=0}^{\infty}\frac{t^{\alpha m}}{\Gamma (\alpha m+1)}\prod_{j=0}^{m-1}\left(a+bq^{\alpha j}\right)\leq E_{\alpha}((a+b )t^{\alpha}),\quad 0\leq t<\infty.\]
### Series Solution of Fractional Order Ambartsumian Equation
Consider the fractional order Ambartsumian equation as:
\[D^{\alpha}y(t)=-y(t)+\frac{1}{q}y\left(\frac{t}{q}\right),\quad y(0)=1 \tag{7.6}\]
where \(q>1\) and is constant for the given model.
The solution of (7.6) using successive approximation is
\[y(t)=\sum_{m=0}^{\infty}\frac{t^{\alpha m}}{\Gamma(\alpha m+1)}\prod_{j=0}^{m -1}\left(\frac{1}{q^{1+\alpha j}}-1\right). \tag{7.7}\]
**Theorem 18**.: _If \(q>1\), then the power series (7.7) is convergent for all finite values of \(t\)._
## 8 System of fractional order differential equations with proportional delay
Consider the initial value problem (IVP)
\[D^{\alpha_{i}}y_{i}(t) = f_{i}(t,\bar{y}(t),\bar{y}(qt)),0<\alpha_{i}\leq 1,0<q<1\] \[y_{i}(0) = {}^{i}y_{0},\quad 1\leq i\leq n, \tag{8.1}\]
where \(D^{\alpha_{i}}\) denotes the Caputo fractional derivative, \(\bar{y}(t)=(y_{1}(t),y_{2}(t),\cdots,y_{n}(t))\),
\(\bar{y}(qt)=(y_{1}(qt),y_{2}(qt),\cdots,y_{n}(qt))\) and \(f=(f_{1},f_{2},\cdots,f_{n})\) is a continuous function defined on the rectangle
\[R=\{|t|\leq a,|y_{i}(t)-^{i}y_{0}|\leq b_{i},|y_{i}(qt)-^{i}y_{0}|\leq b_{i},a> 0,\,b_{i}>0,1\leq i\leq n\}.\]
**Theorem 19**.: _A function \(\bar{\phi}\) is a solution of the IVP (8.1) on an interval \(I\) if and only if it is a solution of the integral equation_
\[y_{i}(t)={}^{i}y_{0}+\int_{0}^{t}\frac{(t-x)^{\alpha_{i}-1}}{\Gamma(\alpha_{i})}f_{i}(x,\bar{y}(x),\bar{y}(qx))dx\quad\text{on}\quad I, \tag{8.2}\]
_where \(\bar{\phi}_{m}=(^{1}\phi_{m},^{2}\phi_{m},\cdots,^{n}\phi_{m})\)_
**Theorem 20**.: _Let \(||f||=M\) on rectangle R. The successive approximation_
\[{}^{i}\phi_{0}(t) = {}^{i}y_{0},\quad 1\leq i\leq n,\]
\[{}^{i}\phi_{k+1}(t) = {}^{i}y_{0}+\int_{0}^{t}\frac{(t-x)^{\alpha_{i}-1}}{\Gamma(\alpha_{i})}f_{i}(x,\bar{\phi}_{k}(x),\bar{\phi}_{k}(qx))dx,\quad k=0,1,2,\cdots. \tag{8.3}\]
_exist and continuous on the interval \(I=[-\zeta,\zeta]\), where_
\[\zeta=\text{min}\left\{a,\left(\frac{\Gamma(\alpha_{1}+1)b_{1}}{M}\right)^{ \frac{1}{\alpha_{1}}},\cdots,\left(\frac{\Gamma(\alpha_{n}+1)b_{n}}{M}\right)^ {\frac{1}{\alpha_{n}}}\right\}.\]
_If \(t\) is in the interval \(I\) then \((t,\bar{\phi}_{m}(t),\bar{\phi}_{m}(qt))\) is in the rectangle \(R\) and \(||\bar{\phi}_{m}(t)-\bar{y}(0)||\leq M\sum_{i=1}^{n}\frac{|t|^{\alpha_{i}}}{\Gamma(\alpha_{i}+1)}\), \(||\bar{\phi}_{m}(qt)-\bar{y}(0)||\leq M\sum_{i=1}^{n}\frac{|t|^{\alpha_{i}}}{\Gamma(\alpha_{i}+1)}\) for all \(m\)._
**Theorem 21**.: _Let \(f\) be a continuous function defined on the rectangle_
\[R=\{|t|\leq a,|y_{i}(t)-^{i}y_{0}|\leq b_{i},|y_{i}(qt)-^{i}y_{0}|\leq b_{i},a> 0,\,b_{i}>0,1\leq i\leq n\}.\]
_Suppose \(f\) satisfies a Lipschitz condition in the second and third variables with Lipschitz constants \(L_{1}\) and \(L_{2}\) such that \(|f(t,\bar{y}_{1}(t),\bar{y}_{1}(qt))-f(t,\bar{y}_{2}(t),\bar{y}_{2}(qt))|\leq L_{1}|\bar{y}_{1}(t)-\bar{y}_{2}(t)|+L_{2}|\bar{y}_{1}(qt)-\bar{y}_{2}(qt)|\). Then the successive approximations (8.3) converge on the interval \(I=[-\zeta,\zeta]\), where \(\zeta=\text{min}\left\{a,\left(\frac{\Gamma(\alpha_{1}+1)b_{1}}{M}\right)^{\frac{1}{\alpha_{1}}},\cdots,\left(\frac{\Gamma(\alpha_{n}+1)b_{n}}{M}\right)^{\frac{1}{\alpha_{n}}}\right\}\), to a solution \(\phi\) of the IVP (8.1) on \(I\)._
### System of Fractional Order Pantograph Equation
Consider the system of fractional order pantograph equation
\[D^{\alpha}y(t)=Ay(t)+By(qt),\quad y(0)=y_{0},\quad 0<\alpha\leq 1 \tag{8.4}\]
where \(0<q<1\), \(A=\left(a_{ij}\right)_{n\times n}\), \(B=\left(b_{ij}\right)_{n\times n}\) and \(y=[y_{1},y_{2},\cdots,y_{n}]^{T}\). The solution of (8.4) using the successive approximation method is
\[y(t) = \left[\sum_{k=0}^{\infty}\prod_{j=1}^{k}(A+Bq^{(k-j)\alpha})\frac{t^{k\alpha}}{\Gamma(k\alpha+1)}\right]y_{0}. \tag{8.5}\]
**Theorem 22**.: _For \(0<q<1\), the power series (8.5) is convergent for \(t\in R\)._
### System of Fractional Order Ambartsumian Equations
In this section, we generalize the Ambartsumian equation (6.6) to the system of fractional order Ambartsumian equations [24] as:
\[D^{\alpha}y(t)=-Iy(t)+By\left(\frac{t}{q}\right),\quad y(0)=\lambda,\quad 0< \alpha\leq 1, \tag{8.6}\]
where \(D^{\alpha}\) denotes Caputo fractional derivative, \(I\) is the identity matrix of order \(n\), \(1<q\),
\(y=\begin{bmatrix}y_{1}\\ y_{2}\\ \vdots\\ y_{n}\end{bmatrix}\), \(\lambda=\begin{bmatrix}\lambda_{1}\\ \lambda_{2}\\ \vdots\\ \lambda_{n}\end{bmatrix}\) and \(B=\begin{bmatrix}\frac{1}{q}&0&0&\cdots&0\\ 0&\frac{1}{q}&0&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&0\\ 0&0&0&\cdots&\frac{1}{q}\end{bmatrix}_{n\times n}\).
Applying SAM to the initial value problem (8.6), where \(J^{\alpha}\) denotes the Riemann-Liouville fractional integral of Definition 2.1, we have
\[y(t)=y(0)-IJ^{\alpha}y(t)+BJ^{\alpha}y\left(\frac{t}{q}\right). \tag{8.7}\]
Let \(\phi_{k}(t)\) denote the \(k\)th approximate solution, where the initial approximate solution is taken as
\[\phi_{0}(t)=\lambda. \tag{8.8}\]
For \(k\geq 1\), the recurrence formula is as follows:
\[\phi_{k}(t)=\lambda-IJ^{\alpha}\phi_{k-1}(t)+BJ^{\alpha}\phi_{k-1}\left(\frac {t}{q}\right). \tag{8.9}\]
From the recurrent formula, we have
\[\phi_{1}(t) = \lambda-IJ^{\alpha}\phi_{0}(t)+BJ^{\alpha}\phi_{0}\left(\frac{t} {q}\right)\] \[= \lambda-I\frac{\lambda t^{\alpha}}{\Gamma(\alpha+1)}+B\frac{ \lambda t^{\alpha}}{\Gamma(\alpha+1)}\] \[= \left(I+(-I+B)\frac{t^{\alpha}}{\Gamma(\alpha+1)}\right)\lambda,\] \[\phi_{2}(t) = \lambda-IJ^{\alpha}\phi_{1}(t)+BJ^{\alpha}\phi_{1}\left(\frac{t }{q}\right)\] \[= \lambda-IJ^{\alpha}\left[\left(I+(-I+B)\frac{t^{\alpha}}{\Gamma( \alpha+1)}\right)\lambda\right]+BJ^{\alpha}\left(I+(-I+B)\frac{q^{-\alpha}t^{ \alpha}}{\Gamma(\alpha+1)}\right)\lambda\] \[= \lambda-I\left[\frac{\lambda t^{\alpha}}{\Gamma(\alpha+1)}+(-I+ B)\frac{\lambda t^{2\alpha}}{\Gamma(2\alpha+1)}\right]+B\left[\frac{\lambda t^{ \alpha}}{\Gamma(\alpha+1)}+(-I+B)\frac{\lambda q^{-\alpha}t^{2\alpha}}{\Gamma (2\alpha+1)}\right]\] \[= \left[I+(-I+B)\frac{t^{\alpha}}{\Gamma(\alpha+1)}+(-I+Bq^{- \alpha})(-I+B)\frac{t^{2\alpha}}{\Gamma(2\alpha+1)}\right]\lambda,\]
\[\phi_{3}(t) = \left[I+(-I+B)\frac{t^{\alpha}}{\Gamma(\alpha+1)}+(-I+Bq^{-\alpha})( -I+B)\frac{t^{2\alpha}}{\Gamma(2\alpha+1)}\right.\] \[\left.+(-I+Bq^{-2\alpha})(-I+Bq^{-\alpha})(-I+B)\frac{t^{3\alpha}} {\Gamma(3\alpha+1)}\right]\lambda,\] \[\cdots,\] \[\phi_{k}(t) = \left[I+\sum_{m=1}^{k}\prod_{j=1}^{m}(-I+Bq^{-(m-j)\alpha})\frac{ t^{m\alpha}}{\Gamma(m\alpha+1)}\right]\lambda\]
As \(k\rightarrow\infty\), \(\phi_{k}(t)\to y(t)\)
\[y(t) = \left[I+\sum_{k=1}^{\infty}\prod_{j=1}^{k}(-I+Bq^{-(k-j)\alpha}) \frac{t^{k\alpha}}{\Gamma(k\alpha+1)}\right]\lambda.\]
If we set \(\prod_{j=1}^{k}(-I+Bq^{-(k-j)\alpha})=I\), for \(k=0\), then
\[y(t) = \left[\sum_{k=0}^{\infty}\prod_{j=1}^{k}(-I+Bq^{-(k-j)\alpha}) \frac{t^{k\alpha}}{\Gamma(k\alpha+1)}\right]\lambda. \tag{8.10}\]
**Theorem 23**.: _For \(q>1\), the power series_
\[y(t) = \left[\sum_{k=0}^{\infty}\prod_{j=1}^{k}(-I+Bq^{-(k-j)\alpha}) \frac{t^{k\alpha}}{\Gamma(k\alpha+1)}\right]\lambda\]
_is convergent for \(t\in R\)._
_Proof:_ Result follows immediately by ratio test [25].
## 9 Illustrative Examples
**Example 1.** Consider the non-linear differential equation with proportional delay [26; 27; 28; 29]
\[\frac{dy(t)}{dt}=1-2y^{2}\left(\frac{t}{2}\right),\quad y(0)=0. \tag{9.1}\]
The corresponding integral equation is
\[y(t)=\int_{0}^{t}\left(1-2y^{2}\left(\frac{x}{2}\right)\right)dx. \tag{9.2}\]
By using the successive approximation method (4.1), we obtain
\[\phi_{0}(t) = 0,\] \[\phi_{1}(t) = t,\] \[\phi_{2}(t) = t-\frac{t^{3}}{6},\] \[\phi_{3}(t) = t-\frac{t^{3}}{6}+\frac{t^{5}}{120}-\frac{t^{7}}{8064},\] \[\phi_{4}(t) = t-\frac{t^{3}}{6}+\frac{t^{5}}{120}-\frac{t^{7}}{5040}+\frac{61 t^{9}}{23224320}-\frac{67t^{11}}{3406233600}+\frac{t^{13}}{12881756160}- \frac{t^{15}}{7990652436480},\] \[\phi_{5}(t) = t-\frac{t^{3}}{6}+\frac{t^{5}}{120}-\frac{t^{7}}{5040}+\frac{61 t^{9}}{23224320}-\cdots-\frac{t^{31}}{1062664199886151693758358595882188800},\]
and so on.
The exact solution of Eq.(9.1) is \(y(t)=\sin t\).
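The iterates above can be reproduced symbolically. The following sympy sketch (ours) implements the recurrence (4.1) for Eq. (9.1), and its output can be compared with the Maclaurin series of \(\sin t\):

```python
import sympy as sp

t, x = sp.symbols('t x')

def sam_iterates(k):
    """k-th successive approximation for y' = 1 - 2 y(t/2)^2, y(0) = 0."""
    phi = sp.Integer(0)                           # phi_0 = 0
    for _ in range(k):
        integrand = 1 - 2 * phi.subs(t, x / 2) ** 2
        phi = sp.expand(sp.integrate(integrand, (x, 0, t)))
    return phi

print(sam_iterates(3))                 # t - t**3/6 + t**5/120 - t**7/8064
print(sp.series(sp.sin(t), t, 0, 7))   # agrees through the t**5 term
```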
The 5-term solutions of Eq. (9.1) using the Adomian decomposition method (ADM) [26], the variational iteration method (VIM) [27], the homotopy analysis method (HAM) [28] and the optimal homotopy asymptotic method (OHAM) [29] are the same and are given by
\[y(t) = t-\frac{t^{3}}{6}+\frac{t^{5}}{120}-\frac{t^{7}}{5040}+\frac{t^{ 9}}{362880}-\frac{t^{11}}{39916800}+\frac{t^{13}}{6227020800} \tag{9.3}\] \[-\frac{t^{15}}{1307674368000}+\frac{t^{17}}{355687428096000}.\]
The 4-term OHAM solution [29] of Eq.(9.1) is
\[y(t) = t-0.166665t^{3}+0.00832857t^{5}-0.000192105t^{7}. \tag{9.4}\]
We compare the \(5^{th}\) approximate solution (SAM) and the 5-term solutions (ADM, VIM, HAM) with the exact solution in Fig. 1, and the \(4^{th}\) approximate solution (SAM) with the 4-term solution (OHAM) in Fig. 2. The absolute errors in the computations are shown in Figs. 3-4. It can be observed that the SAM solution is better than the solutions obtained by using the other methods.
Fig. 1: Comparison of the SAM and ADM/VIM/HAM solutions with the exact solution of Eq. (9.1).
Fig. 2: Comparison of the SAM and OHAM solutions with the exact solution of Eq. (9.1).
Remark: The ADM/VIM/HAM and OHAM solutions given in [26; 27; 28; 29] are considered only on the interval \([0,1]\). Here we have successfully extended the solution using SAM to the interval \([0,8]\).
Fig. 3: Comparison of absolute errors in the SAM and ADM/VIM/HAM solutions.
Fig. 4: Comparison of absolute errors in the SAM and OHAM solutions.
## 10 Conclusions
In this paper, we solved non-linear differential equations with proportional delay using the successive approximation method (SAM). The existence, uniqueness, and stability theorems for differential equations with proportional delay are presented. The convergence results are derived by using the Lipschitz condition. Generalizations to the fractional order case and to systems of fractional order equations are also presented. The series solutions of the pantograph equation and the Ambartsumian equation are obtained using SAM. Finally, we illustrated the effectiveness of SAM through an example. |
2309.08606 | Best proximity point of generalized $θ-φ-$proximal non-self
contractions | In this manuscript, motivated and inspired by results on best proximity points
of generalized $ F $-proximal non-self contractions, we introduce the concept
of generalized $\theta-\phi-$proximal contraction and prove new best proximity
results for these contractions in the setting of a metric space. Our results
generalize and extend many recent results appearing in the literature. An
example is given to demonstrate the usefulness of our results. | Mohamed Rossafi, Abdelkarim Kari | 2023-08-11T02:47:00Z | http://arxiv.org/abs/2309.08606v1 | # Best proximity point of generalized \(\theta-\phi-\)proximal non-self contractions
###### Abstract.
In this manuscript, motivated and inspired by results on best proximity points of generalized \(F\)-proximal non-self contractions, we introduce the concept of generalized \(\theta-\phi-\)proximal contractions and prove new best proximity results for these contractions in the setting of a metric space. Our results generalize and extend many recent results appearing in the literature. An example is given to demonstrate the usefulness of our results.
Key words and phrases:\(P\)-property, best proximity point, generalized \(\theta-\phi\)-proximal contraction 2010 Mathematics Subject Classification: Primary 47H10; Secondary 54H25
## 1. Introduction
It is well known that the Banach contraction theorem is the first outstanding result in the field of fixed point theory that ensures the existence of a unique fixed point in complete metric spaces. Due to its importance, various mathematicians studied many interesting extensions and generalizations [7, 8, 12, 14]. Among the famous generalizations of the Banach contraction principle [2] for the existence of fixed points of self-mappings on metric spaces are the theorem by Zheng et al. [14] and the contraction introduced by Jleli and Samet in [6].
Best proximity point theory analyses the conditions under which the optimisation problem, namely \(\inf_{x\in A}d(x,Tx)\), has a solution. The point \(x\) is called a best proximity point of \(T:A\to B\) if \(d(x,Tx)=d(A,B)\), where \(d(A,B)=\inf\{d(x,y):x\in A,y\in B\}\). Note that a best proximity point reduces to a fixed point if \(T\) is a self-mapping. Various best proximity point results were established on such spaces [9, 1, 12].
Sankar Raj [10] and Zhang et al. [13] defined the notions of the \(P\)-property and the weak \(P\)-property, respectively. Beg et al. [4] defined the concept of generalized \(F\)-proximal non-self contractions and obtained some best proximity point theorems for such mappings.
In this paper, inspired by the idea of generalized \(F\)-proximal non-self contractions, introduced by Beg et al. [4] in metric spaces, we prove new existence results for best proximity points of generalized \(\theta-\phi-\)proximal contractions defined on a closed subset of a complete metric space. Our theorems extend, generalize and improve many existing results.
## 2. Preliminaries
Let \(\left(A,B\right)\) be a pair of non empty subsets of a metric space \(\left(X,d\right)\). We adopt the following notations:
\(d(A,B)=\left\{\inf d\left(a,b\right):a\in A,b\in B\right\}\);
\(A_{0}=\left\{a\in A:\text{ there exists }b\in B\text{ such that }d\left(a,b\right)=d\left(A,B\right)\right\}\);
\(B_{0}=\left\{b\in B:\text{ there exists }a\in A\text{ such that }d\left(a,b\right)=d\left(A,B\right)\right\}\).
**Definition 2.1**.: [5] Let \(T:A\to B\) be a mapping. An element \(x^{\ast}\) is said to be a best proximity point of \(T\) if
\[d\left(x^{\ast},Tx^{\ast}\right)=d\left(A,B\right).\]
**Definition 2.2**.: [10] Let \(\left(A,B\right)\) be a pair of non empty subsets of a metric space \(\left(X,d\right)\) such that \(A_{0}\) is non empty. Then the pair \(\left(A,B\right)\) is said to have the \(P\)-property if and only if
\[\begin{cases}d\left(x_{1},y_{1}\right)=d\left(A,B\right)\\ d\left(x_{2},y_{2}\right)=d\left(A,B\right)\end{cases}\Rightarrow d(x_{1},x_{ 2})=d(y_{1},y_{2})\]
where \(x_{1},x_{2}\in A_{0}\) and \(y_{1},y_{2}\in B_{0}\).
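A standard illustration (ours, not from the source): take \(X=\mathbb{R}^{2}\) with the Euclidean metric, \(A=\{(0,s):s\in\mathbb{R}\}\) and \(B=\{(1,s):s\in\mathbb{R}\}\). Then \(d(A,B)=1\), \(A_{0}=A\), \(B_{0}=B\), and \(d((0,s),(1,s^{\prime}))=1\) forces \(s=s^{\prime}\); consequently \(d(x_{1},x_{2})=d(y_{1},y_{2})\) whenever both pairs realize the distance \(d(A,B)\), so the pair \((A,B)\) has the \(P\)-property.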
**Definition 2.3**.: [3] A set \(B\) is called approximately compact with respect to \(A\) if every sequence \(\left\{x_{n}\right\}\) of \(B\) with \(d(y,x_{n})\to d(y,B)\) for some \(y\in A\) has a convergent subsequence.
**Definition 2.4**.: [6] Let \(\Theta\) be the family of all functions \(\theta:\,]0,+\infty[\,\rightarrow\,]1,+\infty[\) such that
* \(\theta\) is strictly increasing;
* For each sequence \(\{x_{n}\}\subset\,]0,+\infty[\), \(\lim_{n\rightarrow\infty}x_{n}=0\) if and only if \(\lim_{n\rightarrow\infty}\theta\left(x_{n}\right)=1\);
* \(\theta\) is continuous.
**Definition 2.5**.: [14] Let \(\Phi\) be the family of all functions \(\phi\): \([1,+\infty[\,\rightarrow[1,+\infty[\), such that
* \(\phi\) is increasing;
* For each \(t\in\,]1,+\infty[\), \(\lim_{n\rightarrow\infty}\phi^{n}(t)=1\);
* \(\phi\) is continuous.
**Lemma 2.6**.: _If \(\phi\in\Phi\), then \(\phi(1)=1\) and \(\phi(t)<t\) for all \(t>1\)._
**Definition 2.7**.: _[_14_]__. Let \((X,d)\) be a metric space and \(T:X\to X\) be a mapping._
\(T\) is said to be a \(\theta-\phi-\)contraction if there exist \(\theta\in\Theta\) and \(\phi\in\Phi\) such that for any \(x,y\in X\),
\[d\left(Tx,Ty\right)>0\Rightarrow\theta\left[d\left(Tx,Ty\right)\right]\leq \phi\left[\theta\left(d\left(x,y\right)\right)\right].\]
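A quick numerical sanity check of this definition (ours, for illustration): with \(\theta(t)=e^{\sqrt{t}}\in\Theta\), \(\phi(t)=t^{0.6}\in\Phi\) and \(Tx=x/4\) on \(\mathbb{R}\), the inequality reduces to \(\frac{1}{2}\sqrt{d(x,y)}\leq 0.6\sqrt{d(x,y)}\), which always holds:

```python
import math
import random

theta = lambda t: math.exp(math.sqrt(t))   # a member of Theta
phi = lambda t: t ** 0.6                   # a member of Phi
T = lambda x: x / 4.0                      # candidate theta-phi-contraction

random.seed(1)
for _ in range(10_000):
    x, y = random.uniform(-50, 50), random.uniform(-50, 50)
    d_T, d = abs(T(x) - T(y)), abs(x - y)
    if d_T > 0:
        assert theta(d_T) <= phi(theta(d))
```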
## 3. Main result
In this section, inspired by the notion of \(F\)-proximal contractions of the first and second kind, we introduce new generalized \(\theta-\phi\)-proximal contractions of the first and second kind on complete metric spaces.
**Definition 3.1**.: The mapping \(T:A\to B\) is said to be a generalized \(\theta-\phi\)-proximal contraction of the first kind if there exist \(\theta\in\Theta\), \(\phi\in\Phi\) and \(a,b,c,h\geq 0\) with \(a+b+c+2h\leq 1\), \(c\neq 1\), such that
\[\begin{cases}d\left(u_{1},Tv_{1}\right)=d\left(A,B\right)\\ d\left(u_{2},Tv_{2}\right)=d\left(A,B\right)\end{cases}\Rightarrow\theta(d(u_ {1},u_{2}))\leq\phi\left[\theta\left[ad\left(v_{1},v_{2}\right)+bd\left(u_{1},v_{1}\right)+cd\left(u_{2},v_{2}\right)+h\left(d\left(v_{1},u_{2}\right)+d \left(v_{2},u_{1}\right)\right)\right]\right]\]
for all \(u_{1},u_{2},v_{1},v_{2}\in A\) with \(u_{1}\neq u_{2}\).
**Definition 3.2**.: The mapping \(T:A\to B\) is said to be a generalized \(\theta-\phi\)-proximal contraction of the second kind if there exist \(\theta\in\Theta\), \(\phi\in\Phi\) and \(a,b,c,h\geq 0\) with \(a+b+c+2h\leq 1\), \(c\neq 1\), such that
\[\begin{cases}d\left(u_{1},Tv_{1}\right)=d\left(A,B\right)\\ d\left(u_{2},Tv_{2}\right)=d\left(A,B\right)\end{cases}\Rightarrow\theta(d(Tu _{1},Tu_{2}))\]
\[\leq\phi\left[\theta\left[ad\left(Tv_{1},Tv_{2}\right)+bd\left(Tu_{1},Tv_{1} \right)+cd\left(Tu_{2},Tv_{2}\right)+h\left(d\left(Tv_{1},Tu_{2}\right)+d \left(Tv_{2},Tu_{1}\right)\right)\right]\right]\]
for all \(u_{1},u_{2},v_{1},v_{2}\in A\) with \(Tu_{1}\neq Tu_{2}\).
**Theorem 3.3**.: _Let \((X,d)\) be a complete metric space and \((A,B)\) be a pair of non-void closed subsets of \((X,d)\). If \(B\) is approximately compact with respect to \(A\) and \(T:A\to B\) satisfies the following conditions:_
* \(T\left(A_{0}\right)\subseteq B_{0}\) _and the pair_ \((A,B)\) _satisfies the weak_ \(P\)_-property;_
* \(T\) _is a generalized_ \(\theta-\phi\)_-proximal contraction of first kind._
_Then there exists a unique \(u\in A\) such that \(d(u,Tu)=d(A,B)\). In addition, for any fixed element \(u_{0}\in A_{0}\), sequence \(\{u_{n}\}\) defined by_
\[d(u_{n+1},Tu_{n})=d(A,B),\]
_converges to the proximity point._
Proof.: Choose an element \(u_{0}\in A_{0}\). Since \(T\left(A_{0}\right)\subseteq B_{0}\), there is an element \(u_{1}\in A_{0}\) satisfying
\[d(u_{1},Tu_{0})=d(A,B).\]
Since \(T(A_{0})\subseteq B_{0}\), there exists \(u_{2}\in A_{0}\) such that
\[d(u_{2},Tu_{1})=d(A,B).\]
Again, since \(T(A_{0})\subseteq B_{0}\), there exists \(u_{3}\in A_{0}\) such that
\[d(u_{3},Tu_{2})=d(A,B).\]
Continuing this process, by induction, we construct a sequence \(\{u_{n}\}\subset A_{0}\) such that
\[d\left(u_{n+1},Tu_{n}\right)=d(A,B),\forall n\in\mathbb{N}.\]
Since \((A,B)\) satisfies the \(P\) property, we conclude that
\[d(u_{n+1},u_{n+2})=d(Tu_{n},Tu_{n+1}),\forall n\in\mathbb{N}. \tag{3.1}\]
If \(u_{n_{0}}=u_{n_{0}+1}\) for some \(n_{0}\in\mathbb{N}\), from the construction of \(\{u_{n}\}\) one obtains
\[d\left(u_{n_{0}},Tu_{n_{0}}\right)=d\left(u_{n_{0}+1},Tu_{n_{0}}\right)=d(A,B) \tag{3.2}\]
that is, \(u_{n_{0}}\) is a best proximity point of \(T\). Thus, we suppose that \(d(u_{n},u_{n+1})>0\) for all \(n\in\mathbb{N}\).
We shall prove that the sequence \(u_{n}\) is a Cauchy sequence. Let us first prove that
\[\lim_{n\to\infty}d\left(u_{n},u_{n+1}\right)=0.\]
As \(T\) is generalized \(\left(\theta,\phi\right)\)-proximal contraction of the first kind, we have that
\[\theta\left(d\left(u_{n},u_{n+1}\right)\right) \leq\phi\left[\theta\left[ad\left(u_{n-1},u_{n}\right)+bd\left(u_{n-1},u_{n}\right)+cd\left(u_{n},u_{n+1}\right)+h\left(d\left(u_{n-1},u_{n+1}\right)+d\left(u_{n},u_{n}\right)\right)\right]\right]\]
\[=\phi\left[\theta\left[ad\left(u_{n-1},u_{n}\right)+bd\left(u_{n-1},u_{n}\right)+cd\left(u_{n},u_{n+1}\right)+h\,d\left(u_{n-1},u_{n+1}\right)\right]\right]\]
\[\leq\phi\left[\theta\left[ad\left(u_{n-1},u_{n}\right)+bd\left(u_{n-1},u_{n}\right)+cd\left(u_{n},u_{n+1}\right)+h\left(d\left(u_{n-1},u_{n}\right)+d\left(u_{n},u_{n+1}\right)\right)\right]\right]\]
\[=\phi\left[\theta\left[\left(a+b+h\right)d\left(u_{n-1},u_{n}\right)+\left(c+h\right)d\left(u_{n},u_{n+1}\right)\right]\right].\]
Since \(\theta\) is strictly increasing and by Lemma 2.6, we deduce
\[d\left(u_{n},u_{n+1}\right)<\left(a+b+h\right)d\left(u_{n-1},u_{n}\right)+\left(c+h\right)d\left(u_{n},u_{n+1}\right).\]
Thus
\[d\left(u_{n},u_{n+1}\right)<\frac{a+b+h}{1-c-h}\,d\left(u_{n-1},u_{n}\right).\]
If \(a+b+c+2h=1\), we have \(0<1-c-h\) (otherwise \(a=b=h=0\) and \(c=1\), contradicting \(c\neq 1\)) and so
\[d\left(u_{n},u_{n+1}\right)\leq\frac{a+b+h}{1-c-h}(d\left(u_{n-1},u_{n}\right) )=d\left(u_{n-1},u_{n}\right),\forall n\in\mathbb{N};\]
Consequently,
\[\theta\left(d\left(u_{n},u_{n+1}\right)\right)\leq\phi\left[\theta\left(d \left(u_{n-1},u_{n}\right)\right)\right]\]
If \(a+b+c+2h<1\), we have \(0<1-c-h\) and so
\[d\left(u_{n},u_{n+1}\right)<d\left(u_{n-1},u_{n}\right),\forall n\in\mathbb{ N};\]
Consequently,
\[\theta\left(d\left(u_{n},u_{n+1}\right)\right)\leq\phi\left[\theta\left(d \left(u_{n-1},u_{n}\right)\right)\right]\]
It implies
\[\theta\left(d\left(u_{n},u_{n+1}\right)\right) \leq\phi\left[\theta\left(d(u_{n-1},u_{n})\right)\right] \leq\phi^{2}\left[\theta\left(d(u_{n-2},u_{n-1})\right)\right] \leq\cdots\leq\phi^{n}\left[\theta\left(d(u_{0},u_{1})\right)\right].\]
Taking the limit as \(n\rightarrow\infty\), we have
\[1\leq\lim_{n\rightarrow\infty}\theta(d\left(u_{n},u_{n+1}\right))\leq\lim_{n\rightarrow\infty}\phi^{n}\left[\theta(d\left(u_{0},u_{1}\right))\right]=1.\]
Since \(\theta\in\Theta\), we obtain
\[\lim_{n\to\infty}d\left(u_{n},u_{n+1}\right)=0. \tag{3.3}\]
Next, we shall prove that \(\left\{u_{n}\right\}_{n\in\mathbb{N}}\) is a Cauchy sequence, i.e., \(\lim_{n,m\to\infty}d\left(u_{n},u_{m}\right)=0\). Suppose to the contrary that there exist \(\varepsilon>0\) and sequences \(\{n_{(k)}\}\) and \(\{m_{(k)}\}\) of natural numbers such that
\[m_{(k)}>n_{(k)}>k,\ \ d\left(u_{m_{(k)}},u_{n_{(k)}}\right)\geq\varepsilon,\ \ d\left(u_{m_{(k)-1}},u_{n_{(k)}}\right)<\varepsilon. \tag{3.4}\]
Using the triangular inequality, we find that,
\[\varepsilon\leq d\left(u_{m_{(k)}},u_{n_{(k)}}\right) \leq d\left(u_{m_{(k)}},u_{m_{(k)-1}}\right)+d\left(u_{m_{(k)-1}},u_{n_{(k)}}\right) \tag{3.5}\]
\[<d\left(u_{m_{(k)-1}},u_{m_{(k)}}\right)+\varepsilon. \tag{3.6}\]
Then, by 3.4 and 3.22, it follows that
\[\lim_{k\to\infty}d\left(u_{m_{(k)}},u_{n_{(k)}}\right)=\varepsilon. \tag{3.7}\]
Using the triangular inequality, we find that,
\[\varepsilon\leq d\left(u_{m_{(k)}},u_{n_{(k)}}\right)\leq d\left(u_{m_{(k)}},u_{n_{(k)+1}}\right)+d\left(u_{n_{(k)+1}},u_{n_{(k)}}\right) \tag{3.8}\]
and
\[\varepsilon\leq d\left(u_{m_{(k)}},u_{n_{(k)+1}}\right)\leq d\left(u_{m_{(k)} },u_{n(k)}\right)+d\left(u_{n(k)},u_{n_{(k)+1}}\right) \tag{3.9}\]
Then, by (3.7), (3.8) and (3.9), it follows that
\[\lim_{k\to\infty}d\left(u_{m_{(k)}},u_{n_{(k)+1}}\right)=\varepsilon. \tag{3.10}\]
By a similar method, we conclude that
\[\lim_{k\to\infty}d\left(u_{m_{(k)+1}},u_{n_{(k)}}\right)=\varepsilon. \tag{3.11}\]
Using again the triangular inequality,
\[d\left(u_{m_{(k)+1}},u_{n_{(k)+1}}\right)\leq d\left(u_{m_{(k)+1}},u_{m_{(k)}}\right)+d\left(u_{m_{(k)}},u_{n_{(k)}}\right)+d\left(u_{n_{(k)}},u_{n_{(k)+1}}\right). \tag{3.12}\]
On the other hand, using triangular inequality, we have
\[d\left(u_{m_{(k)}},u_{n_{(k)}}\right)\leq d\left(u_{m_{(k)}},u_{m_{(k)+1}} \right)+d\left(u_{m_{(k)+1}},u_{n_{(k)+1}}\right)+d\left(u_{n_{(k)+1}},u_{n_{( k)}}\right). \tag{3.13}\]
Letting \(k\rightarrow\infty\) in inequality (3.12) and (3.13), we obtain
\[\lim_{k\rightarrow\infty}d\left(u_{m_{\left(k\right)+1}},u_{n_{\left(k\right)+1} }\right)=\varepsilon. \tag{3.14}\]
Substituting \(u_{1}=u_{m_{(k)+1}}\), \(u_{2}=u_{n_{(k)+1}}\), \(v_{1}=u_{m_{(k)}}\) and \(v_{2}=u_{n_{(k)}}\) in Definition 3.1, we get
\[\theta\left(d\left(u_{m_{(k)+1}},u_{n_{(k)+1}}\right)\right)\leq\phi\left[\theta\left[ad\left(u_{m_{(k)}},u_{n_{(k)}}\right)+bd\left(u_{m_{(k)+1}},u_{m_{(k)}}\right)+cd\left(u_{n_{(k)+1}},u_{n_{(k)}}\right)+h\left(d\left(u_{m_{(k)}},u_{n_{(k)+1}}\right)+d\left(u_{n_{(k)}},u_{m_{(k)+1}}\right)\right)\right]\right]. \tag{3.15}\]
Letting Letting \(k\rightarrow\infty\) in (3.15), and using \(\left(\theta_{1}\right)\), \(\left(\theta_{3}\right),\)\(\left(\phi_{3}\right)\) and Lemma (2.6) we obtain
\[\theta\left(\varepsilon\right)\leq\phi\left[\theta\left(a\varepsilon+b \varepsilon+c\varepsilon+2h\varepsilon\right)\right].\]
We derive
\[\varepsilon<\varepsilon.\]
which is a contradiction. Thus \(\lim_{n,m\rightarrow\infty}d\left(u_{n},u_{m}\right)=0\), which shows that \(\left\{u_{n}\right\}\) is a Cauchy sequence. Then there exists \(u\in A\) such that
\[\lim_{n\rightarrow\infty}d\left(u_{n},u\right)=0.\]
Also,
\[\begin{aligned}d\left(u,B\right)&\leq d\left(u,Tu_{n}\right)\\ &\leq d\left(u,u_{n+1}\right)+d\left(u_{n+1},Tu_{n}\right)\\ &=d\left(u,u_{n+1}\right)+d\left(A,B\right)\\ &\leq d\left(u,u_{n+1}\right)+d\left(u,B\right).\end{aligned}\]
Therefore, \(d\left(u,Tu_{n}\right)\to d\left(u,B\right).\) Since \(B\) is approximately compact with respect to \(A\), the sequence \(\left\{Tu_{n}\right\}\) has a subsequence \(\left\{Tu_{n_{k}}\right\}\) converging to some element \(v\in B.\) So it turns out that
\[d(u,v)=\lim_{n\rightarrow\infty}d\left(u_{n_{k}+1},Tu_{n_{k}}\right)=d(A,B). \tag{3.16}\]
Thus \(u\) must be an element of \(A_{0}\). Again, since \(T(A_{0})\subseteq B_{0}\), there exists \(t\in A_{0}\) such that
\[d(t,Tu)=d(A,B) \tag{3.17}\]
for some element \(t\) in \(A\). Using the weak \(P\)-property and (3.17) we have
\[d(u_{n_{k}+1},t)=d(Tu_{n_{k}},Tu),\forall n_{k}\in\mathbb{N}.\]
If for some \(n_{0}\), \(d(t,u_{n_{0}+1})=0\), then \(d(Tu_{n_{0}},Tu)=0\). So \(Tu_{n_{0}}=Tu\), hence \(d(A,B)=d(u,Tu)\). Thus the conclusion is immediate. So let \(d(t,u_{n+1})>0\) for all \(n\geq 0\). Since \(T\) is a generalized \((\theta,\phi)\)-proximal contraction of the first kind, it follows that
\[\theta(d(t,u_{n+1}))\leq\phi\left[\theta\left[ad\left(u,u_{n}\right)+bd\left( t,u\right)+cd\left(u_{n},u_{n+1}\right)+h\left(d\left(u,u_{n+1}\right)+d \left(u_{n},t\right)\right)\right]\right] \tag{3.18}\]
Since \(\theta\) and \(\phi\) are two continuous functions, by letting \(n\rightarrow\infty\) in inequality (3.18), we obtain
\[\theta(d(t,u)) \leq\phi\left[\theta\left[\left(b+h\right)\left(d\left(u,t\right) \right)\right]\right]\] \[\leq\phi\left[\theta\left[\left(d\left(u,t\right)\right)\right]\right]\] \[<\theta(d(t,u)).\]
This is a contradiction. Therefore \(u=t\), so that
\[d(u,Tu)=d(t,Tu)=d(A,B).\]
Uniqueness: Suppose that there is another best proximity point \(z\) of the mapping \(T\) such that
\[d(z,Tz)=d(A,B).\]
Since \(T\) is a generalized \((\theta,\phi)\)-proximal contraction of the first kind, it follows from this that
\[\begin{aligned}\theta(d(z,u))&\leq\phi\left[\theta\left[ad\left(z,u\right)+bd\left(z,z\right)+cd\left(u,u\right)+h\left(d\left(z,u\right)+d\left(z,u\right)\right)\right]\right]\\ &=\phi\left[\theta\left[\left(a+2h\right)d\left(z,u\right)\right]\right]\leq\phi\left[\theta\left(d\left(z,u\right)\right)\right]<\theta(d(z,u)),\end{aligned}\]
which is a contradiction. Thus, \(z\) and \(u\) must be identical. Hence, \(T\) has a unique best proximity point.
Next, we state and prove the best proximity point theorem for non-self generalized \((\theta,\phi)\)-proximal contraction of the second kind.
**Theorem 3.4**.: _Let \((X,d)\) be a complete metric space and \((A,B)\) be a pair of non-void closed subsets of \((X,d)\). If \(A\) is approximately compact with respect to \(B\) and \(T:A\to B\) satisfies the following conditions:_
* \(T\left(A_{0}\right)\subseteq B_{0}\) _and the pair_ \((A,B)\) _satisfies the weak_ \(P\)_-property;_
* \(T\) _is a continuous generalized_ \((\theta,\phi)\)_-proximal contraction of the second kind._
_Then there exists a unique \(u\in A\) such that \(d(u,Tu)=d(A,B)\) and \(u_{n}\to u\), where \(u_{0}\) is any fixed point in \(A_{0}\) and \(d(u_{n+1},Tu_{n})=d(A,B)\) for \(n\geq 0\). Further, if \(z\) is another best proximity point of \(T\), then \(Tu=Tz\)._
Proof.: Similar to Theorem 3.3, we can find a sequence \(\{u_{n}\}\) in \(A_{0}\) such that
\[d(u_{n+1},Tu_{n})=d(A,B). \tag{3.19}\]
for all non-negative integral values of \(n\). From the \(p\)-property and (3.19) we get
\[d(u_{n},u_{n+1})=d(Tu_{n-1},Tu_{n}),\forall n\in\mathbb{N}.\]
If for some \(n_{0}\), \(d(u_{n_{0}+1},u_{n_{0}+2})=0\), then \(d(Tu_{n_{0}},Tu_{n_{0}+1})=0\). So \(Tu_{n_{0}}=Tu_{n_{0}+1}\), hence \(d(A,B)=d(u_{n_{0}+1},Tu_{n_{0}+1})\). Thus the conclusion is immediate. So let \(d(Tu_{n},Tu_{n+1})>0\) for all \(n\geq 0\). We shall prove that the sequence \(\{u_{n}\}\) is a Cauchy sequence. Let us first prove that
\[\lim_{n\rightarrow\infty}d\left(u_{n},u_{n+1}\right)=0.\]
As \(T\) is generalized \((\theta,\phi)\)-proximal contraction of the second kind, we have that
\[\begin{aligned}\theta\left(d\left(Tu_{n},Tu_{n+1}\right)\right)&\leq\phi\left[\theta\left[ad\left(Tu_{n-1},Tu_{n}\right)+bd\left(Tu_{n-1},Tu_{n}\right)+cd\left(Tu_{n},Tu_{n+1}\right)+h\left(d\left(Tu_{n-1},Tu_{n+1}\right)+d\left(Tu_{n},Tu_{n}\right)\right)\right]\right]\\ &=\phi\left[\theta\left[ad\left(Tu_{n-1},Tu_{n}\right)+bd\left(Tu_{n-1},Tu_{n}\right)+cd\left(Tu_{n},Tu_{n+1}\right)+hd\left(Tu_{n-1},Tu_{n+1}\right)\right]\right]\\ &\leq\phi\left[\theta\left[ad\left(Tu_{n-1},Tu_{n}\right)+bd\left(Tu_{n-1},Tu_{n}\right)+cd\left(Tu_{n},Tu_{n+1}\right)+h\left(d\left(Tu_{n-1},Tu_{n}\right)+d\left(Tu_{n},Tu_{n+1}\right)\right)\right]\right]\\ &=\phi\left[\theta\left[(a+b+h)d\left(Tu_{n-1},Tu_{n}\right)+(c+h)d\left(Tu_{n},Tu_{n+1}\right)\right]\right].\end{aligned}\]
Since \(\theta\) is strictly increasing and by Lemma 2.6, we deduce
\[d\left(Tu_{n},Tu_{n+1}\right)<(a+b+h)d\left(Tu_{n-1},Tu_{n}\right)+(c+h)d\left( Tu_{n},Tu_{n+1}\right).\]
Thus
\[d\left(Tu_{n},Tu_{n+1}\right)<\frac{a+b+h}{1-c-h}(d\left(Tu_{n-1},Tu_{n} \right)).\]
If \(a+b+c+2h=1\), we have \(0<1-c-h\) and so
\[d\left(Tu_{n},Tu_{n+1}\right)\leq\frac{a+b+h}{1-c-h}(d\left(Tu_{n-1},Tu_{n} \right))=d\left(Tu_{n-1},Tu_{n}\right),\forall n\in\mathbb{N};\]
Consequently,
\[\theta\left(d\left(Tu_{n},Tu_{n+1}\right)\right)\leq\phi\left[\theta\left(d \left(Tu_{n-1},Tu_{n}\right)\right)\right]\]
If \(a+b+c+2h<1\), we have \(0<1-c-h\) and so
\[d\left(Tu_{n},Tu_{n+1}\right)<d\left(Tu_{n-1},Tu_{n}\right),\forall n\in \mathbb{N};\]
Consequently,
\[\theta\left(d\left(Tu_{n},Tu_{n+1}\right)\right)\leq\phi\left[\theta\left(d \left(Tu_{n-1},Tu_{n}\right)\right)\right]\]
It implies
\[\begin{aligned}\theta\left(d\left(Tu_{n},Tu_{n+1}\right)\right)&\leq\phi\left[\theta\left(d\left(Tu_{n-1},Tu_{n}\right)\right)\right]\\ &\leq\phi^{2}\left[\theta\left(d\left(Tu_{n-2},Tu_{n-1}\right)\right)\right]\\ &\leq\cdots\leq\phi^{n}\left[\theta\left(d\left(Tu_{0},Tu_{1}\right)\right)\right].\end{aligned}\]
Taking the limit as \(n\rightarrow\infty\), we have
\[1\leq\theta(d\left(Tu_{n},Tu_{n+1}\right))\leq\lim_{n\rightarrow\infty}\phi^ {n}\left[\theta(d\left(Tu_{0},Tu_{1}\right))\right]=1.\]
Since \(\theta\in\Theta\), we obtain
\[\lim_{n\rightarrow\infty}d\left(Tu_{n},Tu_{n+1}\right)=0. \tag{3.20}\]
Next, we shall prove that \(\left\{Tu_{n}\right\}_{n\in\mathbb{N}}\) is a Cauchy sequence, i.e., \(\lim_{n,m\to\infty}d\left(Tu_{n},Tu_{m}\right)=0\). Suppose, to the contrary, that there exist \(\varepsilon>0\) and sequences \(n_{(k)}\) and \(m_{(k)}\) of natural numbers such that
\[m_{(k)}>n_{(k)}>k,\ \ d\left(Tu_{m_{(k)}},Tu_{n_{(k)}}\right)\geq\varepsilon,\ \ d\left(Tu_{m_{(k)}},Tu_{n_{(k)}-1}\right)<\varepsilon. \tag{3.21}\]
Using the triangular inequality, we find that,
\[\varepsilon\leq d\left(Tu_{m_{(k)}},Tu_{n_{(k)}}\right)\leq d\left(Tu_{m_{(k)}},Tu_{n_{(k)}-1}\right)+d\left(Tu_{n_{(k)}-1},Tu_{n_{(k)}}\right) \tag{3.22}\] \[<\varepsilon+d\left(Tu_{n_{(k)}-1},Tu_{n_{(k)}}\right). \tag{3.23}\]
Then, by (3.20) and (3.21), it follows that
\[\lim_{k\rightarrow\infty}d\left(Tu_{m_{(k)}},Tu_{n_{(k)}}\right)=\varepsilon. \tag{3.24}\]
Using the triangular inequality, we find that,
\[\varepsilon\leq d\left(Tu_{m_{(k)}},Tu_{n_{(k)}}\right)\leq d\left(Tu_{m_{(k)}},Tu _{n(k)+1}\right)+d\left(Tu_{n(k)+1},Tu_{n_{(k)}}\right) \tag{3.25}\]
and
\[\varepsilon\leq d\left(Tu_{m_{(k)}},Tu_{n_{(k)+1}}\right)\leq d\left(Tu_{m_{(k )}},Tu_{n(k)}\right)+d\left(Tu_{n(k)},Tu_{n_{(k)+1}}\right) \tag{3.26}\]
Then, by (3.25) and (3.26), it follows that
\[\lim_{k\to\infty}d\left(Tu_{m_{(k)}},Tu_{n_{(k)+1}}\right)=\varepsilon. \tag{3.27}\]
By a similar method, we conclude that
\[\lim_{k\to\infty}d\left(Tu_{m_{(k)+1}},Tu_{n_{(k)}}\right)=\varepsilon. \tag{3.28}\]
Using again the triangular inequality,
\[d\left(Tu_{m_{(k)+1}},Tu_{n_{(k)+1}}\right)\leq d\left(Tu_{m_{(k)+1}},Tu_{m_{(k)}}\right)+d\left(Tu_{m_{(k)}},Tu_{n_{(k)}}\right)+d\left(Tu_{n_{(k)}},Tu_{n_{(k)+1}}\right). \tag{3.29}\]
On the other hand, using triangular inequality, we have
\[d\left(Tu_{m_{(k)}},Tu_{n_{(k)}}\right)\leq d\left(Tu_{m_{(k)}},Tu_{m_{(k)+1} }\right)+d\left(Tu_{m_{(k)+1}},Tu_{n_{(k)+1}}\right)+d\left(Tu_{n_{(k)+1}},Tu_{ n_{(k)}}\right). \tag{3.30}\]
Letting \(k\to\infty\) in inequality (3.29) and (3.30), we obtain
\[\lim_{k\to\infty}d\left(Tu_{m_{(k)+1}},Tu_{n_{(k)+1}}\right)=\varepsilon. \tag{3.31}\]
Substituting \(u_{1}=u_{m_{(k)+1}}\), \(u_{2}=u_{n_{(k)+1}}\), \(v_{1}=u_{m_{(k)}}\) and \(v_{2}=u_{n_{(k)}}\) in the assumption of the theorem, we get
\[\theta\left(d\left(Tu_{m_{(k)+1}},Tu_{n_{(k)+1}}\right)\right)\leq\phi\left[\theta\left[ad\left(Tu_{m_{(k)}},Tu_{n_{(k)}}\right)+bd\left(Tu_{m_{(k)+1}},Tu_{m_{(k)}}\right)+cd\left(Tu_{n_{(k)+1}},Tu_{n_{(k)}}\right)+h\left(d\left(Tu_{m_{(k)}},Tu_{n_{(k)+1}}\right)+d\left(Tu_{n_{(k)}},Tu_{m_{(k)+1}}\right)\right)\right]\right] \tag{3.32}\]
Letting \(k\to\infty\) in (3.32), and using \(\left(\theta_{1}\right)\), \(\left(\theta_{3}\right)\), \(\left(\phi_{3}\right)\) and Lemma 2.6, we obtain
\[\theta\left(\varepsilon\right)\leq\phi\left[\theta\left(a\varepsilon+b \varepsilon+c\varepsilon+2h\varepsilon\right)\right].\]
We derive
\[\varepsilon<\varepsilon.\]
which is a contradiction. Thus \(\lim_{n,m\to\infty}d\left(Tu_{n},Tu_{m}\right)=0,\) which shows that \(\left\{Tu_{n}\right\}\) is a Cauchy sequence. Then there exists \(v\in B\) such that
\[\lim_{n\to\infty}d\left(Tu_{n},v\right)=0.\]
Also,
\[d\left(v,A\right) \leq d\left(v,Tu_{n}\right)\] \[\leq d\left(v,u_{n+1}\right)+d\left(u_{n+1},Tu_{n}\right)\] \[=d\left(v,u_{n+1}\right)+d\left(A,B\right)\] \[\leq d\left(v,u_{n+1}\right)+d\left(v,A\right).\]
Therefore, \(d\left(v,Tu_{n}\right)\to d\left(v,A\right).\) Since \(A\) is approximately compact with respect to \(B\), the sequence \(\left\{u_{n}\right\}\) has a subsequence \(\left\{u_{n_{k}}\right\}\) converging to some element \(u\in A.\) So it turns out that
\[d(u,v)=\lim_{n\to\infty}d\left(u_{n_{k}+1},Tu_{n_{k}}\right)=d(A,B). \tag{3.33}\]
Because \(T\) is a continuous mapping,
\[d(u,Tu)=\lim_{n\to\infty}d(u_{n+1},Tu_{n})=d(A,B).\]
Uniqueness: Suppose that there is another best proximity point \(z\) of the mapping \(T\) such that
\[d(z,Tz)=d(A,B).\]
Since \(T\) is a generalized \((\theta,\phi)\)-proximal contraction of the second kind, it follows that
\[\begin{aligned}\theta(d(Tz,Tu))&\leq\phi\left[\theta\left[ad\left(Tz,Tu\right)+bd\left(Tz,Tz\right)+cd\left(Tu,Tu\right)+h\left(d\left(Tz,Tu\right)+d\left(Tz,Tu\right)\right)\right]\right]\\ &=\phi\left[\theta\left[\left(a+2h\right)d\left(Tz,Tu\right)\right]\right]\leq\phi\left[\theta\left(d\left(Tz,Tu\right)\right)\right]<\theta(d(Tz,Tu)),\end{aligned}\]
which is a contradiction. Thus, \(z\) and \(u\) must be identical. Hence, \(T\) has a unique best proximity point.
**Theorem 3.5**.: _Let \(\left(X,d\right)\) be a complete metric space and \(\left(A,B\right)\) be a pair of non-void closed subsets of \(\left(X,d\right)\). Let \(T:A\to B\) satisfy the following conditions :_
1. \(T\left(A_{0}\right)\subseteq B_{0}\) _and the pair_ \(\left(A,B\right)\) _satisfies the weak_ \(P\)_-property;_
2. \(T\) _is a generalized_ \((\theta,\phi)\)_-proximal contraction of the first kind as well as a generalized_ \((\theta,\phi)\)_-proximal contraction of the second kind._
_Then there exists a unique \(u\in A\) such that \(d(u,Tu)=d(A,B)\) and \(u_{n}\to u\), where \(u_{0}\) is any fixed point in \(A_{0}\) and \(d(u_{n+1},Tu_{n})=d(A,B)\) for \(n\geq 0\)._
Proof.: Similar to Theorem 3.3, we find a sequence \(\{u_{n}\}\) in \(A_{0}\) such that
\[d(u_{n+1},Tu_{n})=d(A,B)\]
for all non-negative integral values of \(n\). Similar to Theorem 3.3, we can show that the sequence \(\{u_{n}\}\) is a Cauchy sequence; thus it converges to some element \(u\) in \(A\). As in Theorem 3.4, it can be shown that the sequence \(\{Tu_{n}\}\) is a Cauchy sequence and converges to some element \(v\) in \(B\). Therefore,
\[d(u,v)=\lim_{n\to\infty}d(u_{n+1},Tu_{n})=d(A,B). \tag{3.34}\]
Hence \(u\) is an element of \(A_{0}\). Since \(T(A_{0})\subseteq B_{0}\),
\[d(t,Tu)=d(A,B)\]
for some element \(t\) in \(A\). From the weak \(P\)-property and (3.34), we get
\[d(u_{n+1},t)=d(Tu_{n},Tu),\forall n\in\mathbb{N}.\]
If for some \(n_{0}\), \(d(t,u_{n_{0}+1})=0\), consequently \(d(Tu_{n_{0}},Tu)=0\). So \(Tu_{n_{0}}=Tu\), hence \(d(A,B)=d(u,Tu)\). Thus the conclusion is immediate. So let for any \(n\geq 0\), \(d(t,u_{n+1})>0\). Since \(T\) is a generalized \((\theta,\phi)\)-proximal contraction of the first kind, it can be seen that
\[\theta(d(t,u_{n+1}))\leq\phi\left[\theta\left[ad(u,u_{n})+bd(t,u)+cd(u_{n},u_{n+1})+h\left(d(u,u_{n+1})+d(u_{n},t)\right)\right]\right]. \tag{3.35}\]
Since \(\theta\) and \(\phi\) are two continuous functions, by letting \(n\to\infty\) in inequality (3.35) we obtain \(d(u,Tu)=d(t,Tu)=d(A,B).\) Also, as in Theorem 3.3, the uniqueness of the best proximity point of the mapping \(T\) follows.
**Example 3.6**.: Let \(X=\{\lambda_{n}:n\in\mathbb{N}\}\) with the metric \(d(x,y)=|x-y|\) for all \(x,y\in X\), where the sequence \((\lambda_{n})\) is defined by
\[\lambda_{1} =1\] \[\lambda_{2} =1+2\] \[\lambda_{3} =1+2+3\] \[...\] \[\lambda_{n} =1+2+3+...+n.\]
We know that \((X,d)\) is a complete metric space. Let \(A=\{\lambda_{3n}:n\in\mathbb{N}\}\) and \(B=\{\lambda_{3n-1}:n\in\mathbb{N}\}\). It is easy to see that \(d(A,B)=3\), \(A_{0}=A\) and \(B_{0}=B\). Define a mapping \(T:A\to B\) by \(T(\lambda_{3n})=\lambda_{3n-1}\) for all \(n\geq 1\). It is clear that \(A\) is approximately compact with respect to \(B\), \((A,B)\) satisfies the \(p\)-property, \(T\) is continuous and \(T(A_{0})\subseteq B_{0}\). We will show that \(T\) is a generalized \((\theta,\phi)\)-proximal contraction with \(\theta\in\Theta\) and \(\phi\in\Phi\) given by \(\theta(t)=e^{t}+1\) and \(\phi(t)=\frac{t+1}{2}\). Without loss of generality, we may assume that \(n<m\). Since
\[\lambda_{3n-1} =1+2+3+...+3n-1,\] \[\lambda_{3m-1} =1+2+3+...+3m-1,\] \[\lambda_{3n} =1+2+3+...+3n-1+3n,\] \[\lambda_{3m} =1+2+3+...+3m-1+3m.\]
It follow that,
\[\begin{aligned}d(T(\lambda_{3n}),T(\lambda_{3m}))&=|\lambda_{3n-1}-\lambda_{3m-1}|\\ &=3n+(3n+1)+\cdots+(3m-1),\\ d(\lambda_{3n},\lambda_{3m})&=|\lambda_{3n}-\lambda_{3m}|\\ &=(3n+1)+(3n+2)+\cdots+3m,\end{aligned}\]
and
\[\begin{aligned}d(T(\lambda_{3n}),T(\lambda_{3m}))-d(\lambda_{3n},\lambda_{3m})&=|\lambda_{3n-1}-\lambda_{3m-1}|-|\lambda_{3n}-\lambda_{3m}|\\ &=3n-3m.\end{aligned}\]
So that,
\[e^{d(T(\lambda_{3n}),T(\lambda_{3m}))-d(\lambda_{3n},\lambda_{3m})}=\frac{e^{d(T(\lambda_{3n}),T(\lambda_{3m}))}}{e^{d(\lambda_{3n},\lambda_{3m})}}=e^{3n-3m}=e^{-3(m-n)}\leq e^{-3}=\frac{1}{e^{3}}.\]
Hence,
\[\begin{aligned}\theta(d(T(\lambda_{3n}),T(\lambda_{3m})))&=e^{d(T(\lambda_{3n}),T(\lambda_{3m}))}+1\\ &\leq e^{d(\lambda_{3n},\lambda_{3m})}\frac{1}{e^{3}}+1\\ &\leq\frac{e^{d(\lambda_{3n},\lambda_{3m})}+2}{2}\\ &=\phi\left[\theta(d(\lambda_{3n},\lambda_{3m}))\right].\end{aligned}\]
Consequently, \(T\) is a generalized \((\theta,\phi)\)-proximal contraction of the second kind with \(a=1\), \(b=c=h=0\). Thus, all the conditions of Theorem 3.4 are satisfied. Hence, \(T\) has a unique best proximity point, and there exists \(\lambda_{3}\in A\) such that
\[d(\lambda_{3},T\lambda_{3})=d(\lambda_{3},\lambda_{2})=3=d(A,B)\]
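The contraction inequality in Example 3.6 can also be checked numerically. The sketch below is our own illustration, not part of the proof: it uses \(\theta(t)=e^{t}+1\) and \(\phi(t)=(t+1)/2\) as in the example, verifies the inequality on a grid of index pairs, and confirms that \(\lambda_{3}\) attains \(d(A,B)=3\).

```python
import numpy as np

lam = lambda n: n * (n + 1) // 2    # lambda_n = 1 + 2 + ... + n
theta = lambda t: np.exp(t) + 1.0   # theta as in Example 3.6
phi = lambda t: (t + 1.0) / 2.0     # phi as in Example 3.6
T = lambda n: lam(3 * n - 1)        # T(lambda_{3n}) = lambda_{3n-1}

# Check theta(d(Tx, Ty)) <= phi(theta(d(x, y))) for index pairs n < m.
ok = all(
    theta(abs(T(n) - T(m))) <= phi(theta(abs(lam(3 * n) - lam(3 * m))))
    for n in range(1, 10) for m in range(n + 1, 10)
)
print(ok)  # True

# Best proximity point: d(lambda_3, T(lambda_3)) = |6 - 3| = 3 = d(A, B).
print(lam(3), T(1), abs(lam(3) - T(1)))  # 6 3 3
```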
|
2303.05379 | Orbital stability of two circumbinary planets around misaligned
eccentric binaries | With $n$-body simulations we investigate the stability of tilted circumbinary
planetary systems consisting of two nonzero mass planets. The planets are
initially in circular orbits that are coplanar to each other, as would be
expected if they form in a flat but tilted circumbinary gas disc and decouple
from the disc within a time difference that is much less than the disc nodal
precession period. We constrain the parameters of stable multiple planet
circumbinary systems. Both planet-planet and planet-binary interactions can
cause complex planet tilt oscillations which can destabilise the orbits of one
or both planets. The system is considerably more unstable than the effects of
these individual interactions would suggest, due to the interplay between these
two interactions. The stability of the system is sensitive to the binary
eccentricity, the orbital tilt and the semi-major axes of the two circumbinary
planets. With an inner planet semi-major axis of $5\,a_{\rm b}$, where $a_{\rm
b}$ is semi-major axis of the binary, the system is generally stable if the
outer planet is located at $\gtrsim 8\,a_{\rm b}$, beyond the 2:1 mean motion
resonance with the inner planet. For larger inner planet semi-major axis the
system is less stable because the von-Zeipel--Kozai--Lidov mechanism plays a
significant role, particularly for low binary-eccentricity cases. For the
unstable cases, the most likely outcome is that one planet is ejected and the
other remains bound on a highly eccentric orbit. Therefore we suggest that this
instability is an efficient mechanism for producing free-floating planets. | Cheng Chen, Stephen H. Lubow, Rebecca G. Martin, C. J. Nixon | 2023-03-09T16:26:17Z | http://arxiv.org/abs/2303.05379v2 | # Orbital stability of two circumbinary planets around misaligned eccentric binaries
###### Abstract
With \(n\)-body simulations we investigate the stability of tilted circumbinary planetary systems consisting of two nonzero mass planets. The planets are initially in circular orbits that are coplanar to each other, as would be expected if they form in a flat but tilted circumbinary gas disc and decouple from the disc within a time difference that is much less than the disc nodal precession period. We constrain the parameters of stable multiple planet circumbinary systems. Both planet-planet and planet-binary interactions can cause complex planet tilt oscillations which can destabilise the orbits of one or both planets. The system is considerably more unstable than the effects of these individual interactions would suggest, due to the interplay between these two interactions. The stability of the system is sensitive to the binary eccentricity, the orbital tilt and the semi-major axes of the two circumbinary planets. With an inner planet semi-major axis of \(5\,a_{\rm b}\), where \(a_{\rm b}\) is semi-major axis of the binary, the system is generally stable if the outer planet is located at \(\gtrsim 8\,a_{\rm b}\), beyond the 2:1 mean motion resonance with the inner planet. For larger inner planet semi-major axis the system is less stable because the von-Zeipel-Kozai-Lidov mechanism plays a significant role, particularly for low binary-eccentricity cases. For the unstable cases, the most likely outcome is that one planet is ejected and the other remains bound on a highly eccentric orbit. Therefore we suggest that this instability is an efficient mechanism for producing free-floating planets.
keywords: celestial mechanics - planetary systems - methods: analytical - methods: numerical - binaries: general
## 1 Introduction
Giant planets around a binary star may form with the same initial orbital properties as the circumbinary disc from which they form. Recent observations show that there are many misaligned circumbinary discs (e.g., Chiang & Murray-Clay, 2004; Winn et al., 2004; Capelo et al., 2012; Kennedy et al., 2012; Brinch et al., 2016; Kennedy et al., 2019; Zhu et al., 2022; Kenworthy et al., 2022). Although about 68% of short period binaries (period \(<20\,\)days) have aligned discs (within 3'), those with longer orbital periods have a larger range of inclinations and binary eccentricities (Czekala et al., 2019). The formation of misaligned discs may be due to chaotic accretion (Clarke & Pringle, 1993; Bate et al., 2003; Bate, 2018) or stellar flybys (Cuello et al., 2019; Nealon et al., 2020). Protoplanetary discs typically nodally precess as a solid body (e.g. Papaloizou & Terquem, 1995; Lubow & Ogilvie, 2000). Dissipation in the disc leads to tilt evolution (e.g. Nixon et al., 2011; Martin & Lubow, 2017) but for a sufficiently extended disc, the lifetime may be longer than the alignment timescale. Thus, circumbinary planets (CBPs) may form in misaligned discs.
Although CBPs with a wide range of inclinations are expected to be in binaries with longer orbital periods, they are harder to detect by the transit method because the transit probability is smaller and the planet orbital period is longer (Martin & Triaud, 2014; Martin, 2019). To date, all the CBPs which have been detected are nearly coplanar to the binary orbit due to the small orbital period of the Kepler binaries (Czekala et al., 2019). Eclipse timing variations of the binary may be a better method to distinguish polar planets from coplanar planets (Zhang & Fabrycky, 2019; Martin & Fabrycky, 2021). In the current observations, the Kepler-47 system has multiple CBPs but the binary orbit is nearly circular (\(e<0.03\)) and the three Neptune-size planets are nearly coplanar (Orosz et al., 2012). Moreover, the TOI-1338 system has two saturnian CBPs detected by transit and radial velocity methods (Kostov et al., 2020; Standing et al., 2023), the NN Ser binary system, which comprises a red dwarf and a white dwarf, hosts two Jupiter mass CBPs (Mustill et al., 2013) and Kepler-451 has three
Jupiter mass CBPs (Baran et al., 2015; Esmer et al., 2022). All of these planets are nearly coplanar to the binary orbital plane.
For a misaligned (massless) test particle orbiting around a circular orbit binary, its angular momentum vector precesses around the binary angular momentum vector. The nodal precession is either prograde or retrograde depending upon the initial particle inclination. These are _circulating_ orbits. The longitude of the ascending node fully circulates over 360\({}^{\circ}\) during the nodal precession. For a binary with nonzero eccentricity, a test particle orbit with a sufficiently large initial inclination may undergo nodal libration where the angular momentum vector of the test particle precesses about the binary eccentricity vector. During this process, the orbit undergoes tilt oscillations while the longitude of the ascending node is limited in a range of angles less than 360\({}^{\circ}\)(Verrier and Evans, 2009; Farago and Laskar, 2010; Doolin and Blundell, 2011; Naoz et al., 2017; de Elia et al., 2019). These are _librating_ orbits. The minimum inclination (critical inclination) required for libration decreases with increasing binary eccentricity. Therefore, a test particle orbit with even a small inclination can librate around a highly eccentric binary.
The dynamics of a CBP around an eccentric binary are somewhat affected by the mass of the planet (Chen et al., 2019). For a misaligned non-zero mass planet orbiting around a eccentric orbit binary, the critical angle for the planet to librate depends on the binary eccentricity and angular momentum ratio of the planet to the binary. The angle of the stationary inclination, or polar alignment, occurs at less than 90\({}^{\circ}\) if the planet is massive (Farago and Laskar, 2010; Lubow and Martin, 2018; Zanazzi and Lai, 2018; Martin and Lubow, 2019).
Generally, a single circumbinary planet on an initially circular orbit is stable if its initial orbital radius is greater than 5 times the binary semi-major axis (Doolin and Blundell, 2011; Chen et al., 2020). Stable orbits that are closest to the binary are nearly retrograde and circulating for small binary eccentricity (Cuello and Giuppone, 2019; Giuppone and Cuello, 2019). On the other hand, the most stable orbits are highly inclined, near the polar inclination, for high binary eccentricity (Chen et al., 2020).
With two CBPs around an eccentric orbit binary, the system is more complex because the two planets interact with not only the binary but also each other. Both of these interactions can cause complex tilt oscillations of the planets (Chen et al., 2022). Planet-planet interactions may result in a planet being ejected from the system, as has been seen already in coplanar CBP systems (e.g., Smullen et al., 2016; Sutherland and Fabrycky, 2016; Gong, 2017; Gong and Ji, 2017). At least one planet is ejected in 87% of multi-planet circumbinary systems for low-mass and short-period binaries (Fleming et al., 2018). Nevertheless, the existence of multiple CBPs around Kepler-47, TOI-1338, NN Ser and Kepler-451 suggests that multi-planet circumbinary systems may not be rare, and the Kepler data show that about half of planets are known to have siblings (e.g. Berger et al., 2018; Thompson et al., 2018). We expect that more circumbinary planetary systems will be found in the future. Hence, a comprehensive study of such systems is necessary for understanding the orbital dynamics and evolution of two or even more circumbinary planets.
In this study, we investigate the orbital stability of two misaligned CBPs with initially circular orbits around circular or eccentric binaries. The two CBPs begin coplanar to each other but misaligned to the binary orbit (e.g. Chen et al., 2022). We first describe the set up of the four-body simulations in Section 2. We describe our results and stability maps with different initial semi-major axes of the two planets in Section 3. We consider the final orbital distributions of the surviving and stable planets in Section 4. Finally, we present our discussion and conclusions in Section 5.
## 2 Simulation set-up and parameter space explored
In this section we first describe the simulation set-up and the parameter space that we explore. To study the orbital stability of two planets orbiting around a circular or eccentric binary star system, we use the WHFast integrator, a second-order symplectic Wisdom-Holman integrator with 11th-order symplectic correctors, from the \(n\)-body simulation package rebound (Rein and Tamayo, 2015). We set the timestep to be 0.7% of the initial orbital period of the inner planet.
We solve the gravitational equations of four bodies in the frame of the centre of mass of the four-body system for which the central binary has components of equal mass \(m_{1}\) and \(m_{2}\) with total mass \(m_{\rm b}=m_{1}+m_{2}\). The semimajor axis of the binary is \(a_{\rm b}\), the eccentricity of the binary is \(e_{\rm b}\) and the orbital period of the binary is \(T_{\rm b}\). Our simulations are scale free such that all masses are scaled to the total binary mass, \(m_{\rm b}\), and all distances are scaled to the binary semi-major axis, \(a_{\rm b}\).
The orbital elements of the two planets are determined by assuming that they lie on Keplerian orbits around the centre of mass of the binary. The planets have equal masses \(m_{\rm p1}=m_{\rm p2}=0.001\,m_{\rm b}\). Our simulations do not account for collisions or the formation of S-type planets (planets that orbit around one star of a binary). Their orbits are defined by six orbital elements: the semi-major axes \(a_{\rm p1}\) and \(a_{\rm p2}\), inclinations \(i_{\rm p1}\) and \(i_{\rm p2}\) relative to the binary orbital plane, eccentricities \(e_{\rm p1}\) and \(e_{\rm p2}\), longitudes of the ascending node \(\phi_{\rm p1}\) and \(\phi_{\rm p2}\) measured from the binary semi-major axis, arguments of periapsis \(\omega_{\rm p1}\) and \(\omega_{\rm p2}\), and true anomalies \(\nu_{\rm p1}\) and \(\nu_{\rm p2}\). The orbits of the two planets are initially coplanar to each other and circular, so initially \(e_{\rm p}=0\), \(\omega_{\rm p}=0\) and \(\nu_{\rm p}=0\).
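A minimal rebound sketch of this set-up is given below. It is a reconstruction from the description above rather than the authors' script: the function name make_sim, the default argument values and the shortened demonstration integration are our own choices, and attribute names such as ri_whfast.corrector should be checked against the installed rebound version.

```python
import numpy as np
import rebound

def make_sim(e_b=0.5, a_p1=5.0, a_p2=8.0, i_p_deg=60.0):
    # Code units: G = 1, m_b = 1, a_b = 1, so the binary period is 2 pi.
    sim = rebound.Simulation()
    sim.integrator = "whfast"
    sim.ri_whfast.corrector = 11   # 11th-order symplectic correctors
    # Equal-mass binary.
    sim.add(m=0.5)
    sim.add(m=0.5, a=1.0, e=e_b)
    # Two equal-mass planets on initially circular, mutually coplanar
    # orbits tilted by i_p to the binary orbit. rebound's default Jacobi
    # coordinates place each planet on a Keplerian orbit about the centre
    # of mass of the previously added bodies, as assumed in the text.
    inc = np.radians(i_p_deg)
    sim.add(m=1e-3, a=a_p1, inc=inc)
    sim.add(m=1e-3, a=a_p2, inc=inc)
    sim.move_to_com()
    sim.dt = 0.007 * 2.0 * np.pi * a_p1**1.5  # 0.7% of inner planet period
    return sim

sim = make_sim()
T_b = 2.0 * np.pi
sim.integrate(1e3 * T_b)  # shortened demo; the X models run for 5e6 T_b
```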
As planets form within the disc, they are initially coupled to the disc by gravitational forces and precess with it. Once they open a gap and decouple from the disc, they can precess at a rate that differs from the disc's precession rate. If they decouple within a disc nodal precession period of each other, they will begin their
| Model | \(e_{\rm b}\) | \(a_{\rm p1}\) (\(a_{\rm b}\)) | Minimum \(a_{\rm p2}\) (\(a_{\rm b}\)) | Maximum \(a_{\rm p2}\) (\(a_{\rm b}\)) |
| --- | --- | --- | --- | --- |
| X1 | 0.0 | 5.0 | 5.9 | 10.0 |
| X2 | 0.2 | 5.0 | 5.9 | 10.0 |
| X3 | 0.5 | 5.0 | 5.9 | 10.0 |
| X4 | 0.8 | 5.0 | 5.9 | 10.0 |
| Y1 | 0.0 | 10.0 | 12.0 | 20.0 |
| Y2 | 0.2 | 10.0 | 12.0 | 20.0 |
| Y3 | 0.5 | 10.0 | 12.0 | 20.0 |
| Y4 | 0.8 | 10.0 | 12.0 | 20.0 |
| Z1 | 0.0 | 20.0 | 25.0 | 40.0 |
| Z2 | 0.2 | 20.0 | 25.0 | 40.0 |
| Z3 | 0.5 | 20.0 | 25.0 | 40.0 |
| Z4 | 0.8 | 20.0 | 25.0 | 40.0 |

Table 1: Parameters of the simulations. The first column contains the name of the model, the second and third columns give the binary eccentricity and the initial semi-major axis of the inner planet. The fourth and fifth columns give the minimum and maximum initial separations of the outer planet that we consider, sampled with an interval of \(0.1\,a_{\rm b}\).
orbital evolution in a mutually coplanar state. However, the effects of the disc may still play a role in their orbital evolution (Lubow and Martin, 2016).
We consider three initial values of \(a_{\rm p1}=5\,a_{\rm b}\), \(10\,a_{\rm b}\), and \(20\,a_{\rm b}\) and three initial ranges of \(a_{\rm p2}\) that depend on the initial value of \(a_{\rm p1}\) (see Table 1) with the interval \(0.1a_{\rm b}\). We vary the initial values of the tilt \(i_{\rm p1}=i_{\rm p2}\) from \(0^{\circ}\) to \(180^{\circ}\) with an interval of \(2.5^{\circ}\). The binary orbit is not fixed since the binary feels the gravity of massive planets.
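A sweep over this grid could then look like the following sketch (reusing the hypothetical make_sim helper sketched above; the loop is our own illustration of the Table 1 parameter space, not the production pipeline):

```python
import numpy as np

i_p_grid = np.linspace(0.0, 180.0, 73)   # tilts at 2.5 degree intervals
a_p2_grid = np.linspace(5.9, 10.0, 42)   # model X range, 0.1 a_b intervals

for i_p in i_p_grid:
    for a_p2 in a_p2_grid:
        sim = make_sim(e_b=0.5, a_p1=5.0, a_p2=a_p2, i_p_deg=i_p)
        # ... integrate and classify the outcome of this pixel ...
```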
In order to analyse the orbital motion of the planet, we work in a frame defined by the instantaneous eccentricity and angular momentum vectors of the binary (\(e_{\rm b}\) and \(l_{\rm b}\)). The frame has the three axes along unit vectors \(\hat{e}_{\rm b}\), \(\hat{l}_{\rm b}\times\hat{e}_{\rm b}\), and \(\hat{l}_{\rm b}\). For the planet angular momentum \(l_{\rm p}\), the inclination of the planet's orbital plane relative to the binary orbital plane is given by
\[i_{\rm p}=\cos^{-1}(\hat{l}_{\rm b}\cdot\hat{l}_{\rm p}), \tag{1}\]
where \(\hat{l}_{\rm b}\) is a unit vector in the direction of the angular momentum of the binary and \(\hat{l}_{\rm p}\) is a unit vector in the direction of the angular momentum of the planet. The mutual inclination between the two planets is given by
\[\Delta i_{\rm p}=\cos^{-1}(\hat{l}_{\rm p1}\cdot\hat{l}_{\rm p2}), \tag{2}\]
where \(\hat{l}_{\rm pi}\) for \(i=1,2\) is a unit vector in the direction of the angular momentum of each planet. The inclination of the binary relative to the total angular momentum \(l\) is
\[i_{\rm b}=\cos^{-1}(\hat{l}\cdot\hat{l}_{\rm b}), \tag{3}\]
where \(\hat{l}\) is a unit vector in the direction of the total angular momentum (\(l=l_{\rm b}+l_{\rm p}\)). Similarly, the phase angle (longitude of ascending node) of the planet in the same reference frame is given by
\[\phi_{\rm p}=\tan^{-1}\left(\frac{\hat{l}_{\rm p}\cdot(\hat{l}_{\rm b}\times \hat{e}_{\rm b})}{\hat{l}_{\rm p}\cdot\hat{e}_{\rm b}}\right)+90^{\circ}. \tag{4}\]
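These angles can be evaluated directly from the instantaneous binary angular momentum and eccentricity vectors. A minimal numpy sketch, with our own helper names mirroring equations (1), (2) and (4), is:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def planet_angles(l_b, e_b_vec, l_p):
    # Tilt i_p (equation 1) and phase angle phi_p (equation 4) in the frame
    # (e_b_hat, l_b_hat x e_b_hat, l_b_hat) built from the binary vectors.
    lb, eb, lp = unit(l_b), unit(e_b_vec), unit(l_p)
    i_p = np.degrees(np.arccos(np.clip(np.dot(lb, lp), -1.0, 1.0)))
    phi_p = np.degrees(np.arctan2(np.dot(lp, np.cross(lb, eb)),
                                  np.dot(lp, eb))) + 90.0
    return i_p, phi_p % 360.0

def mutual_inclination(l_p1, l_p2):
    # Mutual planet tilt Delta i_p (equation 2).
    c = np.clip(np.dot(unit(l_p1), unit(l_p2)), -1.0, 1.0)
    return np.degrees(np.arccos(c))
```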
## 3 Simulation results
In this section, we investigate the orbital stability of two planets around a single star and two planets around a binary. We are interested in the destabilising effects of the planet-planet interactions that involve the central binary. For comparison, we first look at the case of two planets orbiting a single star in Sec. 3.1. The criterion for the long term stability of two circular and coplanar planets around a single star is that their separation satisfies
\[\Delta=a_{\rm p2}-a_{\rm p1}\geq\Delta_{\rm crit} \tag{5}\]
with
\[\Delta_{\rm crit}=2\sqrt{3}R_{\rm Hill} \tag{6}\]
(Marchal and Bozis, 1982; Gladman, 1993; Chambers et al., 1996), where the mutual Hill sphere radius is given by
\[R_{\rm Hill}=\left(\frac{m_{\rm p1}+m_{\rm p2}}{3\,m_{\rm b}}\right)^{1/3} \left(\frac{a_{\rm p1}+a_{\rm p2}}{2}\right). \tag{7}\]
The stability criterion in equation (5) has been confirmed numerically (e.g. Chambers et al., 1996; Marzari and Gallina, 2016), but has not been specifically tested for two planet systems around a binary.
For the single CBP case, numerical three-body simulations show that a single CBP is stable if it has semi-major axis \(a_{\rm p}\gtrsim 5\,a_{\rm b}\) around an equal mass binary system for all initial \(i_{\rm p}\) and \(e_{\rm b}\)(Chen et al., 2020). Hence, for the binary case we then consider inner planets at \(a_{\rm p1}=5,\,10\) and \(20\,a_{\rm b}\) after Sec. 3.1. We classify the orbit of a planet to be unstable if the eccentricity of the planet, \(e_{\rm p}\), is larger than \(0.99\) or if the semi-major axis is less than \(a_{\rm b}\) or larger than \(1000\,a_{\rm b}\). Once a planet meets one of these criteria, we consider this four-body system to be unstable.
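For reference, equations (5)-(7) and these instability thresholds can be packaged as in the sketch below (our own function names, written in code units, with scipy assumed available; the root solve reproduces the \(\approx 13.6\) au threshold quoted for the single-star test in the next subsection):

```python
import numpy as np
from scipy.optimize import brentq

def delta_crit(a_p1, a_p2, m_p1=1e-3, m_p2=1e-3, m_central=1.0):
    # Critical separation 2*sqrt(3)*R_Hill from equations (6) and (7);
    # m_central is the total central mass (m_b for the binary case).
    r_hill = ((m_p1 + m_p2) / (3.0 * m_central)) ** (1.0 / 3.0) \
        * 0.5 * (a_p1 + a_p2)
    return 2.0 * np.sqrt(3.0) * r_hill

def is_unstable(e_p, a_p, a_b=1.0):
    # Instability flags used in this work: e_p > 0.99, a_p < a_b,
    # or a_p > 1000 a_b.
    return (e_p > 0.99) or (a_p < a_b) or (a_p > 1000.0 * a_b)

# Smallest a_p2 satisfying equation (5) for a_p1 = 10 au around 1 M_sun:
a2 = brentq(lambda a: (a - 10.0) - delta_crit(10.0, a), 10.5, 30.0)
print(f"critical a_p2 = {a2:.1f} au")  # ~13.6 au
```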
### Single star system with two planets
To isolate the effects of the binary, we consider a stability map of two planets around a single star with mass 1 M\({}_{\odot}\). The planets both have a mass of \(0.001\) M\({}_{\odot}\). The inner planet is initially at \(a_{\rm p1}=10\) au while we consider orbits of the outer planet for which the initial semi-major axis is in the range \(12\) au \(\leq a_{\rm p2}\leq 20\) au. (In the code, we use dimensionless values of length and mass. We take the unit of distance to represent the dimensional value of \(1\) au and the unit of mass to represent the mass of the central star that is dimensionally \(1\) M\({}_{\odot}\).) We make an arbitrary choice for the reference plane with respect to which the inclination is defined to be zero. The planets are initially coplanar to each other. We consider a range of initial planet inclination, \(0^{\circ}\leq i_{\rm p}\leq 180^{\circ}\). While there is no physical difference between different inclinations around a single star, this setup allows us to test how well the outcomes are determined and allows us to check the stability criterion given by equation (5) that was determined for planets around a single star. Finally, it is useful for comparison to the binary cases that we consider later.
In Fig. 1, the different pixel colours represent the final orbital status of the two planets. Blue pixels indicate that both planets are stable at the end of the simulation and white pixels represent two unstable planets. Green pixels indicate that only the inner planet survives at the end of the simulation and red pixels indicate that only the outer planet survives at the end of the simulation.
After we integrate for \(14\) million years, the two planets are quite stable except in the region \(a_{\rm p2}<13\) au. The instability in this region is due to planet-planet interactions that involves Hill instability and chaotic behaviour (Chambers et al., 1996). For our parameters, equation (5) predicts stability for \(a_{\rm p2}>13.6\) au and therefore our numerical results concur with this criterion.
Due to spherical symmetry, these results around a single star should be independent of the initial inclination, \(i_{\rm p}\), of the two planets. The range of unstable orbital radii for the outer planet is independent of \(i_{\rm p}\), reflecting the spherical symmetry. On the other hand, within the unstable region, the plot shows a mixture of unstable outcomes involving two planets being ejected, only the inner planet being ejected and only the outer planet being ejected. These results do not reflect the spherical symmetry. The reason for the change in behaviour is simply due to round off error in the initial conditions at different initial inclinations. This effect shows that we cannot determine the details of a specific unstable outcome with any confidence. This is likely to be a generic issue for all instability plots involving systems like this.
### Binary system with two planets: \(a_{\rm p1}=5\,a_{\rm b}\)
We now take the effect of a binary with total mass \(m_{\rm b}\) into account and first consider systems in which the inner planet is initially placed at \(a_{\rm p1}=5\,a_{\rm b}\). The outer planet is taken to have an initial semi-major axis in the range \(5.9\,a_{\rm b}\leq a\leq 10.0\,a_{\rm b}\). We integrate the orbits for a total time of \(5\times 10^{6}\,T_{\rm b}\), where \(T_{\rm b}\) is the orbital period of the binary. The stability maps in Fig. 2 show models X1-X4, with different binary eccentricities \(e_{\rm b}=0.0\) (upper-left), \(0.2\) (upper-right), \(0.5\) (lower-left), and \(0.8\) (lower-right).
The right vertical axis of each panel in Fig. 2 shows the orbital period ratio of the outer planet to the inner planet, \(T_{\rm p2}/T_{\rm p1}\). From the top to bottom, there are several resonance regions that indicate instability. The five horizontal lines show the 5:2, 7:3, 2:1, 7:4 and 5:3 mean motion resonances between the planets in colours of orange, magenta, cyan, yellow and green, respectively. The widest resonance region is the 2:1. If the outer planet is closer in than the 2:1 resonance, \(\lesssim 8\,a_{\rm b}\), then most of the cases are unstable, mainly the non-coplanar cases. This instability results from the closer gravitational interactions and may involve stronger tilt oscillations between the two planets.
In the cases with a small binary eccentricity of \(e_{\rm b}=0\) and 0.2, only the systems that are close to coplanar or close to retrograde coplanar are stable for all plotted separations with \(a_{\rm p2}>a_{\rm p1}+\Delta_{\rm crit}=6.7\,a_{\rm b}\) using equation (5). Their greater stability is due to the relatively small variations of the inclination difference between the planets.
Between the 2:1 and 7:4 resonance regions, the number of unstable orbits increases with increasing \(e_{\rm b}\). This is likely because orbital libration can result in large variations in \(i_{\rm p}\) if \(e_{\rm b}\) is large (see the phase plots in Chen et al., 2019). The inclination variation is small when \(i_{\rm p}\) is close to a stationary inclination. This is why there are more stable orbits near the polar region (\(i_{\rm p}\approx 90^{\circ}\)) in the \(e_{\rm b}=0.5\) and 0.8 cases. The polar region is stable down to near \(a_{\rm p2}=6.5\,a_{\rm b}\). However, the 2:1 resonance unstable region is wider outside of the polar region.
The binary eccentricity has another effect on stability. Outside of the 2:1 resonance, at \(a_{\rm p2}\gtrsim 8\,a_{\rm b}\), there are two obvious unstable belts located around \(i_{\rm p}=60^{\circ}\) (and \(120^{\circ}\)), \(45^{\circ}\) (and \(135^{\circ}\)), and \(30^{\circ}\) (and \(150^{\circ}\)) in models X2, X3 and X4, respectively. Those angles are close to the critical minimum inclination angle between the circulating and librating orbits (see section 3.2 in Chen et al., 2019). Unlike the smooth transition in the single CBP stability maps in Chen et al. (2020), CBPs near these critical inclinations can go unstable because the two planets can affect each other. Specifically, in Fig. 12 in Chen et al. (2022), we found that the outer planet in the system has both a lower critical inclination for libration and a lower polar aligned inclination than the inner planet. The two planets can therefore be undergoing different types of nodal oscillation (circulation or libration) with the binary and they can be destabilised in their orbits. In most cases, the inner planets go unstable while the outer planets remain stable.
We notice that there are also two narrow belts in model X1 when the two planets have a certain \(i_{\rm p}\). We ran several test simulations and found that \(e_{\rm p1}\) gets excited within a short time (\(>500\)\(T_{\rm b}\)). The reason is unclear and this kind of orbital evolution should be investigated in the future, since it may be due to a dynamical effect that we do not consider in our model.
Although we cannot determine the details of a specific unstable outcome with any confidence, we would like to know if the ratio of different outcomes varies with the initial \(a_{\rm p2}\). Thus, we added a slight displacement (\(0.001a_{\rm b}\)) to the initial \(a_{\rm p2}\) values and reran the simulations. We found that the ratio of the cases in which only the inner planet survived to those in which only the outer planet survived (hereafter \(R_{\rm i/o}\)) does not vary significantly, even though the unstable outcome for a specific set of initial conditions may not be the same as in the original models. Moreover, the ratio generally decreases with increasing \(e_{\rm b}\), which means that the outer planet has a higher chance of survival than the inner planet in a high \(e_{\rm b}\) system. These ratios are approximately \(R_{\rm i/o}=1/2\) (models X1 and X2), \(1/3\) (model X3) and \(1/4\) (model X4).
### Binary system with two planets: \(a_{\rm p1}=10\,a_{\rm b}\)
We now consider the case where the inner planet is farther out at \(a_{\rm p1}=10\,a_{\rm b}\), models Y1-Y4. Fig. 3 is the same as Fig. 2 except the initial \(a_{\rm p1}=10\,a_{\rm b}\) and the initial \(a_{\rm p2}\) is in the range \(12-20\,a_{\rm b}\). The integration time is increased to \(14\times 10^{6}\)\(T_{\rm b}\) so that the inner planet has the same number of orbital periods as the simulations in the previous section with \(a_{\rm p1}=5\,a_{\rm b}\). We can then confirm whether the system is stable or not under the same physical timescale for each case.
The stability maps have changed dramatically, although there are the same resonance regions as in Fig. 2, including the 5:2, 7:3, 2:1, 7:4 and 5:3 resonances. Note that the resonances in Fig. 3 appear narrower than those in Fig. 2 because the range of \(a_{\rm p2}\) is wider. The same feature can be seen in Fig. 5 in the next subsection, where the range of \(a_{\rm p2}\) is wider than in Fig. 3.
Between the 2:1 and 7:4 resonances, in contrast to the previous figure, the number of unstable orbits decreases as \(e_{\rm b}\) increases. For \(e_{\rm b}=0.0\) and 0.2, most of the CBPs can only be stable in this region if \(i_{\rm p}<55^{\circ}\) or \(>125^{\circ}\). Even though those planets are in prograde or retrograde circulating orbits, von Zeipel-Kozai-Lidov (ZKL, von Zeipel, 1910; Kozai, 1962; Lidov, 1962) oscillations may occur because the two planets are distant from the binary and each other. The classical ZKL mechanism occurs in a binary system that is perturbed by a distant third body. This dynamical effect causes the small object in the binary system to undergo oscillation of the argument of periapsis, which results in a periodic exchange between its eccentricity and inclination. The critical minimum tilt angle is \(39.2^{\circ}\). However, our system is not a hierarchical triple in the test particle limit. In this case, \(e_{\rm p1}\) and \(e_{\rm p2}\) can get excited, and thus the CBPs can undergo a close encounter.
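For orientation, in the standard quadrupole-level test-particle limit an initially circular orbit reaches a maximum eccentricity that depends only on the initial tilt and vanishes at the \(39.2^{\circ}\) critical angle. A small sketch of this textbook formula is given below; it is our own reference helper and only an approximate guide here since, as noted above, these systems are outside the test particle limit.

```python
import numpy as np

def zkl_e_max(i0_deg):
    # Quadrupole test-particle ZKL result for an initially circular orbit:
    # e_max = sqrt(1 - (5/3) cos^2 i0), zero below the critical tilt.
    c2 = np.cos(np.radians(i0_deg)) ** 2
    return np.sqrt(max(0.0, 1.0 - 5.0 * c2 / 3.0))

print(zkl_e_max(39.2))  # ~0 at the critical inclination
print(zkl_e_max(60.0))  # ~0.76
```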
The eccentricity of a single circumbinary planet remains constant in time to quadrupole order of the binary potential (e.g., Farago and Laskar, 2010). If the binary is replaced by a single star, then the two planets that start mutually coplanar and on circular
Figure 1: Stability of two planets of mass 0.001 M\({}_{\odot}\) orbiting a single star of mass 1 M\({}_{\odot}\). The inner planet has a semi-major axis of 10 au. The left \(y\)-axis is the initial semi-major axis of the outer planet while the right \(y\)-axis is the orbital period ratio of the outer planet (\(T_{2}\)) to the inner planet (\(T_{1}\)). The \(x\)-axis is the initial inclination of the planets. Blue pixels represent stable orbits for both planets, green pixels represent stable orbits for the inner planet and red pixels represent the stable orbits for the outer planet. White pixels represent unstable orbits for both planets. The five horizontal dashed lines indicate the 5:2, 7:3, 2:1, 7:4 and 5:3 mean motion resonances between the planets in orange, magenta, cyan, yellow and green, respectively.
orbits would remain coplanar and on circular orbits for all but the closest separations in the stability maps. So neither case results in eccentricity growth. Instead, we suggest that a modified form of ZKL may be acting as the mutual inclination of the planets increases and oscillates due to their different nodal precession rates caused by the central binary, as discussed below.
In Fig. 4, we show the dynamics of one survivor case (left) and one unstable case (right) for systems that undergo eccentricity oscillations. For each set of six panels, the upper-left panel is \(e_{\rm p}\), the upper-right is \(a_{\rm p}\), the middle-left panel is \(i_{\rm p}\), the middle-right panel is \(e_{\rm b}\), the lower-left panel is \(\phi_{\rm p}\) and the lower-right panel is the mutual inclination between the two planets, \(\Delta i_{\rm p}\), defined by Equation (2). The blue lines represent the inner planet while the yellow lines represent the outer planet except in the two lower right panels where there is only one line. Both cases show eccentricity oscillations of the inner planet similar to ZKL oscillations.
However, the mutual inclination between the planets \(\Delta i_{\rm p}\) undergoes rapid large oscillations of nearly constant amplitude on a timescale that is much shorter than for the eccentricity oscillations, unlike the standard ZKL oscillations. These rapid oscillations are the result of the planets' relative precession at nearly constant tilt \(i_{\rm p}\) relative to the binary. This behaviour is quite different from the standard ZKL mechanism in which the inclination and eccentricity oscillation periods are the same and the eccentricity varies as a function of the inclination. The time-varying misalignment might result in ZKL-like oscillations. We note also that there is some evidence of long period inclination oscillations of the outer planet relative to the binary in the left middle panel of Fig. 4. The mechanism should be investigated further.
For the case of small \(e_{\rm b}=0.2\) (model Y2), considering planets closer in than the 2:1 resonance, there is still a stable range of \(i_{\rm p}<55^{\circ}\) or \(>125^{\circ}\) and the panel is quite similar to that of model Y1. However, for \(e_{\rm b}=0.5\) (model Y3), the unstable region increases to \(30^{\circ}<i_{\rm p}<150^{\circ}\). This might be because of nodal oscillations driven by the binary since the boundary between unstable and stable orbits is close to the critical inclination for nodal libration. Unlike the previous two models, there are some stable orbits around the polar librating region, which implies that the two planets may be stable if they undergo strong nodal libration with the binary, and the nodal oscillation between the binary and planet could offset the effect of the ZKL oscillation.
This effect is more significant for \(e_{\rm b}=0.8\) (model Y4). The two planets closer in than the 2:1 resonance can be not only stable in the coplanar orbits but also in the polar librating orbits. This is because the secular resonance with the binary dominates the system and the ZKL mechanism cannot operate. However, when \(i_{\rm p}\) is away from the stationary inclination, it can vary a lot in the polar
Figure 2: Stability maps of two circumbinary planets in a binary system with \(e_{\rm b}=0.0\) (upper-left), 0.2 (upper-right panel), 0.5 (lower-left) and 0.8 (lower-right). The inner planet has initial semi-major axis \(a_{\rm p1}=5\,a_{\rm b}\). The five horizontal dashed lines and the pixel colours are the same as in Fig. 1.
Figure 4: Examples of the particle dynamics of planets that undergo eccentricity oscillations in four-body simulations in model Y1 of Fig. 3 that has \(\rm{e_{b}}=0\) and \(a_{\rm{p1}}=10a_{\rm{b}}\). Each set of six panels shows the planet eccentricity (top left), planet semi-major axis (top right), planet inclination (middle left), binary eccentricity (middle right), planet longitude of ascending node (lower left) and the mutual planet inclination (lower right). The initial planet properties are \(i_{\rm p}=64^{\circ}\) and \(a_{\rm{p2}}=18a_{\rm{b}}\) in the left panels and \(i_{\rm p}=67.5^{\circ}\) and \(a_{\rm{p2}}=18.3a_{\rm{b}}\) in the right panels. Each panel shows the orbital evolution of the inner planet (blue line) and the outer planet (yellow line) except the two lower right panels that just show one line in each. The blue lines end in panels after the inner planet becomes unstable.
Figure 3: Same as Fig. 2 except \(a_{\rm{p1}}=10a_{\rm{b}}\).
librating orbits (see Chen et al., 2019, for details). This behaviour may destabilise the orbits of the two planets.
Beyond the 2:1 resonance, the maps with \(e_{\rm b}=0.0\) and 0.2 have complex orbital stability. Moreover, a massive planet can exchange angular momentum between its inclination and the binary eccentricity (Chen et al., 2019). A two CBP system can have significant angular momentum to exchange with the binary. The different libration timescales of the two planets can drive a complicated \(e_{\rm b}\) oscillation. We find that \(e_{\rm b}\) in model Y1, which initially is 0, grows to significant values (it can reach up to 0.1 in some pixels). Thus, a CBP around \(i_{\rm p}=90^{\circ}\) in this system may undergo a polar libration with the binary. The diverse orbital interactions mentioned above, combined with resonances, produce the complicated orbital stability regions seen beyond the 2:1 resonance for both \(e_{\rm b}=0.0\) and 0.2.
On the contrary, the maps with \(e_{\rm b}=0.5\) and 0.8 are much more stable than the other two maps beyond the 2:1 resonance except in small resonance regions. The stable orbits in this region can be explained by the nodal oscillations between the binary and the outer planet, which has more angular momentum to exchange; these oscillations dominate the interactions in these systems and dominate over the ZKL mechanism between the two planets. Models H1 and H2 in Fig. 13 in Chen et al. (2022) display two CBPs with \(i_{\rm p}=80^{\circ}\) and \(90^{\circ}\) at \(10\,a_{\rm b}\) and at \(18\,a_{\rm b}\) in a binary system with \(e_{\rm b}=0.5\). In these two panels, the two planets undergo nodal libration with the binary and the tilt oscillations between the two planets are tiny. This result is similar to what we see in the stability maps of models Y3 and Y4. Additionally, the map with \(e_{\rm b}=0.5\) has the same unstable regions, but they are much wider than those in Fig. 2.
In contrast to the \(a_{\rm p1}=5\,a_{\rm b}\) maps, the ratio \(R_{\rm i/o}\) does not vary much with increasing \(e_{\rm b}\) because the two planets are farther away from the binary and fewer planets can reach the stability limit of the binary. These ratios are close to \(R_{\rm i/o}=4/5\) (model Y1), \(2/3\) (model Y2), \(1/2\) (model Y3) and \(7/10\) (model Y4).
### Binary system with two planets: \(a_{\rm p1}=20\,a_{\rm b}\)
Fig. 5 is similar to Figs. 2 and 3 but with initial \(a_{\rm p1}=20\,a_{\rm b}\) and initial \(a_{\rm p2}\) in the range \(25-40\,a_{\rm b}\). The integration time is \(40\times 10^{6}\)\(T_{\rm b}\). The stability maps of models Z1 - Z4 are quite different from the previous simulations with the smaller orbital radii of the inner planet. We see the same resonance regions including the 5:2, 7:3, 2:1, 7:4 and 5:3 resonances and there are additionally two small resonances which are below the 5:3 resonance and above the 7:4 resonance.
For the maps with \(e_{\rm b}=0.0\) and 0.2 (models Z1 and Z2), the two upper panels are unlike the previous maps with the same \(e_{\rm b}\). The orbits interior to the 2:1 resonance are quite stable for \(a_{\rm p2}\leq 25.6\,a_{\rm b}\), apart from narrow resonance regions. Since the two planets are close to each other and farther away from the binary than in the \(a_{\rm p1}=5a_{\rm b}\) and \(10a_{\rm b}\) cases, the two planets undergo mutual libration that limits their relative inclination. Thus their mutual inclination is small enough to prevent ZKL oscillations and keep them stable. In Fig. 6, we show the dynamics of one simulation from model Z1 with initial \(a_{\rm p1}=20\,a_{\rm b}\), \(a_{\rm p2}=29\,a_{\rm b}\) and \(i_{\rm p}=60^{\circ}\). In the upper-right panel, we see that their phase angles are almost locked and thus their largest \(\Delta i\) (lower-right panel) is about \(15^{\circ}\) during the tilt oscillations. On the other hand, between \(i_{\rm p}=55^{\circ}\) and \(125^{\circ}\), the ZKL mechanism dominates the system beyond the 2:1 resonance because the planets undergo mutual circulation with respect to the binary and their \(\Delta i\) is larger than the critical ZKL angle. Consequently, \(e_{\rm p}\) of the two planets get excited and they go unstable due to close encounters with each other.
For the maps with \(e_{\rm b}=0.5\) and 0.8 (models Z3 and Z4), the two lower panels have a configuration more similar to the previous plots that have initial \(a_{\rm p1}=5\,a_{\rm b}\) and \(a_{\rm p1}=10\,a_{\rm b}\). The orbits of the outer planets which are interior to the 2:1 resonance are only stable for \(i_{\rm p}<30^{\circ}\) or \(>150^{\circ}\) and around the polar orbit region. The large unstable region is caused by large variations of \(i_{\rm p}\) due to the nodal oscillation with the binary. In Fig. 7, we show the orbital evolution of two planets from model Z3 which are initially located at \(a_{\rm p1}=20\,a_{\rm b}\) and \(a_{\rm p2}=30\,a_{\rm b}\) with initial \(i_{\rm p}=30^{\circ}\) (upper-left), \(60^{\circ}\) (upper-right) and \(90^{\circ}\) (lower). For the \(i_{\rm p}=30^{\circ}\) case, the two planets not only undergo tilt oscillations with each other but also undergo nodal oscillations with the binary individually. This unusual behaviour has been found in our previous study (see Figure 6 in Chen et al., 2022). For the \(i_{\rm p}=60^{\circ}\) case, \(e_{\rm p}\) gets excited within a short time, which results in complicated interactions. For the \(i_{\rm p}=30^{\circ}\) and \(90^{\circ}\) cases, the two planets undergo nodal circulation (libration) with the binary and the ZKL mechanism is not triggered in these two cases. The variation of \(i_{\rm p}\) is small if the planet inclination is close to the stationary inclination. Hence, the orbits close to coplanar and polar configurations tend to be stable.
Beyond the 2:1 resonance, more planets can be stable with increasing \(e_{\rm b}\), so model Z3 has fewer stable orbits than model Z4 in this region. The strong nodal oscillations between the planets and the binary dominate the system whether or not the \(\phi_{\rm p}\) are locked. Thus, the effect of the ZKL mechanism is not obvious in this region. In contrast to the \(a_{\rm p1}=5\,a_{\rm b}\) maps but similar to the \(a_{\rm p1}=10.0\,a_{\rm b}\) maps, \(R_{\rm i/o}\) does not vary much with increasing \(e_{\rm b}\). These ratios are close to \(R_{\rm i/o}=1\) (model Z1), \(4/5\) (model Z2), \(7/10\) (model Z3) and \(1\) (model Z4).
## 4 Final distributions of the stable planets
We now consider the orbital properties of the planets that remain stable at the end of our simulations. We consider histograms of the final planet eccentricity and the final planet semi-major axis relative to its initial semi-major axis. Each histogram represents the sum of both the inner and outer planets since the differences between the distributions for the inner and outer planets are small, especially when the planets are far from the binary.
### Two stable planets cases
For the range of parameters that we have considered, the majority of the outcomes in our stability maps are two stable planets. To understand the orbital dynamics of the two planets, in Fig. 8 we plot histograms of the final \(e_{\rm p}\) of the models with \(e_{\rm b}=0.0\) (upper panels) and 0.8 (lower panels) in the filled coloured columns. The black dashed lines indicate the mean value of each model. The distribution and the mean values are not very sensitive to \(e_{\rm b}\) and so we only display the models with \(e_{\rm b}=0.0\) and 0.8. The cases with a high final \(e_{\rm p}\) are the result of planet-planet resonances (Chiang and Murray, 2002) and eccentricity oscillations (as shown in Fig. 4). Because the planets in models X1 and X4 are close to the binary, the maximum \(e_{\rm p}\) attained is restricted by \(e_{\rm b}\), since an eccentric CBP can be disturbed frequently by an eccentric binary. Thus, the final \(e_{\rm p}\) in model X4 cannot exceed about 0.3. In general, the mean eccentricity of the planets is low in systems in which both planets remain stable.
Similar features can be seen in the histograms of the final \(a_{\rm p}\)
(hereafter \(a_{\rm pf}\)) scaled to the initial \(a_{\rm p}\) (hereafter \(a_{\rm p0}\)) in Fig. 9. The filled coloured columns are the histograms of \(a_{\rm pf}/a_{\rm p0}\) for models with \(e_{\rm b}=0.0\) (upper panels) and 0.8 (lower panels) for systems in which both planets remain stable. The values of \(a_{\rm pf}/a_{\rm p0}\) are concentrated around 1 for all models. With a larger initial \(a_{\rm p1}\), there are more planets that have larger \(a_{\rm pf}/a_{\rm p0}\) values, except in models X1 and X4. There are no planets that have \(a_{\rm pf}/a_{\rm p0}>2\) in model X4 because close encounters with the eccentric binary destabilise the orbit of a planet.
### One surviving planet cases
When a planetary system is unstable, the most likely outcome is that one planet remains as a survivor. The probability of the outer planet remaining stable (red pixels) generally increases with increasing \(e_{\rm b}\) (the lower-left panel in Fig. 5 is an exception). The orbital parameters of the remaining CBP may change significantly during the ejection. The red hollow histograms in Fig. 8 represent the final distributions of \(e_{\rm p}\) in cases where one planet survives. The purple dashed lines indicate the mean values of \(e_{\rm p}\). The mean planet eccentricity is increased to about 0.5, except for model X4, which has a maximum of about 0.3 because of close encounters with the binary. The large mean values of \(e_{\rm p}\) of the surviving planets imply that a strong interaction between the two planets occurred before one planet was ejected from the system.
The distributions of \(a_{\rm pf}/a_{\rm p0}\) for the one surviving planet cases are shown by the black lines in Fig. 9. The trends are similar to the two stable planet cases. Although most of the surviving planets have a final semi-major axis in the range \(1\sim 2\,a_{\rm p0}\), a larger fraction of surviving planets have larger \(a_{\rm pf}\) at the end of the simulations compared to the two stable planet cases. This is a result of the strong interactions with the ejected planets.
## 5 Discussion and conclusions
In this paper, we have studied the orbital stability of two misaligned CBPs, with masses of 0.001 times the binary mass, around a circular or an eccentric binary. In addition to the inclinations of the planets, we also considered the influence of the semi-major axis of the inner planet, the separation of the two planets and the binary eccentricity. Unlike a normal multi-planetary system around a single star, multiple CBP systems exhibit additional orbital interactions, including nodal resonances and mean motion resonances with the binary, mean motion resonances and tilt oscillations between the planets, and a ZKL-like mechanism. Our four-body simulations show that large unstable regions dominate each stability map as the two planets become more misaligned to the binary orbital plane. For example, the cases plotted in Figures 2, 3, and 5 should all be stable to single planet interactions with the binary (Chen et al., 2020). These cases should also be stable against planet-planet interactions involving
Figure 5: Same as Fig. 2 except that the inner planet is at \(a_{\rm p1}=20\,a_{\rm b}\).
a single star, except at the closest planet separations, as seen in Figure 1. Yet, the systems are unstable over much of the plotted parameter ranges. This broader range of instability is due to four-body effects involving the interplay between these two interactions. The final outcome for most of the unstable simulations is that one planet is ejected, rather than two.
What appears to be a modified version of the standard ZKL effect plays an important role in the stability maps when the planets are far from the binary. There are two ways for planets to be stabilised when they are in the ZKL region. First, if the phase angles of the two planets are locked to each other, their mutual inclination is too small to trigger the ZKL mechanism. Secondly, libration of the planet orbits with the binary reduces their mutual inclinations and suppresses the ZKL effect when \(e_{\rm b}\) is large.
The majority of the outcomes in our stability maps are two stable planet cases. If both planets are stable at the end of the simulation, the final distributions of the orbital properties show that \(e_{\rm p}\) of both planets has a non-zero mean value that is \(<0.1\), while the mean value of \(a_{\rm pf}/a_{\rm p0}\) is in the range \(0.5\sim 1.5\). Thus, the final \(a_{\rm p1}\) and \(a_{\rm p2}\) on the stability map may not represent their final locations, although in most cases they are very close to their initial locations. In Fig. 10, the left panel shows the final distribution of \(a_{\rm p1}\) versus \(a_{\rm p2}\) for the two stable planet cases in models Y1 \(\sim\) Y4. The blue, yellow, green and red dots represent \(e_{\rm b}=0\), \(0.2\), \(0.5\) and \(0.8\), respectively. (The red dots appear dominant because they are plotted over dots of other colours.) The final \(a_{\rm p}\) in most of the two stable planet cases differs by \(\lesssim 1\%\) from the initial \(a_{\rm p}\), except in the resonance and innermost regions, where there are fewer two stable planet cases. A clump of inner planets is scattered inward while the outer planets are scattered outward, and vice versa. In the right panel, we plot the ratio of final semi-major axes \(a_{\rm p2,f}/a_{\rm p1,f}\) versus the ratio of initial semi-major axes \(a_{\rm p2,i}/a_{\rm p1,i}\), which shows that interior to \(a_{\rm p1,i}=13\,a_{\rm b}\) some two-planet cases can be stable, but the planets end up much further out than their original semi-major axes. The closest inner planet in models X1 \(\sim\) X4 at the end of the simulation can be as close as \(3\,a_{\rm b}\), and those in models Y1 \(\sim\) Y4 and Z1 \(\sim\) Z4 are at about \(5\,a_{\rm b}\). Moreover, these distributions are not sensitive to \(e_{\rm b}\), except for models X1 \(\sim\) X4, in which planets can be very close to the binary. As a result, we predict that two CBPs in a binary system may not have highly eccentric orbits unless they underwent large migration and had close encounters with the binary during the early stages of planet formation.
The single surviving planet cases make up the majority of the unstable cases. After the ejection of one planet, the final distributions of the orbital properties of the surviving planets can help us understand and predict future observations of binaries with a single planet, because \(e_{\rm p}\) of both CBPs is excited during the evolution. However, the distributions of \(a_{\rm pf}\) show that the final \(a_{\rm p}\) can be several times larger than the initial \(a_{\rm p}\), unlike the two stable planet cases. It may be hard to detect such distant and misaligned CBPs in binary systems. The final orbital distributions are also not very sensitive to \(e_{\rm b}\). There are no significant differences between the inner planet surviving cases and the outer planet surviving cases, and the mean value of \(e_{\rm p}\) is about 0.5. As a result, a CBP that has a significant eccentricity and an inclined orbit may have undergone planet-planet interactions after the gas disc was dissipated.
The mass of the planet also plays a role in the orbital dynamics and stability. A high mass planet can interact strongly with the other planet due to the large angular momentum exchange, while a small mass planet has a weaker effect on the other planet. Furthermore, the critical inclination for the planet to be in a polar librating orbit and the stationary inclination of the planet depend on the binary eccentricity and the angular momentum ratio of the planet to the binary (Martin & Lubow, 2019). We ran model Z3 again but with two CBPs of smaller mass, \(0.0001\,m_{\rm b}\). The dynamics of the planets were dominated by the binary, with little perturbation from the other planet. Additionally, during the gas disc phase, small planets may undergo type I migration. The type I migration speed is linearly proportional to the planet mass (Tanaka et al., 2002; Lubow & Ida, 2010). Consequently, these planets may not stay in the outer disc for a long time, but may migrate inwards until meeting a steep density gradient at the inner edge of the circumbinary disc (Pierens & Nelson, 2008, 2013; Thun & Kley, 2018; Penzlin et al., 2021). Formation of terrestrial planets from a misaligned disc of planetesimals after the gas disc has dissipated may also be difficult, unless the disc is close to coplanar or polar alignment (Childs & Martin, 2021, 2022). A highly misaligned disc of planetesimals around a binary may be largely ejected (Childs & Martin, 2022). Therefore, our study may not be applicable to multiple low mass CBP systems. A comprehensive study of the orbital instabilities of two CBPs that are very close to the binary has been carried out by Sutherland & Kratter (2019).
In this study, we do not allow for collisions or resolve the formation of S-type planets (planets that orbit one star of a binary) in our simulations. Further, we only consider equal mass binaries. CBPs that are captured by one star of a binary may reveal a formation mechanism for S-type planets (Dvorak, 1986). Current observations show that there are no S-type planets found in binary systems with \(a_{\rm b}<5.2\) au. However, the Kepler telescope has detected more than 3000 eclipsing binaries with \(T_{\rm b}<3000\) days (Kirk et al., 2016), which implies that S-type planets could be found in those short period binary systems. Simulations in Gong & Ji (2018) show that the maximum capture probability due to coplanar planet-planet scattering is about 10 per cent for small binary mass ratios \(q_{\rm b}\) and small \(e_{\rm b}\), while it is only 1 per cent for equal mass binaries. Therefore, it is worth investigating in future work the orbital stability of multi-circumbinary planet systems with different \(q_{\rm b}\). Furthermore, our model has the potential to calculate the capture probability of CBPs with misaligned orbits, and it could provide a
Figure 6: The orbital evolution of two CBPs in model Z1, which has \(e_{\rm b}=0\). The two planets are initially located at \(a_{\rm p1}=20\,a_{\rm b}\), \(a_{\rm p2}=29\,a_{\rm b}\) and have \(i_{\rm p}=60^{\circ}\). The upper left panel shows their semi-major axes, the upper right panel shows their longitudes of ascending node and the lower left panel shows their inclinations. In each of these, the blue line shows the inner planet and the yellow line shows the outer planet. The lower right panel shows the mutual inclination of the planets.
comprehensive understanding of the formation of S-type planets in short period binaries.
During planet-planet interactions, an ejection of one planet from the system requires the ejected planet to be accelerated to a velocity above the escape speed from the binary, \(v_{\rm esc,b}=\sqrt{2Gm_{\rm b}/a_{\rm p}}\). For the bound planet to accelerate the ejected planet to this velocity roughly requires the planet's escape velocity, \(v_{\rm esc,p}=\sqrt{2Gm_{\rm p}/R_{\rm p}}\) (where \(R_{\rm p}\) is the radius of the planet), to be greater than the escape speed from the binary. The criterion \(v_{\rm esc,p}>v_{\rm esc,b}\) implies that the orbital radius of the planet around the binary must satisfy \(a_{\rm p}>(m_{\rm b}/m_{\rm p})R_{\rm p}\). For a solar mass binary and a Jupiter-like planet, this results in \(a_{\rm p}>0.5\,\)au. We therefore expect that most of the unstable cases in these types of systems will lead to ejections rather than direct collisions of the planets. However, for short period binaries with planets orbiting at radii of order an au, collisions may be common and we would expect interesting dynamics and observational signatures -- including accretion of planetary debris on to the central binary -- to occur.
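To spell out the arithmetic behind this estimate (taking \(m_{\rm b}/m_{\rm p}\simeq 10^{3}\) for a solar mass binary and a Jupiter-mass planet, and \(R_{\rm p}\simeq 7\times 10^{4}\,\)km):

\[a_{\rm p}\gtrsim\frac{m_{\rm b}}{m_{\rm p}}\,R_{\rm p}\simeq 10^{3}\times\left(7\times 10^{7}\,{\rm m}\right)=7\times 10^{10}\,{\rm m}\approx 0.5\,{\rm au}.\]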
## 6 Implications for observations
The origin of free-floating planets is still not clear because there is, as yet, no statistical analysis with a large homogeneous sample (Miret-Roig et al., 2021). However, our results show that misaligned binaries can be efficient drivers of the formation of free-floating planets, especially for Jovian size planets. The conditions required for a stable two planet system are much stricter around a binary than around a single star. Planet-planet interactions around a binary system can lead to planet-binary or planet-planet close encounters and planet ejection. Moreover, by the end of our simulations, some planets have extremely eccentric orbits with large \(a_{\rm p}\). These may also contribute to the population of free-floating planets, since they could become unstable after only a small disturbance. Because our integration time is limited and the orbital periods of these planets are long, such long-term dynamical effects may not be captured in our simulations.
There are currently 12 observed unbound planetary objects with masses in the range 3 to 15 Jupiter masses. They were all observed through gravitational microlensing and their abundance is estimated to be about 1.8 times that of main-sequence stars (Sumi et al., 2011). Planet-planet scattering and post-formation evolution are not enough to explain the free-floating planet population. Indeed, scattering of Jupiter-mass planets in multi-planetary systems around a single star is not efficient (see Fig. 1). It has been suggested that other effects may have to be taken into account, such as planetary stripping in stellar clusters and post-main-sequence ejection (Boss, 2006; Veras and Raymond, 2012; Pfyffer et al., 2015; Barclay et al., 2017). Predictions for the microlensing event rate of free-floating planets from the core accretion theory are smaller than the value estimated from observations (Sumi et al., 2011; Ma et al., 2016). The effect of fragment-fragment interactions in a self-gravitating disc can also contribute to producing massive free-floating planets (Forgan et al., 2018). However, we suggest that the formation of planets in a misaligned disc around a binary may significantly increase the number of free-floating planets. The Nancy
Figure 7: Orbital evolution of two CBP cases with an initial \(a_{\rm p1}=20\,a_{\rm b},a_{\rm p2}=30\,a_{\rm b}\) and initial \(i_{\rm p}=30^{\circ}\) (upper-left), \(60^{\circ}\) (upper-right) and \(90^{\circ}\) (lower) in model Z3 with \(e_{\rm b}=0.5\). Each set of six panels are the same as those described in Fig. 4.
Grace Roman Space Telescope (Roman) survey will look for free-floating planets in the Galactic bulge in an upcoming observation season (Penny et al., 2019; Johnson et al., 2020), and the Roman mission also has the potential to search for free-floating planets in the Magellanic Clouds (Sajadian, 2021).
Apart from TESS and the _PLAnetary Transits and Oscillations of stars_ (PLATO) mission, the Kepler telescope has also contributed to finding CBPs. It is much easier to confirm a CBP if a transit cannot be mimicked by an eclipsing binary (Doyle et al., 2011). There are two systems, KIC 07177553 and KIC 7821010, that may have a planetary-mass object on a misaligned orbit around a binary (Borkovits et al., 2016). We consider these two systems since there is enough orbital information. KIC 07177553 has a total mass of \(1.9\,\mathrm{M_{\odot}}\), a binary mass fraction of \(f_{b}=0.486\) and a binary eccentricity of \(e_{b}=0.39\), and the planetary object has a mass of \(5.24\,M_{\mathrm{J}}\), an orbital separation of \(a=9.52\,a_{\mathrm{b}}\), an inclination to the binary orbit of \(i=26^{\circ}\) and \(\phi=26^{\circ}\). KIC 7821010 has a total mass of \(2.52\,\mathrm{M_{\odot}}\), a binary mass fraction of \(f_{b}=0.488\) and a binary eccentricity of \(e_{b}=0.68\), and the planetary object has a mass of \(2.56\,\mathrm{M_{J}}\), an orbital semi-major axis of \(a=11.88\,a_{\mathrm{b}}\), an inclination to the binary orbital plane of \(i=25^{\circ}\) and a longitude of ascending node of \(\Omega=-19^{\circ}\). With a similar analysis to that presented in Chen et al. (2019, 2020), we find that the two planetary objects are both on prograde circulating orbits. Recently, using eclipse timing variations (ETVs), two CBPs were found around the eclipsing binary RR Cae, which is composed of a white dwarf and an M-type dwarf (Rattanamala et al., 2021). We expect to find more inclined or polar circumbinary multi-planetary systems with future observations by TESS or PLATO as a result of well-developed ETV tools (Zhang and Fabrycky, 2019).
## Data availability
The simulations in this paper can be reproduced by using the REBOUND code (Astrophysics Source Code Library identifier ascl.net/1110.016). The data underlying this article will be shared on reasonable request to the corresponding author.
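As an illustration of how such a four-body setup can be assembled, the following minimal sketch uses REBOUND's C API (v3.x). The integrator choice and the exact `reb_add_fmt` format strings are assumptions; the orbital elements follow the model Z1 example of Fig. 6 (equal-mass binary with \(e_{\rm b}=0\), planets of \(0.001\,m_{\rm b}\) at \(20\,a_{\rm b}\) and \(29\,a_{\rm b}\) with \(i_{\rm p}=60^{\circ}\)).

```c
/* A minimal sketch of a binary + two-CBP REBOUND setup (C API, v3.x).
 * Integrator and format strings are assumptions; orbital elements follow
 * the model Z1 example of Fig. 6. */
#include <math.h>
#include "rebound.h"

int main(void)
{
    struct reb_simulation *r = reb_create_simulation();
    r->integrator = REB_INTEGRATOR_IAS15;           /* assumption */

    double mb = 1.0, ab = 1.0, inc = 60.0 * M_PI / 180.0;
    reb_add_fmt(r, "m", 0.5 * mb);                        /* primary   */
    reb_add_fmt(r, "m a e", 0.5 * mb, ab, 0.0);           /* secondary */
    reb_add_fmt(r, "m a inc", 1e-3 * mb, 20.0 * ab, inc); /* inner CBP */
    reb_add_fmt(r, "m a inc", 1e-3 * mb, 29.0 * ab, inc); /* outer CBP */

    reb_move_to_com(r);
    reb_integrate(r, 2.0 * M_PI * 1e4);  /* ~1e4 binary orbits (illustrative) */
    reb_free_simulation(r);
    return 0;
}
```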
## Acknowledgements
Computer support was provided by UNLV's National Supercomputing Center. C.C. acknowledges support from a UNLV graduate assistantship. CC and CJN acknowledge support from the Science and Technology Facilities Council (grant number ST/W000857/1). CJN acknowledges support from the Leverhulme Trust (grant number RPG-2021-380). We acknowledge support from NASA through grants NNX17AB96G and 80NSSC19K0443. We thank the anonymous reviewer for their careful reading of our manuscript and for many insightful comments and suggestions. Simulations in this paper made use of the REBOUND code, which can be downloaded freely at [http://github.com/hannorein/rebound](http://github.com/hannorein/rebound).
|
2305.01291 | Arax: A Runtime Framework for Decoupling Applications from Heterogeneous
Accelerators | Today, using multiple heterogeneous accelerators efficiently from
applications and high-level frameworks, such as TensorFlow and Caffe, poses
significant challenges in three respects: (a) sharing accelerators, (b)
allocating available resources elastically during application execution, and
(c) reducing the required programming effort. In this paper, we present Arax, a
runtime system that decouples applications from heterogeneous accelerators
within a server. First, Arax maps application tasks dynamically to available
resources, managing all required task state, memory allocations, and task
dependencies. As a result, Arax can share accelerators across applications in a
server and adjust the resources used by each application as load fluctuates
over time. Additionally, Arax offers a simple API and includes Autotalk, a stub
generator that automatically generates stub libraries for applications already
written for specific accelerator types, such as NVIDIA GPUs. Consequently, Arax
applications are written once without considering physical details, including
the number and type of accelerators. Our results show that applications, such
as Caffe, TensorFlow, and Rodinia, can run using Arax with minimum effort and
low overhead compared to native execution, about 12% (geometric mean). Arax
supports efficient accelerator sharing, by offering up to 20% improved
execution times compared to NVIDIA MPS, which supports NVIDIA GPUs only. Arax
can transparently provide elasticity, decreasing total application turn-around
time by up to 2x compared to native execution without elasticity support. | Manos Pavlidakis, Stelios Mavridis, Antony Chazapis, Giorgos Vasiliadis, Angelos Bilas | 2023-05-02T09:47:49Z | http://arxiv.org/abs/2305.01291v1 | # Arax: A Runtime Framework for Decoupling Applications from Heterogeneous Accelerators
###### Abstract
Today, using multiple heterogeneous accelerators efficiently from applications and high-level frameworks, such as TensorFlow and Caffe, poses significant challenges in three respects: (a) sharing accelerators, (b) allocating available resources elastically during application execution, and (c) reducing the required programming effort.
In this paper, we present Arax, a runtime system that decouples applications from heterogeneous accelerators within a server. First, Arax maps application tasks dynamically to available resources, managing all required task state, memory allocations, and task dependencies. As a result, Arax can share accelerators across applications in a server and adjust the resources used by each application as load fluctuates over time. Additionally, Arax offers a simple API and includes Autotalk, a stub generator that automatically generates stub libraries for applications already written for specific accelerator types, such as NVIDIA GPUs. Consequently, Arax applications are written once without considering physical details, including the number and type of accelerators.
Our results show that applications, such as Caffe, TensorFlow, and Rodinia, can run using Arax with minimum effort and low overhead compared to native execution, about 12% (geometric mean). Arax supports efficient accelerator sharing, by offering up to 20% improved execution times compared to NVIDIA MPS, which supports NVIDIA GPUs only. Arax can transparently provide elasticity, decreasing total application turn-around time by up to 2x compared to native execution without elasticity support.
Heterogeneous accelerators, Spatial sharing, Dynamic resource assignment, Live-migration
## 1. Introduction
The increasing need for high performance at low energy consumption has resulted in the proliferation of heterogeneous accelerators, such as GPUs, FPGAs, and TPUs (1; 8; 10; 27; 31; 32). Recent estimates (1; 2; 33; 27; 36; 1) indicate that by 2030 servers will include a plethora of processing units and specialized accelerators (3; 6; 18; 29). This trend poses significant challenges in how applications and higher-level frameworks, such as TensorFlow (9) and Caffe (14), can fully utilize the capacity of heterogeneous accelerators.
Today, a large percentage of applications or frameworks is _statically bound to specific accelerators throughout their execution_. Many applications are directly written for one accelerator type, e.g., NVIDIA GPUs, to allow for device-specific optimizations. Over the last years, unified programming models, e.g., SYCL (11) and oneAPI (13), aim to offer portability to different accelerator types. However, applications are still required to explicitly select the desired accelerators during initialization and prior to starting their execution. As a result, each application execution is still bound to a specific set of accelerators or accelerator types that cannot change at runtime. This results in poor resource and application efficiency in two ways: (a) reduced sharing of resources and (b) lack of adaptation over time.
First, existing resource assignment techniques fully allocate accelerators to a single application. Although practical, this exclusive assignment creates significant _load imbalance_ in heterogeneous setups with multiple accelerators and results in resource under-utilization. Existing time-sharing approaches (12; 34; 35; 36) cannot address this issue effectively, e.g., in cases where an application cannot fully utilize an accelerator during its time slice. Spatial sharing, on the other hand, has the potential to increase resource utilization. However, existing approaches, such as NVIDIA MPS (23), are limited to specific accelerator types and require applications to perform manual task assignment and data placement.
Second, resources assigned to each application remain fixed throughout its execution. However, applications often exhibit dynamic behavior and fluctuating load requirements (12; 34). Given that it is difficult to estimate the resource demands of applications accurately and statically assign resources to each application, the _lack of elasticity mechanisms_ results in application under- or over-provisioning and eventually to poor resource utilization.
In this paper, we present Arax, a runtime system that decouples applications from heterogeneous accelerators _within_
a single server. Our approach is based on RPC, a mechanism that is proven to be very successful in decoupling complex software stacks, using clear and conceptually simple boundaries. The client-side stubs of Arax allow applications to be written once using a simple API, without considering any low-level details, such as the number or type of accelerators. The core component of Arax is a backend service, the Arax server, that dynamically maps application tasks and data to available accelerators at runtime. This enables spatial accelerator sharing and adjusts resources at runtime. Last but not least, Arax includes a stub generator (Autotalk) that reduces porting effort for existing accelerator-enabled applications. Table 1 summarizes the main capabilities of Arax, compared to state-of-the-art approaches. The whole Arax ecosystem is available at GitHub1.
Footnote 1: [https://github.com/CARV-ICS-FORTH/arax](https://github.com/CARV-ICS-FORTH/arax)
The RPC-based approach of Arax allows **decoupling accelerators from applications**. Arax applications do not need to perform accelerator selection, memory allocation, or task assignment operations; all are handled transparently by Arax. This approach allows Arax to perform memory allocations lazily and only when the actual task assignment occurs. To improve accelerator utilization while ensuring application performance Arax provides three capabilities:
(a) **Spatial sharing** that manages existing mechanisms in heterogeneous accelerators, transparently, and across all applications in a server. We use asynchronous host-threads to issue tasks to GPU streams and FPGA command queues. Regarding FPGAs, Arax loads bitstreams with multiple kernels that need to be collocated in the same FPGA. The advantage of our approach is that it moves all the related management from individual applications to the shared Arax runtime and can make decisions across all applications.
(b) **Elasticity and dynamic resource assignment** to applications at runtime. To achieve this, Arax requires fine-grain access to application tasks and their data. Arax uses asynchronous operations to issue independent tasks across different accelerators, while ensuring that tasks with dependencies execute in-order.
(c) **Live-migration** that moves application tasks across heterogeneous accelerators. Unlike existing approaches, our migration mechanism does not require application modifications or specialized accelerator support. Arax uses task arguments to keep track of the data used by each task and transfers only relevant data upon task migration. Although arbitrary pointers may result in moving large amounts of memory, our approach is adequate to support real applications, such as TensorFlow and Caffe.
Finally, Arax includes **Autotalk**, a generator that creates stubs for a given accelerator API based on a description of the target API. Applications are then linked dynamically with the stub library that internally calls the Arax API. Currently, Autotalk generates stubs for a subset of CUDA that can support Caffe and TensorFlow.
We evaluate Arax using Caffe, TensorFlow, and Rodinia. Our results show that Arax applications can run without any modifications at low overhead--up to 12% compared to native--when other approaches, i.e., AvA (Shi et al., 2017), result in up to 30% overhead for the same applications. In addition, Arax provides elasticity, decreasing total application turn-around time by 2\(\times\) compared to native execution without elasticity support. Our migration mechanism adds 7% overhead compared to standalone execution. Finally, our sharing mechanism provides up to 20% improvement in total execution time compared to NVIDIA MPS.
The main contributions of this paper are:
1. We propose an RPC-based approach to decouple applications from heterogeneous accelerators within servers.
2. We present a mechanism for spatial sharing of heterogeneous accelerators and dynamic and transparent assignment of tasks to accelerators.
3. We present an application live-migration mechanism that reduces data movement based on data ownership by tasks.
4. We present a stub generator that allows existing applications to use Arax with minimal effort and demonstrate our approach with Caffe and TensorFlow.
5. We demonstrate and evaluate Arax in an accelerator-rich server environment, using GPUs, FPGAs, and CPUs, with Caffe, TensorFlow, and Rodinia.
## 2. Design
Figure 1 shows a high-level overview of Arax. Applications use the Arax API to access available accelerators, regardless of their types. Applications create task queues and issue tasks, providing their data in the form of Arax buffers. Tasks and buffers are being transported to the Arax server via a transport layer over shared memory, mapped to both the application and server address spaces. The Arax server assigns dynamically and asynchronously application tasks to accelerators, managing accelerator streams and command queues, maintaining task ordering, and handling data dependencies.
| **Capabilities** | **MPS** | **StarPU** | **Gandiva** | **DCUDA** | **AvA** | **Arax** |
| --- | --- | --- | --- | --- | --- | --- |
| Heterogeneity | - | ✓ | - | - | ✓ | ✓ |
| Spatial sharing | ✓ | - | - | - | - | ✓ |
| Dynamic resource assign. | - | - | ✓ | ✓ | - | ✓ |
| Reducing effort | - | - | - | - | ✓ | ✓ |

Table 1. Capabilities of Arax vs. state-of-the-art approaches.
Finally, Arax's stub generator, Autotalk, allows generating the stub library automatically for a particular accelerator API, given a description file of the API calls. Next, we discuss each component of Arax in more detail.
### Client
Arax provides three basic abstractions: (a) _tasks_, (b) _task buffers_, and (c) _task queues_. Table 2 shows an overview of the main Arax API calls.
_Tasks_. A task can be either a compute or a transfer task. A compute task is an accelerator kernel, while a transfer task is a data transfer between the host and the accelerator. Both kinds of task execute without interruption and are asynchronous. Arax provides synchronization primitives to allow applications to wait for their completion. A compute task takes the kernel name and its corresponding arguments as parameters, i.e., the inputs, outputs, and arguments required by a kernel. The kernel name is associated with the actual kernel at the server (§2.2). Unlike existing accelerator APIs, task arguments do not include accelerator-specific information, such as thread number or thread block size. The parameters for a transfer task include the task buffers provided by Arax and any data from the application address space.
_Task buffers_. A buffer represents the input and output data of a task. Multiple tasks or applications can operate on the same buffer concurrently. It is important to note that Arax decouples the accelerator memory management from applications using a lazy memory allocation strategy. When an application requests memory, Arax stores the requested allocation size but does not allocate this memory on the accelerator (§2.2). The actual allocation will be performed only after the task is successfully assigned to an accelerator. In the meantime, applications can continue issuing tasks since buffers are implemented as opaque types in the shared memory. For all allocations in the shared memory, we use the Doug Lea allocator. This abstraction hides accelerator memory, and applications are unaware of which accelerator hosts their data.
_Task queues_. Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues. The main difference in Arax is that these queues are not assigned directly to an accelerator. Instead, Arax is responsible for assigning them to one or more accelerators at runtime (§2.2), while ensuring that asynchronous tasks will be executed in-order. Each task queue holds tasks with dependencies. To denote independent sets of work, applications need to acquire different task queues. This approach works well for the ML frameworks we examine due to the inherent serialization of NN layers.
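To make the client-side flow concrete, the following minimal sketch strings together the calls of Table 2 for one compute task. The exact signatures are assumptions for illustration; the paper specifies only the call names and their roles.

```c
/* A minimal client sketch over the Arax API of Table 2. The signatures
 * are assumptions; only the call names and their roles come from the paper. */
#include <stddef.h>

/* Hypothetical prototypes. */
void *a_acquire(void);
void  a_release(void *queue);
void *a_allocate(size_t size);
void  a_free(void *buf);
void  a_sync_to(void *buf, const void *host, size_t size);
void  a_sync_from(void *buf, void *host, size_t size);
void *a_issue(void *queue, const char *kernel, void **args, int nargs);
void  a_wait(void *task);

void run_once(const float *in, float *out, size_t n)
{
    void *q    = a_acquire();                /* one queue per dependent task set */
    void *bin  = a_allocate(n * sizeof *in); /* lazy: no device memory yet       */
    void *bout = a_allocate(n * sizeof *out);

    a_sync_to(bin, in, n * sizeof *in);      /* transfer task (asynchronous)     */
    void *args[] = { bin, bout };
    void *t = a_issue(q, "my_kernel", args, 2); /* compute task, by kernel name  */
    a_wait(t);                               /* synchronize on completion        */
    a_sync_from(bout, out, n * sizeof *out);

    a_free(bin); a_free(bout); a_release(q);
}
```

Note that nothing in this flow names an accelerator: the queue, not the device, is the unit the application reasons about.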
### Server
The Arax server is responsible for maintaining task issue order and managing data dependencies while performing dynamic task assignment and data placement to accelerators. These mechanisms allow Arax to provide efficient spatial sharing and elastic allocation of resources.
_Spatial Sharing_. The spatial sharing mechanism of Arax is based on streams/command queues and host-threads (Arax accelerator threads). In particular, to execute kernels in parallel, the server spawns multiple threads per physical accelerator. Each accelerator thread internally creates different streams (CUDA and ROCm) or command queues (OpenCL). The design of spatial sharing in Arax can support advanced task assignment policies that do not rely on low-level accelerator-specific APIs. To enable spatial sharing for NVIDIA GPUs, we require a single context; thus, the Arax server is implemented as a single process for all accelerators. Regarding
| **Abstraction** | **API call** | **Description** |
| --- | --- | --- |
| Tasks | a\_issue() | Issue a task |
| Tasks | a\_wait() | Wait for a task |
| Task Buffers | a\_allocate() | Allocate a buffer |
| Task Buffers | a\_free() | Free a buffer |
| Task Buffers | a\_sync\_to(), a\_sync\_from() | Transfer data |
| Task Queues | a\_acquire() | Acquire a queue |
| Task Queues | a\_release() | Release a queue |

Table 2. Methods of the Arax API.
Figure 1. Arax high-level overview. The main components of Arax are: Clients, Server, Transport layer, and Autotalk.
FPGAs, the Arax server loads a bitstream that contains multiple kernels, similar to Vinetalk (Vinetalk, 2017). The server can select and load the appropriate bitstream to serve each task.
_Application migration_. Even when accelerators are shared, there can be load imbalances. Arax offers an application migration mechanism to correct load imbalances. This migration mechanism can move application tasks and their data across heterogeneous accelerators. The migration mechanism cannot stop a task during execution. Instead, it waits for the task to finish and moves any pending tasks and their data to another accelerator. There are _three challenges_ that our migration mechanism needs to tackle:
_(i) Migrate an application without interrupting its execution._ Arax offers task queues to applications to issue their tasks. The Arax server stops and resumes the execution of a task queue, and thus it does not affect the execution of the application. In particular, Arax performs the following steps: (a) The server marks this task queue as an orphan (Figure 2; step 1). At this point, accelerator threads cannot launch tasks from this task queue. (b) Since tasks may already have been issued for execution, the server waits for them to finish before re-assigning this task queue to a different accelerator thread (Figure 2; step 2). (c) From here on, any remaining task from this particular task queue will be issued to the new accelerator. We note that, during the migration, the application continues issuing tasks to its task queues.
_(ii) Move only the data of the migrated task._ The server should move only the data required by the migrated task and not all the application state. Existing checkpoint approaches (Beng et al., 2017; Wang et al., 2018) migrate all the application state, which involves transfers in the range of gigabytes. The Arax server maintains metadata for each task and is aware of the data required. After assigning the task queue to a new accelerator thread, the server instructs the previous accelerator thread to copy the task data from its accelerator memory to the server memory and free the corresponding allocations (Figure 2; step 3). The server then notifies the new accelerator thread to allocate and copy that data from the server's memory (Figure 2; step 4) using the native accelerator API. We note that the server memory is an intermediate buffer used to transfer data across different accelerators. As part of our future work, we plan to eliminate this extra copy using accelerator-to-accelerator transfers, at least for the cases where they are supported (Wang et al., 2018).
_(iii) Migrate the most recent version of the data._ Before a data migration, we must ensure that the data required by the migrated task(s) are up-to-date. To achieve that, in multi-accelerator setups the server allows only one valid copy of the data (at any given time) across the distinct accelerator memories.
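The migration steps above can be summarized in pseudocode. The sketch below is illustrative only: all type and helper names are hypothetical, not the actual Arax internals.

```c
/* A sketch of the server-side migration steps of Figure 2. All type and
 * helper names are hypothetical illustrations, not the real Arax internals. */
typedef struct arax_queue  queue_t;
typedef struct arax_accel  accel_thread_t;
typedef struct arax_buffer buffer_t;

/* Hypothetical helpers, assumed to wrap the native accelerator APIs. */
void      mark_orphan(queue_t *q);
void      wait_inflight(accel_thread_t *t, queue_t *q);
void      assign_queue(accel_thread_t *t, queue_t *q);
buffer_t *first_buffer(queue_t *q);
buffer_t *next_buffer(buffer_t *b);
void     *stage_to_server_mem(accel_thread_t *t, buffer_t *b);
void      accel_free(accel_thread_t *t, buffer_t *b);
void      stage_to_accel(accel_thread_t *t, buffer_t *b, void *host);

void migrate_queue(queue_t *q, accel_thread_t *src, accel_thread_t *dst)
{
    mark_orphan(q);         /* step 1: no further launches from this queue */
    wait_inflight(src, q);  /* drain tasks that were already issued        */
    assign_queue(dst, q);   /* step 2: re-assign queue to the new thread   */

    /* steps 3-4: move only the data owned by this queue's tasks,
     * staged through server (host) memory.                                */
    for (buffer_t *b = first_buffer(q); b; b = next_buffer(b)) {
        void *host = stage_to_server_mem(src, b); /* device -> server mem  */
        accel_free(src, b);
        stage_to_accel(dst, b, host);             /* server mem -> device  */
    }
}
```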
_Dynamic task assignment_. The server assigns the incoming task queues to the underlying accelerators. Individual tasks from the same task queue can be assigned to different accelerators. This assignment involves task and data migrations for tasks with dependencies. When the server detects an unassigned, non-empty task queue, it assigns it to an accelerator using a round-robin policy (default). Advanced assignment policies can be implemented with relatively low effort. This is facilitated by the fact that Arax already collects information regarding the memory footprint of each task, the number of tasks per accelerator, and the data ownership.
As a proof of concept that our accelerator selector can host advanced assignment policies, we also implement an elastic assignment policy. This policy is essential to handle load fluctuations or data bursts by performing dynamic task assignments. The server keeps track of the assigned task queues per accelerator and knows the owner of each task queue. Consequently, the accelerator selector can increase/decrease the accelerators assigned to an application based on the load.
For instance, let us assume that we have a low-priority application with two task queues, i.e., _task queue1_ and _task queue2_. Initially, both task queues are assigned to the same accelerator. When the accelerator selector detects idle accelerators, it expands the resources used by the low-priority application by assigning _task queue2_ to the idle accelerator. Reversely, when another high-priority application arrives,
Figure 2. The steps required for an application migration. The task queue is marked orphan (1) and reassigned to a new thread (2). The relevant data are then transferred to the new accelerator via the server memory (3,4).
the server shrinks the accelerators used by the low-priority application by moving _task queue2_ to the accelerator where _task queue1_ executes. This re-assignment requires moving the application state between accelerators, i.e., an application migration. Consequently, the high-priority application can make exclusive use of the previously idle accelerator.
To perform memory management, the server maintains internally a mapping of the allocated buffers per task queue and their corresponding sizes. We note that the actual memory allocation is performed only after its corresponding task queue has been assigned to a physical accelerator (Figure 3; step 1). After the selection of the physical accelerator, the thread of that accelerator gets a task from the task queue (Figure 3; step 2) and checks if any memory has already been allocated in that particular accelerator memory. If not, it performs the actual allocation (Figure 3; step 3) and keeps a reference to that memory segment so that it can be used for deallocation purposes. After that, the accelerator thread can issue the task to the accelerator. If the task is a data transfer, the accelerator thread copies the data from the client address space to the accelerator memory (Figure 3; step 4).
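The assignment and lazy allocation logic of Figure 3 can be sketched as the main loop of an accelerator thread. The sketch below is a hypothetical illustration that reuses the per-accelerator helper names introduced in §2.5 (accelAlloc(), accelSyncTo()); all other types and helpers are assumed.

```c
/* A sketch of the accelerator-thread loop with lazy allocation (Figure 3).
 * All types and helpers besides accelAlloc()/accelSyncTo() are hypothetical. */
#include <stddef.h>

struct accel; struct queue; struct buf;
typedef struct { int type; int nargs; struct buf **args; } task_t;
enum { TASK_COMPUTE, TASK_TRANSFER };

struct queue *next_assigned_queue(struct accel *a); /* queue mapped by selector */
task_t       *pop_task(struct queue *q);
void         *device_ptr(struct accel *a, struct buf *b);
void          bind_device(struct accel *a, struct buf *b, void *devptr);
size_t        buf_size(struct buf *b);
void         *host_ptr(struct buf *b);
void          launch_on_stream(struct accel *a, task_t *t);
void         *accelAlloc(size_t size);
void          accelSyncTo(void *dev, const void *host, size_t n);

void accel_thread_loop(struct accel *self)
{
    for (;;) {
        struct queue *q = next_assigned_queue(self); /* step 1 */
        task_t *t = pop_task(q);                     /* step 2 */
        if (!t) continue;

        for (int i = 0; i < t->nargs; i++)           /* step 3: allocate lazily */
            if (!device_ptr(self, t->args[i]))
                bind_device(self, t->args[i], accelAlloc(buf_size(t->args[i])));

        if (t->type == TASK_TRANSFER)                /* step 4: shm -> device */
            accelSyncTo(device_ptr(self, t->args[0]),
                        host_ptr(t->args[0]), buf_size(t->args[0]));
        else
            launch_on_stream(self, t);               /* asynchronous issue */
    }
}
```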
To support different accelerator types, the server spawns separate accelerator threads. Each thread uses the accelerator's native API to communicate with that particular accelerator. Currently, Arax supports NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, and AMD GPUs using ROCm. When receiving a compute task, the accelerator thread uses the kernel name--passed as a task parameter--to find the appropriate kernel program and loads it to the physical accelerator for execution. For this reason, the server maintains a dispatch table that associates kernel names with the actual kernel programs in the server stub.
We assume that kernels are implemented by third-party experts using the native accelerator's API. Accelerators offer libraries such as RAND (Random Number Generation) and BLAS (Basic Linear Algebra Subroutine). The function calls in these libraries can involve multiple kernel invocations internally, which cannot be extracted in case the library is closed-source (e.g., NVIDIA cuBLAS and cuRAND). To overcome this limitation, we incorporate these libraries into Arax, as-is, forming different server stubs, one for each accelerator. The server stubs are compiled using the accelerator-specific compilers. For NVIDIA GPUs we use NVCC, for Intel FPGAs we use AOCL, and for AMD GPUs we use HIPCC.
### Transport Layer
Arax applications and the Arax server are separate processes. Consequently, Arax requires an IPC mechanism for the applications and the server to exchange tasks and data. We use a shared memory approach to avoid system calls in the common path. Our initial implementation of the shared memory transport layer uses an extra copy of the data. In particular, application data are copied in the shared memory segment. Then, the server copies the data to the accelerator memory. We evaluate the impact of this copy in Section 4.1. We believe that future versions of Arax should consider zero-copy mechanisms by using shared pointers between the application and server address spaces.
### Autotalk: stub-generator
Existing frameworks are complex and require considerable manual effort to port them to different accelerator APIs. Arax reduces this effort by providing Autotalk, a generator that creates client and server stubs for each accelerator API offline (Figure 4; Offline). The generated stubs are linked with the applications and the Arax server during their initialization (Figure 4; Online). The offline phase is performed once and consists of three main steps: _parse, generate_, and _extract_.
**Step 1: Parse.** The Autotalk parser gets as input an accelerator API header and produces an API specification file (Figure 4; API specifications). The specification file contains for each API call, the number of arguments, their order, and
Figure 3. Arax dynamic task assignment. Application issues tasks to a task queue. Initially, the task queue is assigned to an accelerator (1), then the accelerator thread gets a task (2). It allocates accelerator memory for that data (3) and copies the data from the application (4).
the return value. The current version of Autotalk targets the CUDA API (v10.1) and can automatically create the API specification file for 85% of the existing functions (1800 in total) without requiring any user intervention.
**Step 2: Generate.** The Autotalk generator takes as input the API specification file that has been produced from the parser and an annotation file provided by the user (Figure 4; User Annotations). This user-provided annotation file contains information about the function calls that cannot be auto-produced from the Autotalk parser and require manual effort. The parser fails for some API calls because they take pointers as parameters, the bounds of which cannot be generated automatically in C/C++, and the address space they belong to (host or device), cannot be found automatically. The user annotation file provides this information with size expressions that calculate the bounds of each pointer. It also specifies the address space of the pointer parameter based on each API's documentation. The user annotation file is created once and consists of 2-3 lines of code for each function that cannot be generated automatically. Currently, these functions are about 270 (out of the 1800 in CUDA API v10.1). The generator produces the client and server stubs using the API specification and the user annotation files. The client stub contains an implementation of the accelerator API used by applications over the Arax API. The server stub contains the function calls to accelerator libraries (e.g., BLAS, RAND).
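To illustrate the output of this step, the sketch below shows what a generated client stub for cudaMemcpy might look like when expressed over the Arax API. The mapping and signatures are assumptions, since the paper does not show generated code.

```c
/* A sketch of a generated client stub for cudaMemcpy over the Arax API.
 * Types and signatures are simplified stand-ins for illustration. */
#include <stddef.h>

typedef int cudaError_t;
typedef enum { cudaMemcpyHostToDevice, cudaMemcpyDeviceToHost } cudaMemcpyKind;

/* Hypothetical Arax prototypes (same assumed signatures as earlier). */
void a_sync_to(void *buf, const void *host, size_t size);
void a_sync_from(void *buf, void *host, size_t size);

cudaError_t cudaMemcpy(void *dst, const void *src, size_t count,
                       cudaMemcpyKind kind)
{
    /* The annotation file tells the generator that `count` bounds both
     * pointers and which pointer lives in the (virtual) device address
     * space; under Arax a "device pointer" is really a buffer handle. */
    if (kind == cudaMemcpyHostToDevice)
        a_sync_to(dst, src, count);           /* dst: Arax buffer */
    else
        a_sync_from((void *)src, dst, count); /* src: Arax buffer */
    return 0;                                 /* cudaSuccess      */
}
```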
**Step 3: Extract.** Autotalk uses cuobjdump (Krishnan, 2017) to extract kernels from the native CUDA applications that are not included in accelerator libraries (Figure 4; Extractor); these kernels are in PTX format (Krishnan, 2017) and are dynamically linked with the server executable so they can be invoked at runtime.
### Implementation issues
The current version of Arax supports the execution of kernels on CPUs and three accelerator types: NVIDIA GPUs, AMD GPUs, and Intel Altera FPGAs. To add a new accelerator, one should implement a new accelerator thread that contains the following functions: accelAlloc() and accelFree(), which are responsible for memory allocations and de-allocations respectively; accelSyncTo() and accelSyncFrom(), which transfer data to and from the accelerator; accelMemset(), which sets device memory to a particular value; and accelDevcpy(), which performs a transfer within an accelerator. These functions are implemented once for each accelerator type using the native accelerator API.
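As a concrete example, a minimal sketch of these functions for an NVIDIA GPU backend could wrap the CUDA runtime API as follows. The signatures are assumed for illustration; only the CUDA calls themselves are standard.

```c
/* A sketch of the per-accelerator functions for an NVIDIA backend, wrapping
 * the CUDA runtime API. Signatures are assumptions; CUDA calls are standard. */
#include <cuda_runtime.h>

void *accelAlloc(size_t size) {
    void *ptr = NULL;
    cudaMalloc(&ptr, size);                          /* device allocation */
    return ptr;
}
void accelFree(void *ptr) { cudaFree(ptr); }
void accelSyncTo(void *dev, const void *host, size_t n) {
    cudaMemcpy(dev, host, n, cudaMemcpyHostToDevice); /* host -> device */
}
void accelSyncFrom(void *host, const void *dev, size_t n) {
    cudaMemcpy(host, dev, n, cudaMemcpyDeviceToHost); /* device -> host */
}
void accelMemset(void *dev, int value, size_t n) {
    cudaMemset(dev, value, n);                        /* set device memory */
}
void accelDevcpy(void *dst, const void *src, size_t n) {
    cudaMemcpy(dst, src, n, cudaMemcpyDeviceToDevice); /* intra-device copy */
}
```

An OpenCL or ROCm backend would implement the same six functions over clCreateBuffer/clEnqueue* or hipMalloc/hipMemcpy, respectively.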
Accelerator APIs offer function calls that query specific device information, such as cudaGetDeviceProperties() and cudaGetDeviceCount(). The design of Arax hides the number and type of the underlying accelerators, so it cannot provide such information. Instead, the Arax server returns some "synthesized" information, ensuring that calls depending on such information will run correctly. This information is based on the specifications of the accelerator with the fewest resources; by doing so, we ensure that an application will execute on at least one accelerator. We note that this approach is acceptable for the applications used in our experimental evaluation; however, other applications may require advanced policies, which is left as future work.
Existing applications can use library handles or generators, such as cuBLAS handles or cuRAND generators. Typically, library handles and generators are opaque structures that store the context required from a library. However, these handles do not have the same semantics in all accelerator libraries. For instance, CBLAS (the BLAS library for CPUs) does not have the notion of handles. Such cases are managed by Arax before issuing a task to an accelerator: The accelerator threads that are implemented using the native accelerator API prepare handles and generators according to the semantics of each accelerator and use them during the kernel invocation.
## 3. Experimental Methodology
For our evaluation, we use two servers with different accelerator types, as shown in Table 3. The first server (S1) is equipped with one FPGA and two different GPUs, while the second (S2) has two identical NVIDIA GPUs. The NVIDIA RTX 4000 is equipped with 8 GB of GDDR6, has 2304 CUDA cores, and is connected over PCIe v3 x16. The NVIDIA RTX 2080 Ti has 11 GB of GDDR6, consists of 4352 CUDA cores, and uses a PCIe v3 x8 port in our server. For the NVIDIA GPUs, we use CUDA v10.1. The Intel Arria 10 FPGA (de5a_net_ddr4) has 4 GB of DDR4 and uses PCIe v3 x8. We use OpenCL 1.2 and Quartus 20.1 to implement and compile the bitstreams and the server accelerator threads. The AMD RX550X GPU has 512 compute cores and 4 GB of GDDR5 VRAM, and uses PCIe v3 x16. For the AMD GPU, we use ROCm v4.1.0.
Figure 4. Client and Server stub generation (offline phase) and loading (online phase). The three steps of the offline phase are performed by the parser, the generator, and the extractor.
In our evaluation, we use a set of micro-benchmarks and real-world applications. We use micro-benchmarks to evaluate the overhead Arax introduces compared to native kernel execution and data transfers. For kernel execution, we use an empty kernel, without computation and data. Regarding data transfers, we copy varying amounts of data from the application to the accelerator via the Arax primitives.
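A minimal sketch of the empty-kernel micro-benchmark, timing the launch plus barrier in CPU cycles over the Arax API, could look as follows. The Arax signatures are assumptions; only the call names come from Table 2.

```c
/* A sketch of the kernel-launch micro-benchmark: time an empty kernel plus
 * a barrier in CPU cycles. Arax signatures are assumed for illustration. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() */

/* Hypothetical prototypes for the Table 2 calls. */
void *a_acquire(void);
void  a_release(void *queue);
void *a_issue(void *queue, const char *kernel, void **args, int nargs);
void  a_wait(void *task);

int main(void)
{
    void *q = a_acquire();
    uint64_t start = __rdtsc();
    void *t = a_issue(q, "empty_kernel", NULL, 0); /* no data, no compute */
    a_wait(t);                                     /* barrier on completion */
    printf("launch + barrier: %llu cycles\n",
           (unsigned long long)(__rdtsc() - start));
    a_release(q);
    return 0;
}
```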
Table 4 shows the real-world applications and their inputs/outputs used for our evaluation. Similar to AvA (Srivastava et al., 2017), we use applications from Rodinia (Romina et al., 2018) as well as model training and inference from Caffe (Caffe et al., 2016) and TensorFlow (Caffe et al., 2017) version 2.3.2. The last column of Table 4 indicates the accelerator environment for which each kernel is available. We use CUDA for NVIDIA, ROCm for AMD, and OpenCL for FPGA. Using optimized accelerator kernels is orthogonal to our work.
For Caffe Mnist, Siamese, and Cifar, we use the datasets downloaded by the scripts provided in the Caffe repository. For Caffe Googlenet, Alexnet, and Caffenet, we use the ImageNet dataset (Vaswani et al., 2017). For TensorFlow Mnist (Krizhevsky et al., 2015) we use the dataset of LeCun et al. (LeCun et al., 2015). For Keras, we use Computer Vision (CV), Generative Deep Learning (GDL), Graph Neural Networks (GNN), and Recommendation System (RS) applications, with the code and datasets provided in the Keras repository (Keras, 2018). Regarding the Rodinia datasets, we increase their size by 10\(\times\) and the kernel execution time by 8\(\times\) compared to previous works (Srivastava et al., 2017), because the default values are too small for executing on a real system (as opposed to simulation).
In all native application runs used as baselines, we add a warm-up phase that initiates the accelerator and moves its power state from idle to maximum. With this warm-up, we avoid the latency implied to the first accelerator call. The FPGA warm-up phase includes the creation of the context, the command queue, the program, and kernel creation, while it excludes the bitstream loading time. In runs with Arax, this warm-up phase is performed by our server. We exclude this warm-up time from all our comparisons.
Finally, to evaluate accelerator sharing, we create a set of workloads with concurrently running applications. These workloads are listed in Table 5 and contain a mix of compute- and data-intensive applications. Workloads A-H use multiple instances of the same application, while I-P include different applications.
## 4. Experimental Evaluation
Our evaluation tries to answer the following questions:
| **Suite** | **App.** | **Input Data (MB)** | **Output Data (MB)** | **Kernel code** |
| --- | --- | --- | --- | --- |
| Rodinia | BFS | 40 | 4 | CUDA, ROCm, OpenCL |
| | Gaussian (2k) | 32 | 32 | |
| | Gaussian (1k) | 8 | 8 | |
| | Hotspot | 8 | 4 | |
| | Hotspot3D | 16 | 8 | |
| | LavaMD | 60 | 25 | |
| | NN | 16 | 8 | |
| | NW | 512 | 256 | |
| | Particle | 1.5 | 0.25 | |
| | Pathfinder | 1024 | 0.6 | |
| Caffe | Mnist | 284 | 279 | CUDA, ROCm |
| | Siamese | 566 | 556 | |
| | Cifar | 1052 | 1050 | |
| | Googlenet | 3416 | 3400 | |
| | Alexnet | 5472 | 5470 | |
| | Caffenet | 4274 | 4274 | |
| TF | Mnist | 5460 | 5460 | CUDA |
| TF (Keras) | CV | 3316 | 3216 | CUDA |
| | GDL | 3974 | 3871 | |
| | GNN | 2784 | 2780 | |
| | RS | 5310 | 5310 | |

Table 4. Applications and their memory footprint.
| **Workload** | **Description** | **Iterations per instance** | **Epochs per instance** |
| --- | --- | --- | --- |
| A | 2xMnist | 10 | 500 |
| B | 4xMnist | 10 | 500 |
| C | 2xCifar | 9 | 100 |
| D | 4xCifar | 9 | 100 |
| E | 2xGaussian | - | - |
| F | 4xGaussian | - | - |
| G | 2xLavaMD | - | - |
| H | 4xLavaMD | - | - |
| I | Mnist-Siamese | 100-50 | 5000-50 |
| J | Siamese-Cifar | 12-9 | 30-100 |
| K | 2xMnist-Siamese-2xCifar | 100-12-9 | 5000-30-100 |
| L | 3xMnist-Siamese-2xCifar | 100-12-9 | 5000-30-100 |
| M | Hotspot-Gaussian | - | - |
| N | Gaussian-LavaMD | - | - |
| O | Particle-Hotspot | - | - |
| P | Gaussian-Hotspot-LavaMD-Particle | - | - |

Table 5. Workloads for spatial sharing.
* What is the overhead of Arax for decoupling applications from accelerators (§4.1)?
* How effective is accelerator sharing in Arax (§4.2)?
* What is the performance improvement of elasticity (§4.3)?
* What is the overhead of application migration (§4.4)?
* What is the overhead introduced by Arax in real-life ML frameworks (§4.5)?
### Overhead of accelerator decoupling
In this section, we evaluate the performance of Arax with heterogeneous accelerators. We use Rodinia (Romma et al., 2017), which offers OpenCL, ROCm, and CUDA kernels. To execute Rodinia in Arax, we port the host code of its CUDA version. Figure 5 shows a breakdown of the total execution time achieved for Arax and native execution. The breakdown consists of: (i) the initialization phase, i.e., generation of application inputs, (ii) the accelerator calls, i.e., memory allocations, memory transfers, and the actual kernel execution, and (iii) the accelerator warm-up, i.e., an accelerator call that changes the accelerator power state. We note that the warm-up time is not considered in our comparisons.
Figure 5(a) shows the execution time of Rodinia when running on an NVIDIA GPU. The overhead of Arax relative to native is between 1% and 5% for all benchmarks, except NW (78%) and Pathfinder (62%). The reason is the low computation-to-communication ratio that NW and Pathfinder exhibit. In particular, the computation-to-communication ratio for NW is 0.3: 0.9 ms for computation over 3 ms for transferring data. For Pathfinder it is 0.12: 21 ms for computation over 179 ms for transferring data. The other Rodinia applications have larger computation-to-communication ratios than Pathfinder. For instance, Gaussian's computation-to-communication ratio is 30: 330 ms for computation over 11 ms for transferring data. We run some Rodinia applications with varying computation-to-communication ratios to validate our findings. For instance, Hotspot3D transfers input data to the accelerator and performs a configurable number of passes over this data. The execution time of Arax relative to native CUDA for ten iterations is 1.13x. As we increase the number of iterations to 100 and 1000, the relative execution time compared to native drops to 1.03x and 1.01x, respectively. The overall overhead of Arax is 5.5% (geometric mean) for the Rodinia applications, ranging from 1% up to 78%.
Figure 5(b) and Figure 5(c) show the total execution time of Rodinia when running on an Intel FPGA and an AMD GPU, respectively. We observe that the overhead of Arax on the AMD GPU is 2% across all applications, except NW and Pathfinder (8% and 55%, respectively). Similarly, the overhead on the FPGA is up to 3% for all applications, except NW and Pathfinder (9% and 14%, respectively).
The difference in relative performance between the NVIDIA GPU and the other two, i.e., FPGA and AMD GPU, is because the kernel execution takes much less time in the NVIDIA GPU. As a result, the computation-to-communication ratio is proportionally smaller in NVIDIA GPUs than in the AMD GPU or the FPGA.
_Cost analysis for kernel launch and data transfer._ To measure the overhead of a kernel launch, we time the execution of an _empty_ kernel. Since kernel launch is asynchronous, we also place a barrier to ensure that the kernel has finished its execution. Figure 6 shows the corresponding operations for the case of CUDA and Arax. As we can see, a simple _launch kernel_ in CUDA costs approximately 9000 CPU cycles, mainly because it involves a system call. The _device barrier_ operation, which is required to wait for the kernel to finish, costs about 2300 CPU cycles. On top of that, Arax introduces a constant overhead of approximately 1500 CPU cycles that is always applied before the _launch kernel_. This overhead is small compared to the duration of the actual _launch kernel_ call and becomes proportionally negligible as the kernel duration increases. This effect favors kernels running on AMD GPUs and Intel FPGAs, since they exhibit slower execution than NVIDIA GPUs. For example, the NVIDIA GPU can execute Pathfinder 11x faster than the FPGA and 2x faster than the AMD GPU. Thus, the overheads of Arax are less pronounced when it is compared to native OpenCL (FPGA) and ROCm (AMD).
To measure the overhead implied to a data transfer, we create a micro-benchmark that transfers variable size data. On average, Arax is 1.7x slower than native CUDA, due to the extra copy performed to the shared memory segment. In particular, to transfer 1 GB data from an application to the accelerator, Arax requires 180 ms for the CUDA copy and another 135 ms for the copy from the application to the shared memory. The extra copy in the shared memory achieves 8.2 GB/s throughput (measured by the STREAM (Srivastava et al., 2017) benchmark, using a single CPU-core). We note that this overhead affects primarily the applications that exhibit a low computation-to-communication ratio. As part of our future work, we plan to use zero-copy between the applications and server address spaces to minimize this overhead.
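As a consistency check on these numbers, the extra staging copy adds \(135\,\)ms to the native \(180\,\)ms transfer, and

\[\frac{135+180}{180}\approx 1.75,\qquad\frac{1\,{\rm GB}}{135\,{\rm ms}}\approx 7.4\,{\rm GB/s},\]

in line with the reported 1.7x slowdown and close to the 8.2 GB/s single-core STREAM copy bandwidth.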
_Arax vs. AvA._ We use Rodinia to compare Arax and AvA (Romma et al., 2017), a state-of-the-art framework for heterogeneous accelerators. Figure 7 shows the execution time normalized to native for both Arax and AvA. Arax performs between 10% and 32% better than AvA for Gaussian, Hotspot, LavaMD, and Particle. This is because the overhead of _task issue_ in Arax is lower than in AvA: in AvA, every accelerator call goes through the hypervisor, which is not the case for Arax. For NW and Pathfinder, Arax results in 78% and 62% more execution time than native, whereas AvA introduces 40% and 3% overhead, respectively. These two applications have a low computation-to-communication ratio, so the data copy in Arax across the application and server address spaces becomes more pronounced. This indicates that zero-copy data transfers from the client to the server address space are necessary for applications with a low computation-to-communication ratio.
### Effectiveness of accelerator sharing
We now compare Arax sharing with NVIDIA MPS (Wang et al., 2019) and the AMD and FPGA sharing mechanisms. Even though AMD provides no documentation regarding GPU sharing, our experimentation reveals that AMD GPUs offer spatial sharing by default. Intel Altera FPGAs do not natively support spatial sharing; in fact, when an application starts, it binds the FPGA exclusively, and all subsequent applications fail to start. With Arax, in contrast, applications do not have direct access to the FPGA; hence they do not acquire it exclusively and can share its resources.
Figure 8 compares sharing mechanisms on NVIDIA GPUs. We compare Arax (spatial sharing) with MPS (spatial sharing) and native CUDA (time-slice sharing) using the workloads listed in Table 5. The x-axis shows the different workloads, while the y-axis shows the total execution time achieved. Overall, the execution time of Arax is comparable to MPS. However, with four concurrent instances (workloads B, D, F, H, K, P), Arax has between 4% and 20% less execution time. Even though we could not investigate the reason behind
Figure 5. Overhead of Arax compared to native (NAT) using Rodinia benchmarks over heterogeneous accelerators.
Figure 8. Effectiveness of sharing with NVIDIA GPUs for Arax, native (without MPS), and MPS.
Figure 6. Breakdown of overhead for launching an empty kernel with Arax (CPU cycles).
Figure 7. Execution time normalized to native for Arax and AvA.
this, due to the closed-source nature of NVIDIA MPS, we run further micro-benchmarks with different GPU models, i.e., RTX 2080, V100, and TITAN V, with a varying number of in-flight kernels and concurrent instances. This evaluation shows the same performance improvement of Arax over MPS. To verify these findings, we disclosed them to NVIDIA, which has confirmed them as two separate issues2.
Footnote 2: ID 3559606, ID 3350973
Comparing Arax with native CUDA (time-slice sharing), we observe that Arax provides 31% (geometric mean) less execution time for all workloads. With four concurrent instances, the performance improvement is more pronounced. In particular, Arax has between 1.32\(\times\) and 2\(\times\) less execution time compared to native.
Figure 9(a) shows the execution time when multiple applications use the same FPGA for native (time-slice sharing) and Arax (spatial sharing). We examine two versions of native FPGA sharing: (a) the _Single-KernelBS_ case, in which the bitstream loaded to the FPGA contains one kernel, and (b) the _Multi-KernelBS_ case, in which the bitstream contains multiple kernels. The drawback of the former is that the FPGA requires reconfiguration to execute a kernel that is not in the current bitstream, an operation that costs about 15 s. In the latter case, i.e., _Multi-KernelBS_, the execution time of an individual kernel running standalone increases due to conflicting requirements upon the bitstream compilation. For instance, Gaussian execution takes about 9200 s when a single-kernel bitstream (_Single-KernelBS_) is used. For the multi-kernel case (_Multi-KernelBS_), the execution time increases by 17% for the two-kernel bitstream and by 52% for the four-kernel bitstream.
The spatial sharing capability provided by Arax (Figure 9(a); Arax _Multi-KernelBS_) decreases execution time by 3% up to 85% compared to the single-kernel bitstream (Figure 9(a); Native _Single-KernelBS_) and between 9% and 75% compared to the multi-kernel bitstream (Figure 9(a); Native _Multi-KernelBS_). This improvement is because Arax allows applications to execute in parallel on the FPGA, while in the native case the FPGA is time-shared.
Comparing the _native_ single-kernel bitstream with the multi-kernel one, we observe that _Single-KernelBS_ is between 6% and 50% faster than _Multi-KernelBS_ for workloads E-N. This happens because the reconfiguration time is less than the performance degradation implied by the conflicting requirements of _Multi-KernelBS_. For workload O (Particle-Hotspot), _Multi-KernelBS_ has 81% less execution time compared to _Single-KernelBS_. These two kernels do not have conflicting requirements, so their performance degradation is minimal compared to the FPGA reconfiguration time. As the number of reconfigurations increases, as in workload P (Gaussian-Hotspot-Lava-Particle), it is worth packing kernels in the same bitstream to avoid the reconfiguration overhead. In workload P, the execution time of _Multi-KernelBS_ is 40% less than _Single-KernelBS_.
Figure 9(b) compares Arax with AMD spatial sharing. Arax provides comparable performance to the AMD native execution. In some workloads, such as M and N, Arax provides 45% and 66% performance improvement. Due to the limited information provided by AMD, we speculate that there might be performance issues similar to those of NVIDIA MPS.
### Performance gains of elasticity
Arax can opportunistically grow and shrink the number of homogeneous or heterogeneous accelerators provided to an application.
_Elasticity with homogeneous accelerators_. To evaluate the performance of elasticity, we modify a representative set of the Arax Rodinia applications to use multiple task queues and, consequently, multiple accelerators. Figure 10 depicts the execution time of one application when increasing the number of NVIDIA GPUs and the corresponding streams, from one (_1xgpu-1xstr_) to two (_2xgpu-2xstr_). For this experiment, we use the S2 server from Table 3, and each application creates eight task queues. The first GPU uses a PCIe v3 \(\times\)8 link, while the second one uses a PCIe v3 \(\times\)16 link. Due to this heterogeneity, we do not observe a linear performance improvement when using two GPUs.
Gaussian (1k) and LavaMD do not scale as the number of streams in a GPU increases (_1xgpu-1xstr_, _1xgpu-2xstr_, _1xgpu-4xstr_). This happens because their kernels occupy almost all the GPU threads, so two or more kernels cannot execute in parallel in a GPU. On the contrary, when we provide two GPUs (_2xgpu-1xstr_, _2xgpu-2xstr_) to Gaussian, its execution time decreases by 1.35\(\times\) compared to four streams in a GPU (_1xgpu-4xstr_). LavaMD execution time decreases by 1.7\(\times\) compared to four streams.
Figure 9. Effectiveness of sharing with Intel FPGAs and AMD GPUs for Arax and Native. For FPGAs we compare Arax with a multi-kernel & a single-kernel bitstream.
Particle execution time decreases as we increase the number of streams per GPU. In particular, the execution time with two streams (_1xgpu-2xstr_) and four streams (_1xgpu-4xstr_) decreases by 1.6\(\times\) and 2.6\(\times\), respectively, compared to one stream (_1xgpu-1xstr_). This happens because four Particle kernels do not contend for resources in the GPU, and there is not much serialization due to data transfers. The execution time in the two-GPU setup (_2xgpu-1xstr_) is comparable to the one-GPU configuration with two streams (_1xgpu-2xstr_), whereas it is 1.4\(\times\) worse than the one-GPU setup with four streams (_1xgpu-4xstr_). Finally, NW execution time decreases by up to 16% when increasing the number of GPUs and streams. NW scaling is limited because its computation-to-communication ratio is small.
_Elasticity with heterogeneous accelerators_. We now evaluate elasticity over heterogeneous accelerators using the same applications as in homogeneous elasticity. We note that these applications do not need any modifications due to Arax's accelerator-agnostic API. Figure 11 shows the execution times of four representative applications using multiple heterogeneous accelerators. Each application runs with the following configurations: (a) _1xFPGA_, (b) _1xFPGA and 1xNVIDIA_, (c) _1xFPGA, 1xNVIDIA with two streams, and 1xAMD_, (d) _1xFPGA, 1xNVIDIA, and 1xAMD with two streams_. We use the S1 server and four task queues for each application.
As shown in Figure 11, the execution time of LavaMD, Gaussian, and NW decreases by 2\(\times\) when an NVIDIA GPU is used along with an FPGA, shown with the _FPGA_ and _FPGA+NVIDIA_ bars. As we add more accelerators along with the FPGA, shown with the _FPGA+2strNVIDIA+AMD_ and _FPGA+NVIDIA+2strAMD_ bars, the execution time of LavaMD, Gaussian, and NW decreases by 1.95\(\times\), 1.8\(\times\), and 1.3\(\times\) compared to _FPGA+NVIDIA_, respectively.
Finally, we notice that the performance improvement of Particle between the _FPGA_-only setup and the setup with the FPGA and an NVIDIA GPU is only 2%. This is because execution on the RTX 4000 is slower than on the FPGA for this application. When we add more accelerators, shown as _FPGA+2strNVIDIA+AMD_ and _FPGA+NVIDIA+2strAMD_, the performance increases by 1.5\(\times\) compared to the _FPGA+NVIDIA_ setup.
### Overhead of application migration
Arax's application migration moves application tasks and their data across heterogeneous accelerators. In this section, we evaluate migration overheads using Rodinia and Caffe running over homogeneous and heterogeneous accelerators.
_Application migration with homogeneous accelerators_. We use the Gaussian application and the S2 server to evaluate our migration mechanism. To increase or decrease the accelerators assigned to an application, we require an assignment policy; we use the elastic assignment policy described in §2.2. We run two applications, one with low priority and one with high priority. The low-priority application starts first, and the high-priority one arrives after a while. In the standalone setup, the low-priority application is statically assigned to an accelerator (A1) while the second accelerator
Figure 11. Performance improvement of applications when increasing the number of _heterogeneous_ accelerators or GPU streams.
Figure 12. Effectiveness of migration when decreasing the accelerators provided to a low-priority application upon the arrival of a high-priority one. We compare elasticity with the standalone execution in which applications are statically assigned to accelerators. We use datasets from 134 MB up to 2 GB.
Figure 10. Performance improvement of applications when increasing the number of _homogeneous_ accelerators or GPU streams.
is idle (A2). When the high-priority application arrives, it is assigned to A2. With elasticity enabled, the low-priority application initially uses both A1 and A2 since the load is low. Upon the arrival of the high-priority application, the accelerator selector shrinks the resources provided to the low-priority one. The accelerator selector uses the Arax application migration mechanism to move the low-priority application state to A1. Now the low-priority application uses A1, while A2 is freed for the high-priority one.
Figures 12(a), 12(b), and 12(c) show the execution time for applications with datasets from 134 MB up to 2 GB. We compare elasticity with the standalone execution time. Figure 12 shows that the execution time of the high-priority application increases by only 7% compared to standalone execution. The execution time of the low-priority application decreases slightly since it uses more resources at the beginning of its execution. By breaking down the overhead of our migration mechanism, we observe that 80% of the total time is spent in the first data transfer from the accelerator to the server memory. This data transfer must wait for all the issued kernels (approximately 600 in-flight kernels) in the accelerator hardware queue to finish before it can start transferring data. The Gaussian kernel execution time increases as we increase the data size from 134 MB to 2 GB: the average kernel duration is 550 \(\mu\)s with 134 MB and 15 ms with 2 GB. As a result, the waiting time of the transfer call increases; for 134 MB, the transfer has to wait for 0.33 s, i.e., 600 kernels \(\times\) 550 \(\mu\)s, whereas for 2 GB, it waits for 9 s, i.e., 600 kernels \(\times\) 15 ms. We could use kernel preemption [26] to reduce the waiting time of our migration mechanism, but this is beyond the scope of this paper.
_Application migration for tasks with dependencies and heterogeneous accelerators._ Now we evaluate the effectiveness and overheads of our migration mechanism for applications containing tasks with dependencies. Frameworks, such as Caffe, may not have kernels for all accelerator types. In particular, Caffe cannot run on AMD GPUs or FPGAs since BLAS is not supported for these two accelerators.
To emulate this scenario, we run Mnist, Siamese, and Cifar (with ten epochs) using the NVIDIA GPU as the primary accelerator and executing some kernels on the CPU, AMD GPU, or Intel FPGA as a "helper accelerator". We execute the im2col and col2im kernels on the helper accelerator in all setups. For the FPGA, we implement im2col and col2im using OpenCL. In all setups, a migration is triggered every time an im2col or col2im task is popped by the main accelerator. The Arax server checks, for every task, whether the current accelerator thread has the kernel required by that task. If the required kernel is not in the server stub of an accelerator thread, the accelerator selector re-assigns the task queue to another accelerator that supports this kernel, as sketched below. The task queue re-assignment triggers data migrations. Consequently, we perform 380k migrations for Mnist (380k times an im2col or col2im was not supported), 760k for Siamese, and 890k for Cifar.
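A toy model of this per-task check is sketched below (illustrative Python; the `Accelerator` class and `select_accelerator` function are hypothetical names, not Arax source):

```python
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    kernels: set  # kernel names registered in this accelerator's server stub

def select_accelerator(kernel: str, current: Accelerator,
                       accelerators: list) -> Accelerator:
    # Keep the task on its current accelerator if the kernel is available;
    # otherwise re-assign the task queue, which triggers a data migration.
    if kernel in current.kernels:
        return current
    for acc in accelerators:
        if kernel in acc.kernels:
            return acc  # re-assignment point: migrate task data to `acc`
    raise RuntimeError(f"no accelerator supports kernel {kernel!r}")

gpu = Accelerator("nvidia", {"conv", "gemm"})
cpu = Accelerator("cpu", {"im2col", "col2im"})
assert select_accelerator("im2col", gpu, [gpu, cpu]) is cpu
```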
Table 6 shows the execution time of Caffe running over heterogeneous accelerators. By comparing the NVIDIA-CPU execution with the native execution using only the CPU, we observe 6% performance degradation due to migrations. On the other hand, by comparing _NVIDIA-CPU_, _NVIDIA-AMD_, and _NVIDIA-FPGA_ with the setup that uses only the NVIDIA GPU (without migrations), the performance is much worse, mainly due to the slower kernels on the other accelerators. The FPGA kernels (im2col, col2im) run 10\(\times\) slower than on the NVIDIA GPU since they are unoptimized.
### Overhead for Caffe and TensorFlow
In this section, we examine the applicability of our API to complex, real-life ML frameworks and the performance achieved. Arax provides a complete API that can be used directly from new applications (manual-porting) and Autotalk, which can be used to auto-port complex frameworks, such as Caffe and TensorFlow. Figure 13 shows the manual-porting, Autotalk, and native CUDA execution time when executing the Caffe framework. We show the training phase with ten epochs for three networks, Mnist, Siamese, and Cifar (Figure 13(a)). The overhead of manual-porting compared to native CUDA is between 3% and 17%. With more than ten epochs, as Figure 13(b) shows, the execution time increases between 9% and 28%. This slight increase (less than 9%) occurs because the number of data transfers grows with more epochs. To find the maximum performance degradation for training, we run Googlenet, Alexnet, and Caffenet, which perform thousands of epochs and use gigabytes of data. Figure 13(e) shows the manual-porting and native CUDA execution time (in hours) for Googlenet, Alexnet, and Caffenet. The performance degradation of manual-porting is between 13% and 28%. The geometric mean of the overhead imposed on all Caffe applications is 12.5%.
| | Mnist | Siamese | Cifar |
|---|---|---|---|
| **NVIDIA-CPU** | 202 | 401 | 520 |
| **NVIDIA-AMD** | 100 | 213 | 213 |
| **NVIDIA-FPGA** | 248 | N.A. | N.A. |
| **CPU only (single-core)** | 190 | 378 | 490 |
| **NVIDIA only** | 7 | 13 | 19 |

Table 6. The execution time (seconds) of Caffe when the execution is migrated from the NVIDIA GPU to another accelerator. _CPU only_ and _NVIDIA only_ represent the native execution without migrations.
Figures 13(c) and 13(d) present the inference phase for manual-porting, Autotalk, and native CUDA. We run inference for Mnist, Siamese, and Cifar with 1k and 10k iterations. The maximum performance degradation of manual-porting compared to native for 1k iterations is 30%, observed with Cifar. For 10k iterations, the degradation is between 24% and 42%. As explained above, the increase in the execution time of manual-porting compared to native CUDA is due to the data transfers. Autotalk adds minimal overhead compared to manual-porting, up to 16%. This happens because with manual-porting we can use fewer barriers and reduce how often the application blocks. The geometric mean of the overhead imposed on all TensorFlow applications is 12.9%.
We use Autotalk to convert TensorFlow and Keras to the Arax API. To evaluate the correctness and completeness of Autotalk, we run the unit tests of TensorFlow, achieving 90% coverage. We also run Mnist and a representative set of Keras applications for both the vanilla case and Arax; some preliminary results are presented in Table 7. Our findings suggest that Arax and Autotalk can transparently handle complex, real-life frameworks without significant effort.
## 5. Related Work
We categorize related work in four areas: (a) static accelerator assignment, (b) dynamic accelerator assignment, (c) accelerator virtualization, and (d) accelerator spatial sharing.
Existing programming models, such as CUDA (Zhou et al., 2017), SYCL (Zhou et al., 2017), and oneAPI (Zhou et al., 2017), force applications to select the desired accelerator types either at compile time or at the beginning of application execution, resulting in a static binding of applications to accelerators. StarPU (Brands et al., 2016) performs finer-grain assignment of a graph of tasks to multiple and heterogeneous processing units, however still in a static manner. Arax assigns tasks dynamically to the available accelerators. It also provides spatial sharing across heterogeneous accelerators and a stub generator to reduce application porting effort. We note that Arax and StarPU offer a similar approach for defining independent sets of work: StarPU indicates a set of dependent tasks with labels, whereas Arax uses task queues.
Arax shares similar goals with recent work in dynamically assigning GPUs to applications. Gandiva (Gandiva, 2018) is a cluster-level scheduler for ML training applications that dynamically assigns GPUs to applications. DCUDA (Gandiva et al., 2018) is a runtime system that provides dynamic assignment of applications to GPUs. The main limitation of these works is that they are based either on domain-specific application features or on vendor-specific accelerator mechanisms. Gandiva migration uses TensorFlow checkpoints, which, however, are not provided by all applications and frameworks (Gandiva et al., 2018). DCUDA provides support only for NVIDIA GPUs. In contrast, Arax is accelerator-agnostic and does not rely on application- or accelerator-specific mechanisms.
Previous work has also explored the concept of accelerator virtualization (Gandiva et al., 2018; Gandiva et al., 2018; Gandiva et al., 2018). API remoting (Gandiva et al., 2018; Gandiva et al., 2018) is an I/O virtualization technique in which API calls are forwarded to a user-level computing framework (Gandiva et al., 2018) or to a remote server (Gandiva et al., 2018). The main disadvantage of API remoting is the inability to support multiple APIs, which is not the case for Arax. AvA (Gandiva et al., 2018) is a framework that virtualizes heterogeneous accelerators. However, with AvA, all accelerator calls, including kernels with microsecond execution time, go through the hypervisor, increasing response time. Additionally, AvA requires applications to select the accelerators in advance, leading to a static application-to-accelerator assignment. AvA
| | **Mnist** | **CV** | **GDL** | **GNN** | **RS** |
|---|---|---|---|---|---|
| **Native CUDA** | 49 | 190 | 27 | 51 | 235 |
| **Autotalk** | 80 | 240 | 28 | 54 | 250 |

Table 7. The execution time (seconds) of TensorFlow and Keras for Autotalk and native CUDA.
Figure 13. The overheads of Arax using manual-porting and Autotalk (automatic stub generation) compared to native CUDA for Caffe with varying epochs and iterations.
creates a server for each application to execute tasks on accelerators. This design decision does not allow GPU spatial sharing due to the lack of a single context. Arax is a user-space approach resulting in less overhead, as we show in our evaluation. Arax frees applications from accelerator selection, allowing dynamic task assignment. By creating a single GPU context, our server enables spatial sharing.
Finally, GPUs support spatial sharing through NVIDIA MPS (Krizhevsky et al., 2016), while AMD GPUs support it by default. FPGAs, on the other hand, require partial reconfiguration that divides the FPGA into fixed areas; these areas can then accommodate different compute kernels. Even though each of these mechanisms provides spatial sharing primitives for its accelerator type, they still require low-level knowledge of each accelerator API and its runtime to implement task assignment policies. Moreover, they may require coordination across different applications, e.g., in the case of FPGAs, which is not always possible in modern servers. Existing sharing mechanisms also rely on applications to select the accelerator they will use, leading to inefficiencies. Arax's advantage is that it can handle sharing of heterogeneous accelerators while abstracting the related complexity away from applications. For instance, with FPGAs, the Arax server performs any required partial reconfiguration, loading the appropriate bitstream that can serve a task. Finally, Arax makes it easy to apply new task assignment policies transparently to all applications, facilitating further research in the area.
## 6. Conclusions
In this paper, we present Arax, a runtime that decouples applications from low-level accelerator operations, such as accelerator selection, memory allocation, and task assignment. Arax provides three main capabilities: (a) It assigns application tasks dynamically to different accelerators at runtime and performs all required accelerator memory management internally. (b) It offers fine-grain spatial sharing that improves the utilization of multiple heterogeneous accelerators. (c) It can perform live application migration across heterogeneous accelerators without application modifications or specialized accelerator support. To reduce porting effort, it provides Autotalk, a stub generator that allows linking existing applications, such as TensorFlow and Caffe, to the Arax runtime library with minimal user intervention.
Our evaluation using real-world applications shows that Arax introduces 12% overhead (geometric mean) compared to native execution. Regarding accelerator sharing, Arax improves the execution time by up to 20% compared to NVIDIA MPS. Also, its elastic resource assignment reduces total application turn-around time by up to 2\(\times\) compared to execution without elasticity support.
The extra data copy in the Arax transport layer introduces 80% overhead for applications with a low computation-to-communication ratio. Consequently, future work should examine optimizations for zero-copy data transfers across the application, server, and accelerator address spaces. In addition, mechanisms for low-overhead, on-demand data transfer across accelerators when using arbitrary pointers as task arguments can further reduce data transfers during task migrations.
###### Acknowledgements.
We thank our shepherd Dong Du for his help preparing the final version of the paper and the anonymous reviewers for their insightful comments. We thankfully acknowledge the support of the European Commission projects: HiPEAC (GA No 871174), EUPILOT (GA No 101034126)3 and DEEP-SEA (GA No 955606)4.
Footnote 3: European PILOT has received funding from the European High-Performance Computing Joint Undertaking (EuroHPC JU) under grant agreement No 101034126. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and Spain, Italy, Switzerland, Germany, France, Greece, Sweden, Croatia, and Turkey.
Footnote 4: DEEP-SEA has received funding from the EuroHPC JU under grant agreement No 955606. National contributions from the involved state members (including the Greek General Secretariat for Research and Innovation) match the EuroHPC JU funding.
|
2303.02416 | PixMIM: Rethinking Pixel Reconstruction in Masked Image Modeling | Masked Image Modeling (MIM) has achieved promising progress with the advent
of Masked Autoencoders (MAE) and BEiT. However, subsequent works have
complicated the framework with new auxiliary tasks or extra pre-trained models,
inevitably increasing computational overhead. This paper undertakes a
fundamental analysis of MIM from the perspective of pixel reconstruction, which
examines the input image patches and reconstruction target, and highlights two
critical but previously overlooked bottlenecks. Based on this analysis, we
propose a remarkably simple and effective method, PixMIM, that entails
two strategies: 1) filtering the high-frequency components from the
reconstruction target to de-emphasize the network's focus on texture-rich
details and 2) adopting a conservative data transform strategy to alleviate the
problem of missing foreground in MIM training. PixMIM can be easily
integrated into most existing pixel-based MIM approaches (i.e., using raw images
as the reconstruction target) with negligible additional computation. Without bells
and whistles, our method consistently improves three MIM approaches, MAE,
ConvMAE, and LSMAE, across various downstream tasks. We believe this effective
plug-and-play method will serve as a strong baseline for self-supervised
learning and provide insights for future improvements of the MIM framework.
Code and models are available at
https://github.com/open-mmlab/mmselfsup/tree/dev-1.x/configs/selfsup/pixmim. | Yuan Liu, Songyang Zhang, Jiacheng Chen, Kai Chen, Dahua Lin | 2023-03-04T13:38:51Z | http://arxiv.org/abs/2303.02416v2 | # PixMIM: Rethinking Pixel Reconstruction in Masked Image Modeling
###### Abstract
Masked Image Modeling (MIM) has achieved promising progress with the advent of Masked Autoencoders (MAE) and BEiT. However, subsequent works have complicated the framework with new auxiliary tasks or extra pre-trained models, inevitably increasing computational overhead. This paper undertakes a fundamental analysis of MIM from the perspective of pixel reconstruction, which examines the input image patches and reconstruction target, and highlights two critical but previously overlooked bottlenecks. Based on this analysis, we propose a remarkably simple and effective method, PixMIM, that entails two strategies: 1) filtering the high-frequency components from the reconstruction target to de-emphasize the network's focus on texture-rich details and 2) adopting a conservative data transform strategy to alleviate the problem of missing foreground in MIM training. PixMIM can be easily integrated into most existing pixel-based MIM approaches (i.e., _using raw images as reconstruction target_) with negligible additional computation. Without bells and whistles, our method consistently improves three MIM approaches, MAE, ConvMAE, and LSMAE, across various downstream tasks. We believe this effective plug-and-play method will serve as a strong baseline for self-supervised learning and provide insights for future improvements of the MIM framework. Code and models are available in MMSelfSup1.
Footnote 1: [https://github.com/open-mmlab/mmselfsup/tree/dev-1.x/configs/selfsup/pixmim](https://github.com/open-mmlab/mmselfsup/tree/dev-1.x/configs/selfsup/pixmim)
## 1 Introduction
Recent years have witnessed substantial progress in self-supervised learning (SSL). Inspired by the success of masked language modeling (MLM) in language processing, masked image modeling (MIM) has been introduced to computer vision, leading to rapid growth in SSL. Pioneering works such as BEiT [1] and MAE [20] exploit Vision Transformers (ViT) to learn discriminative visual representations from raw image data without manual annotations, and their transfer learning performance has outperformed the supervised learning counterpart.
Early MIM methods share a simple pipeline: a portion of non-overlapping image patches are randomly masked, and the model learns to extract discriminative representations by reconstructing the pixel or feature values of the masked patches [1, 20, 58]. To improve the representation quality, some advanced MIM works [30, 63] incorporate extra auxiliary tasks (e.g., contrastive learning) while some other efforts leverage powerful pre-trained models for distillation [26, 44]. However, these attempts either complicate the overall framework or inevitably introduce non-negligible training costs.
Unlike recent works, this paper investigates the most fundamental but usually overlooked components in the data
Figure 1: **Frequency analysis with an example image.** (Top) Components belonging to different frequency intervals for the input and MAE-reconstructed image. (Bottom) The peak signal-to-noise ratio (PSNR) of the reconstruction for components of fine-grained frequency intervals. MAE tends to focus on both the low and high-frequency components and reconstructs intricate details. We increased the image brightness for better visualization.
reconstruction process of MIM, i.e., _the input image patches and the reconstruction target_, and proposes a simple yet effective method that improves a wide range of existing MIM methods while introducing minimal computation overhead. The core of the paper is a meticulous analysis based on the milestone algorithm, MAE [20], which discloses critical but neglected bottlenecks of most pixel-based MIM methods. The analysis yields two important observations:
**(1). Reconstruction target**: Since the advent of MAE, most MIM methods have adopted raw pixels as the reconstruction target. The training objective requires perfect reconstruction of the masked patches, including intricate details,, textures. This perfect reconstruction target tends to waste the modeling capacity on short-range dependencies and high-frequency details (see Figure 1), which has been pointed out by BEiT [1] and also broadly studied in generative models [49, 50]. In addition, studies on the shape and texture bias [16, 17] indicate that models relying more on shape biases usually exhibit better transferability and robustness. However, reconstructing the fine-grained details inevitably introduces biases toward textures, thus impairing the representation quality.
**(2). Input patches**: MAE employs the commonly used Random Resized Crop (RRC) for generating augmented images. However, when coupling the RRC with an aggressive masking strategy (e.g., masking out 75% of the image patches), the visible patches in MAE's input can only cover 17.1% of the key object on average (see Figure 2 for examples and subsection 3.2 for details). Semantic-rich foregrounds are vital for learning good visual features [52]. The low foreground coverage during training likely hinders the model's ability to effectively capture shape and semantic priors, thus limiting the quality of representation.
Guided by the analysis, we propose a method consisting of two simple yet effective modifications to the MIM framework. Firstly, we apply an ideal low-pass filter to the raw images to produce the reconstruction targets, such that the representation learning prioritizes the low-frequency components (e.g., shapes and global patterns). Secondly, we substitute the commonly used RRC with a more conservative image transform operation, i.e., the Simple Resized Crop (SRC) employed by AlexNet [32], which helps to preserve more foreground information in the inputs and encourages the model to learn more discriminative representation. As our method operates directly on raw pixels of the input patches and reconstruction targets to improve the MIM framework, we dub it **PixMIM**. Figure 4 illustrates the overall architecture of our method.
PixMIM can be effortlessly integrated into most existing pixel-based MIM frameworks. We thoroughly evaluate it with three well-established approaches, MAE [20], ConvMAE [15], and LSMAE [28]. The experimental results demonstrate that PixMIM consistently enhances the performance of the baselines across various evaluation protocols, including the linear probing and fine-tuning on ImageNet-1K [11], the semantic segmentation on ADE20K [62], and the object detection on COCO [37], without compromising training efficiency or relying on additional pre-trained models. We additionally conduct experiments to assess the model's robustness against domain shift and exploit an off-the-shelf toolbox [16] to analyze the shape bias of the model, which further highlights the strength of our method. In summary, our contributions are three-fold:
* We carefully examine the reconstruction target and input patches of pixel-based MIM methods, which reveals two important but previously overlooked bottlenecks.
* Guided by our analysis, we develop a simple and effective plug-and-play method, PixMIM, which filters out the high-frequency components from the reconstruction targets and employs a simpler data transformation to maintain more object information in the inputs.
* Without bells and whistles, PixMIM consistently improves three recent MIM approaches on various downstream tasks with minimal extra computation.
## 2 Related Works
**Self-supervised Learning.** Since the success of BERT [12] and GPT series [3, 48] in natural language processing, self-supervised learning (SSL) has made revolutionary progress in various areas and gradually replaced the conventional supervised learning paradigm. One mainstream SSL framework in computer vision is contrastive learning, which has shown great effectiveness in image [19, 6, 8, 21], video [14, 27, 39, 46], and multi-modal data [4, 31, 34, 47]. The main
Figure 2: **Visualization of MAE’s input patches. For each example, we show the original image, the image after the Random Resized Crop (RRC), and the visible patches produced by MAE’s masking strategy from left to right. The coupling of RRC and an aggressive masking strategy leads to low foreground coverage in the inputs and potentially impairs the representation quality.**
philosophy of contrastive learning is to enforce the model to pull augmented views of the same data samples together while pushing views of different samples apart, such that the model learns to extract discriminative representations.
**Masked Image Modeling.** Compared to contrastive learning, masked image modeling is another paradigm for self-supervised representation learning. It works by masking a portion of an image and enforcing the model to reconstruct the masked regions. BEiT [1] masks \(40\%\) of an image and reconstructs the features of these masked regions, output by DALL-E [49]. Recently, there have also been some attempts [26, 44] to align these masked features with features from a powerful pre-trained teacher model, _e.g_., CLIP [47]. Since these target features contain rich semantic information, the student model can achieve superior results on many downstream tasks. Instead of reconstructing high-level features, MAE [20] reconstructs the masked pixel values. Besides, MAE only feeds the visible tokens into the encoder, which speeds up pre-training by 3.1\(\times\) compared to BEiT.
## 3 A Closer Look at Masked Image Modeling
In this section, we first revisit the general formulation of Masked Image Modeling (MIM) and describe the fundamental components (subsection 3.1). We then present a careful analysis with the milestone method, MAE [20], to disclose two important but overlooked bottlenecks of most pixel-based MIM approaches (subsection 3.2), which guides the design of our method.
### Preliminary: MIM Formulation
The MIM inherits the denoising autoencoder [53] with a conceptually simple pipeline, which takes the corrupted images and aims to recover the masked content. The overall framework typically consists of 1) data augmentation & corruption operation, 2) the auto-encoder model, and 3) the target generator. Table 1 compares representative MIM methods based on the three components. Formally, let \(\mathbf{I}\in\mathbb{R}^{H\times W\times 3}\) be the original image, where \(H\) and \(W\) are the height and width of the image, respectively. The corrupted image \(\hat{\mathbf{I}}\) is generated with augmentation \(\mathcal{A}(\cdot)\) and corruption \(\mathcal{M}(\cdot)\), as \(\hat{\mathbf{I}}=\mathcal{M}(\mathcal{A}(\mathbf{I}))\). As in supervised learning, the random resized crop (RRC) is the _de facto_ operation for \(\mathcal{A}(\cdot)\) in MIM [1, 20, 58]. The corruption \(\mathcal{M}(\cdot)\) is instantiated by masking image patches with different ratios (_e.g_. \(75\%\) in MAE [20] and \(60\%\) in SimMIM [58]).
The reconstruction target \(\mathbf{Y}\) is also a key component of MIM methods. We denote the target generator function by \(\mathcal{T}(\cdot)\), and the target is produced by \(\mathbf{Y}=\mathcal{T}(\mathcal{A}(\mathbf{I}))\). The community has explored various target generation strategies \(\mathcal{T}(\cdot)\), which could be roughly divided into non-parametric strategies (_e.g_., identity function for RGB, HOG) and parametric strategies (_e.g_., a pre-trained model like DALLE [49] or CLIP [47]). Our analysis focuses on the non-parametric family, as it does not rely on external pre-training data and is typically more computationally efficient.
Given the target \(\mathbf{Y}\), the autoencoder model \(\mathcal{G}(\cdot)\) takes the corrupted images \(\hat{\mathbf{I}}\) as the input and generates the prediction \(\hat{\mathbf{Y}}\). The model is then optimized by encouraging the prediction to match the pre-defined target \(\mathbf{Y}\):
\[\hat{\mathbf{Y}}=\mathcal{G}(\hat{\mathbf{I}}),\quad\mathcal{L}=\mathcal{D}( \mathbf{Y},\hat{\mathbf{Y}}) \tag{1}\]
The loss \(\mathcal{L}\) is computed according to a distance measurement \(\mathcal{D}(\cdot)\) (_e.g_., \(L1\) or \(L2\) distance) between the prediction and the target. In the following analysis, we investigate MAE [20] by diagnosing its reconstruction target and input image patches, identifying two important but previously overlooked bottlenecks that could have hurt the representation quality.
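To make the formulation concrete, a minimal PyTorch-style sketch of one training step is shown below; the `model` interface is a hypothetical stand-in for \(\mathcal{G}(\cdot)\), while the masking, identity target, and \(L2\) loss follow the notation above:

```python
import torch

def mim_step(model, img, mask_ratio=0.75, patch=16):
    """One training step of Eq. (1) with an identity (raw RGB) target."""
    B, C, H, W = img.shape
    # Split the augmented image A(I) into non-overlapping patches
    patches = img.unfold(2, patch, patch).unfold(3, patch, patch)
    patches = patches.reshape(B, C, -1, patch * patch).transpose(1, 2)  # (B, N, C, p*p)
    N = patches.shape[1]

    # M(.): randomly keep (1 - mask_ratio) of the patches
    keep = int(N * (1 - mask_ratio))
    idx = torch.rand(B, N, device=img.device).argsort(dim=1)
    visible, masked = idx[:, :keep], idx[:, keep:]

    pred = model(patches, visible)   # G(.): hypothetical interface, predicts all N patches
    target = patches                 # T(.): identity, i.e., raw pixels

    # D(.): mean squared error, computed on masked patches only (as in MAE)
    per_patch = ((pred - target) ** 2).mean(dim=(-1, -2))  # (B, N)
    return per_patch.gather(1, masked).mean()
```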
### Empirical Analysis with MAE
**Reconstruction target.** MAE and most pixel-based MIM methods force the model to reconstruct intricate details of raw images. These complicated details contain textures with repeated patterns and belong to the high-frequency components in the frequency domain, which are usually independent of object shapes or scene structures. However, MAE tends to make significant efforts in encoding and reconstructing high-frequency details, as shown in Figure 1.
According to recent studies on shape and texture biases [16, 17], vision models with stronger shape biases behave more like human visual perception, demonstrating better robustness and performing better on
| Method | aug & corruption | auto-encoder | target |
|---|---|---|---|
| BEiT [1] | RRC + \(40\%\) mask | ViT+Linear | DALLE |
| SimMIM [58] | RRC + \(60\%\) mask | ViT+Linear | RGB |
| MaskFeat [55] | RRC + \(40\%\) mask | ViT+Linear | HOG |
| ConvMAE [15] | RRC + \(75\%\) mask | ConvViT+MSA | RGB |
| MAE [20] | RRC + \(75\%\) mask | ViT+MSA | RGB |

Table 1: **Empirical decomposition of MIM approaches.** aug: data augmentation. mask: mask ratio. MSA: multi-head self-attention layer. RRC: random resized crop.
Figure 3: **Computation of object coverage percentage.** In the above example, A\({}_{1}\) is the foreground area. A\({}_{2}\) is the area of the yellow region. The blue rectangular is the cropped image produced by data augmentation. The object coverage percentage is obtained by the ratio between A\({}_{2}\) and A\({}_{1}\).
downstream tasks than those with stronger texture biases. Apparently, the current reconstruction target has introduced non-negligible texture biases, which deviate from the insights of previous works and might have hurt the representation quality. In subsection 4.1, we provide a straightforward solution to de-emphasize the high-frequency components from the reconstruction target and justify its effectiveness with a quantitative analysis in Figure 5.
**Input patches.** To better understand the inputs to MIM methods at training time, we quantitatively measure how the input patches of MAE cover the foreground objects of raw images. Specifically, we adopt the binary object masks of ImageNet-1K generated by PSSL [35] and propose an object coverage percentage metric to evaluate an image processing operation \(\mathcal{F}(\cdot)\), denoted by \(\mathcal{J}(\mathcal{F})\). As illustrated by Figure 3, \(\mathcal{J}(\mathcal{F})\) is defined as the ratio between areas A\({}_{2}\) and A\({}_{1}\). A\({}_{1}\) and A\({}_{2}\) are the areas of foreground objects in the original image \(\mathbf{I}\) and the processed image \(\mathcal{F}(\mathbf{I})\), respectively. We then leverage the metric to investigate how MAE's choice of \(\mathcal{A}(\cdot)\) and \(\mathcal{M}(\cdot)\) have influenced the object coverage. As discussed in Table 1, MAE employs the commonly used RRC for \(\mathcal{A}(\cdot)\) and a masking operation with 75% mask ratio for \(\mathcal{M}(\cdot)\). We found that \(\mathcal{J}(\mathcal{A})=68.3\%\), but \(\mathcal{J}(\mathcal{M}\circ\mathcal{A})\) sharply reduces to \(17.1\%\), indicating a potential lack of foreground information in the inputs of MAE.
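Concretely, the metric reduces to a ratio of foreground pixel counts; a minimal sketch (assuming binary object masks such as those from PSSL, with the mask of the processed image mapped back to the original image's pixels) is:

```python
import numpy as np

def object_coverage(mask_original: np.ndarray, mask_processed: np.ndarray) -> float:
    """J(F) = A2 / A1: the fraction of foreground pixels of I that survive F."""
    a1 = mask_original.sum()   # A1: foreground area of the original image
    a2 = mask_processed.sum()  # A2: foreground area remaining after F (crop/mask)
    return float(a2) / max(float(a1), 1.0)
```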
As argued by DeiT III [52], the foreground usually encodes more semantics than the background, and a lack of foreground can result in sub-optimal optimization in supervised learning. In MIM, the coupling of RRC and aggressive masking might have hindered representation learning. In subsection 4.2, we rigorously review various augmentation functions \(\mathcal{A}(\cdot)\) and propose a simple workaround to preserve more foreground information in the input patches.
## 4 PixMIM
Based on the analysis, we develop a straightforward yet effective method, **PixMIM**, which addresses the two identified bottlenecks discussed in section 3. PixMIM includes two strategies: 1) generating low-frequency reconstruction targets (subsection 4.1), and 2) replacing the RRC with a more conservative augmentation (subsection 4.2). An overview of PixMIM is presented in Figure 4.
### Low-frequency Target Generation
To discourage the model from reconstructing texture-dominated high-frequency details, we propose a novel target generator \(\mathcal{T}(\cdot)\), in which we keep the target in RGB format for efficiency but filter out the high-frequency components. Specifically, we define the low-frequency target generation with the following three steps: 1) domain conversion from spatial to frequency, 2) low-frequency component extraction, and 3) reconstruction target generation from the frequency domain (see Figure 4 for an illustration).
**Step-1: Domain conversion from spatial to frequency.** We use the one-channel image \(\mathbf{I}_{i}\in\mathbb{R}^{H\times W}\) to demonstrate our approach for notation simplicity. With 2D Discrete Fourier Transform (DFT) \(\mathcal{F}_{\text{DFT}}(\cdot)\), the frequency representation of the image could be derived by:
\[\mathcal{F}_{\text{DFT}}(\mathbf{I}_{i})(u,v)=\sum_{h=0}^{H-1}\sum_{w=0}^{W-1}\mathbf{I}_{i}(h,w)e^{-i2\pi(\frac{uh}{H}+\frac{vw}{W})} \tag{2}\]
where \((u,v)\) and \((h,w)\) are the frequency-spectrum and spatial coordinates, respectively. \(\mathcal{F}_{\text{DFT}}(\mathbf{I}_{i})(u,v)\) is
Figure 4: **The architecture of PixMIM.** Guided by the analysis (section 3), PixMIM consists of straightforward strategies: 1) generates low-frequency reconstruction targets to de-emphasize the texture-dominated details and prioritize the learning of low-frequency patterns (subsection 4.1), and 2) replace the commonly used Random Resized Crop (RRC) with the less aggressive Simple Resized Crop (SRC) to alleviate the problem of missing foreground in the input patches (subsection 4.2).
the complex frequency value at \((u,v)\), \(\mathbf{I}_{i}(h,w)\) is the pixel value at \((h,w)\), and \(i\) is the imaginary unit. Please refer to the Appendix for full details of the imaginary and real parts of \(\mathcal{F}_{\text{DFT}}(\mathbf{I}_{i})(u,v)\).
**Step-2: Low-frequency component extraction.** To retain only the low-frequency components of the image \(\mathbf{I}_{i}\), we apply an ideal low-pass filter \(\mathcal{F}_{\text{LPF}}\) on the frequency spectrum \(\mathcal{F}_{\text{DFT}}(\mathbf{I}_{i})\). The ideal low-pass filter is defined as:
\[\mathcal{F}_{\text{LPF}}(u,v)=\begin{cases}1,&\sqrt{((u-u_{c})^{2}+(v-v_{c})^{ 2})}\leq r,\\ 0,&\text{otherwise}.\end{cases} \tag{3}\]
where \(v_{c}\) and \(u_{c}\) are the center coordinates of the frequency spectrum, and \(r\) is the bandwidth of the circular ideal low-pass filter that controls how many high-frequency components are filtered out from the spectrum, with \(r\in[0,\min(\frac{H}{2},\frac{W}{2})]\). The extraction process is represented as \(\mathcal{F}_{\text{LPF}}(u,v)\otimes\mathcal{F}_{\text{DFT}}(\mathbf{I}_{i})(u,v)\), where \(\otimes\) is the element-wise multiplication.
**Step-3: Reconstruction target generation.** We then apply the inverse Discrete Fourier Transform (IDFT) \(\mathcal{F}_{\text{IDFT}}\) on the filtered spectrum to generate the RGB image as the final reconstruction target:
\[\mathbf{Y}=\mathcal{F}_{\text{IDFT}}(\mathcal{F}_{\text{LPF}}(u,v)\otimes \mathcal{F}_{\text{DFT}}(\mathbf{I}_{i})(u,v)) \tag{4}\]
Both \(\mathcal{F}_{\text{DFT}}\) and \(\mathcal{F}_{\text{IDFT}}\) can be computed efficiently with the Fast Fourier Transform [2]. The computation cost of the above three steps is negligible thanks to the highly optimized implementation in PyTorch [43].
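Putting the three steps together, a minimal PyTorch sketch of the target generator is given below (our illustrative code operating on batched \((B,C,H,W)\) tensors; the released implementation may differ in details):

```python
import torch

def low_frequency_target(img: torch.Tensor, r: float) -> torch.Tensor:
    """Ideal low-pass filtered reconstruction target; img has shape (B, C, H, W)."""
    H, W = img.shape[-2:]
    # Step 1: DFT, with the zero frequency shifted to the spectrum center
    freq = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    # Step 2: circular ideal low-pass filter of bandwidth r (Eq. 3)
    u = torch.arange(H, device=img.device).view(-1, 1) - H / 2
    v = torch.arange(W, device=img.device).view(1, -1) - W / 2
    lpf = ((u ** 2 + v ** 2).sqrt() <= r).to(img.dtype)
    # Step 3: inverse DFT back to the pixel domain (Eq. 4)
    out = torch.fft.ifft2(torch.fft.ifftshift(freq * lpf, dim=(-2, -1)))
    return out.real
```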
To verify if our method successfully de-emphasizes the reconstruction of high-frequency components, Figure 5 presents a frequency analysis across 50,000 images from the validation set of ImageNet-1K, using \(r=40\). Compared to the vanilla MAE, our method produces obviously lower reconstruction PSNR at high-frequency intervals and slightly higher PSNR at low-frequency intervals, justifying the effectiveness of our method.
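The per-interval PSNR used in this analysis (Figures 1 and 5) can be computed with the same FFT machinery; the sketch below (illustrative, assuming images scaled to \([0,1]\)) band-passes both the prediction and the target before measuring PSNR:

```python
import torch

def band_psnr(pred: torch.Tensor, target: torch.Tensor,
              r_lo: float, r_hi: float) -> float:
    # Keep only frequencies whose radius lies in [r_lo, r_hi), then compute PSNR.
    H, W = pred.shape[-2:]
    u = torch.arange(H).view(-1, 1) - H / 2
    v = torch.arange(W).view(1, -1) - W / 2
    radius = (u ** 2 + v ** 2).sqrt()
    band = ((radius >= r_lo) & (radius < r_hi)).to(pred.dtype)

    def band_pass(x):
        f = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
        return torch.fft.ifft2(torch.fft.ifftshift(f * band, dim=(-2, -1))).real

    mse = ((band_pass(pred) - band_pass(target)) ** 2).mean()
    return float(10 * torch.log10(1.0 / mse))  # peak value 1.0 for [0,1] images
```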
### More Conservative Image Augmentation
Based on the analysis in section 3, we would like to retain more foreground information in the input patches to our model. As a high masking ratio is crucial for MIM to learn effective representations [20], the most straightforward strategy is to keep the corruption \(\mathcal{M}(\cdot)\) unchanged but make the augmentation function \(\mathcal{A}(\cdot)\) more conservative.
We extend our quantitative analysis of object coverage to get Figure 6, which compares RRC with two less aggressive image augmentation operations. Simple Resized Crop (SRC) is the augmentation technique used in AlexNet [32], which resizes the image by matching the smaller edge to the pre-defined training resolution (_e.g._, 224), then applies a reflect padding of 4 pixels on both sides, and finally randomly crops a square region of the specified training resolution to get the augmented image. Center Crop (CC) always takes the fixed-size crop from the center of the image. The results show that the SRC has much higher \(\mathcal{J}(\mathcal{F})\) than RRC and CC. When the masking strategy of MAE is applied, SRC produces a \(\mathcal{J}(\mathcal{F})\) of \(22.1\%\), which is very close to the upper bound (_i.e._, \(25\%\)). Therefore, we simply adopt the SRC as the augmentation function \(\mathcal{A}(\cdot)\) and take the off-the-shelf implementation from [52].
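With torchvision, SRC can be written in a few lines; the sketch below assumes a 224\(\times\)224 training resolution, and the horizontal flip is the usual companion augmentation rather than part of SRC itself:

```python
import torchvision.transforms as T

src = T.Compose([
    T.Resize(224),  # match the smaller edge to the training resolution
    T.RandomCrop(224, padding=4, padding_mode="reflect"),  # reflect-pad 4 px, then crop
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
```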
Note that when there is no image masking, the SRC raises the \(\mathcal{J}(\mathcal{F})\) from the \(68.3\%\) of RRC to \(88.2\%\), indicating that it offers less diversity than RRC, which accounts for the performance degeneration in supervised image classification observed by DeiT III [52]. But unlike supervised learning, the aggressive image masking in MIM already provides sufficient randomness, and the use of SRC will not hurt the diversity as in supervised learning.
Figure 5: **Frequency analysis of MAE and PixMIM.** The PSNR of the reconstructed image for various frequency intervals (similar to Figure 1), averaged across 50,000 images from ImageNet-1K’s validation set. PixMIM shifts the model’s focus toward low-frequency components.
Figure 6: **Object coverage analysis.** SRC retains a higher proportion than RRC and CC, even under the aggressive masking strategy of MAE. Note that the upper bound of the object coverage is 25\(\%\) when the masking strategy of MAE is applied. (RRC: random resized crop, SRC: simple resized crop, CC: center crop)
### Plug into existing MIM Methods
Unlike recent approaches such as CAE [7], MILAN [26], or BEiTv2 [44], our method is lightweight and straightforward. It can easily be plugged into most existing pixel-based MIM frameworks. To demonstrate its effectiveness and versatility, we apply our method to MAE [20], ConvMAE [15], and LSMAE [28] to obtain \(\text{PixMIM}_{\text{\tiny{MAE}}}\), \(\text{PixMIM}_{\text{\tiny{ConvMAE}}}\), and \(\text{PixMIM}_{\text{\tiny{LSMAE}}}\) respectively. The experimental results are presented in the next section.
## 5 Experiments
In subsection 5.1, we describe the experimental settings for pre-training and evaluation. Then in subsection 5.2, we apply our method to three MIM baselines (_i.e_., MAE [20], ConvMAE [15], and LSMAE [28]), compare the results with the state of the arts, and discuss the sensitivity of the ImageNet fine-tuning protocol. To complement the ImageNet fine-tuning protocol, subsection 5.3 demonstrates additional analyses by checking the robustness of pre-trained models with out-of-distribution (OOD) ImageNet variants and conducting a shape bias analysis. Finally, subsection 5.4 provides comprehensive ablation studies for our method.
### Experiment Settings
We evaluate our methods and validate our design components with extensive experiments over image classification on ImageNet-1K [11], object detection on COCO [37], and semantic segmentation on ADE20K [62]. Unless otherwise specified, we report the performance with ViT-B [13].
**ImageNet-1K [11]** ImageNet-1K consists of 1.3M images of 1k categories and is split into the training and validation sets. When applying our methods to MAE [20], ConvMAE [15], and LSMAE [28], we strictly follow their original pre-training and evaluation settings on ImageNet-1K to guarantee the fairness of experiments, including the pre-training schedule, network architecture, learning rate setup, and fine-tuning protocols, etc. The only exception is that we increase the batch size of ConvMAE from 1024 to 4096 to accelerate the pre-training, while this change does not affect the performance according to our observations. We provide complete implementation details in the Appendix.
**ADE20K [62]** For the semantic segmentation experiments on ADE20K, we follow the basic off-the-shelf settings from MAE [20]. A UperNet [57] is fine-tuned for 160k iterations with a batch size of 16. In addition, we also turn on the relative position bias and initialize them with zero. We report
| Method | Target | Epoch | ImageNet ft (%) | ImageNet lin (%) | COCO\({}^{\dagger}\) AP\({}^{\text{box}}\) | COCO\({}^{\dagger}\) AP\({}^{\text{mask}}\) | ADE20K mIoU |
|---|---|---|---|---|---|---|---|
| **Supervised learning** | | | | | | | |
| DeiT III [52] | - | 800 | 83.8 | - | - | - | 49.3 |
| **MIM w/ pre-trained target generator** | | | | | | | |
| BEiT [1] | DALLE | 800 | 83.2 | 56.7 | - | - | 45.6 |
| CAE [7] | DALLE | 800 | 83.8 | 68.6 | 49.8 | 43.9 | 49.7 |
| MILAN [26] | CLIP-B | 400 | 85.4 | 78.9 | 52.6 | 45.5 | 52.7 |
| BEiT-v2 [44] | VQ-KD | 1600 | 85.5 | 80.1 | - | - | 53.1 |
| MaskDistill [45] | CLIP-B | 800 | 85.5 | - | - | - | 54.3 |
| **MIM w/o pre-trained target generator** | | | | | | | |
| MaskFeat [55] | HOG | 1600 | 84.0 | 62.3 | 52.3 | 46.4 | 48.3 |
| SemMAE [33] | RGB | 800 | 83.4 | 65.0 | - | - | 46.3 |
| SimMIM [58] | RGB | 800 | 83.8 | 56.7 | - | - | - |
| MAE\({}^{*}\) [20] | RGB | 800 | 83.3 | 65.6 | 51.3 | 45.7 | 46.1 |
| PixMIM\({}_{\text{MAE}}\) | RGB | 800 | 83.5 (+0.2) | 67.2 (+1.6) | 51.7 (+0.4) | 46.1 (+0.4) | 47.3 (+1.2) |
| ConvMAE\({}^{*}\) [15] | RGB | 800 | 84.6 | 68.4 | 52.0 | 46.3 | 50.2 |
| PixMIM\({}_{\text{ConvMAE}}\) | RGB | 800 | 85.0 (+0.4) | 70.5 (+2.1) | 53.1 (+1.1) | 47.0 (+0.7) | 51.3 (+1.1) |
| LSMAE\({}^{*}\) [28] | RGB | 800 | 83.2 | 63.7 | 51.0 | 45.4 | 48.5 |
| PixMIM\({}_{\text{LSMAE}}\) | RGB | 800 | 83.6 (+0.4) | 66.7 (+3.0) | 52.1 (+1.1) | 46.3 (+0.9) | 50.1 (+1.6) |

Table 2: **Performance comparison of MIM methods on various downstream tasks.** We report the results with fine-tuning (ft) and linear probing (lin) experiments on ImageNet-1K, object detection on COCO, and semantic segmentation on ADE20K. The backbone of all experiments is ViT-B [13]. \(*\): numbers are reported by running the official code release. \(\dagger\): As there is no uniform number of fine-tuning epochs for MAE, LSMAE, and ConvMAE for object detection, we fine-tuned PixMIM using the same number of epochs as each respective base method.
the Mean Intersection over Union (mIoU) results averaged over two runs for a robust comparison. The full details can be found in the Appendix.
**COCO**[37] For object detection experiments on COCO, we adopt the Mask R-CNN approach [22] that produces bounding boxes and instance masks simultaneously, with the ViT as the backbone. Similar to MAE, we employ the box and mask AP as the metrics. For MAE and LSMAE, we use the official implementation of ViTDet [36]. For ConvMAE, we use its released official repository. More detailed settings can be found in the Appendix.
**Ablation studies** All ablation studies are based on the MAE settings. Following the common practice of previous MIM works [7, 38, 15], we pre-train all model variants on ImageNet-1K for 300 epochs and comprehensively compare their performance on linear probing, fine-tuning, and semantic segmentation. All other settings are the same as those discussed above.
### Main Results
In Table 2, we show the results of applying our simple method to MAE [20], ConvMAE [15], and LSMAE [28], and compare these results with the state-of-the-art MIM approaches. Without extra computational cost, we consistently improve the original MAE, ConvMAE, and LSMAE across all downstream tasks. The margins on linear probing, object detection, and semantic segmentation are remarkable. Specifically, \(\text{PixMIM}_{\text{LSMAE}}\) significantly improves the original LSMAE on linear probing and semantic segmentation by 3.0% and 1.6%, respectively. To further demonstrate the effectiveness of our method across various pre-training schedules, we plot the _performance vs. epoch_ curves in Figure 7. The curves of \(\text{PixMIM}_{\text{BASE}}\), \(\text{PixMIM}_{\text{ConvMAE}}\), and \(\text{PixMIM}_{\text{LSMAE}}\) consistently remain above the corresponding base methods by clear gaps. All these results demonstrate the universality and scalability of our methods.
**Methods with pre-trained target generator.** Although the methods with a powerful pre-trained target generator [44, 26] achieve the best results in Table 2, they rely on extra pre-training data and bring significant computational overhead to MIM when generating targets dynamically. In contrast, our improvements come with negligible cost and take a step towards closing the gap between pixel-based approaches and those relying on pre-trained target generators.
**Remarks on the ImageNet fine-tuning protocol.** According to Table 2, the improvements brought by our method on the ImageNet fine-tuning protocol are less obvious than those on the other three protocols. Table 3 investigates the correlation between the evaluation protocols by _sorting_ six MIM approaches based on their ImageNet fine-tuning performance, and we have the following observations:
* With the same network backbone and without using extra pre-trained models for generating training targets, the ImageNet fine-tuning performances of various methods show only marginal gaps.
\begin{table}
\begin{tabular}{l c c c c c} Method & Target & Backbone & ft & lin & seg \\ \hline LSMAE [28] & RGB & ViT-B & \(83.2\) & 63.7 & 48.5 \\ MAE [20] & RGB & ViT-B & \(83.3\) & 65.6 & 46.1 \\ \(\text{PixMIM}_{\text{MAE}}\) & RGB & ViT-B & \(83.5\) & 67.2 & 47.8 \\ \(\text{PixMIM}_{\text{LSMAE}}\) & RGB & ViT-B & \(83.6\) & 66.7 & 50.1 \\ \(\text{SimMIM}\)[58] & RGB & ViT-B & \(83.8\) & 56.7 & - \\ \(\text{MaskFeat}\)[55] & HOG & ViT-B & \(84.0\) & 62.3 & 48.3 \\ \end{tabular}
\end{table}
Table 3: **Investigating the ImageNet fine-tuning protocol.** Six MIM approaches are _sorted_ based on their ImageNet fine-tuning (ft) performance. The fine-tuning result alone hardly distinguishes different approaches with the same backbone and not using an extra pre-trained model for generating training targets, and it does not necessarily correlate with other evaluation protocols. Best viewed in color.
Figure 7: **Performance vs. epoch plots.** With different training epochs, \(\text{PixMIM}\) consistently brings significant gains to the baseline MIM approaches across various evaluation protocols.
* A better result on ImageNet fine-tuning does not necessarily mean better performance on linear probing or semantic segmentation. This is also shown by the curves of LSMAE in Figure 7.
Hence, we argue that ImageNet fine-tuning is _not a sensitive metric_, and we should include more protocols for comprehensively evaluating the representation quality. A potential explanation provided by CAE [7] is that the pre-training and fine-tuning data follow the same distribution, which can narrow the gap among different methods. We provide additional analyses in the next subsection to complement the ImageNet fine-tuning protocol.
### Additional Analyses
Two additional experiments are presented to complement the less sensitive ImageNet fine-tuning protocol and further validate the effectiveness of our method.
**Robustness checking.** We compare the pre-trained models on four out-of-distribution ImageNet variants: ImageNet-Corruption [24], ImageNet-Adversarial [25], ImageNet-Rendition [23], and ImageNet-Sketch [54]. These datasets introduce various domain shifts to the original ImageNet-1K and are widely used to assess a model's robustness and generalization ability. Table 4 shows that \(\text{PixMIM}_{\text{\tiny MAE}}\), \(\text{PixMIM}_{\text{\tiny ConvMAE}}\), and \(\text{PixMIM}_{\text{\tiny LSMAE}}\) consistently outperform their baselines, and the margins of improvement are much more pronounced than those on the validation set of ImageNet-1K. The better robustness against domain shifts strengthens the value of our simple yet effective method.
**Shape bias analysis.** We take the off-the-shelf shape bias toolbox [16] to analyze our pre-trained models. Shape bias measures how much the model relies on shapes to extract the semantic representation of the image, quantified as the fraction of correct decisions based on object shape. Figure 8 shows that \(\text{PixMIM}_{\text{\tiny MAE}}\), \(\text{PixMIM}_{\text{\tiny ConvMAE}}\), and \(\text{PixMIM}_{\text{\tiny LSMAE}}\) improve the shape bias of their baselines, confirming that our methods prevent the model from being excessively texture-biased by filtering out the high-frequency components of the target image. The colored lines denote the _weighted average_ of shape bias across different categories for different methods.
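For reference, the quantity plotted can be sketched as follows; this is a minimal version of the definition reported by the toolbox [16] (the official code additionally averages per category, which we omit here):

```python
def shape_bias(preds, shape_labels, texture_labels):
    """Shape bias on cue-conflict images: among predictions that match
    either the shape or the texture label, the fraction that match the
    shape label. A simplified sketch, not the toolbox implementation."""
    shape_hits = sum(p == s for p, s in zip(preds, shape_labels))
    texture_hits = sum(p == t for p, t in zip(preds, texture_labels))
    return shape_hits / max(shape_hits + texture_hits, 1)
```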
### Ablation Studies
We further conduct ablation studies for our key design components: the filtering of the high-frequency components of the target image and the use of the Simple Resized Crop.
**The bandwidth of the low-pass filter.** Table 4(a) investigates how varying the bandwidth \(r\) influences the vanilla MAE. All model variants in the table are trained for 300 epochs following the training recipes of MAE. The optimal bandwidth is 40, and it improves the baseline significantly (_i.e_., \(+1.2\%\) on linear probing and \(+1.7\%\) on semantic segmentation). A narrow bandwidth could discard important information about the image (_e.g_., edges of objects), leading to a performance drop. Conversely, an overly large bandwidth fails to remove inessential textures effectively.
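For concreteness, the low-pass filtering of the reconstruction target can be sketched as below. The circular frequency mask and the reading of the bandwidth as a radius in frequency pixels are our assumptions for illustration, not the official implementation:

```python
import torch

def low_pass_target(img: torch.Tensor, bandwidth: int) -> torch.Tensor:
    """Keep only frequency components within `bandwidth` of the spectrum
    center (an ideal low-pass filter), then return to pixel space.
    img: (C, H, W) float tensor."""
    _, h, w = img.shape
    freq = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    ys = (torch.arange(h) - h // 2).view(-1, 1).float()
    xs = (torch.arange(w) - w // 2).view(1, -1).float()
    mask = (ys ** 2 + xs ** 2).sqrt() <= bandwidth  # centered disk
    freq = freq * mask                              # drop high frequencies
    return torch.fft.ifft2(torch.fft.ifftshift(freq, dim=(-2, -1))).real
```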
**Replace RRC with SRC.** Table 4(b) compares different data augmentations. The simple resized crop (SRC) brings non-trivial improvement over the original random resized crop (RRC) used by MAE on both linear probing and semantic segmentation. However, recall that in DeiT III [52], replacing RRC with SRC degrades the performance, as it decreases the diversity of the cropped images and impairs the model's generalization ability. The opposite results we obtain in MIM here suggest that RRC could have led to the severe issue of missing foreground, which is further confirmed by the fact that even a simple center crop can outperform RRC in linear probing and semantic segmentation.
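A minimal torchvision sketch of the two augmentations follows; the resize size (256) and the RRC scale range (0.2, 1.0) are assumed values in the spirit of the MAE and DeiT III recipes, not the exact settings used here:

```python
import torchvision.transforms as T

# Random Resized Crop (RRC): random area/aspect crop, then resize to 224.
rrc = T.Compose([
    T.RandomResizedCrop(224, scale=(0.2, 1.0),
                        interpolation=T.InterpolationMode.BICUBIC),
    T.RandomHorizontalFlip(),
])

# Simple Resized Crop (SRC): resize the shorter side first, then take a
# plain random 224x224 crop, which keeps most of the foreground in view.
src = T.Compose([
    T.Resize(256, interpolation=T.InterpolationMode.BICUBIC),
    T.RandomCrop(224, pad_if_needed=True),
    T.RandomHorizontalFlip(),
])
```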
To better support our analysis on the input patches of
\begin{table}
\begin{tabular}{l l l l l} Method & IN-C\(\downarrow\) & IN-A & IN-R & IN-S \\ \hline LSMAE & 48.8 & 34.2 & 50.3 & 36.2 \\ \(\text{PixMIM}_{\text{\tiny LSMAE}}\) & 48.0 (-0.8) & 36.1 (+1.9) & 50.8 (+0.5) & 37.1 (+0.9) \\ MAE & 51.7 & 35.9 & 48.3 & 34.5 \\ \(\text{PixMIM}_{\text{\tiny MAE}}\) & 49.9 (-1.8) & 37.1 (+1.2) & 49.6 (+1.3) & 35.9 (+1.4) \\ \(\text{ConvMAE}\) & 45.5 & 50.8 & 54.6 & 41.1 \\ \(\text{PixMIM}_{\text{\tiny ConvMAE}}\) & 45.3 (-0.2) & 52.5 (+1.7) & 55.3 (+0.7) & 41.8 (+0.7) \\ \end{tabular}
\end{table}
Table 4: **Robustness evaluation on ImageNet variants.** To complement the less sensitive ImageNet fine-tuning protocol, we further evaluate the fine-tuned models from the main table on four ImageNet variants. Results are reported in top-1 accuracy, except for IN-C which uses the mean corruption error.
Figure 8: **Shape bias analysis.** PixMIM consistently improves the shape bias of the baselines. Each vertical line is the _weighted average_ of all 16 categories.
MAE, we conduct reverse engineering of SRC, which crops mostly the background region instead of the foreground (_i.e_., the BG entry in Table 4(b)). Please check the Appendix for implementation details and visualizations. The results demonstrate that the absence of foreground information can significantly impair the representation quality, further confirming our analysis regarding the input patches.
Table 4(c) further verifies that the gains brought by the two components of PixMIM effectively accumulate over all three evaluation protocols. We also extend Table 4(b) to a non-object-centric dataset, Places365 [40]. Table 6 shows that SRC still brings non-negligible improvements over RRC when pre-trained on scene-scale images, suggesting that the missing-foreground issue in pixel-based MIM is universal and not specific to object-centric datasets (_e.g_., ImageNet).
## 6 Limitations and Future Work
Currently, our experiments are based on ViT-B [13], which is also the common practice in some other works [33, 58]. Some studies, like [20], suggest that the same method can still obtain equivalent gains after extending experiments to larger models, _e.g_., ViT-L or ViT-H, but evaluating the scalability of our method on larger models is still expected. In addition, the bandwidth \(r\) is designed as a hyper-parameter and may vary across different datasets or input resolutions, so a self-adaptive bandwidth is also desirable. Finally, self-supervised pre-training has been criticized for consuming many computational resources. Even though our method brings negligible computation overhead, making the entire pre-training pipeline more efficient should be one of the directions for future research.
## 7 Conclusion
In this paper, we first provide an empirical analysis of the milestone algorithm, MAE, from the perspective of input patches and reconstruction targets, identifying the potential bottlenecks of existing pixel-based MIM approaches. Based on the analysis, we propose a simple yet effective method, PixMIM, without introducing extra computation overhead or complicating the pre-training pipeline. When applied to three representative pixel-based MIM approaches, PixMIM brings consistent performance boosts across various downstream tasks and improves the model's robustness, demonstrating its effectiveness and universality.
|
2301.03171 | Refined finite-size analysis of binary-modulation continuous-variable quantum key distribution | Recent studies showed the finite-size security of binary-modulation CV-QKD protocols against general attacks. However, they gave poor key-rate scaling against transmission distance. Here, we extend the security proof based on complementarity, which is used in the discrete-variable QKD, to the previously developed binary-modulation CV-QKD protocols with the reverse reconciliation under the finite-size regime and obtain large improvements in the key rates. Notably, the key rate in the asymptotic limit scales linearly against the attenuation rate, which is known to be optimal scaling but is not achieved in previous finite-size analyses. This refined security approach may offer full-fledged security proofs for other discrete-modulation CV-QKD protocols. | Takaya Matsuura, Shinichiro Yamano, Yui Kuramochi, Toshihiko Sasaki, Masato Koashi | 2023-01-09T05:11:23Z | http://arxiv.org/abs/2301.03171v3 | # Refined finite-size analysis of binary-modulation continuous-variable quantum key distribution
###### Abstract
Recent studies showed the finite-size security of binary-modulation CV-QKD protocols against general attacks. However, they gave poor key-rate scaling against transmission distance. Here, we extend the security proof based on the complementarity, which is used in the discrete-variable QKD, to the previously developed binary-modulation CV-QKD protocols with the reverse reconciliation under the finite-size regime and obtain large improvements in the key rates. Notably, the key rate in the asymptotic limit scales linearly against the attenuation rate, which is known to be optimal scaling but is not achieved in previous finite-size analyses. This refined security approach may offer full-fledged security proofs for other discrete-modulation CV-QKD protocols.
## 1 Introduction
Quantum key distribution (QKD) [1] enables two remote parties to share identical secret bits that are secure against arbitrary eavesdropping allowed by the laws of quantum mechanics. QKD combined with the one-time pad [2] can thus realize the information-theoretic security of bipartite communication. Nowadays, there is increasing interest in implementing QKD in the real world. Among other things, continuous-variable (CV) QKD [3, 4, 5, 6, 7, 8, 9] has advantages for short-distance, high-bit-rate QKD due to the low cost of its implementation and its affinity to wavelength division multiplexing. This is because homodyne and heterodyne detectors used in CV-QKD protocols do not require a low-temperature environment and have good wavelength selectivity. The (single-)photon detectors used in discrete-variable (DV) QKD, on the other hand, typically require a low-temperature environment for stable operation and a high-quality frequency filter for wavelength division multiplexing.
The main problems of CV-QKD protocols are difficulties in their complete security proofs. Compared to the DV-QKD protocols, most of which have complete security proofs even in the finite-size regime, almost all the CV-QKD protocols only have asymptotic security proofs [10, 11, 12, 13, 14, 15, 16, 17, 18, 19] or security proofs against collective attacks [20, 21, 22, 23, 24, 25]. There are, however, some results for the composable finite-size security against general attacks. One is for the protocol using the two-mode squeezed vacuum state [26, 27], whose security proof is based on the entropic uncertainty relation on the infinite dimension [28]. Unfortunately, this protocol has difficulty in its implementation and poor key-rate scaling against the transmission distance. Another is for the \(U(N)\)-symmetric protocol that uses coherent states with their complex amplitudes modulated according to a Gaussian distribution [29, 21, 30]. The security proof for this type of protocol utilizes the de Finetti reduction theorem [31, 32] to the i.i.d. case. This methodology has proved the security of several
\(U(N)\)-invariant CV-QKD protocols [33, 34]. However, in practice, ideal Gaussian modulation cannot be implemented and should be approximated by a finite number of coherent states. It turns out that an overwhelming number of coherent states is needed to directly approximate the Gaussian ensemble for the security condition to be satisfied [35, 36]. If we try to mitigate the required number, additional assumptions are needed, which makes it difficult to apply it in the finite-size regime [15]. The other completely different approach [37, 38] is targeted at the discrete-modulation CV QKD from the beginning. Refs. [37, 38] show the finite-size security against general attacks for a binary-modulation protocol. It also takes into account the discretization of the signal processing, such as binned homodyne and heterodyne measurements (see also [39] for this topic). Although it has a nice feature, the obtained key rate has very poor scaling against transmission distance. A possible reason for this bad performance is the fact that its security proof is based on the entanglement distillation [40, 41]. It is known that the security proof based on the entanglement distillation is too stringent in general for secure key distribution. There are alternative types of security proofs [42, 43, 44] that can be applied to general cases. In particular, for CV-QKD protocols, the security proof based on the reverse reconciliation often provides better performance than that based on the direct reconciliation [10], which may be unattainable by a security proof based on the entanglement distillation due to its symmetric nature between the sender and the receiver in the security proof.
**Contributions of this paper.** In this article, we aim to develop another approach to carry out the finite-size security proof for the discrete-modulation CV QKD against general attacks. The approach should be able to exploit the benefit of the reverse reconciliation. To do it concretely, we develop refined security proofs based on the reverse reconciliation for the binary-modulation CV-QKD protocols proposed in Refs. [37, 38], i.e., the protocols in which the sender Alice performs BPSK-type modulation according to her randomly generated bit, and the receiver Bob randomly performs a homodyne measurement, a heterodyne measurement, or discards the received pulse ("trash") [37], or performs a heterodyne measurement followed by a random selection of the post-processing of the outcome [38]. We use the same apparatuses and setups as those in Refs. [37, 38] but slightly change the protocols. To refine the security proofs, we use an approach based on the complementarity [43, 45] under the reverse reconciliation, which is more general than the one based on the entanglement distillation [46] and treats Alice and Bob asymmetrically in the security proof. In these refined security proofs, we have degrees of freedom that did not appear in the previous analyses. By setting these degrees of freedom to be optimal in the pure-loss channel [47], we obtain a significant improvement in the key gain rates; in fact, the asymptotic key rates of the protocols scale linearly with respect to the attenuation rate of the pure-loss channel, which is known to be the optimal scaling for one-way QKD [47]. This shows that we can exploit the benefit of using the reverse reconciliation in the approach based on complementarity. Although the protocols are still fragile against the excess noise, this approach itself may be a step towards the full-fledged security proofs for discrete-modulation CV QKD.
**Organization of this paper.** The article is organized as follows. In Section 2, we provide the refined security proofs based on the complementarity [43] for protocols that use the same experimental setups as proposed in Refs. [37, 38]. The section is further divided into three parts. The first part 2.1 defines the actual protocols, which are almost the same as the ones in Refs. [37, 38], and develops virtual protocols for the complementarity approach [43]. In the second part 2.2, we derive an explicit form of the phase error operator defined by the virtual procedure of the previous part. In the third part 2.3, we finish the finite-size security proof by developing operator inequalities. In Section 3, we numerically demonstrate the improved performance of the protocols with our refined security proof. Finally, in Section 4, we wrap up our article by discussing future work and open problems.
## 2 Security proof
### Actual, virtual, and estimation protocols
In this section, we define two binary-modulation CV-QKD protocols that are closely related to the ones proposed in Refs. [37, 38], and present their security proofs based on the reverse reconciliation. The definition of the (composable) security is the same as that in Ref. [37]. The setups of the protocols are illustrated in Fig. 1. In the following, a random number is denoted with a hat such as \(\hat{\cdot}\). For the places where the slash "/" is used, one can adopt either its left-hand side or right-hand side depending on which of "Homodyne protocol" or "Heterodyne protocol" defined in Fig. 1 one chooses. Note that Homodyne protocol is the same as the protocol proposed in Ref. [37] except for the definition of \(f_{\text{suc}}(x)\) as well as the way of bit error correction, and Heterodyne protocol is the same as the protocol proposed in Ref. [38] except for the additional trash round as well as the way of bit error correction.
Prior to the protocol, Alice and Bob determine the number \(N\) of total rounds, the acceptance probability function \(f_{\text{suc}}(x)\) (\(x\in\mathbb{R}\)) of the homodyne/heterodyne measurement satisfying \(f_{\text{suc}}(x)+f_{\text{suc}}(-x)\leq 1\), an odd integer \(m\) and a real \(r\) for the test function \(\Lambda_{m,r}(\nu)\coloneqq e^{-r\nu}(1+r)L_{m}^{(1)}((1+r)\nu)\) with \(L_{m}^{(1)}\) being the associated Laguerre polynomial [37], and the protocol parameters \((\mu,p_{\text{sig}},p_{\text{test}},p_{\text{trash}},\beta,s,\kappa,\gamma)\) satisfying \(p_{\text{sig}}+p_{\text{test}}+p_{\text{trash}}=1\) and \(\beta<\sqrt{\mu}\), where all the parameters are positive. Alice and Bob then run the protocol described in Box 1. Unless aborted, the protocol generates a shared final key of length
\[\hat{N}^{\text{fin}}=\max\left\{\hat{N}^{\text{suc}}-\left\lceil\hat{N}^{ \text{suc}}h\Big{(}U(\hat{F},\hat{N}^{\text{trash}})/\hat{N}^{\text{suc}}\Big{)} \right\rceil-s,0\right\}, \tag{1}\]
where \(\lceil\cdot\rceil\) is the ceiling function, the function \(h(x)\) is defined as
\[h(x)\coloneqq\begin{cases}-x\log_{2}(x)-(1-x)\log_{2}(1-x)&(x\leq 1/2)\\ 1&(x>1/2)\end{cases}, \tag{2}\]
and the function \(U(\hat{F},\hat{N}^{\text{trash}})\) will be specified later.
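For concreteness, Eqs. (1)-(2) amount to the following small Python sketch (ours, for illustration only); the bound \(U(\hat{F},\hat{N}^{\text{trash}})\) is passed in as a number since its form is specified later, and \(h(0)=0\) is adopted as the usual convention:

```python
import math

def h(x: float) -> float:
    """Binary entropy of Eq. (2), clipped to 1 for x > 1/2 (h(0) = 0)."""
    if x <= 0.0:
        return 0.0
    if x > 0.5:
        return 1.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def final_key_length(n_suc: int, u: float, s: int) -> int:
    """N^fin of Eq. (1); u stands for the bound U(F, N^trash)."""
    return max(n_suc - math.ceil(n_suc * h(u / n_suc)) - s, 0)
```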
Figure 1: Setups of the protocols. In both protocols, the sender Alice modulates the optical phase of a laser pulse prepared in a coherent state \(|\mu\rangle\) with \(0\) or \(\pi\) according to her random bit \(\hat{a}=0\) or \(1\). (a) “Homodyne protocol”, which is similar to the protocol proposed in Ref. [37]. In this protocol, the receiver Bob randomly switches three types of measurements according to probability \(p_{\text{sig}}\), \(p_{\text{test}}\), and \(p_{\text{trash}}\), respectively. In “Signal”, Bob performs homodyne measurement and obtains the outcome \(\hat{x}\in\mathbb{R}\). Then he obtains \(\hat{b}\in\{0,1\}\) with probability \(f_{\text{suc}}\big{(}(-1)^{\hat{b}}\hat{x}\big{)}\), respectively, or announces “Failure” with probability \(1-f_{\text{suc}}(\hat{x})-f_{\text{suc}}(-\hat{x})\). In “Test”, Bob performs heterodyne measurement and obtains the outcome \(\hat{\omega}\in\mathbb{C}\). Then he computes \(\Lambda_{m,r}\big{(}|\hat{\omega}-(-1)^{\hat{a}}|^{2}\big{)}\) with Alice’s bit \(\hat{a}\) announced. In “Trash”, Bob discards the received optical pulse and produces no outcome. (b) “Heterodyne protocol”, which is similar to the protocol proposed in Ref. [38]. In this protocol, the receiver Bob performs heterodyne measurement, obtains the outcome \(\hat{\omega}\), and randomly switches three types of post-processings according to probability \(p_{\text{sig}}\), \(p_{\text{test}}\), and \(p_{\text{trash}}\), respectively. In “Signal”, Bob defines \(\hat{x}=\operatorname{Re}[\hat{\omega}]\) and follows the same procedure of obtaining the bit \(b\) or “Failure” as in Homodyne protocol. In “Test”, Bob follows the same procedure of computing \(\Lambda_{m,r}\big{(}|\hat{\omega}-(-1)^{\hat{a}}|^{2}\big{)}\) as in Homodyne protocol. In “Trash”, Bob discards the outcome \(\hat{\omega}\).
**Box 1: Actual protocol**
1. Alice generates a random bit \(\hat{a}\in\{0,1\}\) and sends an optical pulse \(\tilde{C}\) in a coherent state with amplitude \((-1)^{\hat{a}}\sqrt{\mu}\) to Bob. She repeats it for \(N\) rounds. Bob receives an optical pulse \(C\) for each of the \(N\) rounds.
2. For the received pulse \(C\) in each round, Bob chooses a label from \(\{\text{signal},\text{test},\text{trash}\}\) with probabilities \(p_{\text{sig}},p_{\text{test}}\), and \(p_{\text{trash}}\), respectively, and announces it. According to the label, Alice and Bob do one of the following procedures.
3. Bob performs a homodyne/heterodyne measurement on the received optical pulse \(C\) and obtains an outcome \(\hat{x}\in\mathbb{R}\). (For the heterodyne measurement, \(\hat{x}\) is defined as the real part of the outcome \(\hat{\omega}\in\mathbb{C}\).) Bob defines a sifted-key bit \(\hat{b}\) as \(\hat{b}=0\) with a probability \(f_{\text{suc}}(\hat{x})\) and \(\hat{b}=1\) with a probability \(f_{\text{suc}}(-\hat{x})\). When Bob has defined his sifted key bit, he announces "success", and otherwise, he announces "failure". In the case of a success, Alice (resp. Bob) records a bit \(\hat{a}\) (\(\hat{b}\)).
4. Bob performs a heterodyne measurement on the received optical pulse \(C\) and obtains an outcome \(\hat{\omega}\). Alice announces her bit \(\hat{a}\). Bob calculates the value of \(\Lambda_{m,r}(|\hat{\omega}-(-1)^{\hat{a}}\beta|^{2})\).
5. Alice and Bob produce no outcomes.
6. We refer to the numbers of "success" and "failure" signal rounds, test rounds, and trash rounds as \(\hat{N}^{\text{suc}},\hat{N}^{\text{fail}},\hat{N}^{\text{test}}\), and \(\hat{N}^{\text{trash}}\), respectively. (\(N=\hat{N}^{\text{suc}}+\hat{N}^{\text{fail}}+\hat{N}^{\text{test}}+\hat{N}^{ \text{trash}}\) holds by definition.) Bob calculates the sum of \(\Lambda_{m,r}(|\hat{\omega}-(-1)^{\hat{a}}\beta|^{2})\) obtained in the \(\hat{N}^{\text{test}}\) test rounds, which is denoted by \(\hat{F}\).
7. For error correction, they use \(H_{\text{EC}}\) bits of encrypted communication consuming a pre-shared secret key to do the following. According to (the upper bound on) the bit error rate \(e_{\text{qber}}\), Bob randomly chooses an error-correcting code and sends it with the \(H_{\text{EC}}\)-bit syndrome to Alice. Alice reconciles her sifted key accordingly.
8. Bob computes and announces the final key length \(\hat{N}^{\text{fin}}\) according to Eq. (1). Alice and Bob apply privacy amplification to obtain the final key.
For simplicity, we omitted the bit-error-sampling rounds in the above protocol. To satisfy the required correctness \(\varepsilon_{\text{cor}}\) for the final key, Alice and Bob randomly insert \(N_{\text{smp}}\) sampling rounds among \(N\) rounds in which Bob performs the same measurement as that of the signal round and estimate an upper bound \(e_{\text{qber}}\) on the bit error rate. Let \(\hat{N}^{\text{suc}}_{\text{smp}}\) be the number of "success" in \(N_{\text{smp}}\) sampling rounds, and let \(\hat{E}_{\text{obs}}\) be the number of discrepancies between Alice's and Bob's bits observed in the "success" sampling rounds. Then, Bob sets \(e_{\text{qber}}\) to
\[e_{\text{qber}}=\left.\left(\tilde{M}_{\hat{N}^{\text{suc}}+\hat{N}^{\text{suc }}_{\text{smp}},\hat{N}^{\text{suc}}_{\text{smp}},e_{\text{cor}}/2}(\hat{E}_{ \text{obs}})-\hat{E}_{\text{obs}}\right)\right/\hat{N}^{\text{suc}}, \tag{3}\]
where the function \(\tilde{M}_{N,n,\epsilon}\) is defined in Eq. (101) in Appendix A. The proof that this definition of \(e_{\text{qber}}\) upper-bounds the actual bit error rate with probability no smaller than \(1-\varepsilon_{\text{cor}}/2\) is also shown in Appendix A. The required amount \(H_{\text{EC}}\) of the error syndrome Bob sends to Alice in the bit error correction depends on the error correction method; here we assume
\[H_{\text{EC}}=\hat{N}^{\text{suc}}\left[f\,h(e_{\text{qber}})+(1-f)\right], \tag{4}\]
where \(f\in[0,1]\) denotes an error correction efficiency [48, 49, 16, 17, 18, 50] for the error correction to succeed with the probability no smaller than \(1-\varepsilon_{\text{cor}}/2\). The net key gain \(\hat{G}\) per pulse is thus given by
\[\hat{G}=(\hat{N}^{\text{fin}}-H_{\text{EC}})/(N+N_{\text{smp}}). \tag{5}\]
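The cost accounting of Eqs. (4)-(5) is equally mechanical; a sketch of ours follows, with \(e_{\text{qber}}\) taken as an already-computed input because Eq. (3) relies on the function \(\tilde{M}\) of Appendix A, which we do not reproduce here:

```python
import math

def h(x: float) -> float:
    """Binary entropy of Eq. (2) (same helper as in the previous sketch)."""
    if x <= 0.0:
        return 0.0
    if x > 0.5:
        return 1.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def syndrome_cost(n_suc: int, e_qber: float, f: float) -> float:
    """H_EC of Eq. (4), with error-correction efficiency f in [0, 1]."""
    return n_suc * (f * h(e_qber) + (1.0 - f))

def net_key_gain(n_fin: int, h_ec: float, n: int, n_smp: int) -> float:
    """Per-pulse net key gain G of Eq. (5)."""
    return (n_fin - h_ec) / (n + n_smp)
```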
Here, we do not use verification in the post-processing, unlike Refs. [37, 38], due to the subtleties of incorporating it in our security proof. The acceptance probability \(f_{\text{suc}}(x)\) should be chosen to post-select the rounds with larger values of \(x\), for which the bit error probability is expected to be lower. The definition of \(f_{\text{suc}}(x)\) in this article follows Ref. [38] and is slightly more general than that of Ref. [37]. (Note that Ref. [37] can also use this definition of \(f_{\text{suc}}(x)\).) It is ideally a step function with a threshold \(x_{\text{th}}(>0)\), but our security proof applies to any form of \(f_{\text{suc}}(x)\). The test function \(\Lambda_{m,r}(\nu)\) is the same as the one defined in Ref. [37], where it is shown to satisfy
\[\mathbb{E}_{\rho}[\Lambda_{m,r}(|\hat{\omega}-\beta|^{2})]\leq\left\langle \beta\right|\rho\left|\beta\right\rangle \tag{6}\]
for any odd integer \(m\), positive real \(r\), and density operator \(\rho\) (see Corollary 1 in Ref. [37]). The parameter \(\beta\) is typically chosen to be \(\sqrt{\eta\mu}\) with \(\eta\) being a nominal transmissivity of the quantum channel, while the security proof itself holds for any choice of \(\beta\). The parameter \(s\) is related to the overall security parameter in the security proof below.
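Since \(\Lambda_{m,r}\) is just an exponentially damped associated Laguerre polynomial, Eq. (6) is easy to probe numerically. The following is a small sketch of ours (not from Refs. [37, 38]); it uses the fact that heterodyne outcomes on a coherent state \(|\beta\rangle\) are distributed according to the Husimi Q-function, a complex Gaussian centered at \(\beta\) with variance \(1/2\) per quadrature:

```python
import numpy as np
from scipy.special import genlaguerre

def lam(nu, m: int, r: float):
    """Lambda_{m,r}(nu) = e^{-r nu} (1+r) L_m^{(1)}((1+r) nu)."""
    nu = np.asarray(nu)
    return np.exp(-r * nu) * (1 + r) * genlaguerre(m, 1)((1 + r) * nu)

# Monte-Carlo check of Eq. (6) for rho = |beta><beta|, where the
# right-hand side <beta|rho|beta> equals 1.
rng = np.random.default_rng(0)
beta, n = 0.6, 100_000
omega = (beta + rng.normal(0, np.sqrt(0.5), n)
         + 1j * rng.normal(0, np.sqrt(0.5), n))
print(lam(np.abs(omega - beta) ** 2, m=1, r=0.4).mean())
# ~1.0, i.e., the bound of Eq. (6) is saturated up to sampling error.
```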
We determine a sufficient amount of the privacy amplification according to the complementarity, or in other words, the phase error correction [43, 45], which has been widely used for the DV-QKD protocols. We aim at showing the secrecy of Bob's final key against the adversary Eve. To do so, we consider a virtual protocol in which Bob has a qubit for each success signal round such that the outcome of the \(Z\)-basis measurement on it is equivalent to his sifted key bit \(b\). Alice can do arbitrary quantum operations in the virtual protocol as long as all the statistics and available information to the adversary Eve are the same as those in the actual protocol. Then, after Bob's \(Z\)-basis measurement on the qubit, the reduced classical-quantum state between Bob and Eve in the virtual protocol is the same as that in the actual protocol.
In the following, we explicitly describe the virtual protocol. For Alice, we introduce a qubit \(A\) and assume that she entangles it with an optical pulse \(\tilde{C}\) in a state
\[\left|\Psi\right\rangle_{A\tilde{C}}\coloneqq\frac{\left|0\right\rangle_{A} \left|\sqrt{\mu}\right\rangle_{\tilde{C}}+\left|1\right\rangle_{A}\left|- \sqrt{\mu}\right\rangle_{\tilde{C}}}{\sqrt{2}}, \tag{7}\]
where \(\left|\omega\right\rangle_{\tilde{C}}\) with \(\omega\in\mathbb{C}\) denotes the coherent state with the amplitude \(\omega\), which is defined as
\[\left|\omega\right\rangle_{\tilde{C}}\coloneqq e^{-\frac{\left|\omega\right| ^{2}}{2}}\sum_{n=0}^{\infty}\frac{\omega^{n}}{\sqrt{n!}}\left|n\right\rangle_ {\tilde{C}}. \tag{8}\]
Then, the optical pulse \(\tilde{C}\) emitted by Alice is in the same state as that in the actual protocol. For Bob, we construct a process of probabilistically converting the received optical pulse \(C\) to a qubit \(B\), which can be regarded as a coherent version of Bob's signal measurement. For Homodyne protocol, consider a map \(\mathcal{K}_{C\to B}^{\text{hom}}\) defined as [37]
\[\mathcal{K}_{C\to B}^{\text{hom}}(x)(\rho_{C})\coloneqq K_{\text{suc}}^{ \text{hom}}(x)\,\rho_{C}\left(K_{\text{suc}}^{\text{hom}}(x)\right)^{\dagger} \tag{9}\]
with
\[K_{\text{suc}}^{\text{hom}}(x)\coloneqq\sqrt{f_{\text{suc}}(x)}\big{(}|0 \rangle_{B}\langle x|_{C}+|1\rangle_{B}\langle-x|_{C}\big{)}, \tag{10}\]
where \(\langle x|\) maps a state vector to the value of its wave function at \(x\); i.e., for a coherent state vector \(\left|\omega\right\rangle\), \(\langle x|\) acts as
\[\langle x|\omega\rangle=\left(\frac{2}{\pi}\right)^{\frac{1}{4}}\exp\!\left[-( x-\omega_{r})^{2}+2i\omega_{i}x-i\omega_{r}\omega_{i}\right], \tag{11}\]
where \(\omega=\omega_{r}+i\omega_{i}\) with \(\omega_{r},\omega_{i}\in\mathbb{R}\). Let \(\Pi_{\text{ev(od)}}\) denote a projection operator onto the subspace of even(odd) photon numbers. Since \(\langle x|\left(\Pi_{\text{ev}}-\Pi_{\text{od}}\right)=\left\langle-x\right|\) holds, we have
\[K_{\text{suc}}^{\text{hom}}(x)=\sqrt{2f_{\text{suc}}(x)}\big{(}|+\rangle_{B} \langle x|_{C}\,\Pi_{\text{ev}}+|-\rangle_{B}\langle x|_{C}\,\Pi_{\text{od}} \big{)}. \tag{12}\]
This defines an instrument \(\mathcal{I}_{C\to B}^{\text{hom}}\) for the process of producing the outcome \(\hat{x}\) and leaving the qubit \(B\) in a post-measurement state; i.e., given a measurable set \(\Delta\subseteq\mathbb{R}\), the unnormalized post-measurement state is given by
\[\mathcal{I}_{C\to B}^{\text{hom}}(\Delta)(\rho_{C})=\int_{\Delta}dx\; \mathcal{K}_{C\to B}^{\text{hom}}(x)(\rho_{C}) \tag{13}\]
with \(\mathrm{Tr}[\mathcal{I}_{C\to B}^{\mathrm{hom}}(\Delta)(\rho_{C})]\) being a probability of "success" signal event with the outcome \(\hat{x}\in\Delta\). Similarly, for Heterodyne protocol, consider a map \(\mathcal{K}_{C\to B}^{\mathrm{het}}\) defined as [38]
\[\mathcal{K}_{C\to B}^{\mathrm{het}}(\omega)(\rho_{C})\coloneqq K_{ \mathrm{suc}}^{\mathrm{het}}(\omega)\,\rho_{C}\big{(}K_{\mathrm{suc}}^{ \mathrm{het}}(\omega)\big{)}^{\dagger} \tag{14}\]
with
\[K_{\mathrm{suc}}^{\mathrm{het}}(\omega) \coloneqq\sqrt{\frac{f_{\mathrm{suc}}(\omega_{r})}{\pi}}\big{(} \big{|}0_{B}\langle\omega|_{C}+\big{|}1\rangle_{B}\langle-\omega|_{C}\big{)} \tag{15}\] \[=\sqrt{\frac{2f_{\mathrm{suc}}(\omega_{r})}{\pi}}\left(\left|+ \right\rangle_{B}\langle\omega|_{C}\,\Pi_{\mathrm{ev}}+\left|-\right\rangle_ {B}\langle\omega|_{C}\,\Pi_{\mathrm{od}}\right),\]
where \(\left|\omega\right\rangle\) denotes a coherent state vector and \(\omega=\omega_{r}+i\omega_{i}\) with \(\omega_{r},\omega_{i}\in\mathbb{R}\). Similarly to Homodyne protocol, we can define an instrument \(\mathcal{I}_{C\to B}^{\mathrm{het}}\) composed of the heterodyne outcome and the (unnormalized) post-measurement state, which is given by
\[\mathcal{I}_{C\to B}^{\mathrm{het}}(\Delta^{\prime})(\rho_{C})= \int_{\Delta^{\prime}}d\omega_{r}d\omega_{i}\;\mathcal{K}_{C\to B}^{ \mathrm{het}}(\omega)(\rho_{C}), \tag{16}\]
where \(\Delta^{\prime}\subseteq\mathbb{R}^{2}\) is a measurable set. If Bob measures the qubit \(B\) on the \(Z\) basis after the instrument (13) (resp. (16)), he obtains the same sifted key bit with the same probability as in the actual protocol when \(\hat{x}\in\Delta\) (resp. \(\hat{\omega}\in\Delta^{\prime}\)) [37, 38].
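As a small numerical aside (our illustration, not part of the proof), Eq. (11) can be checked directly: the squared wave function of a coherent state is the homodyne outcome density, a Gaussian with mean \(\operatorname{Re}\omega\) and variance \(1/4\) (the \(g_{m,V}\) notation of Eq. (34) below):

```python
import numpy as np

def wavefunction(x, omega: complex):
    """<x|omega> of Eq. (11) for a coherent state of amplitude omega."""
    wr, wi = omega.real, omega.imag
    return (2 / np.pi) ** 0.25 * np.exp(-(x - wr) ** 2
                                        + 2j * wi * x - 1j * wr * wi)

x = np.linspace(-6, 6, 4001)
p = np.abs(wavefunction(x, 0.8 + 0.3j)) ** 2   # homodyne density
print(np.trapz(p, x))                          # ~1.0 (normalization)
print(np.trapz(x * p, x))                      # ~0.8 (mean = Re omega)
print(np.trapz((x - 0.8) ** 2 * p, x))         # ~0.25 (variance = 1/4)
```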
At this point, one has a degree of freedom to perform quantum operations on the system \(AB\) for each outcome \(\hat{x}\) (resp. \(\hat{\omega}\)) as long as it does not change the \(Z\)-basis value of the qubit \(B\). This is because we aim at showing the secrecy of Bob's final key against the adversary Eve with Alice's system traced out. Thus, after applying the map \(\mathcal{K}_{C\to B}^{\mathrm{hom}}\) (resp. \(\mathcal{K}_{C\to B}^{\mathrm{het}}\)), we assume that Alice and Bob perform a controlled isometry \(V_{B;A\to R}^{\mathrm{hom}}(x)\) (resp. \(V_{B;A\to R}^{\mathrm{het}}(\omega)\)) of the form
\[V_{B;A\to R}^{\mathrm{hom}}(x) \coloneqq\Big{[}\lvert 0\rangle\langle 0\rvert_{B}\otimes V_{A \to R}^{(0)}(x)+\lvert 1\rangle\langle 1\rvert_{B}\otimes V_{A\to R}^{(1)}(x)\Big{]}\, \text{C-}X_{BA} \tag{17}\] \[V_{B;A\to R}^{\mathrm{het}}(\omega) \coloneqq\Big{[}\lvert 0\rangle\langle 0\rvert_{B}\otimes V_{A \to R}^{\prime(0)}(\omega)+\lvert 1\rangle\langle 1\rvert_{B}\otimes V_{A \to R}^{\prime(1)}(\omega)\Big{]}\, \text{C-}X_{BA}, \tag{18}\]
where \(\text{C-}X_{BA}\coloneqq\lvert 0\rangle\langle 0\rvert_{B}\otimes I_{A}+\lvert 1\rangle\langle 1\rvert_{B}\otimes X_{A}\) denotes the Controlled-NOT gate and \(V_{A\to R}^{(j)}(x)\) (resp. \(V_{A\to R}^{\prime(j)}(\omega)\)) for \(j=0,1\) denotes an isometry from the system \(A\) to another system \(R\) that is no smaller than \(A\)1. If \(V_{A\to R}^{(j)}(x)\) (resp. \(V_{A\to R}^{\prime(j)}(\omega)\)) is an identity, then the analysis reduces to the previous results [37, 38]. Let \(\mathcal{V}_{B;A\to R}^{\mathrm{hom}}(x)\) (resp. \(\mathcal{V}_{B;A\to R}^{\mathrm{het}}(\omega)\)) be an adjoint action (i.e., a CPTP map) for the isometry \(V_{B;A\to R}^{\mathrm{hom}}(x)\) (resp. \(V_{B;A\to R}^{\mathrm{het}}(\omega)\)). The composition of the map \(\mathcal{V}_{B;A\to R}^{\mathrm{hom}}(x)\) and the map (9) (resp. the map \(\mathcal{V}_{B;A\to R}^{\mathrm{het}}(\omega)\) and the map (14)) with Alice's system traced out at the end defines a quantum operation \(\mathcal{F}_{AC\to B}^{\mathrm{hom}}\) (resp. \(\mathcal{F}_{AC\to B}^{\mathrm{het}}\)) that (probabilistically) outputs Bob's qubits for his sifted key as
Footnote 1: Here, a subtlety for using the verification comes in. In order to know whether verification succeeds or not, Alice has to confirm the syndrome bits for the verification. However, this procedure may not commute with the action of \(V_{B;A\to R}^{\mathrm{hom}}(x)\) (resp. \(V_{B;A\to R}^{\mathrm{het}}(\omega)\)). We do not currently have a method to evaluate how much the verification affects the secrecy condition.
\[\mathcal{F}_{AC\to B}^{\mathrm{hom}}(\rho_{AC}) =\int_{-\infty}^{\infty}dx\;\mathcal{K}_{AC\to B}^{\prime\, \mathrm{hom}}(x)(\rho_{AC}), \tag{19}\] \[\mathcal{F}_{AC\to B}^{\mathrm{het}}(\rho_{AC}) =\iint_{-\infty}^{\infty}d\omega_{r}d\omega_{i}\;\mathcal{K}_{AC \to B}^{\prime\,\mathrm{het}}(x)(\rho_{AC}), \tag{20}\]
with \(\mathcal{K}_{AC\to B}^{\prime\,\mathrm{hom}}(x)\) (resp. \(\mathcal{K}_{AC\to B}^{\prime\,\mathrm{het}}(\omega)\)) given by
\[\mathcal{K}_{AC\to B}^{\prime\,\mathrm{hom}}(x)(\rho_{AC}) \coloneqq\mathrm{Tr}_{R}\left[\mathcal{V}_{B;A\to R}^{\mathrm{hom}}(x) \circ\big{(}\mathrm{Id}_{A}\otimes\mathcal{K}_{C\to B}^{\mathrm{hom}}(x)\big{)}( \rho_{AC})\right], \tag{21}\] \[\mathcal{K}_{AC\to B}^{\prime\,\mathrm{het}}(\omega)(\rho_{AC}) \coloneqq\mathrm{Tr}_{R}\left[\mathcal{V}_{B;A\to R}^{\prime\, \mathrm{het}}(\omega)\circ\big{(}\mathrm{Id}_{A}\otimes\mathcal{K}_{C\to B}^{ \mathrm{het}}(\omega)\big{)}(\rho_{AC})\right], \tag{22}\]
where Id denotes the identity map. Note that the idea of applying the isometry \(V^{\text{hom}}_{B;A\to R}(x)\) or \(V^{\text{het}}_{B;A\to R}(\omega)\) is closely related to the twisting operation on the shield system [51, 52, 53, 54, 55]. The difference is that in our case it acts on the system \(A\) in a way that is incompatible with the \(Z\)-basis measurement on \(A\). This is allowed in a security proof based on the complementarity since what we need to prove in the virtual protocol is that the outcome of the \(Z\)-basis measurement on \(B\) is secret to Eve when the system \(A\) is traced out [43]; i.e., the system \(A\) works as a shield system.
We then introduce a virtual protocol that explicitly incorporates the action of \(\mathcal{F}^{\text{hom}}_{AC\to B}\) in Eq. (19) (resp. \(\mathcal{F}^{\text{het}}_{AC\to B}\) in Eq. (20)) in Box 2.
**Box 2: Virtual protocol**
1. Alice prepares a qubit \(A\) and an optical pulse \(\tilde{C}\) in a state \(\ket{\Psi}_{A\tilde{C}}\) defined in (7) and sends the pulse \(\tilde{C}\) to Bob. She repeats it for \(N\) rounds. Bob receives an optical pulse \(C\) for each of the \(N\) rounds.
2. For the received pulse \(C\) in each round, Bob announces a label in the same way as that at Step 2. Alice and Bob do one of the following procedures according to the label.
3. Alice and Bob perform the quantum operation on the system \(A\) and the received pulse \(C\) specified by the map \(\mathcal{F}^{\text{hom}}_{AC\to B}\) defined in Eq. (19) (resp. \(\mathcal{F}^{\text{het}}_{AC\to B}\) defined in Eq. (20)) to determine success or failure of detection, obtain the qubit \(B\) upon success, and perform the controlled isometry given in Eq. (17) (resp. Eq. (18)). Bob announces the success or failure of the detection.
4. Bob performs a heterodyne measurement on the received optical pulse \(C\), and obtains an outcome \(\hat{\omega}\). Alice measures her qubit \(A\) on \(Z\) basis and announces the outcome \(\hat{a}\in\{0,1\}\). Bob calculates the value of \(\Lambda_{m,r}(\ket{\hat{\omega}-(-1)^{\hat{a}}\beta}^{2})\).
5. Alice measures her qubit \(A\) on \(X\) basis to obtain \(\hat{a}^{\prime}\in\{+,-\}\).
6. \(\hat{N}^{\text{suc}},\hat{N}^{\text{fail}},\hat{N}^{\text{test}},\hat{N}^{ \text{trash}},\) and \(\hat{F}\) are defined in the same way as those at Step 3. Let \(\hat{Q}_{-}\) be the number of rounds with \(\hat{a}^{\prime}=-\) among the \(\hat{N}^{\text{trash}}\) trash rounds.
7. According to (the upper bound on) the bit error rate \(e_{\text{qber}}\), Bob performs \(H_{\text{EC}}\) bits of encrypted communication consuming a pre-shared secret key to send a dummy message.
8. Bob computes and announces the final key length \(\hat{N}^{\text{fin}}\) according to Eq. (1). Bob performs a randomly chosen unitary on his qubits (see the main text), and measures the first \(\hat{N}^{\text{fin}}\) qubits on the \(Z\) bases.
In the last line of Step \(5^{\prime}\), the random choice of a unitary is constructed so that, along with the subsequent \(\hat{N}^{\text{fin}}\)-qubit measurement on the \(Z\) bases, it is equivalent to the privacy amplification. This is possible because for any invertible \(n\times n\) linear transformation \(C\) on the \(n\)-bit sequence, there always exists a corresponding unitary \(U(C)\) that satisfies \(U(C)\ket{\mathbf{z}}=\ket{C\mathbf{z}}\) on the \(Z\) basis. As has already been claimed, if Eve performs the same attacks as those in the actual protocol, the resulting classical-quantum state between Bob and Eve is the same as that in the actual protocol.
The complementarity argument [43] in a reverse reconciliation scenario relates the amount of privacy amplification to the so-called phase error patterns of Bob's qubits. Suppose that, just before the \(Z\)-basis measurement at Step \(5^{\prime}\) of the virtual protocol, Bob's quantum state on the first \(\hat{N}^{\text{fin}}\) qubits is arbitrarily close to \(\ket{+}\bra{+}^{\otimes N^{\text{fin}}}\). Then, the secrecy condition of the final key is satisfied [43, 45, 56]. For this to be true, the errors on the \(X\) bases (i.e., the phase errors) on Bob's qubits should be corrected by the procedure at Step \(5^{\prime}\) of the virtual protocol. To see the correctability of the phase errors at Step \(5^{\prime}\), suppose that Bob measured his \(\hat{N}^{\text{suc}}\) qubits on the \(X\) basis \(\{\ket{+},\ket{-}\}\) at the end of Step \(3^{\prime}\), and obtained a sequence of \(+\) and \(-\). The minuses in the sequence are regarded as phase errors. It has already been known that, if we can find an upper
bound on the number of possible phase-error patterns, then we can prove the security [43]. To make the argument more precise, we introduce the estimation protocol in Box 3.
**Box 3: Estimation protocol**
1. Alice prepares a qubit \(A\) and an optical pulse \(\tilde{C}\) in a state \(\ket{\Psi}_{A\tilde{C}}\) defined in (7) and sends the pulse \(\tilde{C}\) to Bob. She repeats it for \(N\) rounds. Bob receives an optical pulse \(C\) for each of the \(N\) rounds.
2. For the received pulse \(C\) in the \(i\)th round (\(i=1,\dots,N\)), Bob announces a label in the same way as that at Step 2. Alice and Bob do one of the following procedures according to the label and obtain the values of random variables \(\hat{N}^{\text{suc}\,(i)}_{\text{ph}}\), \(\hat{F}^{(i)}\), and \(\hat{Q}^{(i)}_{-}\). Unless explicitly written, these random variables are set to zero.
3. Alice and Bob do the same procedure as that at "signal" of Step \(2^{\prime}\). Upon "success", Bob performs the \(X\)-basis measurement on qubit \(B\) and obtains \(\hat{b}^{\prime}\in\{+,-\}\). When \(\hat{b}^{\prime}=-\), \(\hat{N}^{\text{suc}\,(i)}_{\text{ph}}\) is set to be unity.
4. Alice and Bob do the same procedure as that at "test" of Step \(2^{\prime}\). Then \(\hat{F}^{(i)}\) is set to be \(\Lambda_{m,r}(|\hat{\omega}-(-1)^{\hat{a}}\beta|^{2})\).
5. Alice does the same procedure as that at "trash" of Step \(2^{\prime}\). When \(\hat{a}^{\prime}=-\), \(\hat{Q}^{(i)}_{-}\) is set to be unity.
6. Same as Steps \(3^{\prime}\) of the virtual protocol. Note that \(\hat{F}=\sum_{i=1}^{N}\hat{F}^{(i)}\) and \(\hat{Q}_{-}=\sum_{i=1}^{N}\hat{Q}^{(i)}_{-}\) hold.
7. Regarding \(+\) as zero and \(-\) as unity for each \(\hat{b}^{\prime}\) in success signal round, define the \(\hat{N}^{\text{suc}}\)-bit sequence \(\hat{\mathbf{x}}_{\text{ph}}\). Let \(\hat{N}^{\text{suc}}_{\text{ph}}\) be the Hamming weight of \(\hat{\mathbf{x}}_{\text{ph}}\), i.e., \(\hat{N}^{\text{suc}}_{\text{ph}}=\sum_{i=1}^{N}\hat{N}^{\text{suc}\,(i)}_{ \text{ph}}\).
The task of proving the security of the actual protocol is then reduced to constructing a function \(U(\hat{F},\hat{N}^{\text{trash}})\) that satisfies
\[\Pr\left[\hat{N}^{\text{suc}}_{\text{ph}}\leq U(\hat{F},\hat{N}^{\text{trash }})\right]\geq 1-\epsilon \tag{23}\]
for any attack in the estimation protocol and setting the final-key length to \(\hat{N}^{\text{fin}}=\hat{N}^{\text{suc}}-H_{\text{PA}}-s\), where \(H_{\text{PA}}\) is defined as
\[H_{\text{PA}}\coloneqq\left[\hat{N}^{\text{suc}}h\Big{(}U(\hat{F},\hat{N}^{ \text{trash}})/\hat{N}^{\text{suc}}\Big{)}\right]. \tag{24}\]
In fact, if the condition (23) is satisfied, then the number of possible phase-error patterns can be bounded from above by \(2^{H_{\text{PA}}}\)[57]. Therefore, by extracting the \((H_{\text{PA}}+s)\)-bit error syndrome of \(\hat{\mathbf{x}}_{\text{ph}}\) using the universal\({}_{2}\) hash function, Bob could uniquely identify \(\hat{\mathbf{x}}_{\text{ph}}\) with a probability no smaller than \(1-2^{-s}\)[43, 58, 46, 44]. In the virtual protocol, the quantum operations at Step \(5^{\prime}\) can be made equivalent to the \((\hat{N}^{\text{suc}}-\hat{N}^{\text{fin}})\)-bit syndrome extraction via the universal\({}_{2}\) hash function and the error correction on the \(X\) bases of \(\hat{N}^{\text{fin}}\) qubits. Since a unitary \(U(C^{-1})\) that acts as the matrix \(C^{-1}\) on the \(Z\) bases acts as \(C^{\top}\) on the \(X\) bases, i.e., \(U(C^{-1})\ket{\mathbf{x}_{X}}=|C^{\top}\mathbf{x}_{X}\rangle\) where \(\cdot_{X}\) denotes the \(X\) basis, this procedure corresponds to the privacy amplification via the dual universal\({}_{2}\) hashing on the \(Z\) bases [58, 44] (i.e., in the actual protocol). Combining these, the condition (23) implies that the actual protocol with the final key length given in Eq. (1) is \(\epsilon_{\text{sec}}\)-secure with a security parameter \(\epsilon_{\text{sec}}=\sqrt{2}\sqrt{\epsilon+2^{-s}}+\epsilon_{\text{cor}}\)[43, 45, 56]. From now on, we thus focus on the estimation protocol for finding a function \(U(\hat{F},\hat{N}^{\text{trash}})\) to satisfy Eq. (23).
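Numerically, the overall security parameter quoted above is a one-liner; a minimal sketch of ours, with \(\epsilon\) from Eq. (23) and \(s\), \(\epsilon_{\text{cor}}\) as protocol inputs:

```python
import math

def eps_sec(eps: float, s: int, eps_cor: float) -> float:
    """epsilon_sec = sqrt(2) * sqrt(eps + 2^{-s}) + eps_cor."""
    return math.sqrt(2.0) * math.sqrt(eps + 2.0 ** (-s)) + eps_cor
```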
### Phase error operator
In this section, we explain how our new security analysis can be reduced to the previous analyses carried out in Refs. [37, 38] with a tighter operator inequality. The number of phase errors depends
on the choice of the controlled isometry \(V_{B;A\to R}^{\mathrm{hom}}(x)\) (resp. \(V_{B;A\to R}^{\mathrm{het}}(\omega)\)) in the virtual and the estimation protocol. We here take a suboptimal strategy; fix \(V_{B;A\to R}^{\mathrm{hom}}(x)\) (resp. \(V_{B;A\to R}^{\mathrm{het}}(\omega)\)) so that the probability of the phase error event \(\hat{b}^{\prime}=-\) in the estimation protocol is minimized for an ideal pure-loss channel [47] with transmission \(\eta=\beta^{2}/\mu\). When the state \(\ket{\Psi}_{A\hat{C}}\) in Eq. (7) is put into a pure-loss channel with the channel output being \(\ket{\pm\beta}_{C}\), the resulting state \(\ket{\Phi}_{ACE}\) on systems \(A,C\), and an adversary's system \(E\) (i.e., an environment of the pure-loss channel) is given by
\[\ket{\Phi}_{ACE}=\frac{1}{\sqrt{2}}\left(\ket{0}_{A}\ket{\beta}_{C}\ket{\sqrt{ \mu-\beta^{2}}}_{E}+\ket{1}_{A}\ket{-\beta}_{C}\ket{-\sqrt{\mu-\beta^{2}}}_{E} \right). \tag{25}\]
Tracing out the system \(E\), the reduced state \(\Phi_{AC}\) is given by
\[\Phi_{AC}=\left(1-q_{\mu,\beta}\right)\ket{\phi_{+}}\bra{\phi_{+}}_{AC}+q_{\mu,\beta}\ket{\phi_{-}}\bra{\phi_{-}}_{AC}, \tag{26}\]
where
\[\ket{\phi_{+}}_{AC} \coloneqq\frac{1}{\sqrt{2}}(\ket{0}\ket{\beta}+\ket{1}\ket{-\beta })=\ket{+}_{A}\otimes\Pi_{\mathrm{ev}}\ket{\beta}_{C}+\ket{-}_{A}\otimes\Pi_{ \mathrm{od}}\ket{\beta}_{C}, \tag{27}\] \[\ket{\phi_{-}}_{AC} \coloneqq\frac{1}{\sqrt{2}}(\ket{0}\ket{\beta}-\ket{1}\ket{-\beta })=\ket{+}_{A}\otimes\Pi_{\mathrm{od}}\ket{\beta}_{C}+\ket{-}_{A}\otimes\Pi_{ \mathrm{ev}}\ket{\beta}_{C}=\left(Z_{A}\otimes I_{C}\right)\ket{\phi_{+}}_{AC}, \tag{28}\]
and
\[q_{\mu,\beta}\coloneqq\frac{1-e^{-2(\mu-\beta^{2})}}{2}(>0). \tag{29}\]
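For the reader's convenience, the step from Eq. (25) to Eq. (26) is a one-line trace: the off-diagonal terms of \(\hat{P}(\ket{\Phi}_{ACE})\) acquire the environment overlap \(\langle\sqrt{\mu-\beta^{2}}|-\sqrt{\mu-\beta^{2}}\rangle=e^{-2(\mu-\beta^{2})}\), so that
\[\Phi_{AC}=\frac{1}{2}\Big(|0\rangle\langle 0|_{A}\otimes|\beta\rangle\langle\beta|_{C}+|1\rangle\langle 1|_{A}\otimes|-\beta\rangle\langle-\beta|_{C}\Big)+\frac{e^{-2(\mu-\beta^{2})}}{2}\Big(|0\rangle\langle 1|_{A}\otimes|\beta\rangle\langle-\beta|_{C}+\mathrm{h.c.}\Big),\]
and expanding \((1-q_{\mu,\beta})\hat{P}(\ket{\phi_{+}}_{AC})+q_{\mu,\beta}\hat{P}(\ket{\phi_{-}}_{AC})\) reproduces exactly this state, since the cross terms of \(\hat{P}(\ket{\phi_{\pm}}_{AC})\) enter with opposite signs and \(1-2q_{\mu,\beta}=e^{-2(\mu-\beta^{2})}\).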
For Homodyne protocol, we observe that
\[\mathrm{C}\text{-}X_{BA}\left(\mathrm{Id}_{A}\otimes\mathcal{K}_{C\to B}^{\mathrm{hom}}(x)\right)\!(\Phi_{AC})\,\mathrm{C}\text{-}X_{BA} \tag{30}\]
\[=2f_{\mathrm{suc}}(x)\,\mathrm{C}\text{-}X_{BA}\,\Big[(1-q_{\mu,\beta})\,\hat{P}\big(\langle x|\Pi_{\mathrm{ev}}|\beta\rangle\,|{+}{+}\rangle_{AB}+\langle x|\Pi_{\mathrm{od}}|\beta\rangle\,|{-}{-}\rangle_{AB}\big)+q_{\mu,\beta}\,\hat{P}\big(\langle x|\Pi_{\mathrm{od}}|\beta\rangle\,|{+}{-}\rangle_{AB}+\langle x|\Pi_{\mathrm{ev}}|\beta\rangle\,|{-}{+}\rangle_{AB}\big)\Big]\,\mathrm{C}\text{-}X_{BA} \tag{31}\]
\[=2f_{\mathrm{suc}}(x)\,\Big[(1-q_{\mu,\beta})\,\hat{P}\big(\big(\langle x|\Pi_{\mathrm{ev}}|\beta\rangle\,|+\rangle_{A}+\langle x|\Pi_{\mathrm{od}}|\beta\rangle\,|-\rangle_{A}\big)\otimes|+\rangle_{B}\big)+q_{\mu,\beta}\,\hat{P}\big(\big(\langle x|\Pi_{\mathrm{od}}|\beta\rangle\,|+\rangle_{A}+\langle x|\Pi_{\mathrm{ev}}|\beta\rangle\,|-\rangle_{A}\big)\otimes|-\rangle_{B}\big)\Big] \tag{32}\]
\[=f_{\mathrm{suc}}(x)\left[(1-q_{\mu,\beta})\,\hat{P}\Big(\sqrt{g_{\beta,1/4}(x)}\,|0\rangle_{A}+\sqrt{g_{-\beta,1/4}(x)}\,|1\rangle_{A}\Big)\otimes|+\rangle\langle+|_{B}+q_{\mu,\beta}\,\hat{P}\Big(\sqrt{g_{\beta,1/4}(x)}\,|0\rangle_{A}-\sqrt{g_{-\beta,1/4}(x)}\,|1\rangle_{A}\Big)\otimes|-\rangle\langle-|_{B}\right], \tag{33}\]
where \(\hat{P}(\psi)\coloneqq\psi\psi^{\dagger}\) (and thus \(\hat{P}(\ket{\psi})=\ket{\psi}\bra{\psi}\)), and \(g_{m,V}\) is the normal distribution with the mean \(m\) and the variance \(V\), i.e.,
\[g_{m,V}(x)\coloneqq\frac{1}{\sqrt{2\pi V}}\exp\left[-\frac{(x-m)^{2}}{2V} \right]. \tag{34}\]
We define \(\tau_{AB}^{\mathrm{hom}}(x)\) as
\[\tau_{AB}^{\mathrm{hom}}(x)\coloneqq (1-q_{\mu,\beta})\,\hat{P}\Big{(}\sqrt{g_{\beta,1/4}(x)}\ket{0}_{A }+\sqrt{g_{-\beta,1/4}(x)}\ket{1}_{A}\Big{)}\otimes\ket{+}\bra{+}_{B} \tag{35}\] \[+q_{\mu,\beta}\,\hat{P}\Big{(}\sqrt{g_{\beta,1/4}(x)}\ket{0}_{A}- \sqrt{g_{-\beta,1/4}(x)}\ket{1}_{A}\Big{)}\otimes\ket{-}\bra{-}_{B}.\]
From Eqs. (17), (21), (33), and (35), the probability density of an outcome \(x\) with occurrence of the phase error is given by
\[\begin{split}&\operatorname{Tr}\Bigl{[}\left|-\rangle\langle-\right|_ {B}\,K^{\prime\,\mathrm{hom}}_{AC\to B}(x)(\Phi_{AC})\Bigr{]}\\ &=\frac{f_{\mathrm{suc}}(x)}{2}\operatorname{Tr}\Bigl{[}\left\langle 0 \right|_{B}\tau^{\mathrm{hom}}_{AB}(x)\left|0\right\rangle_{B}+\left\langle 1 \right|_{B}\tau^{\mathrm{hom}}_{AB}(x)\left|1\right\rangle_{B}\\ &\qquad-\left(V^{(1)}_{A\to R}(x)\right)^{\dagger}V^{(0)}_{A \to R}(x)\left\langle 0\right|_{B}\tau^{\mathrm{hom}}_{AB}(x)\left|1\right\rangle_{B}- \left\langle 1\right|_{B}\tau^{\mathrm{hom}}_{AB}(x)\left|0\right\rangle_{B} \left(V^{(0)}_{A\to R}(x)\right)^{\dagger}V^{(1)}_{A\to R}(x) \Bigr{]}\\ &=f_{\mathrm{suc}}(x)\left[\frac{1}{2}\,\operatorname{Tr}\left( \tau^{\mathrm{hom}}_{AB}(x)\right)-\operatorname{Re}\left(\operatorname{Tr} \left[\left(V^{(1)}_{A\to R}(x)\right)^{\dagger}V^{(0)}_{A\to R}(x)\left\langle 0 \right|_{B}\tau^{\mathrm{hom}}_{AB}(x)\left|1\right\rangle_{B}\right]\right) \right]\\ &\geq f_{\mathrm{suc}}(x)\left[\frac{1}{2}\,\operatorname{Tr} \left(\tau^{\mathrm{hom}}_{AB}(x)\right)-\left\|\left\langle 0\right|_{B}\tau^{ \mathrm{hom}}_{AB}(x)\left|1\right\rangle_{B}\right\|_{1}\right],\end{split} \tag{38}\]
where the last inequality follows from the matrix Hölder inequality. If we write the polar decomposition of \(\left\langle 0\right|_{B}\tau^{\mathrm{hom}}_{AB}(x)\left|1\right\rangle_{B}\) by \(W^{\mathrm{hom}}_{A}(x)\bigl{|}\left\langle 0\right|_{B}\tau^{\mathrm{hom}}_{AB}(x)\left|1\right\rangle_{B}\bigr{|}\), the equality in (38) can be achieved by setting
\[\left(V^{(1)}_{A\to R}(x)\right)^{\dagger}V^{(0)}_{A\to R}=\left(W^{ \mathrm{hom}}_{A}(x)\right)^{\dagger}. \tag{39}\]
From Eq. (35), \(\left\langle 0\right|_{B}\tau^{\mathrm{hom}}_{AB}(x)\left|1\right\rangle_{B}\) is given by
\[\begin{split}\left\langle 0\right|_{B}\tau^{\mathrm{hom}}_{AB}(x) \left|1\right\rangle_{B}=\frac{1}{2}\left[\left(1-q_{\mu,\beta}\right)\hat{P} \Bigl{(}&\sqrt{g_{\beta,1/4}(x)}\left|0\right\rangle_{A}+\sqrt{g_ {-\beta,1/4}(x)}\left|1\right\rangle_{A}\Bigr{)}\\ &-q_{\mu,\beta}\,\hat{P}\Bigl{(}\sqrt{g_{\beta,1/4}(x)}\left|0 \right\rangle_{A}-\sqrt{g_{-\beta,1/4}(x)}\left|1\right\rangle_{A}\Bigr{)} \Bigr{]},\end{split} \tag{40}\]
which is hermitian with two eigenvalues having opposite signs. Let \(\left|u^{\mathrm{hom}}_{+}(x)\right\rangle_{A}\) and \(\left|u^{\mathrm{hom}}_{-}(x)\right\rangle_{A}\) be eigenvectors of \(\left\langle 0\right|_{B}\tau^{\mathrm{hom}}_{AB}(x)\left|1\right\rangle_{B}\) with positive and negative eigenvalues, respectively. Then, \(W^{\mathrm{hom}}_{A}(x)\) is given by
\[W^{\mathrm{hom}}_{A}(x)=\left|u^{\mathrm{hom}}_{+}(x)\right\rangle\langle u^{ \mathrm{hom}}_{+}(x)\bigr{|}_{A}-\left|u^{\mathrm{hom}}_{-}(x)\right\rangle \langle u^{\mathrm{hom}}_{-}(x)\bigr{|}_{A}. \tag{41}\]
The explicit form of \(\left|u^{\mathrm{hom}}_{\pm}(x)\right\rangle_{A}\) is given in Eq. (151) in Appendix B. The choice of the isometry \(V^{(j)}_{A\to R}(x)\) to satisfy Eq. (39) is not unique; one of the reasons is the arbitrariness of the dimension of the system \(R\). Here, we set \(R=A\) and set
\[\begin{split} V^{(0)}_{A\to R}(x)&=I_{A},\\ V^{(1)}_{A\to R}(x)&=W^{\mathrm{hom}}_{A}(x),\end{split} \tag{42}\]
which, with Eqs. (17) and (41), leads to
\[V^{\mathrm{hom}}_{B;A\to A}(x)=\bigl{[}\left|u^{\mathrm{hom}}_{+}(x) \right\rangle\langle u^{\mathrm{hom}}_{+}(x)\bigr{|}_{A}\otimes I_{B}+\left|u^ {\mathrm{hom}}_{-}(x)\right\rangle\langle u^{\mathrm{hom}}_{-}(x)\bigr{|}_{A} \otimes Z_{B}\bigr{]}\,\text{C-}X_{BA}. \tag{43}\]
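To make the construction concrete, here is a small numerical sketch of ours (not from Ref. [37]) that builds \(W^{\mathrm{hom}}_{A}(x)\) by diagonalizing the \(2\times 2\) block of Eq. (40), using Eqs. (29) and (34):

```python
import numpy as np

def g(x: float, m: float, v: float) -> float:
    """Normal density g_{m,V}(x) of Eq. (34)."""
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def w_hom(x: float, mu: float, beta: float) -> np.ndarray:
    """W_A^hom(x) of Eq. (41) in the {|0>, |1>} basis: the unitary part
    of the polar decomposition of the hermitian block <0|tau|1> (Eq. (40))."""
    q = (1.0 - np.exp(-2.0 * (mu - beta ** 2))) / 2.0  # Eq. (29)
    v_plus = np.array([np.sqrt(g(x, beta, 0.25)),
                       np.sqrt(g(x, -beta, 0.25))])
    v_minus = np.array([v_plus[0], -v_plus[1]])
    block = 0.5 * ((1 - q) * np.outer(v_plus, v_plus)
                   - q * np.outer(v_minus, v_minus))
    vals, vecs = np.linalg.eigh(block)
    # |u_+><u_+| - |u_-><u_-| over the eigenvectors of the block.
    return sum(np.sign(v) * np.outer(u, u) for v, u in zip(vals, vecs.T))
```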
For Heterodyne protocol, the calculation similar to Eqs. (30)-(35) leads to
\[\text{C-}X_{BA}\left(\mathrm{Id}_{A}\otimes\mathcal{K}^{\mathrm{het}}_{C\to B}(\omega)\right)\!\left(\Phi_{AC}\right)\text{C-}X_{BA} \tag{44}\]
\[=\frac{2f_{\mathrm{suc}}(\omega_{r})}{\pi}\,\text{C-}X_{BA}\,\Big[\left(1-q_{\mu,\beta}\right)\hat{P}\big(\langle\omega|\Pi_{\mathrm{ev}}|\beta\rangle\,|{+}{+}\rangle_{AB}+\langle\omega|\Pi_{\mathrm{od}}|\beta\rangle\,|{-}{-}\rangle_{AB}\big)+q_{\mu,\beta}\,\hat{P}\big(\langle\omega|\Pi_{\mathrm{od}}|\beta\rangle\,|{+}{-}\rangle_{AB}+\langle\omega|\Pi_{\mathrm{ev}}|\beta\rangle\,|{-}{+}\rangle_{AB}\big)\Big]\,\text{C-}X_{BA} \tag{45}\]
\[=\frac{f_{\mathrm{suc}}(\omega_{r})}{\pi}\,\Big[\left(1-q_{\mu,\beta}\right)\hat{P}\big(\langle\omega|\beta\rangle\,|0\rangle_{A}+\langle-\omega|\beta\rangle\,|1\rangle_{A}\big)\otimes|+\rangle\langle+|_{B}+q_{\mu,\beta}\,\hat{P}\big(\langle\omega|\beta\rangle\,|0\rangle_{A}-\langle-\omega|\beta\rangle\,|1\rangle_{A}\big)\otimes|-\rangle\langle-|_{B}\Big]\,, \tag{46}\]
Since \(\langle\omega|\beta\rangle=e^{-\frac{1}{2}[(\omega_{r}-\beta)^{2}+\omega_{i}^{2}+2i\omega_{i}\beta]}\) is not real in general, we insert a \(\theta\)-rotation around the \(Z\) axis
\[R^{Z}_{A}(\theta)\coloneqq\exp(-i\theta Z_{A}/2) \tag{47}\]
in order to have
\[\left(R_{A}^{Z}(2\omega_{i}\beta)\right)^{\dagger}\text{C-}X_{BA} \left(\text{Id}_{A}\otimes\mathcal{K}_{C\to B}^{\text{het}}(\omega)\right) (\Phi_{AC})\text{C-}X_{BA}\,R_{A}^{Z}(2\omega_{i}\beta) \tag{48}\] \[=\frac{e^{-\omega_{i}^{2}}f_{\text{suc}}(\omega_{r})}{\sqrt{\pi}} \left[(1-q_{\mu,\beta})\,\hat{P}\Big{(}\sqrt{g_{\beta,1/2}(\omega_{r})}\,|0 \rangle_{A}+\sqrt{g_{-\beta,1/2}(\omega_{r})}\,|1\rangle_{A}\Big{)}\otimes|+ \rangle\langle+|_{B}\right.\] (49) \[\left.+q_{\mu,\beta}\,\hat{P}\Big{(}\sqrt{g_{\beta,1/2}(\omega_{r} )}\,|0\rangle_{A}-\sqrt{g_{-\beta,1/2}(\omega_{r})}\,|1\rangle_{A}\Big{)} \otimes|-\rangle\langle-|_{B}\right].\]
We define \(\tau_{AB}^{\text{het}}(\omega_{r})\) as
\[\begin{split}\tau_{AB}^{\text{het}}(\omega_{r})\coloneqq&\,(1-q_{\mu,\beta})\,\hat{P}\Big{(}\sqrt{g_{\beta,1/2}(\omega_{r})}\,|0\rangle_{A}+\sqrt{g_{-\beta,1/2}(\omega_{r})}\,|1\rangle_{A}\Big{)}\otimes|+\rangle\langle+|_{B}\\ &+q_{\mu,\beta}\,\hat{P}\Big{(}\sqrt{g_{\beta,1/2}(\omega_{r})}\,|0\rangle_{A}-\sqrt{g_{-\beta,1/2}(\omega_{r})}\,|1\rangle_{A}\Big{)}\otimes|-\rangle\langle-|_{B}.\end{split} \tag{50}\]
Thus, the structure of the matrix \(\tau_{AB}^{\text{het}}(\omega_{r})\) is essentially the same as \(\tau_{AB}^{\text{hom}}(x)\) of Homodyne protocol. In the same way as Homodyne protocol, the probability density of outcome \(\omega\) with the occurrence of a phase error is given by
\[\text{Tr}\Big{[}|-\rangle\langle-|_{B}\,\,\mathcal{K}_{AC\to B}^{ \prime\,\text{het}}(\omega)(\Phi_{AC})\Big{]} \tag{51}\] \[=\frac{e^{-\omega_{i}^{2}}f_{\text{suc}}(\omega_{r})}{\sqrt{\pi}} \left[\frac{1}{2}\,\operatorname{Tr}\left(\tau_{AB}^{\text{het}}(\omega_{r}) \right)\right.\] (52) \[\left.-\text{Re}\left(\operatorname{Tr}\Big{[}\left(V_{A\to R}^{ \prime(1)}(\omega_{r})\right)^{\dagger}V_{A\to R}^{\prime(0)}(\omega_{r})R_{A} ^{Z}(2\omega_{i}\beta)\,\langle 0|_{B}\,\tau_{AB}^{\text{het}}(\omega_{r})\,|1 \rangle_{B}\left(R_{A}^{Z}(2\omega_{i}\beta)\right)^{\dagger}\right]\right) \Big{]}\] (53) \[\geq\frac{e^{-\omega_{i}^{2}}f_{\text{suc}}(\omega_{r})}{\sqrt{ \pi}}\left[\frac{1}{2}\,\operatorname{Tr}\left(\tau_{AB}^{\text{het}}(\omega_{ r})\right)-\big{\|}\langle 0|_{B}\,\tau_{AB}^{\text{het}}(\omega_{r})\,\,|1\rangle_{B} \big{\|}_{1}\right]. \tag{54}\]
If we write the polar decomposition of \(\langle 0|_{B}\,\tau_{AB}^{\text{het}}(\omega_{r})\,\,|1\rangle_{B}\) by \(W_{A}^{\text{het}}(\omega_{r})\big{|}\langle 0|_{B}\,\tau_{AB}^{\text{het}}( \omega_{r})\,\,|1\rangle_{B}\big{|}\), then the equality of Eq. (54) can be achieved by setting
\[\left(R_{A}^{Z}(2\omega_{i}\beta)\right)^{\dagger}\left(V_{A\to R}^{ \prime(1)}(\omega)\right)^{\dagger}V_{A\to R}^{\prime(0)}(\omega)R_{A}^{Z}(2 \omega_{i}\beta)=\left(W_{A}^{\text{het}}(\omega_{r})\right)^{\dagger}. \tag{55}\]
From Eq. (50), \(\langle 0|_{B}\,\tau_{AB}^{\text{het}}(\omega_{r})\,\,|1\rangle_{B}\) is given by
\[\begin{split}\langle 0|_{B}\,\tau_{AB}^{\text{het}}(\omega_{r})\,\,|1 \rangle_{B}=\frac{1}{2}\left[(1-q_{\mu,\beta})\,\hat{P}\Big{(}& \sqrt{g_{\beta,1/2}(\omega_{r})}\,|0\rangle_{A}+\sqrt{g_{-\beta,1/2}(\omega_{r} )}\,|1\rangle_{A}\Big{)}\right.\\ &\left.-q_{\mu,\beta}\,\hat{P}\Big{(}\sqrt{g_{\beta,1/2}(\omega_{r} )}\,|0\rangle_{A}-\sqrt{g_{-\beta,1/2}(\omega_{r})}\,|1\rangle_{A}\Big{)} \right],\end{split} \tag{56}\]
which is hermitian. Let \(\left|u_{+}^{\text{het}}(\omega_{r})\right\rangle_{A}\) and \(\left|u_{-}^{\text{het}}(\omega_{r})\right\rangle_{A}\) be eigenvectors of \(\langle 0|_{B}\,\tau_{AB}^{\text{het}}(\omega_{r})\,\,|1\rangle_{B}\) with positive and negative eigenvalues, respectively. Then, \(W_{A}^{\text{het}}(\omega_{r})\) is given by
\[W_{A}^{\text{het}}(\omega_{r})=\left|u_{+}^{\text{het}}(\omega_{r})\right\rangle \langle u_{+}^{\text{het}}(\omega_{r})\big{|}_{A}-\left|u_{-}^{\text{het}}( \omega_{r})\right\rangle\langle u_{-}^{\text{het}}(\omega_{r})\big{|}_{A}. \tag{57}\]
We can choose \(V_{A\to R}^{\prime(j)}(\omega)\) to satisfy Eq. (55) in the same way as Homodyne protocol. We set \(R=A\) and set
\[\begin{split}& V_{A\to R}^{\prime(0)}(\omega)=\left(R_{A}^{Z}(2 \omega_{i}\beta)\right)^{\dagger},\\ & V_{A\to R}^{\prime(1)}(\omega)=W_{A}^{\text{het}}(\omega_{r}) \left(R_{A}^{Z}(2\omega_{i}\beta)\right)^{\dagger},\end{split} \tag{58}\]
which, with Eqs. (18) and (57), leads to
\[V_{B;A\to A}^{\text{het}}(\omega)=\left[|u_{+}^{\text{het}}(\omega_{r})\rangle \langle u_{+}^{\text{het}}(\omega_{r})|_{A}\otimes I_{B}+|u_{-}^{\text{het}}( \omega_{r})\rangle\langle u_{-}^{\text{het}}(\omega_{r})|_{A}\otimes Z_{B} \right]\left(R_{A}^{Z}(2\omega_{i}\beta)\right)^{\dagger}\text{C-}X_{BA}. \tag{59}\]
As explained previously, we set \(V^{(j)}_{A\to R}(x)\) to the one in Eq. (42) (resp. \(V^{(j)}_{A\to R}(\omega)\) to the one in Eq. (57)) also for arbitrary channels, i.e., arbitrary coherent attacks by Eve. This choice is suboptimal for general channels but is expected to be close to optimal for channels that are close to the pure-loss one. Now that the controlled isometry \(V^{\mathrm{hom}}_{B;A\to A}(x)\) (resp. \(V^{\mathrm{het}}_{B;A\to A}(\omega)\)) is fixed, we can interpret the event that Bob announces "success" and obtains \(\hat{b}^{\prime}=-\) (i.e., the phase error) at the signal round of Estimation protocol as the outcome of a generalized measurement on Alice's qubit \(A\) and the optical pulse \(C\) and define the corresponding POVM element \(M^{\mathrm{hom}/\mathrm{het}}_{\mathrm{ph}}\) through Eq. (19) (resp. Eq. (20)) as
\[M^{\mathrm{hom}}_{\mathrm{ph}} \coloneqq\mathcal{F}^{\mathrm{hom}\;\ddagger}_{AC\to B}\big{(}| -\rangle\langle-|_{B}\big{)}=\int_{-\infty}^{\infty}dx\,\left(\mathcal{K}^{ \mathrm{hom}}_{AC\to B}(x)\right)^{\ddagger}\big{(}|-\rangle\langle-|_{B} \big{)}, \tag{59}\] \[M^{\mathrm{het}}_{\mathrm{ph}} \coloneqq\mathcal{F}^{\mathrm{het}\;\ddagger}_{AC\to B}\big{(}| -\rangle\langle-|_{B}\big{)}=\iint_{-\infty}^{\infty}d\omega_{r}\,d\omega_{i} \,\left(\mathcal{K}^{\prime\,\mathrm{het}}_{AC\to B}(\omega)\right)^{ \ddagger}\big{(}|-\rangle\langle-|_{B}\big{)}, \tag{60}\]
where \(\ddagger\) denotes the adjoint map. Then, for any density operator \(\rho\) on the joint system \(AC\), \(M^{\mathrm{hom}}_{\mathrm{ph}}\) (resp. \(M^{\mathrm{het}}_{\mathrm{ph}}\)) satisfies
\[\mathbb{E}_{\rho}\left[\hat{N}^{\mathrm{suc}\;(i)}_{\mathrm{ph}}\right]=p_{\mathrm{sig}}\mathrm{Tr}\Big{[}\rho\,M^{\mathrm{hom}/\mathrm{het}}_{\mathrm{ph}}\Big{]} \tag{61}\]
in Homodyne (resp. Heterodyne) protocol. For Homodyne protocol, by using Eqs. (9), (21), and (43), we have
\[M^{\mathrm{hom}}_{\mathrm{ph}} =\int_{-\infty}^{\infty}dx\,\left[I_{A}\otimes\big{(}K^{ \mathrm{hom}}_{\mathrm{suc}}(x)\big{)}^{\dagger}\right]\big{(}V^{\mathrm{hom}}_ {B;A\to A}(x)\big{)}^{\dagger}\big{(}I_{A}\otimes|-\rangle\langle-|_{B}\big{)} V^{\mathrm{hom}}_{B;A\to A}(x)\left[I_{A}\otimes K^{\mathrm{hom}}_{ \mathrm{suc}}(x)\right] \tag{62}\] \[=\int_{-\infty}^{\infty}dx\,\Big{[}\hat{P}\Big{(}\big{[}I_{A} \otimes\big{(}K^{\mathrm{hom}}_{\mathrm{suc}}(x)\big{)}^{\dagger}\big{]}\, \mathrm{C}\!\!-\!X_{BA}\,|u^{\mathrm{hom}}_{+}(x)\rangle_{A}\otimes|-\rangle_{ B}\Big{)}\] (63) \[\qquad\qquad\qquad\qquad+\hat{P}\left(\big{[}I_{A}\otimes\big{(} K^{\mathrm{hom}}_{\mathrm{suc}}(x)\big{)}^{\dagger}\big{]}\,\mathrm{C}\!\!-\!X_{BA}\,|u^{ \mathrm{hom}}_{-}(x)\rangle_{A}\otimes|+\rangle_{B}\Big{)}\Big{]}\,,\]
where we used the fact that the adjoint map of the partial trace \(\mathrm{Tr}_{A}\) is the tensor product with \(I_{A}\). Using the relation \(\mathrm{C}\!\!-\!X_{BA}=|+\rangle\langle+|_{A}\otimes I_{B}+|-\rangle\langle-|_{A}\otimes Z_{B}\) as well as Eq. (12), we have
\[M^{\mathrm{hom}}_{\mathrm{ph}} =\int_{-\infty}^{\infty}2f_{\mathrm{suc}}(x)dx\,\Big{[}\hat{P} \Big{(}\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}\,|u^{\mathrm{hom}}_{+}(x) \rangle_{A}\otimes|x\rangle_{C}\Big{)} \tag{64}\] \[\qquad\qquad\qquad\qquad\qquad+\hat{P}\Big{(}\Pi^{(-,\mathrm{od }),(+,\mathrm{ev})}_{AC}\,|u^{\mathrm{hom}}_{-}(x)\rangle_{A}\otimes|x\rangle_{ C}\Big{)}\Big{]}\,,\]
where two orthogonal projections \(\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}\) and \(\Pi^{(-,\mathrm{od}),(+,\mathrm{ev})}_{AC}\) are defined as
\[\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC} \coloneqq|+\rangle\langle+|_{A}\otimes\Pi_{\mathrm{od}}+|- \rangle\langle-|_{A}\otimes\Pi_{\mathrm{ev}}, \tag{65}\] \[\Pi^{(-,\mathrm{od}),(+,\mathrm{ev})}_{AC} \coloneqq|-\rangle\langle-|_{A}\otimes\Pi_{\mathrm{od}}+|+\rangle \langle+|_{A}\otimes\Pi_{\mathrm{ev}}. \tag{66}\]
A similar relation holds for Heterodyne protocol by replacing \(K^{\mathrm{hom}}_{\mathrm{suc}}(x)\) with \(K^{\mathrm{het}}_{\mathrm{suc}}(\omega)\) and \(V^{\mathrm{hom}}_{B;A\to A}(x)\) with \(V^{\mathrm{het}}_{B;A\to A}(\omega)\) as well as using Eqs. (15), (58), (65), and (66):
\[M^{\mathrm{het}}_{\mathrm{ph}} =\iint_{-\infty}^{\infty}d\omega_{r}d\omega_{i}\,\Big{[}\hat{P} \Big{(}\Big{[}I_{A}\otimes\big{(}K^{\mathrm{het}}_{\mathrm{suc}}(\omega)\big{)} ^{\dagger}\Big{]}\,\mathrm{C}\!\!-\!X_{BA}\,R^{Z}_{A}(2\omega_{i}\beta)\,|u^{ \mathrm{het}}_{+}(\omega_{r})\rangle_{A}\otimes|-\rangle_{B}\Big{)} \tag{67}\] \[\qquad\qquad\qquad\qquad+\hat{P}\Big{(}\big{[}I_{A}\otimes\big{(} K^{\mathrm{het}}_{\mathrm{suc}}(\omega)\big{)}^{\dagger}\Big{]}\,\mathrm{C}\!\!-\!X_{BA}\,R^{Z}_{A}( 2\omega_{i}\beta)\,|u^{\mathrm{het}}_{-}(\omega_{r})\rangle_{A}\otimes|+\rangle _{B}\Big{)}\Big{]}\] \[=\iint_{-\infty}^{\infty}\frac{2f_{\mathrm{suc}}(\omega_{r})}{\pi} d\omega_{r}d\omega_{i}\,\Big{[}\hat{P}\Big{(}\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}R^{Z}_{A}( 2\omega_{i}\beta)\,|u^{\mathrm{het}}_{+}(\omega_{r})\rangle_{A}\otimes|\omega \rangle_{C}\Big{)}\] (68) \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\hat {P}\Big{(}\Pi^{(-,\mathrm{od}),(+,\mathrm{ev})}_{AC}R^{Z}_{A}(2\omega_{i}\beta) \,|u^{\mathrm{het}}_{-}(\omega_{r})\rangle_{A}\otimes|\omega\rangle_{C}\Big{)} \Big{]}\,.\]
Using Eq. (11), we observe that
\[\frac{1}{\pi}\int d\omega_{i}\exp(\pm 2i\omega_{i}\beta)\ket{ \omega}\langle\omega| =\frac{1}{\pi}\iiint d\omega_{i}dxdx^{\prime}\sqrt{\frac{2}{\pi}}e^{ \pm 2i\omega_{i}\beta-(x-\omega_{r})^{2}+2i\omega_{i}x-(x^{\prime}-\omega_{r} )^{2}-2i\omega_{i}x^{\prime}}\ket{x}\langle x^{\prime}| \tag{69}\] \[=2\iint dxdx^{\prime}\,\delta(2(x\pm\beta-x^{\prime}))\ket{x} \langle x|\omega_{r}\rangle\langle\omega_{r}|x^{\prime}\rangle\langle x^{ \prime}|\] (70) \[=\int dx\ket{x}\langle x|\omega_{r}\rangle\langle\omega_{r}|x\pm \beta\rangle\langle x\pm\beta|\,. \tag{71}\]
Applying this to Eq. (68) and changing the integration variable appropriately, we have
\[\begin{split} M^{\rm bet}_{\rm ph}&=\iint_{-\infty }^{\infty}2f_{\rm suc}(\omega_{r})d\omega_{r}dx\left[\hat{P}\left(\Pi^{(+,{\rm od }),(-,{\rm ev})}_{AC}O^{\beta}_{AC}(x)\ket{u^{\rm het}_{+}(\omega_{r})}_{A} \otimes\ket{\omega_{r}}_{C}\right)\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\left.+\hat{P}\left(\Pi^{(-, {\rm od}),(+,{\rm ev})}_{AC}O^{\beta}_{AC}(x)\ket{u^{\rm het}_{-}(\omega_{r})}_ {A}\otimes\ket{\omega_{r}}_{C}\right)\right],\end{split} \tag{72}\]
where the operator \(O^{\beta}_{AC}(x)\) is defined as
\[O^{\beta}_{AC}(x)\coloneqq\ket{0}\langle 0|_{A}\otimes\ket{x}\langle x|_{C}+ \ket{1}\langle 1|_{A}\otimes\ket{x-\beta}\langle x-\beta|_{C}\,. \tag{73}\]
### Finite-size analysis
Since the phase error operator was defined on systems \(A\) and \(C\), we can follow essentially the same analysis as that in Ref. [37]. Let us define the following operators:
\[\Pi^{\rm fid} \coloneqq\ket{0}\langle 0|_{A}\otimes\ket{\beta}\langle\beta|_{C}+\ket{1}\langle 1|_{A}\otimes\ket{-\beta}\langle-\beta|_{C} \tag{74}\] \[=\ket{\phi_{-}}\langle\phi_{-}|_{AC}+\ket{\phi_{+}}\langle\phi_{+}|_{AC}\,,\] (75) \[\Pi^{\rm trash}_{-} \coloneqq\ket{-}\langle-|_{A}\otimes I_{C}, \tag{76}\]
where \(\ket{\phi_{\pm}}_{AC}\) are defined in Eqs. (27) and (28). For any density operator \(\rho\) on the joint system \(AC\), these operators satisfy
\[\mathbb{E}_{\rho}\left[\hat{F}^{(i)}\right] \leq p_{\rm test}{\rm Tr}\big{[}\rho\,\Pi^{\rm fid}\big{]}\,, \tag{77}\] \[\mathbb{E}_{\rho}\left[\hat{Q}^{(i)}_{-}\right] =p_{\rm trash}{\rm Tr}\big{[}\rho\,\Pi^{\rm trash}_{-}\big{]}\,, \tag{78}\]
where the first inequality follows from Theorem 1 in Ref. [37] as well as the definition of \(\hat{F}^{(i)}\). Let \(M^{\rm hom/het}[\kappa,\gamma]\) for positive numbers \(\kappa\) and \(\gamma\) determined prior to the protocol be defined as
\[M^{\rm hom/het}[\kappa,\gamma]\coloneqq M^{\rm hom/het}_{\rm ph}+\kappa\Pi^{\rm fid}-\gamma\Pi^{\rm trash}_{-}. \tag{79}\]
In Corollaries 2 and 3 in Appendix B, we show an inequality
\[M^{\rm hom/het}[\kappa,\gamma]\leq B^{\rm hom/het}(\kappa,\gamma)\,I_{AC} \tag{80}\]
with a computable convex function \(B^{\rm hom/het}(\kappa,\gamma)\). Let \(\hat{T}^{(i)}[\kappa,\gamma]\) be a linear combination of the random variables at the \(i\)th round of Estimation protocol, given by
\[\hat{T}^{(i)}[\kappa,\gamma]\coloneqq p_{\rm sig}^{-1}\hat{N}_{\rm ph}^{\rm suc \;(i)}+p_{\rm test}^{-1}\kappa\hat{F}^{(i)}-p_{\rm trash}^{-1}\gamma\hat{Q}^{ (i)}_{-}. \tag{81}\]
Furthermore, let \(\hat{T}^{(0)}[\kappa,\gamma]\) be zero. Then, by applying Azuma's inequality [59, 60, 61] with Doob decomposition to \(\{\hat{T}^{(k)}[\kappa,\gamma]\}_{k=0,\ldots,N}\) and using Eqs. (61), (77), (78), and (80), we observe that
\[\sum_{k=1}^{N}\hat{T}^{(k)}[\kappa,\gamma]=p_{\rm sig}^{-1}\hat{N}_{\rm ph}^{ \rm suc}+p_{\rm test}^{-1}\kappa\hat{F}-p_{\rm trash}^{-1}\gamma\hat{Q}_{-}\leq NB ^{\rm hom/het}(\kappa,\gamma)+\delta_{1}(\epsilon/2), \tag{82}\]
holds with a probability no smaller than \(1-\epsilon/2\). (See Proposition 1 as well as Eqs. (92)-(105) in Ref. [37].) Here, \(\delta_{1}(\epsilon)\) is defined as [37]
\[\delta_{1}(\epsilon)\coloneqq\left(\max\Bigl{\{}p_{\text{sig}}^{-1},p_{\text{ test}}^{-1}\kappa\,\max_{\nu\geq 0}\Lambda_{m,r}(\nu)\Bigr{\}}-\min\Bigl{\{}p_{ \text{test}}^{-1}\kappa\,\min_{\nu\geq 0}\Lambda_{m,r}(\nu),-p_{\text{trash}}^{-1} \gamma\Bigr{\}}\right)\sqrt{\frac{N}{2}\ln\biggl{(}\frac{1}{\epsilon}\biggr{)}}. \tag{83}\]
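For concreteness, Eq. (83) is a closed-form expression that can be evaluated directly. The following Python sketch is ours (the function name is hypothetical); the extrema of the test function \(\Lambda_{m,r}\) default to the values quoted in Sec. 3 for \(m=1\), \(r=0.4120\):

```python
import numpy as np

def delta_1(eps, N, p_sig, p_test, p_trash, kappa, gamma,
            lam_max=2.824, lam_min=-0.9932):
    """Evaluate delta_1(eps) of Eq. (83).

    lam_max / lam_min are the maximum and minimum of the test function
    Lambda_{m,r}; the defaults correspond to m = 1, r = 0.4120.
    """
    upper = max(1.0 / p_sig, kappa * lam_max / p_test)
    lower = min(kappa * lam_min / p_test, -gamma / p_trash)
    return (upper - lower) * np.sqrt(0.5 * N * np.log(1.0 / eps))
```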
Since \(\hat{Q}_{-}\) is determined solely by Alice's qubits, each in the state \(\operatorname{Tr}_{\hat{C}}(\left|\Phi\right\rangle\!\left\langle\Phi\right|_{A\hat{C}})\) with \(\left|\Phi\right\rangle_{A\hat{C}}\) given in Eq. (7), it follows the same statistics as a tally of \(\hat{N}^{\text{trash}}\) Bernoulli trials with a probability \(q_{-}\coloneqq\|\left\langle-\right|_{A}\left|\Phi\right\rangle_{A\hat{C}}\|^{2}=(1-e^{-2\mu})/2\). Hence we observe that
\[\hat{Q}_{-}\leq q_{-}\hat{N}^{\text{trash}}+\delta_{2}(\epsilon/2;\hat{N}^{ \text{trash}}) \tag{84}\]
holds with a probability no smaller than \(1-\epsilon/2\). (See Eq. (31) in Ref. [37].) Here, \(\delta_{2}(\epsilon;n)\) is defined as [37]
\[\begin{cases}D(q_{-}+\delta_{2}(\epsilon;n)/n\|q_{-})=-\frac{1}{n}\log_{2}( \epsilon)&(\epsilon>q_{-}^{n})\\ \delta_{2}(\epsilon;n)=(1-q_{-})n&(\epsilon\leq q_{-}^{n})\end{cases}, \tag{85}\]
where
\[D(x\|y)\coloneqq x\log_{2}\frac{x}{y}+(1-x)\log_{2}\frac{1-x}{1-y} \tag{86}\]
is the Kullback-Leibler divergence. Combining Eqs. (81), (82), and (84), by setting
\[U(\hat{F},\hat{N}^{\text{trash}})=p_{\text{sig}}\bigl{(}NB^{\text{hom/het}}( \kappa,\gamma)+\delta_{1}(\epsilon/2)\bigr{)}-\frac{p_{\text{sig}}}{p_{\text{ test}}}\kappa\hat{F}+\frac{p_{\text{sig}}}{p_{\text{trash}}}\gamma\bigl{(}q_{-} \hat{N}^{\text{trash}}+\delta_{2}(\epsilon/2;\hat{N}^{\text{trash}})\bigr{)}, \tag{87}\]
we observe that Eq. (23) holds from the union bound.
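Note that \(\delta_{2}(\epsilon;n)\) in Eq. (85) is defined only implicitly through the Kullback-Leibler divergence; since \(D(q_{-}+\delta/n\|q_{-})\) increases in \(\delta\) on \([0,(1-q_{-})n]\), it can be solved by bisection. A minimal Python sketch (ours), with \(q_{-}=(1-e^{-2\mu})/2\) supplied by the caller:

```python
import numpy as np

def kl_bin(x, y):
    """Binary Kullback-Leibler divergence D(x||y) in bits, Eq. (86)."""
    def term(a, b):
        return 0.0 if a == 0.0 else a * np.log2(a / b)
    return term(x, y) + term(1.0 - x, 1.0 - y)

def delta_2(eps, n, q_minus):
    """Solve D(q_- + delta/n || q_-) = -(1/n) log2(eps) for delta, Eq. (85)."""
    if eps <= q_minus ** n:
        return (1.0 - q_minus) * n
    target = -np.log2(eps) / n
    lo, hi = 0.0, (1.0 - q_minus) * n  # D is increasing in delta on this range
    for _ in range(200):               # bisection
        mid = 0.5 * (lo + hi)
        if kl_bin(q_minus + mid / n, q_minus) < target:
            lo = mid
        else:
            hi = mid
    return hi
```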
## 3 Numerical simulations
We compute (the lower bound on) the net key gain per pulse (i.e., key rate \(\hat{G}\)) against the transmission distance with various values of excess noise at the channel output. In this model, Bob receives Gaussian states \(\rho_{\text{model}}^{(\hat{a})}\) obtained by randomly displacing attenuated coherent states \(\left|(-1)^{\hat{a}}\sqrt{\eta\mu}\right\rangle\) with attenuation rate \(\eta\) to increase their variances by a factor of \((1+\xi)\), i.e.,
\[\rho_{\text{model}}^{(\hat{a})}\coloneqq\frac{2}{\pi\xi}\int_{\mathbb{C}}e^{- 2\left|\gamma\right|^{2}/\xi}\left|(-1)^{\hat{a}}\sqrt{\eta\mu}+\gamma\right\rangle \!\left\langle(-1)^{\hat{a}}\sqrt{\eta\mu}+\gamma\right|d^{2}\gamma. \tag{88}\]
For simplicity, the number \(N_{\text{smp}}\) of the sampling rounds is set to be \(N/100\), and the bit error correction efficiency \(f\) in Eq. (4) is set to be \(0.95\) (see Footnote 2). The acceptance probability \(f_{\text{suc}}(x)\) is assumed to be a step function \(\Theta(x-x_{\text{th}})\) with a threshold \(x_{\text{th}}(>0)\), where \(\Theta(x)\) denotes the Heaviside step function. The expected amplitude \(\beta\) of the coherent state is chosen to be \(\sqrt{\eta\mu}\). We set the security parameter \(\epsilon_{\text{sec}}=2^{-50}\), and set \(\epsilon_{\text{cor}}=\epsilon_{\text{sec}}/2\) and \(\epsilon=2^{-s}=\epsilon_{\text{sec}}^{2}/16\).
Footnote 2: Currently, this level of efficiency may be too optimistic because the bit error correction in our protocol must succeed with probability no smaller than \(1-\varepsilon_{\text{cor}}/2\) without the use of the verification.
We assume that the number of "success" signal rounds \(\hat{N}^{\text{suc}}\) is equal to its expectation, i.e.,
\[\mathbb{E}[\hat{N}^{\text{suc}}] =p_{\text{sig}}N\int_{-\infty}^{\infty}\left(f_{\text{suc}}(x)+f_ {\text{suc}}(-x)\right)\left\langle x\right|\frac{1}{2}\sum_{a\in\{0,1\}}\rho _{\text{model}}^{(a)}\left|x\right\rangle dx \tag{89}\] \[=p_{\text{sig}}N(P_{\text{hom}}^{+}+P_{\text{hom}}^{-}), \tag{90}\]
where
\[P_{\text{hom}}^{\pm} \coloneqq\int_{-\infty}^{\infty}\frac{f_{\text{suc}}(\pm x)}{2} \sum_{a\in\{0,1\}}\left\langle(-1)^{a}x\right|\rho_{\text{model}}^{(a)}\left| (-1)^{a}x\right\rangle dx \tag{91}\] \[=\frac{1}{2}\text{erfc}\biggl{(}(x_{\text{th}}\mp\sqrt{\eta\mu}) \sqrt{\frac{2}{1+\xi}}\biggr{)}, \tag{92}\]
for Homodyne protocol [37]. For Heterodyne protocol [38], it is given by
\[\mathbb{E}[\hat{N}^{\text{suc}}] =p_{\text{sig}}N(P_{\text{het}}^{+}+P_{\text{het}}^{-}), \tag{93}\] \[P_{\text{het}}^{\pm} \coloneqq\iint_{-\infty}^{\infty}\frac{f_{\text{suc}}(\pm\omega_{ r})}{2\pi}\sum_{a\in\{0,1\}}\left\langle(-1)^{a}\omega\right|\rho_{\text{model}}^{(a)} \left|(-1)^{a}\omega\right\rangle d\omega_{r}d\omega_{i}\] (94) \[=\frac{1}{2}\text{erfc}\bigg{(}(x_{\text{th}}\mp\sqrt{\eta\mu}) \sqrt{\frac{2}{2+\xi}}\bigg{)}. \tag{95}\]
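These acceptance probabilities are elementary to evaluate numerically; the following short sketch (ours; the function name is hypothetical) implements Eqs. (92) and (95):

```python
import numpy as np
from scipy.special import erfc

def p_success(x_th, eta, mu, xi, protocol="hom"):
    """P^{+} and P^{-} of Eq. (92) (homodyne) or Eq. (95) (heterodyne)."""
    var = 1.0 + xi if protocol == "hom" else 2.0 + xi
    amp = np.sqrt(eta * mu)
    p_plus = 0.5 * erfc((x_th - amp) * np.sqrt(2.0 / var))
    p_minus = 0.5 * erfc((x_th + amp) * np.sqrt(2.0 / var))
    return p_plus, p_minus

# Expected number of "success" signal rounds:
# E[N_suc] = p_sig * N * sum(p_success(x_th, eta, mu, xi))
```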
We also assume that the number of "success" sampling rounds is equal to \((P_{\text{hom/het}}^{+}+P_{\text{hom/het}}^{-})N_{\text{smp}}\), the number of test rounds \(\hat{N}^{\text{test}}\) is equal to \(p_{\text{test}}N\), and the number of trash rounds \(\hat{N}^{\text{trash}}\) is equal to \(p_{\text{trash}}N\). The test outcome \(\hat{F}\) is assumed to be equal to its expectation given by [37]
\[\mathbb{E}[\hat{F}] =p_{\text{test}}N\,\frac{1}{2}\sum_{a\in\{0,1\}}\mathbb{E}_{\rho_ {\text{model}}^{(a)}}[\Lambda_{m,r}(|\hat{\omega}-(-1)^{a}\sqrt{\eta\mu}|^{2})] \tag{96}\] \[=\frac{p_{\text{test}}N}{1+\xi/2}\left[1-(-1)^{m+1}\left(\frac{ \xi/2}{1+r(1+\xi/2)}\right)^{m+1}\right]. \tag{97}\]
For the test function \(\Lambda_{m,r}\) in the above, we adopt \(m=1\) and \(r=0.4120\), which leads to \((\max_{\nu\geq 0}\Lambda_{m,r}(\nu),\min_{\nu\geq 0}\Lambda_{m,r}(\nu))=(2.824,-0.9932)\). We assume that the number \(\hat{E}_{\text{obs}}\) of bit errors observed in the "success" sampling rounds is equal to its expectation, \(\hat{E}_{\text{obs}}=P_{\text{hom/het}}^{-}N_{\text{smp}}\). The upper bound \(e_{\text{qber}}\) on the bit error rate is thus given by Eq. (3) with the parameters \(\hat{N}^{\text{suc}}\), \(\hat{N}^{\text{suc}}_{\text{smp}}\), and \(\hat{E}_{\text{obs}}\) given above. Under these assumptions, the remaining parameters to be determined are the six parameters \((\mu,x_{\text{th}},p_{\text{sig}},p_{\text{test}},\kappa,\gamma)\). We determined \((\kappa,\gamma)\) via convex optimization using CVXPY 1.2.1 and \((\mu,x_{\text{th}},p_{\text{sig}},p_{\text{test}})\) via the Nelder-Mead method of scipy.optimize.minimize in Python, for each transmission distance \(L\), with the attenuation rate \(\eta\) assumed to be \(10^{-0.02L}\).
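A schematic sketch (ours) of this two-level optimization follows. All function names, signatures, and starting values here are assumptions; constructing a CVXPY-representable convex expression for \(B^{\rm hom/het}(\kappa,\gamma)\) (e.g., as a maximum of cvxpy.lambda_max terms over the \(6\times 6\) matrices of Appendix B, which are affine in \(\kappa\) and \(\gamma\)) is left to the caller:

```python
import numpy as np
import cvxpy as cp
from scipy.optimize import minimize

def expected_F(N, p_test, xi, m=1, r=0.4120):
    """Expectation of the fidelity-test outcome, Eq. (97)."""
    s = xi / 2.0
    return p_test * N / (1.0 + s) * (
        1.0 - (-1.0) ** (m + 1) * (s / (1.0 + r * (1.0 + s))) ** (m + 1))

def optimal_kappa_gamma(B_expr):
    """Inner convex step: minimize the bound over (kappa, gamma) >= 0.
    B_expr(kappa, gamma) must return a convex CVXPY expression."""
    kappa = cp.Variable(nonneg=True)
    gamma = cp.Variable(nonneg=True)
    problem = cp.Problem(cp.Minimize(B_expr(kappa, gamma)))
    problem.solve()
    return kappa.value, gamma.value

def optimize_protocol(neg_key_rate):
    """Outer derivative-free step over (mu, x_th, p_sig, p_test),
    maximizing the key rate G, i.e., minimizing -G."""
    x0 = np.array([0.05, 1.0, 0.8, 0.1])  # initial guess (ours)
    result = minimize(neg_key_rate, x0, method="Nelder-Mead")
    return result.x, -result.fun
```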
Figure 2 shows the key rates of Homodyne protocol for the channel model explained above. The figures show that, under low excess noise, our refined analysis results in significantly
Figure 2: Key rates of the Homodyne protocol against transmission distance over an optical fiber. The attenuation rate of the optical fiber is assumed to be \(10^{-0.02L}\) for transmission distance \(L\) km, the error correction efficiency \(f\) in Eq. (4) is set to be \(0.95\), and the number of sampling rounds \(N_{\text{smp}}\) is set to be \(N/100\). a) Key rates when the excess noise \(\xi\) at the channel output is zero; that is, the channel is pure loss. The bold solid lines show the key rates with our refined analysis developed here, the broken lines show those with the previous analysis [37], and the thin black line shows the PLOB bound, which is the ultimate limit of the key rate of one-way QKD [47]. One can see that, with our refined analysis, the logarithm of the asymptotic key rate decreases in parallel with the PLOB bound against the transmission distance (\(\gg 1\) km), as opposed to the previous results [37]. The improvement in the key rate is sustained in the finite-size case. b) Key rates for \(N=10^{12}\) with various values of the excess noise parameter \(\xi\). (Details of the noise model are given in the main text.) The solid lines show the key rates with our refined analysis, and the broken lines show those with the previous results [37]. One can see that, although the key rate significantly improves for the pure-loss channel, excess noise as high as \(\xi=10^{-3}\)-\(10^{-2}\) degrades the performance to almost the same level as that of the previous results.
higher key rates and longer transmission distances than the previous results [37], even in the finite-key case. Furthermore, the logarithm of the asymptotic key rate in the pure-loss case (i.e., \(\xi=0\)) decreases in parallel with the PLOB bound [47] against the transmission distance; that is, it achieves a linear scaling in the channel transmission, which is known to be optimal for one-way QKD in the pure-loss channel. When the excess noise \(\xi\) is around \(10^{-3.0}\)-\(10^{-2.0}\), however, the improvement from our refined analysis is lost. The result of the parameter optimization implies that our refined analysis generates the key with a relatively small intensity \(\mu\) of the input coherent states compared to the previous analyses; e.g., the optimized input intensity \(\mu\) of Homodyne protocol is \(\sim 0.04\) in our refined analysis, compared to \(\sim 0.2\) in the previous analysis [37], at \(\eta=0.1\) (i.e., 50 km) in the asymptotic pure-loss case.
The key rate of Heterodyne protocol behaves similarly. Figure 3 shows the key rates of Heterodyne protocol with the same noise model as above. The figures show that our refined analysis significantly improves the key rate against the pure-loss channel but is fragile against excess noise. One can see, however, that while the key rate of Heterodyne protocol is still low compared to that of Homodyne protocol, the achievable distance (i.e., the distance with a non-zero key rate) becomes comparable under our refined analysis. This implies that our refined analysis based on the reverse reconciliation is more effective for Heterodyne protocol.
## 4 Discussion
We propose a refined security analysis for the protocol proposed in Ref. [37] based on the reverse reconciliation. The motivating ideas of our refinement come from the facts that the distillability of a secret key from a quantum state is a looser condition than the distillability of entanglement [51, 52, 53, 42, 54, 43] and that the reverse reconciliation can increase the key rate for CV QKD protocols [10]. To exploit these ideas, we developed the procedure of "twisting" Alice's system with \(V_{B;A\to R}^{\rm hom}(x)\) (resp. \(V_{B;A\to R}^{\rm het}(\omega)\)) controlled by Bob's qubit, although similar techniques have already appeared in previous works [51, 42, 53, 54, 43, 55]. Our finding is that, by using the twisting operation that minimizes the phase error probability for the pure-loss channel, the protocol achieves an asymptotically optimal scaling of the key rate both for Homodyne and Heterodyne protocols. This is a clear distinction from the previous results [37, 38], in which the asymptotic key rate decreases non-linearly with the channel transmission. The improvement in the performance remains
Figure 3: Key rates of the Heterodyne protocol against transmission distance over an optical fiber. The noise model is the same as that of Homodyne protocol. a) Key rates when the excess noise \(\xi\) at the channel output is zero; that is, the channel is pure loss. The bold solid lines show the key rates with our refined analysis developed here, the broken lines show those with the previous analysis [38], and the thin black line shows the PLOB bound, which is the ultimate limit of the key rate of one-way QKD [47]. One can see that the logarithm of the asymptotic key rate decreases in parallel with the PLOB bound at large transmission distance, in the same way as for Homodyne protocol. The key rate is still lower (about half) than that of Homodyne protocol. b) Key rates for \(N=10^{12}\) with various values of the excess noise parameter \(\xi\). The solid lines show the key rates with our refined analysis, and the broken lines show those with the previous result [38].
in the finite-key case but is lost in the presence of excess noise as high as \(\xi=10^{-3}\)-\(10^{-2}\) at the channel output. This may limit the feasibility of our binary-modulation protocol, but current theoretical progress in CV QKD reveals that discrete-modulation CV-QKD protocols with four types of modulation have a higher tolerance against excess noise than those with binary modulation [16, 17, 18]. What is important is that our security proof can be extended to the four-state protocols with binary outcomes, such as Protocol 2 in Ref. [17] and the protocol in Ref. [18], by replacing the bit-extracting measurements of these protocols with qubit-extracting maps as shown in Eq. (9) and constructing the corresponding phase error operator. This is, however, much more complicated than the previous analysis, and we leave the problem as future work.
There are several remaining questions regarding our present results. The first and foremost is whether we can obtain a higher tolerance against excess noise by extending our analysis to the four-state protocols. As explained above, our analysis can be extended to the four-state protocols with binary outputs [17, 18], i.e., protocols that use homodyne measurement to distinguish signals. With the same type of argument based on the phase error estimation, we can in principle carry out the finite-size security proof for these protocols. However, developing an analysis that preserves the robustness against excess noise for these protocols is still non-trivial. A more challenging problem is to apply our finite-size security proof to the four-state protocols with more than two outputs, such as the protocol in Ref. [16] and Protocol 1 in Ref. [17]. In this case, the definition of phase errors is already non-trivial, as opposed to those with binary outputs, and we have to develop a more elaborate finite-size security proof. Whether we can extend our techniques to these protocols, or to protocols with even larger constellations [19], is still open.
Another important theoretical question is whether the trusted-noise model can be applied to our security analysis. In practice, even an excess noise of \(\xi=10^{-3}\) at the channel output is difficult to realize if all the noises are untrusted. Recently, efforts have been made in the field of CV QKD to incorporate noises that are intrinsic to the apparatuses, and thus inaccessible to Eve, into the security proof as trusted noises. This effectively eases the requirements on the experimental apparatuses. In the present security analysis, as well as the ones in Refs. [37, 38], the fidelity test measures the fidelity to a pure coherent state, which cannot be naively generalized to the fidelity to a mixed state. Whether we can incorporate trusted noises into the fidelity test may be crucial in this direction.
From the viewpoint of the feasibility of the protocol, the total number of rounds, \(10^{12}\), required to obtain a tolerable finite-size performance may be demanding. The finite-size performance may be improved by applying a recently developed refinement [62] of Azuma's inequality [59] that utilizes unconfirmed knowledge. What is non-trivial in this application is that the random variable in our use of Azuma's inequality cannot be directly observed even at the end of the protocol. Whether we can apply the refined concentration inequality [62] with the information accessible in our protocol (in a similar fashion to Ref. [63]) may be an interesting problem.
## Acknowledgments
This work was supported by the Ministry of Internal Affairs and Communications (MIC) under the initiative Research and Development for Construction of a Global Quantum Cryptography Network (grant number JPMI00316); Cross-ministerial Strategic Innovation Promotion Program (SIP) (Council for Science, Technology and Innovation (CSTI)); JSPS KAKENHI Grant Number JP22K13977.
## Appendix A Bit error sampling
In this section, we summarize how to determine an upper bound on the bit error rate from the given sample. As explained in the main text, \(N_{\text{smp}}\) sampling rounds are randomly inserted in the actual protocol in which Alice and Bob announce their bit values if Bob's detection succeeds (in the same way as in the signal round). The number of "success" sampling rounds is denoted by \(\hat{N}_{\text{smp}}^{\text{suc}}\), and the observed number of discrepancies between Alice and Bob is denoted by \(\hat{E}_{\text{obs}}\).
Let us first introduce a Chernoff-type bound for the hypergeometric distribution.
**Lemma 1** (Tail bound for the hypergeometric distribution [64]).: _Let \(X_{1},\ldots,X_{N}\) be a binary sequence, and \(M\) be the number of elements with \(X_{i}=1\), i.e., \(M\coloneqq\sum_{i=1}^{N}X_{i}\). Let \(\hat{Y}_{1},\ldots,\hat{Y}_{n}\)\((n\leq N)\) be randomly sampled from \(X_{1},\ldots,X_{N}\) without replacement. Let \(\hat{m}\coloneqq\sum_{i=1}^{n}\hat{Y}_{i}\) be the number of ones in \(\hat{Y}_{1},\ldots,\hat{Y}_{n}\). Then, for any \(\delta\in[0,M/N]\), the following inequality holds:_
\[\Pr\left(\frac{\hat{m}}{n}\leq\frac{M}{N}-\delta\right)\leq 2^{-nD\left(\frac{M }{N}-\delta\right\|\frac{M}{N}\right)}, \tag{98}\]
_where \(D(\cdot\|\cdot)\) is defined in Eq. (86)._
Then, the following corollary is essential for the bit error sampling.
**Corollary 1** (Estimation by the simple random sampling without replacement).: _Let \(X_{1},\ldots,X_{N}\) be a binary sequence with \(M\coloneqq\sum_{i=1}^{N}X_{i}\). Let \(\hat{Y}_{1},\ldots,\hat{Y}_{n}\)\((n\leq N)\) be randomly sampled from \(X_{1},\ldots,X_{N}\) without replacement, and define \(\hat{m}\coloneqq\sum_{i=1}^{n}\hat{Y}_{i}\). Then, for any \(\epsilon\in(0,1)\), the following inequality holds:_
\[\Pr\left(\tilde{M}_{N,n,\epsilon}(\hat{m})<M\right)\leq\epsilon, \tag{99}\]
_where the function \(\tilde{M}_{N,n,\epsilon}(m)\) is defined to satisfy_
\[\frac{m}{n}\leq\frac{\tilde{M}_{N,n,\epsilon}(m)}{N}\leq 1 \tag{100}\]
_and for \(0\leq m<n\),_
\[D\left(m/n\big{\|}\tilde{M}_{N,n,\epsilon}(m)/N\right)=-\frac{1}{n}\log_{2}\epsilon. \tag{101}\]
Proof.: Let \(f(M)\) be a function of \(M\) satisfying \(0\leq f(M)/n\leq M/N\). Then, from Lemma 1, we have
\[\Pr\left(\frac{\hat{m}}{n}\leq\frac{M}{N}-\left(\frac{M}{N}-\frac{f(M)}{n} \right)\right)\leq 2^{-nD\left(\frac{f(M)}{n}\right)\left\|\frac{M}{N}\right)}. \tag{102}\]
We set the function \(f(M)\) to be the restriction to integers of the function \(f_{N,n,\epsilon}(\tilde{M})\) of a real number \(\tilde{M}\) that satisfies

\[D\left(f_{N,n,\epsilon}(\tilde{M})/n\|\tilde{M}/N\right)=-\frac{1}{n}\log_{2}\epsilon, \tag{103}\]

for \(\tilde{M}\in[(1-\sqrt[n]{\epsilon})N,N)\). The function \(f_{N,n,\epsilon}(\tilde{M})\) increases monotonically with increasing \(\tilde{M}\) in \([(1-\sqrt[n]{\epsilon})N,N)\), and its image lies in \([0,n)\). Thus, from Eq. (102), we have
\[\Pr\left(f_{N,n,\epsilon}^{-1}(\hat{m})\leq M\right)\leq\epsilon \tag{104}\]
for any \(\hat{m}\in[0,n)\). We define the function \(\tilde{M}_{N,n,\epsilon}(m)\coloneqq f_{N,n,\epsilon}^{-1}(m)\) for \(m\in[0,n)\). To incorporate the case \(\hat{m}=n\), we use the following weaker condition that trivially follows from Eq. (104):
\[\Pr\left(\tilde{M}_{N,n,\epsilon}(\hat{m})<M\right)\leq\epsilon, \tag{105}\]
and define \(\tilde{M}_{N,n,\epsilon}(n)=N\) so that the above also holds for \(\hat{m}=n\). These show that Eq. (99) holds, while \(\tilde{M}_{N,n,\epsilon}(m)\) satisfies Eqs. (100) and (101) by construction via Eq. (103).
With Corollary 1, we can bound the total number of bit-error events from the sample under the given failure probability \(\varepsilon_{\text{cor}}/2\) by setting \(N=\hat{N}^{\text{suc}}+\hat{N}^{\text{suc}}_{\text{smp}}\), \(n=\hat{N}^{\text{suc}}_{\text{smp}}\), and \(\epsilon=\varepsilon_{\text{cor}}/2\) for \(\tilde{M}_{N,n,\epsilon}\). As a result, we have the following statement: the number \(E\) of bit errors in the \(\hat{N}^{\text{suc}}\)-bit sifted key is bounded from above by
\[\Pr\left(E\leq\tilde{M}_{\hat{N}^{\text{suc}}+\hat{N}^{\text{suc}}_{\text{smp }},\hat{N}^{\text{suc}}_{\text{smp}},\varepsilon_{\text{cor}}/2}(\hat{E}_{ \text{obs}})-\hat{E}_{\text{obs}}\right)\geq 1-\varepsilon_{\text{cor}}/2. \tag{106}\]
Thus, we can define an upper bound \(e_{\text{qber}}\) on the bit error rate as in Eq. (3), which holds with probability no smaller than \(1-\varepsilon_{\text{cor}}/2\).
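Like Eq. (85), the function \(\tilde{M}_{N,n,\epsilon}(m)\) of Corollary 1 is defined implicitly by Eq. (101) and is conveniently evaluated by bisection, since \(D(m/n\|y)\) increases in \(y\) for \(y\geq m/n\). A minimal Python sketch (ours):

```python
import numpy as np

def kl_bin(x, y):
    """Binary Kullback-Leibler divergence D(x||y) in bits, Eq. (86)."""
    def term(a, b):
        return 0.0 if a == 0.0 else a * np.log2(a / b)
    return term(x, y) + term(1.0 - x, 1.0 - y)

def M_tilde(N, n, eps, m):
    """M~_{N,n,eps}(m) of Corollary 1, found by bisection over M~/N."""
    if m >= n:
        return N
    target = -np.log2(eps) / n
    lo, hi = m / n, 1.0 - 1e-15   # search interval for M~/N, cf. Eq. (100)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if kl_bin(m / n, mid) < target:
            lo = mid
        else:
            hi = mid
    return N * hi

# Bit-error bound of Eq. (106):
# E <= M_tilde(N_suc + N_smp_suc, N_smp_suc, eps_cor / 2, E_obs) - E_obs
```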
## Appendix B Proof of the operator inequality
In this section, we prove the inequality (80) used in the security proof in the main text. We first prove the following lemma.
**Lemma 2**.: _Let \(\Pi_{\pm}\) be orthogonal projections whose ranks are no smaller than three (possibly infinite). Let \(M\) be a self-adjoint operator satisfying \(M=(\Pi_{+}+\Pi_{-})M(\Pi_{+}+\Pi_{-})\leq\alpha(\Pi_{+}+\Pi_{-})\), where \(\alpha\) is a real constant. Let \(\ket{\psi}\) be a vector satisfying \((\Pi_{+}+\Pi_{-})\ket{\psi}=\ket{\psi}\) and \(\Pi_{\pm}\ket{\psi}\neq 0\). Assume that \(\Pi_{\pm}\ket{\psi}\) are not proportional to eigenvectors of \(\Pi_{\pm}M\Pi_{\pm}\) (if any exist). Define the following quantities with respect to \(\ket{\psi}\):_
\[C_{\pm} \coloneqq\bra{\psi}\Pi_{\pm}\ket{\psi}\,(>0), \tag{107}\] \[\lambda_{\pm\pm} \coloneqq C_{\pm}^{-1}\bra{\psi}M_{\pm\pm}\ket{\psi},\] (108) \[\lambda_{+-} \coloneqq(C_{+}C_{-})^{-\frac{1}{2}}\bra{\psi}M_{+-}\ket{\psi},\qquad\lambda_{-+}\coloneqq\lambda_{+-}^{*},\] (109) \[\sigma_{\pm+} \coloneqq\left(C_{+}^{-1}\|M_{\pm+}\ket{\psi}\|^{2}-|\lambda_{\pm+}|^{2}\right)^{\frac{1}{2}},\] (110) \[\sigma_{\pm-} \coloneqq\sigma_{\pm+}^{-1}\left(\left(C_{+}C_{-}\right)^{-\frac{1}{2}}\bra{\psi}M_{+\pm}M_{\pm-}\ket{\psi}-\lambda_{+-}\lambda_{\pm\pm}\right),\] (111) \[\Delta_{\pm-} \coloneqq\left(C_{-}^{-1}\|M_{\pm-}\ket{\psi}\|^{2}-|\lambda_{\pm-}|^{2}-|\sigma_{\pm-}|^{2}\right)^{\frac{1}{2}}, \tag{112}\]
_where \(M_{++},M_{--},M_{+-},\) and \(M_{-+}\) are given respectively by_
\[M_{\pm\pm} \coloneqq\Pi_{\pm}M\Pi_{\pm},\quad M_{+-}\coloneqq\Pi_{+}M\Pi_{-}, \quad M_{-+}\coloneqq M_{+-}^{\dagger}. \tag{113}\]
_Then, for any real numbers \(\gamma_{\pm}\), we have_
\[\sigma_{\sup}(M+|\psi\rangle\langle\psi|-\gamma_{+}\Pi_{+}-\gamma_{-}\Pi_{-})\leq\sigma_{\sup}(M_{\mathrm{6d}}), \tag{114}\]
_where \(\sigma_{\sup}(X)\) denotes the supremum of the spectrum of the operator \(X\), and \(M_{\mathrm{6d}}\) is given by_
\[M_{\mathrm{6d}}\coloneqq\begin{pmatrix}\alpha-\gamma_{+}&0&0&\Delta_{+-}&0&0 \\ 0&\alpha-\gamma_{+}&\sigma_{++}&\sigma_{+-}&0&0\\ 0&\sigma_{++}&C_{+}+\lambda_{++}-\gamma_{+}&\sqrt{C_{+}C_{-}}+\lambda_{+-}& \sigma_{-+}&0\\ \Delta_{+-}&\sigma_{+-}^{*}&\sqrt{C_{+}C_{-}}+\lambda_{-+}&C_{-}+\lambda_{--}- \gamma_{-}&\sigma_{--}^{*}&\Delta_{--}\\ 0&0&\sigma_{-+}&\sigma_{--}&\alpha-\gamma_{-}&0\\ 0&0&0&\Delta_{--}&0&\alpha-\gamma_{-}\end{pmatrix}. \tag{115}\]
Proof.: We choose orthonormal vectors \(\{\ket{e_{\pm}^{(1)}},\ket{e_{\pm}^{(2)}},\ket{e_{\pm}^{(3)}}\}\) in the domains of \(\Pi_{\pm}\), respectively, to satisfy
\[\sqrt{C_{\pm}}\ket{e_{\pm}^{(1)}} =\Pi_{\pm}\ket{\psi}, \tag{116}\] \[M\ket{e_{+}^{(1)}} =(M_{++}+M_{-+})\ket{e_{+}^{(1)}} =\lambda_{++}\ket{e_{+}^{(1)}}+\sigma_{++}\ket{e_{+}^{(2)}}+ \lambda_{-+}\ket{e_{-}^{(1)}}+\sigma_{-+}\ket{e_{-}^{(2)}},\] (117) \[M\ket{e_{-}^{(1)}} =(M_{+-}+M_{--})\ket{e_{-}^{(1)}} =\lambda_{+-}\ket{e_{+}^{(1)}}+\sigma_{+-}\ket{e_{+}^{(2)}}+ \Delta_{+-}\ket{e_{+}^{(3)}}\] (118) \[\qquad+\lambda_{--}\ket{e_{-}^{(1)}}+\sigma_{--}\ket{e_{-}^{(2)}}+ \Delta_{--}\ket{e_{-}^{(3)}}, \tag{119}\]
which is well-defined due to Eqs. (107)-(113) and \(M=(\Pi_{+}+\Pi_{-})M(\Pi_{+}+\Pi_{-})\). Actually, Eqs. (110)-(112) are derived by taking inner product of appropriate pairs among \(M_{\pm\pm}\ket{\psi}\) and \(M_{\pm\mp}\ket{\psi}\). Overall phases of \(\ket{e_{\pm}^{(2)}}\) and \(\ket{e_{\pm}^{(3)}}\) are taken so that \(\sigma_{\pm+}\) and \(\Delta_{\pm-}\) are positive. From \((\Pi_{+}+\Pi_{-})\ket{\psi}=\ket{\psi}\), we have
\[\ket{\psi}=\sqrt{C_{+}}\ket{e_{+}^{(1)}}+\sqrt{C_{-}}\ket{e_{-}^{(1)}}. \tag{120}\]
Let us now define the following projection operators:
\[\Pi_{\pm}^{(j)} \coloneqq\ket{e_{\pm}^{(j)}}\!\Big{\langle}e_{\pm}^{(j)}\Big{|} \quad(j=1,2,3), \tag{121}\] \[\Pi_{\pm}^{(\geq 2)} \coloneqq\Pi_{\pm}-\Pi_{\pm}^{(1)},\] (122) \[\Pi_{\pm}^{(\geq 4)} \coloneqq\Pi_{\pm}^{(\geq 2)}-\Pi_{\pm}^{(2)}-\Pi_{\pm}^{(3)}. \tag{123}\]
Since Eqs. (117) and (119) imply \((\Pi_{+}^{(\geq 4)}+\Pi_{-}^{(\geq 4)})M(\Pi_{+}^{(1)}+\Pi_{-}^{(1)})=0\), we have
\[M =(\Pi_{+}+\Pi_{-})M(\Pi_{+}+\Pi_{-}) \tag{124}\] \[=(\Pi_{+}^{(1)}+\Pi_{-}^{(1)})M(\Pi_{+}^{(1)}+\Pi_{-}^{(1)})+(\Pi_{+}^{(2)}+\Pi_{+}^{(3)}+\Pi_{-}^{(2)}+\Pi_{-}^{(3)})M(\Pi_{+}^{(1)}+\Pi_{-}^{(1)})\] (125) \[\qquad+(\Pi_{+}^{(1)}+\Pi_{-}^{(1)})M(\Pi_{+}^{(2)}+\Pi_{+}^{(3)}+\Pi_{-}^{(2)}+\Pi_{-}^{(3)})+(\Pi_{+}^{(\geq 2)}+\Pi_{-}^{(\geq 2)})M(\Pi_{+}^{(\geq 2)}+\Pi_{-}^{(\geq 2)})\] \[\leq\lambda_{++}\Pi_{+}^{(1)}+\lambda_{--}\Pi_{-}^{(1)}+\lambda_{+-}\left|e_{+}^{(1)}\right\rangle\!\left\langle e_{-}^{(1)}\right|+\lambda_{-+}\left|e_{-}^{(1)}\right\rangle\!\left\langle e_{+}^{(1)}\right|\] (126) \[\qquad+\Big{(}\sigma_{++}\left|e_{+}^{(2)}\right\rangle\!\left\langle e_{+}^{(1)}\right|+\sigma_{-+}\left|e_{-}^{(2)}\right\rangle\!\left\langle e_{+}^{(1)}\right|+\sigma_{+-}\left|e_{+}^{(2)}\right\rangle\!\left\langle e_{-}^{(1)}\right|+\Delta_{+-}\left|e_{+}^{(3)}\right\rangle\!\left\langle e_{-}^{(1)}\right|+\sigma_{--}\left|e_{-}^{(2)}\right\rangle\!\left\langle e_{-}^{(1)}\right|+\Delta_{--}\left|e_{-}^{(3)}\right\rangle\!\left\langle e_{-}^{(1)}\right|+\text{h.c.}\Big{)}\] \[\qquad+\alpha\left(\Pi_{+}^{(\geq 2)}+\Pi_{-}^{(\geq 2)}\right),\]
where h.c. denotes the Hermitian conjugate of the terms in the preceding parenthesis. The last inequality comes from \(M\leq\alpha(\Pi_{+}+\Pi_{-})\). Using Eq. (126), we have
\[M+|\psi\rangle\langle\psi|-\gamma_{+}\Pi_{+}-\gamma_{-}\Pi_{-}\leq M_{\mathrm{6d}}\oplus(\alpha-\gamma_{+})\Pi_{+}^{(\geq 4)}\oplus(\alpha-\gamma_{-})\Pi_{-}^{(\geq 4)}, \tag{127}\]
where \(M_{\mathrm{6d}}\) is given in Eq. (115) with the basis \(\{\left|e_{+}^{(3)}\right\rangle,\left|e_{+}^{(2)}\right\rangle,\left|e_{+}^{(1)}\right\rangle,\left|e_{-}^{(1)}\right\rangle,\left|e_{-}^{(2)}\right\rangle,\left|e_{-}^{(3)}\right\rangle\}\). Since \(\alpha-\gamma_{\pm}=\left\langle e_{\pm}^{(3)}\right|M_{\mathrm{6d}}\left|e_{\pm}^{(3)}\right\rangle\leq\sigma_{\sup}(M_{\mathrm{6d}})\), the supremum of the spectrum of the right-hand side of Eq. (127) is equal to the maximum eigenvalue of the six-dimensional matrix \(M_{\mathrm{6d}}\). We then obtain Eq. (114).
As corollaries of this lemma, we obtain the following results. First, we consider Homodyne protocol.
**Corollary 2**.: _Let \(|\beta\rangle\) be a coherent state and \(\theta_{\mu,\beta}^{\rm hom}(x)\) be defined to satisfy_
\[|\theta_{\mu,\beta}^{\rm hom}(x)|\leq\frac{\pi}{2},\qquad\tan\theta_{\mu, \beta}^{\rm hom}(x)=e^{-2(\mu-\beta^{2})}\sinh(4\beta x). \tag{128}\]
_Let \(\Pi_{\rm ev(od)}\) and \(M^{\rm hom}[\kappa,\gamma]\) be as defined in the main text, and let \(M_{\rm oo}^{\rm hom}\), \(M_{\rm ee}^{\rm hom}\), \(M_{(\pm,{\rm o})(\mp,e)}^{\rm hom}\), and \(M_{(\mp,e)(\pm,{\rm o})}^{\rm hom}\) be defined as follows:_
\[M_{\rm oo}^{\rm hom} \coloneqq\int_{-\infty}^{\infty}f_{\rm suc}(x)[1+\cos\theta_{\mu, \beta}^{\rm hom}(x)]dx\;\Pi_{\rm od}\left|x\right\rangle\!\left\langle x \right|\Pi_{\rm od}, \tag{129}\] \[M_{\rm ee}^{\rm hom} \coloneqq\int_{-\infty}^{\infty}f_{\rm suc}(x)[1-\cos\theta_{\mu, \beta}^{\rm hom}(x)]dx\;\Pi_{\rm ev}\left|x\right\rangle\!\left\langle x \right|\Pi_{\rm ev},\] (130) \[M_{(+,{\rm o})(-,e)}^{\rm hom} \coloneqq\int_{-\infty}^{\infty}f_{\rm suc}(x)\sin\theta_{\mu, \beta}^{\rm hom}(x)\,dx\;\Pi_{\rm od}\left|x\right\rangle\!\left\langle x \right|\Pi_{\rm ev},\] (131) \[M_{(-,e)(+,{\rm o})}^{\rm hom} \coloneqq\left(M_{(+,{\rm o})(-,e)}^{\rm hom}\right)^{\dagger},\] (132) \[M_{(-,{\rm o})(+,e)}^{\rm hom} \coloneqq\int_{-\infty}^{\infty}-f_{\rm suc}(x)\,\sin\theta_{\mu, \beta}^{\rm hom}(x)\,dx\;\Pi_{\rm od}\left|x\right\rangle\!\left\langle x \right|\Pi_{\rm ev}=-M_{(+,{\rm o})(-,e)}^{\rm hom},\] (133) \[M_{(+,{\rm e})(-,{\rm o})}^{\rm hom} \coloneqq\left(M_{(-,{\rm o})(+,{\rm e})}^{\rm hom}\right)^{ \dagger}=-\left(M_{(+,{\rm o})(-,e)}^{\rm hom}\right)^{\dagger}. \tag{134}\]
_Define the following (real) parameters:_
\[C_{\rm o}\coloneqq\left\langle\beta\right|\Pi_{\rm od}\left|\beta\right\rangle=e^{-\left|\beta\right|^{2}}\sinh|\beta|^{2},\quad C_{\rm e}\coloneqq\left\langle\beta\right|\Pi_{\rm ev}\left|\beta\right\rangle=e^{-|\beta|^{2}}\cosh|\beta|^{2}, \tag{135}\] \[\lambda_{\rm oo}^{\rm hom}\coloneqq C_{\rm o}^{-1}\left\langle\beta\right|M_{\rm oo}^{\rm hom}\left|\beta\right\rangle,\quad\lambda_{\rm ee}^{\rm hom}\coloneqq C_{\rm e}^{-1}\left\langle\beta\right|M_{\rm ee}^{\rm hom}\left|\beta\right\rangle,\] (136) \[\lambda_{(+,o)(-,e)}^{\rm hom}\coloneqq\left(C_{\rm o}C_{\rm e}\right)^{-\frac{1}{2}}\left\langle\beta\right|M_{(+,o)(-,e)}^{\rm hom}\left|\beta\right\rangle=\left(\lambda_{(+,o)(-,e)}^{\rm hom}\right)^{*},\] (137) \[\sigma_{\rm oo}^{\rm hom}\coloneqq\left(C_{\rm o}^{-1}\|M_{\rm oo}^{\rm hom}\left|\beta\right\rangle\|^{2}-\left(\lambda_{\rm oo}^{\rm hom}\right)^{2}\right)^{\frac{1}{2}},\] (138) \[\sigma_{(-,e)(+,o)}^{\rm hom}\coloneqq\left(C_{\rm o}^{-1}\|M_{(-,e)(+,o)}^{\rm hom}\left|\beta\right\rangle\|^{2}-\big{|}\lambda_{(+,o)(-,e)}^{\rm hom}\big{|}^{2}\right)^{\frac{1}{2}},\] (139) \[\sigma_{(+,o)(-,e)}^{\rm hom}\coloneqq\left(\sigma_{\rm oo}^{\rm hom}\right)^{-1}\left(\left(C_{\rm o}C_{\rm e}\right)^{-\frac{1}{2}}\left\langle\beta\right|M_{\rm oo}^{\rm hom}M_{(+,o)(-,e)}^{\rm hom}\left|\beta\right\rangle-\lambda_{\rm oo}^{\rm hom}\lambda_{(+,o)(-,e)}^{\rm hom}\right)=(\sigma_{(+,o)(-,e)}^{\rm hom})^{*},\] (140) \[\sigma_{(-,e)(-,e)}^{\rm hom}\coloneqq(\sigma_{(-,e)(+,o)}^{\rm hom})^{-1}\left(\left(C_{\rm o}C_{\rm e}\right)^{-\frac{1}{2}}\left\langle\beta\right|M_{(+,o)(-,e)}^{\rm hom}M_{\rm ee}^{\rm hom}\left|\beta\right\rangle-\lambda_{(+,o)(-,e)}^{\rm hom}\lambda_{\rm ee}^{\rm hom}\right)=(\sigma_{(-,e)(-,e)}^{\rm hom})^{*},\] (141) \[\Delta_{(+,o)(-,e)}^{\rm hom}\coloneqq\left(C_{\rm e}^{-1}\|M_{(+,o)(-,e)}^{\rm hom}\left|\beta\right\rangle\|^{2}-\big{|}\lambda_{(+,o)(-,e)}^{\rm hom}\big{|}^{2}-\big{|}\sigma_{(+,o)(-,e)}^{\rm hom}\big{|}^{2}\right)^{\frac{1}{2}},\] (142) \[\Delta_{\rm ee}^{\rm hom}\coloneqq\left(C_{\rm e}^{-1}\|M_{\rm ee}^{\rm hom}\left|\beta\right\rangle\|^{2}-\left(\lambda_{\rm ee}^{\rm hom}\right)^{2}-\big{|}\sigma_{(-,e)(-,e)}^{\rm hom}\big{|}^{2}\right)^{\frac{1}{2}}. \tag{143}\]
_Define the following two matrices \(M_{\rm 6d}^{(0)}\) and \(M_{\rm 6d}^{(1)}\)._
\[M_{\rm 6d}^{(0)}\coloneqq\begin{pmatrix}1&0&0&\Delta_{(+,o)(-,e)}^{\rm hom}&0&0\\ 0&1&\sigma_{\rm oo}^{\rm hom}&\sigma_{(+,o)(-,e)}^{\rm hom}&0&0\\ 0&\sigma_{\rm oo}^{\rm hom}&\kappa C_{\rm o}+\lambda_{\rm oo}^{\rm hom}&\kappa\sqrt{C_{\rm o}C_{\rm e}}+\lambda_{(+,o)(-,e)}^{\rm hom}&\sigma_{(-,e)(+,o)}^{\rm hom}&0\\ \Delta_{(+,o)(-,e)}^{\rm hom}&\sigma_{(+,o)(-,e)}^{\rm hom}&\kappa\sqrt{C_{\rm o}C_{\rm e}}+\lambda_{(+,o)(-,e)}^{\rm hom}&\kappa C_{\rm e}+\lambda_{\rm ee}^{\rm hom}-\gamma&\sigma_{(-,e)(-,e)}^{\rm hom}&\Delta_{\rm ee}^{\rm hom}\\ 0&0&\sigma_{(-,e)(+,o)}^{\rm hom}&\sigma_{(-,e)(-,e)}^{\rm hom}&1-\gamma&0\\ 0&0&0&\Delta_{\rm ee}^{\rm hom}&0&1-\gamma\end{pmatrix}, \tag{144}\]
\[M_{\rm 6d}^{(1)}\coloneqq\begin{pmatrix}1-\gamma&0&0&\Delta_{(+,o)(-,e)}^{\rm hom}&0&0\\ 0&1-\gamma&\sigma_{\rm oo}^{\rm hom}&-\sigma_{(+,o)(-,e)}^{\rm hom}&0&0\\ 0&\sigma_{\rm oo}^{\rm hom}&\kappa C_{\rm o}+\lambda_{\rm oo}^{\rm hom}-\gamma&\kappa\sqrt{C_{\rm o}C_{\rm e}}-\lambda_{(+,o)(-,e)}^{\rm hom}&\sigma_{(-,e)(+,o)}^{\rm hom}&0\\ \Delta_{(+,o)(-,e)}^{\rm hom}&-\sigma_{(+,o)(-,e)}^{\rm hom}&\kappa\sqrt{C_{\rm o}C_{\rm e}}-\lambda_{(+,o)(-,e)}^{\rm hom}&\kappa C_{\rm e}+\lambda_{\rm ee}^{\rm hom}&-\sigma_{(-,e)(-,e)}^{\rm hom}&\Delta_{\rm ee}^{\rm hom}\\ 0&0&\sigma_{(-,e)(+,o)}^{\rm hom}&-\sigma_{(-,e)(-,e)}^{\rm hom}&1&0\\ 0&0&0&\Delta_{\rm ee}^{\rm hom}&0&1\end{pmatrix}. \tag{145}\]
_Define a convex function_
\[B^{\rm hom}(\kappa,\gamma)\coloneqq\max\{\sigma_{\rm sup}(M_{\rm 6d}^{(0)}), \sigma_{\rm sup}(M_{\rm 6d}^{(1)})\}. \tag{146}\]
_Then, for \(\kappa,\gamma\geq 0\), we have_
\[M^{\rm hom}[\kappa,\gamma]\leq B^{\rm hom}(\kappa,\gamma)I_{AC}. \tag{147}\]
Proof.: We first derive the explicit form of \(\left|u_{\pm}^{\rm hom}(x)\right\rangle_{A}\) introduced in Eq. (41). Notice that
\[1-2q_{\mu,\beta}=e^{-2(\mu-\beta^{2})}, \tag{148}\] \[\sqrt{\frac{g_{\beta,1/4}(x)}{g_{-\beta,1/4}(x)}}=e^{4\beta x}. \tag{149}\]
Let \(\theta(x)\) be defined to satisfy
\[\left|\theta(x)\right|<\frac{\pi}{2},\qquad\tan\theta(x)={\rm Tr}\left(Z_{A}\left\langle 0\right|_{B}\tau_{AB}^{\rm hom}(x)\left|1\right\rangle_{B}\right)\Big{/}{\rm Tr}\left(X_{A}\left\langle 0\right|_{B}\tau_{AB}^{\rm hom}(x)\left|1\right\rangle_{B}\right). \tag{150}\]
Noticing that \(\operatorname{Tr}\big{(}Y_{A}\left\langle 0\right|_{B}\tau^{\mathrm{hom}}_{AB}(x) \left|1\right\rangle_{B}\big{)}=0\), we have
\[\left|u^{\mathrm{hom}}_{\pm}(x)\right\rangle_{A}=\cos\frac{\theta(x)}{2}\left| \pm\right\rangle_{A}\pm\sin\frac{\theta(x)}{2}\left|\mp\right\rangle_{A}. \tag{151}\]
From Eqs. (40), (148), (149), and (150), we can see that \(\theta(x)\) coincides with \(\theta^{\mathrm{hom}}_{\mu,\beta}(x)\) defined in Eq. (128). We now observe that
\[\left|\left\langle+\right|u^{\mathrm{hom}}_{+}(x)\right\rangle \left|{}^{2}=\left|\left\langle-\right|u^{\mathrm{hom}}_{-}(x)\right\rangle \left|{}^{2}=\cos^{2}\!\left(\frac{\theta^{\mathrm{hom}}_{\mu,\beta}(x)}{2} \right)\right.=\frac{1+\cos\theta^{\mathrm{hom}}_{\mu,\beta}(x)}{2}, \tag{152}\] \[\left|\left\langle-\right|u^{\mathrm{hom}}_{+}(x)\right\rangle \left|{}^{2}=\left|\left\langle+\right|u^{\mathrm{hom}}_{-}(x)\right\rangle \left|{}^{2}=\sin^{2}\!\left(\frac{\theta^{\mathrm{hom}}_{\mu,\beta}(x)}{2} \right)\right.=\frac{1-\cos\theta^{\mathrm{hom}}_{\mu,\beta}(x)}{2},\] (153) \[\left\langle+\right|u^{\mathrm{hom}}_{+}(x)\rangle\langle u^{ \mathrm{hom}}_{+}(x)|-\rangle=\langle-|u^{\mathrm{hom}}_{+}(x)\rangle\langle u ^{\mathrm{hom}}_{+}(x)|+\rangle=\sin\!\left(\frac{\theta^{\mathrm{hom}}_{\mu, \beta}(x)}{2}\right)\cos\!\left(\frac{\theta^{\mathrm{hom}}_{\mu,\beta}(x)}{2} \right)\] (154) \[=-\left\langle+\right|u^{\mathrm{hom}}_{-}(x)\rangle\langle u^{ \mathrm{hom}}_{-}(x)|-\rangle=-\left\langle-\right|u^{\mathrm{hom}}_{-}(x) \rangle\langle u^{\mathrm{hom}}_{-}(x)|+\rangle=\frac{\sin\theta^{\mathrm{hom }}_{\mu,\beta}(x)}{2}.\]
From Eq. (64) as well as Eqs. (74)-(79), it is obvious that
\[M^{\mathrm{hom}}[\kappa,\gamma]=\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}M^ {\mathrm{hom}}[\kappa,\gamma]\,\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}+\Pi ^{(-,\mathrm{od}),(+,\mathrm{ev})}_{AC}M^{\mathrm{hom}}[\kappa,\gamma]\,\Pi^{( -,\mathrm{od}),(+,\mathrm{ev})}_{AC}, \tag{155}\]
where the two orthogonal projections \(\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}\) and \(\Pi^{(-,\mathrm{od}),(+,\mathrm{ev})}_{AC}\) are defined in Eqs. (65) and (66). Then we apply Lemma 2 respectively to the operators \(\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}M^{\mathrm{hom}}[\kappa,\gamma]\,\Pi^ {(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}\) and \(\Pi^{(-,\mathrm{od}),(+,\mathrm{ev})}_{AC}M^{\mathrm{hom}}[\kappa,\gamma]\,\Pi^ {(-,\mathrm{od}),(+,\mathrm{ev})}_{AC}\). For \(\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}M^{\mathrm{hom}}[\kappa,\gamma]\,\Pi^ {(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}\), we set, by using Eqs. (152)-(154), that
\[\Pi_{\pm} =\left|\pm\right\rangle\!\left\langle\pm\right|_{A}\otimes\Pi_{ \mathrm{od}(\mathrm{ev})}, \tag{156}\] \[M =\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}M^{\mathrm{hom}}_{ \mathrm{ph}}\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}\] (157) \[=\left|+\right\rangle\!\left\langle+\right|_{A}\otimes M^{ \mathrm{hom}}_{\mathrm{oo}}+\left|-\right\rangle\!\left\langle-\right|_{A} \otimes M^{\mathrm{hom}}_{\mathrm{ee}}+\left(\left|+\right\rangle\!\left\langle -\right|_{A}\otimes M^{\mathrm{hom}}_{(+,\mathrm{o})(-,\mathrm{e})}+\left|- \right\rangle\!\left\langle+\right|_{A}\otimes M^{\mathrm{hom}}_{(-,\mathrm{e}) (+,\mathrm{o})}\right),\] (158) \[\left|\psi\right\rangle =\sqrt{\kappa}\left|\phi_{-}\right\rangle_{AC},\] (159) \[\alpha =1,\quad\gamma_{+}=0,\quad\gamma_{-}=\gamma, \tag{160}\]
where \(\left|\phi_{-}\right\rangle_{AC}\) is defined in Eq. (28). Since the operator \(M\) defined in this way has only a continuous spectrum, we can apply Lemma 2 and obtain
\[\sigma_{\mathrm{sup}}\!\left(\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC}M^{ \mathrm{hom}}[\kappa,\gamma]\,\Pi^{(+,\mathrm{od}),(-,\mathrm{ev})}_{AC} \right)\leq\sigma_{\mathrm{sup}}(M^{(0)}_{\mathrm{6d}}). \tag{161}\]
In the same way, we apply Lemma 2 to \(\Pi^{(-,\mathrm{od}),(+,\mathrm{ev})}_{AC}M^{\mathrm{hom}}[\kappa,\gamma]\,\Pi^ {(-,\mathrm{od}),(+,\mathrm{ev})}_{AC}\). Using Eqs. (152)-(154), we set
\[\Pi_{\pm} =\left|\mp\right\rangle\!\left\langle\mp\right|_{A}\otimes\Pi_{ \mathrm{od}(\mathrm{ev})}, \tag{162}\] \[M =\Pi^{(-,\mathrm{od}),(+,\mathrm{ev})}_{AC}M^{\mathrm{hom}}_{ \mathrm{ph}}\Pi^{(-,\mathrm{od}),(+,\mathrm{ev})}_{AC}\] (163) \[=\left|-\right\rangle\!\left\langle-\right|_{A}\otimes M^{ \mathrm{hom}}_{\mathrm{oo}}+\left|+\right\rangle\!\left\langle+\right|_{A}\otimes M ^{\mathrm{hom}}_{\mathrm{ee}}+\left(\left|-\right\rangle\!\left\langle+\right|_{A} \otimes M^{\mathrm{hom}}_{(-,\mathrm{o})(+,\mathrm{e})}+\left|+\right\rangle \!\left\langle-\right|_{A}\otimes M^{\mathrm{hom}}_{(+,\mathrm{e})(-,\mathrm{o} )}\right),\] (164) \[=\left|-\right\rangle\!\left\langle-\right|_{A}\otimes M^{ \mathrm{hom}}_{\mathrm{oo}}+\left|+\right\rangle\!\left\langle+\right|_{A}\otimes M ^{\mathrm{hom}}_{\mathrm{ee}}-\left(\left|-\right\rangle\!\left\langle+\right|_{A} \otimes M^{\mathrm{hom}}_{(+,\mathrm{o})(-,\mathrm{e})}+\left|+\right\rangle \!\left\langle-\right|_{A}\otimes M^{\mathrm{hom}}_{(-,\mathrm{e})(+,\mathrm{o} )}\right),\] (165) \[\left|\psi\right\rangle =\sqrt{\kappa}\left|\phi_{+}\right\rangle_{AC},\] (166) \[\alpha =1,\quad\gamma_{+}=\gamma,\quad\gamma_{-}=0, \tag{167}\]
where \(\left|\phi_{+}\right\rangle_{AC}\) is defined in Eq. (27). Then, we observe
\[\sigma_{\mathrm{sup}}\!\left(\Pi^{(-,\mathrm{od}),(+,\mathrm{ev})}_{AC}M^{ \mathrm{hom}}[\kappa,\gamma]\,\Pi^{(-,\mathrm{od}),(+,\mathrm{ev})}_{AC} \right)\leq\sigma_{\mathrm{sup}}(M^{(1)}_{\mathrm{6d}}). \tag{168}\]
Combining inequalities (161) and (168) completes the proof.
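Numerically, once the scalar parameters of Eqs. (135)-(143) have been evaluated, the bound \(B^{\rm hom}(\kappa,\gamma)\) of Eq. (146) reduces to the largest eigenvalues of two real symmetric \(6\times 6\) matrices. A minimal Python sketch (ours), assuming the matrices of Eqs. (144) and (145) have already been assembled as NumPy arrays:

```python
import numpy as np

def B_hom(M6d_0, M6d_1):
    """B^hom(kappa, gamma) of Eq. (146): the larger of the maximum
    eigenvalues of the two symmetric 6x6 matrices of Eqs. (144)-(145)."""
    # np.linalg.eigvalsh returns eigenvalues in ascending order
    return max(np.linalg.eigvalsh(M6d_0)[-1], np.linalg.eigvalsh(M6d_1)[-1])
```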
Next, we consider Heterodyne protocol.
**Corollary 3**.: _Let \(|\beta\rangle\) be a coherent state and \(\theta_{\mu,\beta}^{\rm het}(\omega_{r})\) be defined to satisfy_
\[|\theta^{\rm het}_{\mu,\beta}(\omega_{r})|\leq\frac{\pi}{2},\qquad\tan\theta^{ \rm het}_{\mu,\beta}(\omega_{r})=e^{-2(\mu-\beta^{2})}\sinh(2\beta\omega_{r}). \tag{169}\]
_Let \(\Pi_{\rm ev(od)}\) and \(M^{\rm het}[\kappa,\gamma]\) be as defined in the main text, and let \(M^{\rm het}_{\rm oo}\), \(M^{\rm het}_{\rm ee}\), \(M^{\rm het}_{(\pm,o)(\mp,e)}\), and \(M^{\rm het}_{(\mp,e)(\pm,o)}\) be defined as follows:_
\[M^{\rm het}_{\rm oo} \coloneqq\iint_{-\infty}^{\infty}f_{\rm suc}(\omega_{r})d\omega_{r}dx\;\Pi_{\rm od}\Big{[}|x\rangle\langle x|\omega_{r}\rangle\langle\omega_{r}|x\rangle\langle x|\] \[\qquad\qquad+\frac{\cos\theta^{\rm het}_{\mu,\beta}(\omega_{r})}{2}\big{(}|x\rangle\langle x|\omega_{r}\rangle\langle\omega_{r}|x-\beta\rangle\langle x-\beta|+|x-\beta\rangle\langle x-\beta|\omega_{r}\rangle\langle\omega_{r}|x\rangle\langle x|\big{)}\Big{]}\Pi_{\rm od}, \tag{170}\] \[M^{\rm het}_{\rm ee} \coloneqq\iint_{-\infty}^{\infty}f_{\rm suc}(\omega_{r})d\omega_{r}dx\;\Pi_{\rm ev}\Big{[}|x\rangle\langle x|\omega_{r}\rangle\langle\omega_{r}|x\rangle\langle x|\] \[\qquad\qquad-\frac{\cos\theta^{\rm het}_{\mu,\beta}(\omega_{r})}{2}\big{(}|x\rangle\langle x|\omega_{r}\rangle\langle\omega_{r}|x-\beta\rangle\langle x-\beta|+|x-\beta\rangle\langle x-\beta|\omega_{r}\rangle\langle\omega_{r}|x\rangle\langle x|\big{)}\Big{]}\Pi_{\rm ev},\] (171) \[M^{\rm het}_{(+,o)(-,e)} \coloneqq\iint_{-\infty}^{\infty}f_{\rm suc}(\omega_{r})d\omega_{r}dx\;\Pi_{\rm od}\Big{[}\sin\theta^{\rm het}_{\mu,\beta}(\omega_{r})\,|x\rangle\langle x|\omega_{r}\rangle\langle\omega_{r}|x\rangle\langle x|\] \[\qquad\qquad-\frac{\cos\theta^{\rm het}_{\mu,\beta}(\omega_{r})}{2}\big{(}|x\rangle\langle x|\omega_{r}\rangle\langle\omega_{r}|x-\beta\rangle\langle x-\beta|-|x-\beta\rangle\langle x-\beta|\omega_{r}\rangle\langle\omega_{r}|x\rangle\langle x|\big{)}\Big{]}\Pi_{\rm ev},\] (172) \[M^{\rm het}_{(-,e)(+,o)} \coloneqq\left(M^{\rm het}_{(+,o)(-,e)}\right)^{\dagger},\] (173) \[M^{\rm het}_{(-,o)(+,e)} \coloneqq\iint_{-\infty}^{\infty}f_{\rm suc}(\omega_{r})d\omega_{r}dx\;\Pi_{\rm od}\Big{[}-\sin\theta^{\rm het}_{\mu,\beta}(\omega_{r})\,|x\rangle\langle x|\omega_{r}\rangle\langle\omega_{r}|x\rangle\langle x|\] \[\qquad\qquad-\frac{\cos\theta^{\rm het}_{\mu,\beta}(\omega_{r})}{2}\big{(}|x\rangle\langle x|\omega_{r}\rangle\langle\omega_{r}|x-\beta\rangle\langle x-\beta|-|x-\beta\rangle\langle x-\beta|\omega_{r}\rangle\langle\omega_{r}|x\rangle\langle x|\big{)}\Big{]}\Pi_{\rm ev},\] (174) \[M^{\rm het}_{(+,e)(-,o)} \coloneqq\left(M^{\rm het}_{(-,o)(+,e)}\right)^{\dagger}. \tag{175}\]
_Define the following parameters:_
\[C_{\rm o}\coloneqq\left\langle\beta\right|\Pi_{\rm od}\left|\beta\right\rangle=e^{-|\beta|^{2}}\sinh\left|\beta\right|^{2},\quad C_{\rm e}\coloneqq\left\langle\beta\right|\Pi_{\rm ev}\left|\beta\right\rangle=e^{-|\beta|^{2}}\cosh\left|\beta\right|^{2}, \tag{176}\] \[\lambda_{\rm oo}^{\rm het}\coloneqq C_{\rm o}^{-1}\left\langle\beta\right|M_{\rm oo}^{\rm het}\left|\beta\right\rangle,\quad\lambda_{\rm ee}^{\rm het}\coloneqq C_{\rm e}^{-1}\left\langle\beta\right|M_{\rm ee}^{\rm het}\left|\beta\right\rangle,\] (177) \[\lambda_{(+,o)(-,e)}^{\rm het}\coloneqq\left(C_{\rm o}C_{\rm e}\right)^{-\frac{1}{2}}\left\langle\beta\right|M_{(+,o)(-,e)}^{\rm het}\left|\beta\right\rangle=\left(\lambda_{(+,o)(-,e)}^{\rm het}\right)^{*},\] (178) \[\lambda_{(-,o)(+,e)}^{\rm het}\coloneqq\left(C_{\rm o}C_{\rm e}\right)^{-\frac{1}{2}}\left\langle\beta\right|M_{(-,o)(+,e)}^{\rm het}\left|\beta\right\rangle=\left(\lambda_{(-,o)(+,e)}^{\rm het}\right)^{*},\] (179) \[\sigma_{\rm oo}^{\rm het}\coloneqq\left(C_{\rm o}^{-1}\left\|M_{\rm oo}^{\rm het}\left|\beta\right\rangle\right\|^{2}-\left(\lambda_{\rm oo}^{\rm het}\right)^{2}\right)^{\frac{1}{2}},\] (180) \[\sigma_{(-,e)(+,o)}^{\rm het}\coloneqq\left(C_{\rm o}^{-1}\left\|M_{(-,e)(+,o)}^{\rm het}\left|\beta\right\rangle\right\|^{2}-|\lambda_{(+,o)(-,e)}^{\rm het}|^{2}\right)^{\frac{1}{2}},\] (181) \[\sigma_{(+,e)(-,o)}^{\rm het}\coloneqq\left(C_{\rm o}^{-1}\left\|M_{(+,e)(-,o)}^{\rm het}\left|\beta\right\rangle\right\|^{2}-|\lambda_{(-,o)(+,e)}^{\rm het}|^{2}\right)^{\frac{1}{2}},\] (182) \[\sigma_{(+,o)(-,e)}^{\rm het}\coloneqq\left(\sigma_{\rm oo}^{\rm het}\right)^{-1}\left(\left(C_{\rm o}C_{\rm e}\right)^{-\frac{1}{2}}\left\langle\beta\right|M_{\rm oo}^{\rm het}M_{(+,o)(-,e)}^{\rm het}\left|\beta\right\rangle-\lambda_{\rm oo}^{\rm het}\lambda_{(+,o)(-,e)}^{\rm het}\right)=(\sigma_{(+,o)(-,e)}^{\rm het})^{*},\] (183) \[\sigma_{(-,o)(+,e)}^{\rm het}\coloneqq\left(\sigma_{\rm oo}^{\rm het}\right)^{-1}\left(\left(C_{\rm o}C_{\rm e}\right)^{-\frac{1}{2}}\left\langle\beta\right|M_{\rm oo}^{\rm het}M_{(-,o)(+,e)}^{\rm het}\left|\beta\right\rangle-\lambda_{\rm oo}^{\rm het}\lambda_{(-,o)(+,e)}^{\rm het}\right)=(\sigma_{(-,o)(+,e)}^{\rm het})^{*},\] (184) \[\sigma_{(-,e)(-,e)}^{\rm het}\coloneqq\left[\sigma_{(-,e)(+,o)}^{\rm het}\right]^{-1}\left(\left(C_{\rm o}C_{\rm e}\right)^{-\frac{1}{2}}\left\langle\beta\right|M_{(+,o)(-,e)}^{\rm het}M_{\rm ee}^{\rm het}\left|\beta\right\rangle-\lambda_{(+,o)(-,e)}^{\rm het}\lambda_{\rm ee}^{\rm het}\right)=(\sigma_{(-,e)(-,e)}^{\rm het})^{*},\] (185) \[\sigma_{(+,e)(+,e)}^{\rm het}\coloneqq\left[\sigma_{(+,e)(-,o)}^{\rm het}\right]^{-1}\left(\left(C_{\rm o}C_{\rm e}\right)^{-\frac{1}{2}}\left\langle\beta\right|M_{(-,o)(+,e)}^{\rm het}M_{\rm ee}^{\rm het}\left|\beta\right\rangle-\lambda_{(-,o)(+,e)}^{\rm het}\lambda_{\rm ee}^{\rm het}\right)=(\sigma_{(+,e)(+,e)}^{\rm het})^{*},\] (186) \[\Delta_{(+,o)(-,e)}^{\rm het}\coloneqq\left(C_{\rm e}^{-1}\left\|M_{(+,o)(-,e)}^{\rm het}\left|\beta\right\rangle\right\|^{2}-|\lambda_{(+,o)(-,e)}^{\rm het}|^{2}-|\sigma_{(+,o)(-,e)}^{\rm het}|^{2}\right)^{\frac{1}{2}},\] (187) \[\Delta_{(-,o)(+,e)}^{\rm het}\coloneqq\left(C_{\rm e}^{-1}\left\|M_{(-,o)(+,e)}^{\rm het}\left|\beta\right\rangle\right\|^{2}-|\lambda_{(-,o)(+,e)}^{\rm het}|^{2}-|\sigma_{(-,o)(+,e)}^{\rm het}|^{2}\right)^{\frac{1}{2}},\] (188)
\[\Delta_{(-,\rm e)(-,e)}^{\rm het}\coloneqq\left(C_{\rm e}^{-1} \left\|M_{\rm ee}^{\rm het}\left|\beta\right\rangle\right\|^{2}-(\lambda_{\rm ee }^{\rm het})^{2}-|\sigma_{(-,e)(-,e)}^{\rm het}|^{2}\right)^{\frac{1}{2}},\] (189) \[\Delta_{(+,\rm e)(+,e)}^{\rm het}\coloneqq\left(C_{\rm e}^{-1} \left\|M_{\rm ee}^{\rm het}\left|\beta\right\rangle\right\|^{2}-(\lambda_{\rm ee }^{\rm het})^{2}-|\sigma_{(+,e)(+,e)}^{\rm het}|^{2}\right)^{\frac{1}{2}}. \tag{190}\]
_Define the following two matrices \(M_{\rm 6d}^{\prime(0)}\) and \(M_{\rm 6d}^{\prime(1)}\)._
\[M_{\rm 6d}^{\prime(0)}\coloneqq\begin{pmatrix}1&0&0&\Delta_{(+,o)(-,e)}^{\rm het}&0&0\\ 0&1&\sigma_{\rm oo}^{\rm het}&\sigma_{(+,o)(-,e)}^{\rm het}&0&0\\ 0&\sigma_{\rm oo}^{\rm het}&\kappa C_{\rm o}+\lambda_{\rm oo}^{\rm het}&\kappa\sqrt{C_{\rm o}C_{\rm e}}+\lambda_{(+,o)(-,e)}^{\rm het}&\sigma_{(-,e)(+,o)}^{\rm het}&0\\ \Delta_{(+,o)(-,e)}^{\rm het}&\sigma_{(+,o)(-,e)}^{\rm het}&\kappa\sqrt{C_{\rm o}C_{\rm e}}+\lambda_{(+,o)(-,e)}^{\rm het}&\kappa C_{\rm e}+\lambda_{\rm ee}^{\rm het}-\gamma&\sigma_{(-,e)(-,e)}^{\rm het}&\Delta_{(-,e)(-,e)}^{\rm het}\\ 0&0&\sigma_{(-,e)(+,o)}^{\rm het}&\sigma_{(-,e)(-,e)}^{\rm het}&1-\gamma&0\\ 0&0&0&\Delta_{(-,e)(-,e)}^{\rm het}&0&1-\gamma\end{pmatrix}, \tag{191}\]
\[M_{\rm 6d}^{\prime(1)}\coloneqq\begin{pmatrix}1-\gamma&0&0&\Delta_{(-,o)(+,e)}^{\rm het}&0&0\\ 0&1-\gamma&\sigma_{\rm oo}^{\rm het}&\sigma_{(-,o)(+,e)}^{\rm het}&0&0\\ 0&\sigma_{\rm oo}^{\rm het}&\kappa C_{\rm o}+\lambda_{\rm oo}^{\rm het}-\gamma&\kappa\sqrt{C_{\rm o}C_{\rm e}}+\lambda_{(-,o)(+,e)}^{\rm het}&\sigma_{(+,e)(-,o)}^{\rm het}&0\\ \Delta_{(-,o)(+,e)}^{\rm het}&\sigma_{(-,o)(+,e)}^{\rm het}&\kappa\sqrt{C_{\rm o}C_{\rm e}}+\lambda_{(-,o)(+,e)}^{\rm het}&\kappa C_{\rm e}+\lambda_{\rm ee}^{\rm het}&\sigma_{(+,e)(+,e)}^{\rm het}&\Delta_{(+,e)(+,e)}^{\rm het}\\ 0&0&\sigma_{(+,e)(-,o)}^{\rm het}&\sigma_{(+,e)(+,e)}^{\rm het}&1&0\\ 0&0&0&\Delta_{(+,e)(+,e)}^{\rm het}&0&1\end{pmatrix}. \tag{192}\]

_Define a convex function_

\[B^{\rm het}(\kappa,\gamma)\coloneqq\max\{\sigma_{\rm sup}(M_{\rm 6d}^{\prime(0)}),\sigma_{\rm sup}(M_{\rm 6d}^{\prime(1)})\}. \tag{193}\]
_Then, for \(\kappa,\gamma\geq 0\), we have_
\[M^{\rm het}[\kappa,\gamma]\leq B^{\rm het}(\kappa,\gamma)I_{AC}. \tag{194}\]
Proof.: In the same way as Homodyne protocol, we have from Eqs. (55) and (56) that
\[\left|u_{\pm}^{\rm het}(\omega_{r})\right\rangle_{A}=\cos\!\left(\frac{\theta _{\mu,\beta}^{\rm het}(\omega_{r})}{2}\right)\left|\pm\right\rangle_{A}\pm\sin \!\left(\frac{\theta_{\mu,\beta}^{\rm het}(\omega_{r})}{2}\right)\left|\mp \right\rangle_{A}. \tag{195}\]
Combining this with Eqs. (72) and (73), we observe that
\[(\left\langle+\right|_{A}\otimes\Pi_{\rm od})M_{\rm ph}^{\rm het} (\left|+\right\rangle_{A}\otimes\Pi_{\rm od}) =(\left\langle-\right|_{A}\otimes\Pi_{\rm od})M_{\rm ph}^{\rm het }(\left|-\right\rangle_{A}\otimes\Pi_{\rm od})=M_{\rm oo}^{\rm het}, \tag{196}\] \[(\left\langle-\right|_{A}\otimes\Pi_{\rm ev})M_{\rm ph}^{\rm het }(\left|-\right\rangle_{A}\otimes\Pi_{\rm ev}) =(\left\langle+\right|_{A}\otimes\Pi_{\rm ev})M_{\rm ph}^{\rm het }(\left|+\right\rangle_{A}\otimes\Pi_{\rm ev})=M_{\rm ee}^{\rm het}\] (197) \[(\left\langle+\right|_{A}\otimes\Pi_{\rm od})M_{\rm ph}^{\rm het }(\left|-\right\rangle_{A}\otimes\Pi_{\rm ev}) =\left[(\left\langle-\right|_{A}\otimes\Pi_{\rm ev})M_{\rm ph}^{\rm het }(\left|+\right\rangle_{A}\otimes\Pi_{\rm od})\right]^{\dagger}=M_{(+,o)(-,e)}^ {\rm het}\] (198) \[(\left\langle-\right|_{A}\otimes\Pi_{\rm od})M_{\rm ph}^{\rm het }(\left|+\right\rangle_{A}\otimes\Pi_{\rm ev}) =\left[(\left\langle+\right|_{A}\otimes\Pi_{\rm ev})M_{\rm ph}^{\rm het}( \left|-\right\rangle_{A}\otimes\Pi_{\rm od})\right]^{\dagger}=M_{(-,o)(+,e)}^ {\rm het} \tag{199}\]
As can be seen from Eq. (72) as well as Eqs. (74)-(79), we have
\[M^{\rm het}[\kappa,\gamma]=\Pi_{AC}^{(+,\rm od),(-,ev)}M^{\rm het}[\kappa, \gamma]\,\Pi_{AC}^{(+,\rm od),(-,ev)}+\Pi_{AC}^{(-,\rm od),(+,ev)}M^{\rm het} [\kappa,\gamma]\,\Pi_{AC}^{(-,\rm od),(+,ev)}, \tag{200}\]
where \(\Pi_{AC}^{(+,\rm od),(-,ev)}\) and \(\Pi_{AC}^{(-,\rm od),(+,ev)}\) are defined in Eqs. (65) and (66). Then we apply Lemma 2 to the operators \(\Pi_{AC}^{(+,\rm od),(-,ev)}M^{\rm het}[\kappa,\gamma]\,\Pi_{AC}^{(+,\rm od),( -,ev)}\) and \(\Pi_{AC}^{(-,\rm od),(+,ev)}M^{\rm het}[\kappa,\gamma]\,\Pi_{AC}^{(-,\rm od),( +,ev)}\), respectively. For \(\Pi_{AC}^{(+,\rm od),(-,ev)}M^{\rm het}[\kappa,\gamma]\,\Pi_{AC}^{(+,\rm od),( -,ev)}\), using Eqs. (196), (197), and (198), we set
\[\Pi_{\pm} =\left|\pm\right\rangle\!\left\langle\pm\right|_{A}\otimes\Pi_{ \rm od(ev)}, \tag{201}\] \[M =\Pi_{AC}^{(+,\rm od),(-,ev)}M_{\rm ph}^{\rm het}\Pi_{AC}^{(+,\rm od ),(-,ev)}\] (202) \[=\left|+\right\rangle\!\left\langle+\right|_{A}\otimes M_{\rm oo} ^{\rm het}+\left|-\right\rangle\!\left\langle-\right|_{A}\otimes M_{\rm ee}^{ \rm het}+\left|+\right\rangle\!\left\langle-\right|_{A}\otimes M_{(+,\rm o)(-,e)}^{\rm het}+\left|-\right\rangle\!\left\langle+\right|_{A}\otimes M_{(-,e)( +,o)}^{\rm het},\] (203) \[\left|\psi\right\rangle =\sqrt{\kappa}\left|\phi_{-}\right\rangle_{AC},\] (204) \[\alpha =1,\quad\gamma_{+}=0,\quad\gamma_{-}=\gamma, \tag{205}\]
where \(\left|\phi_{-}\right\rangle_{AC}\) is defined in Eq. (28). Since the operator \(M\) defined in this way has only a continuous spectrum, we can apply Lemma 2 and obtain
\[\sigma_{\rm sup}\!\left(\Pi_{AC}^{(+,\rm od),(-,ev)}M^{\rm het}[\kappa,\gamma] \,\Pi_{AC}^{(+,\rm od),(-,ev)}\right)\leq\sigma_{\rm sup}(M_{\rm 6d}^{\prime(0)}). \tag{206}\]
We also apply Lemma 2 to \(\Pi_{AC}^{(-,\rm od),(+,ev)}M^{\rm het}[\kappa,\gamma]\,\Pi_{AC}^{(-,\rm od),(+, ev)}\). Using Eqs. (196), (197), and (199), we set
\[\Pi_{\pm} =\left|\mp\right\rangle\!\left\langle\mp\right|_{A}\otimes\Pi_{\rm od (ev)}, \tag{207}\] \[M =\Pi_{AC}^{(-,\rm od),(+,ev)}M_{\rm ph}^{\rm het}\Pi_{AC}^{(-,\rm od ),(+,ev)}\] (208) \[=\left|-\right\rangle\!\left\langle-\right|_{A}\otimes M_{\rm oo} ^{\rm het}+\left|+\right\rangle\!\left\langle+\right|_{A}\otimes M_{\rm ee}^{\rm het }+\left|-\right\rangle\!\left\langle+\right|_{A}\otimes M_{(-,o)(+,e)}^{\rm het }+\left|+\right\rangle\!\left\langle-\right|_{A}\otimes M_{(+,e)(-,o)}^{\rm het },\] (209) \[\left|\psi\right\rangle =\sqrt{\kappa}\left|\phi_{+}\right\rangle_{AC},\] (210) \[\alpha =1,\quad\gamma_{+}=\gamma,\quad\gamma_{-}=0. \tag{211}\]
where \(\left|\phi_{+}\right\rangle_{AC}\) is defined in Eq. (27). Then, we observe
\[\sigma_{\rm sup}\!\left(\Pi_{AC}^{(-,\rm od),(+,ev)}M^{\rm het}[\kappa,\gamma] \,\Pi_{AC}^{(-,\rm od),(+,ev)}\right)\leq\sigma_{\rm sup}(M_{\rm 6d}^{\prime(1)}). \tag{212}\]
Combining inequalities (206) and (212) completes the proof. |
2305.11082 | Topological and conventional nano-photonic waveguides for chiral
integrated quantum optics | Chirality in integrated quantum photonics has emerged as a promising route
towards achieving scalable quantum technologies with quantum nonlinearity
effects. Topological photonic waveguides, which utilize helical optical modes,
have been proposed as a novel approach to harnessing chiral light-matter
interactions on-chip. However, uncertainties remain regarding the nature and
strength of the chiral coupling to embedded quantum emitters, hindering the
scalability of these systems. In this work, we present a comprehensive
investigation of chiral coupling in topological photonic waveguides using a
combination of experimental, theoretical, and numerical analyses. We
quantitatively characterize the position-dependence nature of the light-matter
coupling on several topological photonic waveguides and benchmark their chiral
coupling performance against conventional line defect waveguides for chiral
quantum optical applications. Our results provide crucial insights into the
degree and characteristics of chiral light-matter interactions in topological
photonic quantum circuits and pave the way towards the implementation of
quantitatively-predicted quantum nonlinear effects on-chip. | N. J Martin, M. Jalali Mehrabad, X. Chen, R. Dost, E. Nussbaum, D. Hallett, L. Hallacy, A. Foster, E. Clarke, P. K. Patil, S. Hughes, M. Hafezi, A. M Fox, M. S. Skolnick, L. R. Wilson | 2023-05-18T16:09:56Z | http://arxiv.org/abs/2305.11082v3 | # Topological and conventional nano-photonic waveguides for chiral integrated quantum optics
###### Abstract
Chirality in integrated quantum photonics has emerged as a promising route towards achieving scalable quantum technologies with quantum nonlinearity effects. Topological photonic waveguides, which utilize helical optical modes, have been proposed as a novel approach to harnessing chiral light-matter interactions on-chip. However, uncertainties remain regarding the nature and strength of the chiral coupling to embedded quantum emitters, hindering the scalability of these systems. In this work, we present a comprehensive investigation of chiral coupling in topological photonic waveguides using a combination of experimental, theoretical, and numerical analyses. We quantitatively characterize the position-dependence nature of the light-matter coupling on several topological photonic waveguides and benchmark their chiral coupling performance against conventional line defect waveguides for chiral quantum optical applications. Our results provide crucial insights into the degree and characteristics of chiral light-matter interactions in topological photonic quantum circuits and pave the way towards the implementation of quantitatively-predicted quantum nonlinear effects on-chip.
## Introduction
The integrated nano-photonic platform, in which embedded quantum emitters are interfaced with optical waveguides and cavities on-chip, is a promising route to scalable quantum technologies. An attractive property of nanophotonic waveguides is their support for chiral light-matter interactions, whereby an emitter with a circularly polarised transition dipole moment couples unidirectionally at the single photon level to a single photonic waveguide mode [1, 2, 3]. Such interactions have previously been demonstrated on-chip using semiconductor quantum dots (QDs) coupled to photonic crystal (PhC) line defects such as W1 [4] and glide plane [5, 6] waveguides.
Recently, PhC topological waveguides have received significant interest for integrated nanophotonics due to their attractive properties, which include robust transmission around tight bends [7, 8, 9, 10, 11, 12, 13]. More pertinently for chiral quantum optics, the edge modes which arise at the interface between two topologically-distinct PhCs are intrinsically helical, and therefore appealing for chiral coupling of embedded emitters [7, 14]. Paired with the robust transmission properties, these waveguides allow for Purcell-enhanced chiral coupling in resonator geometries with sharp bends [15, 16, 17, 18].
However, as in conventional waveguides, chiral coupling of an embedded emitter to a topological photonic waveguide is position dependent. In the most extreme case, the direction of emission from a circularly polarised transition can be completely reversed by moving it within a unit cell of the photonic crystal lattice. While the properties of conventional and topological waveguides have been compared in simulations [19], here we use both simulations and experiment to compare these approaches (specifically, W1, glide plane, and valley-Hall waveguides) to realising chiral light-matter interactions on-chip. We calculate the fraction of the cross-sectional area of each waveguide which supports high chiral contrast, whilst accounting for the influence of surface proximity on the QD emission. We then evaluate experimentally the chiral contrast for a large number of QDs in each type of structure, showing good agreement with the simulations. While the topological waveguides demonstrate promising chiral properties, our results also serve to highlight the limitations of current approaches. In particular, more research is required to develop waveguides which support high beta and Purcell factors [6] whilst simultaneously showing near-unity chiral coupling.
## Photonic crystal waveguide design for chiral quantum optics
The PhC waveguides considered in this work are described schematically in Fig 1 (a-e) (i). The first conventional design is that of a W1 waveguide, comprising a triangular lattice of circular holes etched into a thin dielectric membrane, with one row of holes omitted in the \(\Gamma\)-K direction to form a line defect. The corresponding dispersion diagram for transverse electric (TE) polarised light (electric field in the plane of the
Figure 1: (a-e) (i) Schematics of the (a) W1 waveguide, (b) glide plane waveguide, (c) zig-zag interface valley-Hall topological waveguide, (d) un-optimised bearded interface valley-Hall topological waveguide, and (e) bearded interface valley-Hall waveguide optimised using an inverse design technique for a more favourable band structure and improved electric field and \(S_{3}\) overlap. (a-e) (ii) Simulated band structures of the waveguides, with the single mode region of interest highlighted in blue, and the specific frequency of single mode operation chosen for the electric field plots in the rest of the figure highlighted on the band structures in red. (a-e) (iii) Simulated electric field profiles and (a-e) (iv) \(S_{3}\) maps for the waveguides. Within (a-e) (iv), encircled white regions indicate regions where the chiral contrast is expected to be 95% and above. Band structures and electric field plots were simulated using guided mode expansion. (a-e) (v) Spatial dependence within the waveguides of the beta factor at the same frequency as the electric field plots, simulated using FDTD. Black regions indicate the position of the waveguide apertures. (a-e) (vi) Probability density plots showing the likelihood of a set of randomly positioned dots possessing a given \(S_{3}\) value for different electric field strengths within the waveguide interface. Areas of high probability density indicate combinations of chiral contrast and electric field strength that are more likely. The apertures of the waveguides are excluded from the calculations.
membrane) is shown in Fig 1 (a) (ii). This was obtained using guided mode expansion [20]. Several guided Bloch modes can be observed within the PhC bandgap; we focus on the lowest frequency mode, with field profile shown in Fig 1 (a) (iii), which has an electric field antinode at the centre of the waveguide. The second conventional design studied here is the glide plane waveguide, which is formed by displacing the holes on one side of a W1 waveguide by half the lattice period along the waveguide. In this work, we use a slightly narrower version of the glide plane defect (\(1.5a_{\text{GP}}\) instead of \(\sqrt{3}a_{\text{GP}}\)) (Fig 1 (b) (i)), which increases the confinement of the electric field at the interface. The TE mode dispersion diagram for the glide plane structure is shown in Fig 1 (b) (ii). We focus on the single mode region of the higher frequency mode, as the multimode region, where the two modes overlap spectrally, prevents chiral coupling from being realised. The field profile for this mode is shown in Fig 1 (b) (iii).
The conventional waveguides introduced above are compared in this work with three topologically non-trivial structures. Valley-Hall interfaces support guided optical modes lying below the light line, unlike the alternative spin-Hall approach [7], [16], [21]. Valley-Hall waveguides can be formed in two distinct ways by interfacing two valley-Hall photonic crystals whose unit cell is a rhombus containing a pair of apertures of differing sizes. The first, the zig-zag interface, is an interface of the two crystals with mirror symmetry. For this comparison, the zig-zag interface design comprises triangular apertures, with the larger triangles at the interface; this design is chosen for its favourable band structure, with the dispersion diagram in Fig 1 (c) (ii) showing that the waveguide supports a single guided TE mode, whose mode profile is given in Fig 1 (c) (iii). The bearded interface, in contrast, is formed by interfacing two valley-Hall photonic crystals in a way that forms a glide-plane-symmetric interface; a schematic of this design is shown in Fig 1 (d) (i). In this work, the original bearded interface waveguide is formed of circular apertures, with the smaller apertures at the interface. To demonstrate how the chiral statistics of the waveguide can be improved, we have also included an optimised version of the bearded interface in our investigation. Here, an inverse design algorithm was used to optimise the geometrical parameters of a bearded interface valley-Hall nano-photonic waveguide for high beta factors and strong chiral light-matter interactions. This approach is particularly useful for chiral integrated quantum optics, as it allows us to optimise for multiple figures of merit with respect to many parameters. To optimise the bearded interface waveguide, three figures of merit were used in combination to yield more favourable properties for the topologically non-trivial mode, including a higher group index within a single mode region and better overlap of the electric field and chiral points. The improved guided mode group index can be seen in the flatter band of Fig 1 (e) (ii) in comparison to Fig 1 (d) (ii). More information about this optimisation can be found in the supplementary information (SI) and Ref [22].
### \(\beta\)-Factor, E-field and chirality
The beta factor is the ratio of the spontaneous emission rate of an emitter into the intended guided mode to the total decay rate of the emitter. Within the context of QDs integrated with photonic crystal waveguides, this defines the degree of coupling between the QD and the electromagnetic field in the waveguide. The strength of the beta factor plays a crucial role in the degree of the non-linear QD light-matter interactions [23]. The strength of the nonlinear optical response of the system is crucial for the implementation of various quantum optical applications such as photon blockade and entanglement generation. To calculate the beta factor, we utilised finite-difference time-domain (FDTD) simulations to calculate the fraction of radiative power coupled into propagating modes relative to the total power injected by the emitter. To determine the spatial extent of the beta factor, we vary the dipole position within the least irreducible unit, up to \(1.5a\sim 2a\) away from the waveguide centre, excluding regions within the apertures of the waveguides.
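As an illustration of this bookkeeping, the minimal sketch below computes a beta-factor map from per-position power fractions; the array values are hypothetical placeholders rather than outputs of the FDTD runs described above.

```python
import numpy as np

def beta_factor(p_guided, p_total):
    """Beta factor: power radiated into the target guided mode divided by
    the total power injected by the dipole source."""
    return np.asarray(p_guided) / np.asarray(p_total)

# Hypothetical powers for a small grid of dipole positions in the unit cell,
# each pair extracted from a separate FDTD run.
p_guided = np.array([[0.62, 0.81], [0.45, 0.90]])
p_total = np.array([[0.95, 0.97], [0.92, 0.99]])
print(beta_factor(p_guided, p_total))
```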
Comparing the electric field profiles of Fig 1 (a-e) (iii) with the beta factors of Fig 1 (a-e) (v), the relationship between the intensity of the electric field at a given point and the beta factor at that point can be seen, with areas of high electric field intensity resulting in higher beta factors. The enhancement of the beta factor arising from the slow light regions of the waveguides can result in regions with relatively low electric field intensities having high \(\beta\) factors, with this effect most notable within the W1 waveguide and the optimised bearded waveguide. Slow light can thus facilitate high beta factors for regions within the waveguide that, in the fast light regime, have relatively low beta factors; this wavelength dependence of the beta factor is shown within the SI.
\[S_{3}=\frac{-2\text{Im}(E_{x}E_{y}^{*})}{|E_{x}|^{2}+|E_{y}|^{2}}, \tag{1}\]
The chiral properties of each waveguide were evaluated using guided mode expansion simulations [20]. The simulated Stokes \(S_{3}\) parameter (Equation 1), which characterises the degree of circular polarisation of the waveguide's electric field, is shown in Fig 1 (a-e) (iv) for the five waveguides. The Stokes parameter at the position of a QD determines the maximum chiral contrast of the emission. All five waveguides show regions of high contrast, with contrasts above 95% indicated by the encircled white regions of Fig 1 (a-e) (iv). However, Fig 1 (a-e) (vi) shows that these regions of high contrast do not necessarily correspond to regions of high electric field concentration.
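For illustration, a minimal sketch of how an \(S_{3}\) map can be evaluated from simulated complex in-plane field components on a grid, following Equation 1; the toy field values below are hypothetical.

```python
import numpy as np

def stokes_s3(ex, ey):
    """Stokes S3 parameter of Equation 1 from complex field components."""
    num = -2.0 * np.imag(ex * np.conj(ey))
    den = np.abs(ex) ** 2 + np.abs(ey) ** 2
    return np.divide(num, den, out=np.zeros_like(den), where=den > 0)

# A circularly polarised point (ey = i * ex) gives |S3| = 1,
# while a linearly polarised point gives S3 = 0.
ex = np.array([1.0 + 0j, 1.0 + 0j])
ey = np.array([1j, 1.0 + 0j])
print(stokes_s3(ex, ey))  # -> [1. 0.]
```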
## Experimental comparison of the photonic crystal waveguides
Experimentally, devices were fabricated in a 175nm-thin GaAs _p-i-n_ membrane containing a single layer of InAs QDs.
An Al\({}_{0.6}\)Ga\({}_{0.4}\)As sacrificial layer below the membrane was removed using a hydrofluoric acid wet etch to release the final free-standing structures. Representative scanning electron microscope (SEM) images of the waveguide interfaces are shown in Fig 2 (a-e). Each waveguide was terminated at both ends with a grating coupler for light extraction into external optics, using the setup shown in Fig 2 (k).
The sample was cooled to 4.2K in a superconducting magnet cryostat. To determine the operation bandwidth of each waveguide, broadband photoluminescence (PL) was generated by exciting the ensemble of QDs located within one grating coupler using non-resonant excitation (\(\lambda_{\text{laser}}=810\)nm). PL emission was then detected from the other outcoupler, with representative transmission spectra shown in SI Fig S2. A combination of the identification of sharp decreases in transmission resulting from the termination of a guided mode and an analysis of the Fabry-Perot fringes of the waveguide was used to identify the single mode regions of interest. Fabry-Perot oscillations in the transmitted intensities are the result of reflections at the waveguide terminations. These fringes allow for the identification of slow light regions, with the fringe spacing related to the group index of the waveguide at that frequency. For example, in the case of the glide plane waveguide, high finesse modes are seen for \(\lambda=955-975\)nm when QD PL is generated within the waveguide and collected from one outcoupler. These are ascribed to the overlapping slow light (flat band) spectral window for the two modes of the waveguide, allowing for the identification of the single mode region (\(\lambda\sim 955\)nm), from which chiral data is exclusively collected.
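As an aside, the fringe spacing can be converted to a group index estimate via the standard Fabry-Perot free-spectral-range relation \(n_{g}=\lambda^{2}/(2L\Delta\lambda)\); the sketch below assumes a known waveguide length \(L\), and the numbers are hypothetical rather than measured values.

```python
def group_index(wavelength_nm, fringe_spacing_nm, length_um):
    """Group index from the Fabry-Perot fringe spacing:
    n_g = lambda^2 / (2 * L * delta_lambda)."""
    lam = wavelength_nm * 1e-9
    dlam = fringe_spacing_nm * 1e-9
    length = length_um * 1e-6
    return lam ** 2 / (2.0 * length * dlam)

# Hypothetical 0.5 nm fringes at 960 nm on a 20 um long waveguide:
print(group_index(960.0, 0.5, 20.0))  # ~46, indicating slow light
```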
Next, we measured the chiral contrast independently for a large number of individual QDs in each waveguide. To do so, we used low power micro-photoluminescence (\(\mu\)PL) measurements, exciting non-resonantly above the waveguide and collecting emission independently from the two out-couplers. We focused on QDs spectrally located in the single mode regions of the waveguides, highlighted in blue in Fig 1 (a-e) (ii). In the presence of a Faraday-geometry magnetic field, the circularly polarised dipole transitions of the QD split energetically, allowing for their spectral resolution. To negate the possibility of dissimilar out-coupler collection efficiency, the chiral contrast was evaluated independently for each out-coupler using the relationship
\[C_{i}=\frac{I_{i}^{\sigma^{+}}-I_{i}^{\sigma^{-}}}{I_{i}^{\sigma^{+}}+I_{i}^{ \sigma^{-}}}, \tag{2}\]
where \(I_{i}^{\sigma^{j}}\) (\(j=+,-\) ) represents the intensity of \(\sigma^{j}\) polarised light emitted by the QD and collected from OC \(i\) (= left, right).
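A minimal sketch of this evaluation, with hypothetical \(\sigma^{+}\)/\(\sigma^{-}\) intensities for a single QD:

```python
def chiral_contrast(i_plus, i_minus):
    """Chiral contrast C_i of Equation 2 for one out-coupler."""
    return (i_plus - i_minus) / (i_plus + i_minus)

# Hypothetical integrated intensities from the two out-couplers:
print(chiral_contrast(900.0, 100.0))   # left OC  -> 0.8
print(chiral_contrast(120.0, 880.0))   # right OC -> -0.76
```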
In order to compare the different types of PhC waveguides, we perform similar experiments and simulations for each of them. To model the chiral statistics of the waveguides, we consider the \(S_{3}\) field maps over the selected waveguide regions highlighted in blue in Fig 1 (a-e) (ii). By calculating the electric field data using guided mode expansion at the middle of the slab (where the QDs are located) for multiple points within an \(a\times 10a\) supercell of the waveguide, the distribution of the chirality can be calculated. With FDTD simulations providing data for the beta factor (see Fig 1 (a-e) (v)) and Purcell factor, a threshold was applied to the points that were included within the chiral statistics. This threshold was set so that only points within the waveguide that had a combined value of Purcell factor multiplied by beta factor above 0.5
Figure 2: SEM images of the waveguides for (a) W1, (b) glide plane, (c) zig-zag valley-Hall, (d) original bearded valley-Hall and (e) optimised bearded valley-Hall. Scale bars are \(0.4\mu m\). Examples of QD lines split by a \(B\neq 0\) magnetic field, with (f) high chiral contrast, (g) medium contrast, (h) low contrast and (i) asymmetric contrast. (j) FDTD simulations showing the positional dependence of emission from a circularly polarised dipole at points 1-5 within the zig-zag valley-Hall waveguide interface. (k) Schematic of the on-chip device layout, indicating the location of the left out-coupler collection (red) and the right out-coupler collection (blue).
were included (\(F_{P}\cdot\beta\geq 0.5\)). Additionally, to account for the 'dead-zones' for QD emission that exist around the apertures of the waveguides, we have calculated the expected statistics for the waveguides excluding regions of 15 and 30 nm around the apertures. In general, we see that the introduction of the dead-zones into the model reduces the proportion of active dots, with the original bearded valley-Hall waveguide the most affected by the dead-zones.
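The thresholding and dead-zone exclusion amount to simple masking of the simulated maps. Below is a minimal sketch with randomly generated stand-in maps; in practice the \(S_{3}\), beta factor, Purcell factor and aperture-distance data would come from the guided mode expansion and FDTD simulations described above.

```python
import numpy as np

rng = np.random.default_rng(0)
s3 = rng.uniform(-1.0, 1.0, (50, 50))        # stand-in S3 map
beta = rng.uniform(0.0, 1.0, (50, 50))       # stand-in beta-factor map
purcell = rng.uniform(0.0, 2.0, (50, 50))    # stand-in Purcell-factor map
dist_nm = rng.uniform(0.0, 60.0, (50, 50))   # distance to the nearest aperture

for dead_zone in (0.0, 15.0, 30.0):
    active = (purcell * beta >= 0.5) & (dist_nm >= dead_zone)
    counts, _ = np.histogram(np.abs(s3[active]), bins=10, range=(0.0, 1.0))
    print(f"dead-zone {dead_zone:4.0f} nm: active fraction {active.mean():.2f}", counts)
```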
We then experimentally quantify the degree of chiral coupling in each nanophotonic waveguide using randomly distributed self-assembled InAs QDs, positioned by growth in the \(xy\)-plane at \(z=0\) of the GaAs membrane. The guided PL of single QDs is detected from the right and left out-couplers (denoted Left OC and Right OC in Fig 3 (f-j)) at opposite ends of each waveguide. Self-assembled growth leads to random QD positions, which enables coupling of QDs placed at areas of the waveguide with a variety of chiral contrasts. To compare the chirality of topological and conventional waveguides, we calculate the degree of chiral contrast for a large number of randomly positioned QDs in each waveguide. The experimentally measured chiral contrasts for each case are shown as histograms. The binned histograms from the \(S_{3}\) parameters calculated via FDTD simulations are also shown for comparison. Good overall agreement between the measured and simulated results is seen, with the overall trends of the simulated chiral statistics present in the experimental data.
For the W1 waveguide, there is predicted to be a low proportion of QDs with a chirality of 80%+, but a high proportion of dots with low chiral contrast. This can be explained by both the concentration of the electric field being at points of low circular polarisation, as seen in Fig 1 (a) (iii, iv), and the annihilation of C-points at the band-edge in a W1 waveguide [24]. The experimental results broadly agree with this prediction.
The glide plane PhC waveguide exhibits the best chiral coupling for quantum dots out of the waveguides in both simulation and experiment. Nevertheless, there is a lower proportion of high contrast QDs in experiment than expected. This may be explained by the glide plane's C points being located near the waveguide's etched areas.
The zig-zag interface performs well, with similar characteristics to those predicted by simulations. The topological valley-Hall zig-zag interface achieves a fairly high proportion of QDs with high contrast, but as can be seen in Fig 1 (c) (vi), these high contrast dots are unlikely to be at points of high electric field concentration. The original bearded interface, however, while predicted to have a better distribution than the W1 waveguide, performs the worst out of all of the waveguides in experiment. We believe this is likely because the interface holes in the design are a source of fabrication error and an obstruction to the ideal behaviour of the QDs, due to the surface proximity issues they introduce. The optimised bearded valley-Hall waveguide shows a significant improvement in the experimental results in comparison to its un-optimised counterpart, but not the improvement implied by the simulation data. This is likely due to a combination of two factors. The first is that the high group indices of the ideal design are
Figure 3: (a-e) Modelled predictions for the chiral statistics of the waveguides for 0nm, 15nm and 30nm dead-zones, with the plots normalised to the 0nm case to show the reduction in the expected number of dots that arises from these dead-zones. (f-j) Chiral contrast measured experimentally for randomly positioned QDs within the different waveguides. Left out-coupler data is presented in red and right out-coupler data in blue. All experimental data were recorded when an external magnetic field of \(B_{Z}\)=3T was applied.
not replicated in experiment, resulting in a lower probability of high contrast QDs being measured; the second is the interface holes again leading to a reduction in the contrast, for the reasons described above.
## Conclusion
We have presented a comparative analysis between conventional and topological waveguides for chiral coupling of embedded quantum emitters. Our investigation has allowed us to quantitatively characterize the chiral coupling performance of several topological waveguides and benchmark it against conventional waveguides. Our results demonstrate that both conventional and topologically non-trivial waveguides provide a promising platform for scalable chiral quantum optical circuits, but are suited to different approaches and applications. Amongst the waveguides investigated, the glide plane is the most suited to achieving high contrast QDs within a linear waveguide system with randomly positioned QDs, as it had the highest proportion of QDs with a high chiral contrast. From our simulation work, we have shown that these high contrast QDs are likely to be located within a region of the waveguide that also has a high electric field concentration, and so a higher Purcell factor and \(\beta\)-factor.
We note that combining the topological waveguide approach with either QD registration [25, 26, 27, 2] or site-controlled growth [28, 29] techniques may enable deterministic positioning of QDs at highly chiral points in the waveguide, addressing the scalability challenge. Furthermore, the helical nature of the interface allows for the realisation of separation-independent QD-QD interactions [23, 30], in contrast to the nonchiral case. Exciting future prospects of this research include the realization of super and subradiant many-body states [31, 32, 33], and the formation of large-scale chiral spin networks [34] using a topologically-protected photonic platform.
## Acknowledgements
This work was supported by EPSRC Grant No. EP/N031776/1, EP/V026496/1 and the natural sciences and engineering council Canada (NSERC). The authors would like to acknowledge helpful discussions with Nir Rotenberg and Hamidreza Siampour.
## Author contributions statement
N.J.M., M.J.M., and E.N. designed the photonic structures, which R.D. fabricated. E.C. and P.K.P. grew the sample. N.J.M., M.J.M., X.C., L.H., and D.H. carried out the measurements and simulations. L.R.W., A.M.F, S.H., and M.S.S. provided supervision and expertise. N.J.M., M.J.M. and X.C. wrote the manuscript, with input from all authors.
## Data Availability
Data supporting this study are openly available from the authors upon reasonable request.
|
2305.18918 | Proximity effect of time-reversal symmetry broken non-centrosymmetric
superconductors | In non-centrosymmetric superconductors the pair potential has both
even-parity singlet and odd-parity triplet components. If time-reversal
symmetry is broken, the superconducting phase of these components is not the
same, for example in anapole superconductors. In this paper it is shown that
breaking time-reversal symmetry by a phase difference between the two
components significantly alters both the density of states and the conductance
in s+helical p-wave superconductors. The density of states and conductance in
s+chiral p-wave superconductors are less influenced by adding a phase
difference because time reversal symmetry is already broken in the s+p-wave
superconductor. The Tanaka-Nazarov boundary conditions are extended to 3D
superconductors, allowing to investigate a greater variety of superconductors,
such as B-W superconductors, in which the direction of the d-vector is parallel
to the direction of momentum. The results are important for the determination
of pair potentials in potentially time-reversal symmetry broken
non-centrosymmetric superconductors. | Tim Kokkeler, Alexander Golubov, Sebastián Bergeret, Yukio Tanaka | 2023-05-30T10:14:47Z | http://arxiv.org/abs/2305.18918v1 | # Proximity effect of time-reversal symmetry broken non-centrosymmetric superconductors.
###### Abstract
In non-centrosymmetric superconductors the pair potential has both even-parity singlet and odd-parity triplet components. If time-reversal symmetry is broken, the superconducting phase of these components is not the same, for example in anapole superconductors. In this paper it is shown that breaking time-reversal symmetry by a phase difference between the two components significantly alters both the density of states and the conductance in s+helical p-wave superconductors. The density of states and conductance in s+chiral p-wave superconductors are less influenced by adding a phase difference because time reversal symmetry is already broken in the s+p-wave superconductor. The Tanaka-Nazarov boundary conditions are extended to 3D superconductors, allowing to investigate a greater variety of superconductors, such as B-W superconductors, in which the direction of the d-vector is parallel to the direction of momentum. The results are important for the determination of pair potentials in potentially time-reversal symmetry broken non-centrosymmetric superconductors.
## I Introduction
Ever since the discovery of high temperature superconductors, much attention has been paid to unconventional superconductors [1; 2; 3; 4; 5], with for example triplet [1; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23] or odd-frequency [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35] pairing. Historically, most attention has been paid to superconductors in which time reversal symmetry and inversion symmetry are not broken. In such superconductors the pair potential is either even-parity or odd-parity. However, if inversion symmetry is broken, a type of superconductivity can emerge that is neither even-parity nor odd-parity [36]. Such superconductivity, with both singlet and triplet components, appears in materials whose crystal structure breaks inversion symmetry [37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. In certain materials even the mixing parameter, the ratio between the singlet and triplet components, can be varied using electron irradiation [47].
Moreover, there exist several unconventional superconductors, including possibly Sr\({}_{2}\)RuO\({}_{4}\), in which time reversal symmetry is broken [1; 14; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123]. For example, in chiral superconductors [7], time-reversal symmetry is broken in the bulk. Next to this, time reversal symmetry has been predicted to be spontaneously broken near the surface of d-wave superconductors [75; 76; 77; 78; 79; 80; 81; 82; 83]. Similarly, in so-called anapole superconductors there exists a nonzero phase difference between the singlet and the triplet components [84; 85; 86; 87; 88]. In these superconductors, both time-reversal symmetry and inversion symmetry are broken, while the product of time reversal and inversion symmetry is preserved, thereby providing an analogy to axion electrodynamics [89; 90]. An example of such a superconductor is the is+p-wave superconductor.
The proximity effect of non-centrosymmetric or time reversal symmetry broken superconductors has been studied in detail in several limits [91; 92; 93; 94; 95; 96; 97; 98]. Recently a theory has been developed to calculate the proximity effect of non-centrosymmetric superconductors in dirty normal metals using the Keldysh Usadel formalism [99; 100; 101]. Those works focus on time-reversal symmetric s+helical p-wave superconductors and on s+chiral superconductors in which there is no phase difference between the singlet and triplet components of the mode of normal incidence. In this work, we use this theory to explore the density of states, pair amplitudes, and conductance in the presence of an arbitrary phase difference between the s-wave and p-wave components of the pair potential in the superconductor. We will refer to such superconductors as (i)s+p-wave
Figure 1: The three different types of p-wave pair potentials used in this paper. The d-vector is defined by its direction \(\vec{d}\) and its phase \(\psi\), which is illustrated using colour. For both the 2D helical and the 3D B-W superconductor the d-vector is real, that is, \(\psi\) is constant, but the direction of the d-vector is momentum dependent. On the other hand, for the 2D chiral superconductor the phase depends on momentum, while the direction of the d-vector is constant over the Fermi surface. In each case the magnitude of the gap is isotropic. |
2310.03630 | Model-based Clustering for Network Data via a Latent Shrinkage Position
Cluster Model | Low-dimensional representation and clustering of network data are tasks of
great interest across various fields. Latent position models are routinely used
for this purpose by assuming that each node has a location in a low-dimensional
latent space, and enabling node clustering. However, these models fall short in
simultaneously determining the optimal latent space dimension and the number of
clusters. Here we introduce the latent shrinkage position cluster model
(LSPCM), which addresses this limitation. The LSPCM posits a Bayesian
nonparametric shrinkage prior on the latent positions' variance parameters
resulting in higher dimensions having increasingly smaller variances, aiding in
the identification of dimensions with non-negligible variance. Further, the
LSPCM assumes the latent positions follow a sparse finite Gaussian mixture
model, allowing for automatic inference on the number of clusters related to
non-empty mixture components. As a result, the LSPCM simultaneously infers the
latent space dimensionality and the number of clusters, eliminating the need to
fit and compare multiple models. The performance of the LSPCM is assessed via
simulation studies and demonstrated through application to two real Twitter
network datasets from sporting and political contexts. Open source software is
available to promote widespread use of the LSPCM. | Xian Yao Gwee, Isobel Claire Gormley, Michael Fop | 2023-10-05T16:04:48Z | http://arxiv.org/abs/2310.03630v1 | # Model-based Clustering for Network Data via a Latent Shrinkage Position Cluster Model
###### Abstract
Low-dimensional representation and clustering of network data are tasks of great interest across various fields. Latent position models are routinely used for this purpose by assuming that each node has a location in a low-dimensional latent space, and enabling node clustering. However, these models fall short in simultaneously determining the optimal latent space dimension and the number of clusters. Here we introduce the latent shrinkage position cluster model (LSPCM), which addresses this limitation. The LSPCM posits a Bayesian nonparametric shrinkage prior on the latent positions' variance parameters resulting in higher dimensions having increasingly smaller variances, aiding in the identification of dimensions with non-negligible variance. Further, the LSPCM assumes the latent positions follow a sparse finite Gaussian mixture model, allowing for automatic inference on the number of clusters related to non-empty mixture components. As a result, the LSPCM simultaneously infers the latent space dimensionality and the number of clusters, eliminating the need to fit and compare multiple models. The performance of the LSPCM is assessed via simulation studies and demonstrated through application to two real Twitter network datasets from sporting and political contexts. Open source software is available to promote widespread use of the LSPCM.
_Keywords:_ Latent position model, Network analysis, Mixture models, Bayesian nonparametric priors, Multiplicative gamma process
Introduction
Network data are an important class of structured data where objects are represented as nodes and the relationships between these objects are represented as edges. Network data arise in various fields, including epidemiology (Jo et al., 2021), neuroscience (Yang et al., 2020), brain connectivity (Aliverti and Durante, 2019) and sociology (D'Angelo et al., 2019). A key interest in the analysis of network data is the task of clustering the nodes in the network, where the aim is to group together nodes that share similar characteristics or behaviours.
While a variety of network models exist, many originate from the influential Erdos Renyi random graph model (Erdos and Renyi, 1959; Gilbert, 1959), including the widely utilised stochastic block model (Holland et al., 1983; Snijders and Nowicki, 1997) and the latent position model (LPM, Hoff et al., 2002). Here, we focus on the LPM which assumes each node in a network has a position in a latent \(p\)-dimensional space. Under the LPM, the probability of an edge between two nodes is determined by their proximity in the latent space. The latent position cluster model (LPCM, Handcock et al., 2007) extended the LPM to allow for model-based clustering of nodes in a network, by assuming the latent positions are generated from a finite mixture of Gaussian distributions. Various extensions to the LPCM have been proposed, including approaches that account for random effects (Krivitsky et al., 2009), that incorporate covariates through a mixture of experts framework (Gormley and Murphy, 2010), that employ a variational approach to Bayesian inference (Salter-Townshend and Murphy, 2013), that model longitudinal networks (Sewell and Chen, 2017), that allow for edge clustering (Sewell, 2020), and that model multiplex networks (D'Angelo et al., 2023); see Kaur et al. (2023) for a comprehensive review.
Although the LPCM is widely used, inferring the optimal number of clusters and the optimal dimensionality of the latent space remains a challenging task. In practice, the numbers of clusters and dimensions are typically treated as user-defined parameters, with the latter set as 2 in the majority of cases to allow for simple visualisation and interpretation (D'Angelo et al., 2023; Liu and Chen, 2023). Often, a set of LPCMs with different numbers
of clusters and dimensions are fitted to the network data and a model selection criterion is used to select the optimal model. Many model selection criteria have been used for this purpose e.g., Handcock et al. (2007) use a variant of the Bayesian information criterion (BIC) to select optimal numbers of clusters and dimensions, but highlight its lack of robustness. Other model selection criteria such as the Watanabe-Akaike information criterion (WAIC) (Ng et al., 2021; Sosa and Betancourt, 2022), the Akaike information criterion (AIC) and the integrated completed likelihood (ICL) (Sewell, 2020) have also been used. While these criteria have found some success, they can give conflicting inference on the same data and their use requires fitting a large set of models, each with a different combination of number of clusters and number of dimensions, which becomes computationally prohibitive as the set of possible models grows.
Alternative, automated strategies for inferring the numbers of clusters and dimensions from the network data have emerged. In the context of stochastic block models, Yang et al. (2020) utilize a frequentist framework, while Passino and Heard (2020) employ a Bayesian framework for this purpose. In the LPCM setting, automatic inference on the number of clusters has been considered in D'Angelo et al. (2023) via an infinite mixture model, while Durante and Dunson (2014) and Gwee et al. (2023) employed a nonparametric shrinkage prior to infer the latent space dimension. However, automated, simultaneous inference of both the number of clusters and the number of dimensions in the LPCM setting has not yet been considered.
Here the latent shrinkage position cluster model (LSPCM) is introduced, which simultaneously infers the node clustering structure and facilitates automated inference of the latent space dimension. To achieve clustering of the nodes, a sparse finite mixture model (Malsiner-Walli et al., 2014, 2017) is employed that overfits the number of components in the mixture model. The adoption of a sparse prior on the mixture weights encourages emptying of redundant components, thereby allowing inference on the number of clusters. Within each cluster, a Bayesian nonparametric truncated gamma process shrinkage prior (Bhattacharya and Dunson, 2011; Gwee et al., 2023) is placed on the variance of the nodes'
positions in the latent space. While the latent space is assumed to have infinitely many dimensions, the shrinkage prior implies that higher dimensions have negligible variance and therefore are non-informative. The LSPCM eliminates the need for choosing a model selection criterion and only requires the fitting of a single model to simultaneously infer both the optimal number of clusters and the optimal latent space dimension, reducing the required computation time. Additionally, the Bayesian framework naturally provides uncertainty quantification for the number of clusters and the number of dimensions through their posterior distributions.
The remainder of this article is structured as follows: Section 2 describes the proposed LSPCM while Section 3 outlines the inferential process along with practical implementation details. Section 4 describes simulation studies conducted to explore the performance of the LSPCM on clustering networks in a variety of settings where the numbers of nodes, of dimensions and of clusters vary. Section 5 applies the proposed LSPCM to two real Twitter network data sets: one concerning football players in the English premier league and one concerning the Irish political context. Section 6 concludes and discusses potential extensions. R (R Core Team, 2023) code with which all results presented herein were produced is freely available from the lspm GitLab repository.
## 2 The latent shrinkage position cluster model
To cluster nodes in a network, we introduce the LSPCM which draws together ideas underpinning both the LSPM (Gwee et al., 2023) and the LPCM (Handcock et al., 2007).
### The latent shrinkage position model
Network data typically take the form of an \(n\times n\) adjacency matrix, \(\mathbf{Y}\), where \(n\) is the number of nodes and entry \(y_{i,j}\) denotes the relationship or edge between nodes \(i\) and \(j\). Self-loops are not permitted and thus the diagonal elements of \(\mathbf{Y}\) are zero. Here we consider binary edges but a variety of edge types can be considered. Under the LSPM, edges are assumed
independent, conditional on the latent positions of the nodes. The sampling distribution is
\[\mathbb{P}(\mathbf{Y}\mid\alpha,\mathbf{Z})=\prod_{i\neq j}\mathbb{P}(y_{i,j}\mid \alpha,\mathbf{z}_{i},\mathbf{z}_{j})\]
where \(\mathbf{Z}\) is the matrix of latent positions, with \(\mathbf{z}_{i}\) denoting the latent position of node \(i\); \(\alpha\) is a global parameter that captures the overall connectivity level in the network. Denoting by \(q_{i,j}\) the probability of an edge between nodes \(i\) and \(j\), i.e. \(\mathbb{P}(y_{i,j}=1\mid\alpha,\mathbf{z}_{i},\mathbf{z}_{j})\), a logistic regression model formulation is used where the log odds of an edge between nodes \(i\) and \(j\) depends on the Euclidean distance between their respective positions \(\mathbf{z}_{i}\) and \(\mathbf{z}_{j}\) in the latent space,
\[\log\frac{q_{i,j}}{1-q_{i,j}}=\alpha-\|\mathbf{z}_{i}-\mathbf{z}_{j}\|_{2}^{2}. \tag{1}\]
As in Gollini and Murphy (2016) and D'Angelo et al. (2019), in (1) the distance is taken to be the squared Euclidean distance.
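For concreteness, a short sketch of evaluating the edge probabilities implied by (1) for a given configuration of latent positions; the positions and \(\alpha\) below are arbitrary illustrative values.

```python
import numpy as np

def edge_probabilities(z, alpha):
    """Edge probabilities q_ij with logit(q_ij) = alpha - ||z_i - z_j||^2."""
    sq_dist = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    q = 1.0 / (1.0 + np.exp(-(alpha - sq_dist)))
    np.fill_diagonal(q, 0.0)  # self-loops are not permitted
    return q

# Nodes 1 and 2 are close in the latent space, node 3 is far away.
z = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])
print(np.round(edge_probabilities(z, alpha=1.0), 3))
```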
The LSPM is a Bayesian nonparametric extension of the LPM which allows automatic inference on \(p\), the number of effective latent space dimensions i.e., the number of dimensions necessary to fully describe the network. Under the LSPM, the latent positions are assumed to have a zero-centred Gaussian distribution with diagonal precision matrix \(\mathbf{\Omega}\), whose entries \(\omega_{\ell}\) denote the precision of the latent positions in dimension \(\ell\), for \(\ell=1,\ldots,\infty\). The LSPM employs a multiplicative truncated gamma process (MTGP) prior on the precision parameters: the latent dimension \(h\) has an associated shrinkage strength parameter \(\delta_{h}\), where the cumulative product of \(\delta_{1}\) to \(\delta_{\ell}\) gives the precision \(\omega_{\ell}\). An unconstrained gamma prior is assumed for \(\delta_{1}\), while a truncated gamma distribution is assumed for the remaining dimensions to ensure shrinkage. Specifically, for \(i=1,\ldots,n\)
\[\mathbf{z}_{i}\sim\text{MVN}(\mathbf{0},\mathbf{\Omega}^{-1})\qquad\mathbf{ \Omega}=\begin{bmatrix}\omega_{1}^{-1}&\ldots&0\\ \vdots&\ddots&\\ 0&&\omega_{\infty}^{-1}\end{bmatrix}\qquad\omega_{\ell}=\prod_{h=1}^{\ell} \delta_{h}\text{ for }\ell=1,\ldots,\infty\]
\[\delta_{1}\sim\text{Gam}(a_{1},b_{1}=1)\qquad\delta_{h}\sim\text{Gam}^{\text {T}}(a_{2},b_{2}=1,t_{2}=1)\text{ for }h>1.\]
Here \(a_{1}\) and \(b_{1}\) are the gamma prior's shape and rate parameters on the first dimension's shrinkage parameter, while \(a_{2}\) is the shape parameter, \(b_{2}\) is the rate parameter, and \(t_{2}\) is
the left truncation point (here set to 1) of the truncated gamma prior for dimensions \(h>1\). This MTGP prior results in an increasing precision and therefore a shrinking variance of the positions in the higher dimensions of the latent space. Under this MTGP prior, the LSPM is nonparametric with infinitely many dimensions, where unnecessary higher dimensions' variances are increasingly shrunk towards zero. Dimensions that have variance very close to zero will then have little meaningful information encoded in them as the distances between nodes will be close to zero. Thus, the effective dimensions are those in which the variance is non-negligible.
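A minimal sketch of drawing the precisions from the MTGP prior at a finite truncation level, using rejection sampling for the truncated gamma; the shape values match the \(a_{1}=2\), \(a_{2}=3\) settings adopted later in Section 3.

```python
import numpy as np

def mtgp_precisions(p0, a1=2.0, a2=3.0, rng=None):
    """Draw omega_1,...,omega_p0 under the MTGP prior:
    delta_1 ~ Gam(a1, 1); delta_h ~ Gam(a2, 1) truncated to (1, inf) for h > 1;
    omega_l is the cumulative product of delta_1,...,delta_l."""
    rng = rng or np.random.default_rng()
    delta = np.empty(p0)
    delta[0] = rng.gamma(a1, 1.0)
    for h in range(1, p0):
        d = rng.gamma(a2, 1.0)
        while d <= 1.0:               # rejection step enforcing the truncation
            d = rng.gamma(a2, 1.0)
        delta[h] = d
    return np.cumprod(delta)

omega = mtgp_precisions(5, rng=np.random.default_rng(1))
print(1.0 / omega)  # latent-position variances shrink across higher dimensions
```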
### The latent shrinkage position cluster model
To infer the unknown number of clusters of nodes, we fuse the LSPM with the latent position cluster model (LPCM) of Handcock et al. (2007) and the sparse finite mixture model framework of Malsiner-Walli et al. (2014, 2017) to give the latent shrinkage position cluster model (LSPCM). The LSPCM assumes that the latent positions arise from a finite mixture of \(G\) spherical multivariate normal distributions, with an MTGP shrinkage prior assumed for the precision parameters i.e., for \(i=1,\ldots,n\)
\[\mathbf{z}_{i}\sim\sum_{g=1}^{G}\tau_{g}\mathrm{MVN}(\boldsymbol{\mu}_{g}, \boldsymbol{\Omega}^{-1})\qquad\boldsymbol{\Omega}=\begin{bmatrix}\omega_{1}^ {-1}&\ldots&0\\ \vdots&\ddots&\\ 0&&\omega_{\infty}^{-1}\end{bmatrix}\qquad\omega_{\ell}=\prod_{h=1}^{\ell} \delta_{h}\text{ for }\ell=1,\ldots,\infty\]
\[\delta_{1}\sim\mathrm{Gam}(a_{1},b_{1}=1)\quad\delta_{h}\sim\mathrm{Gam}^{ \mathrm{T}}(a_{2},b_{2}=1,t_{2}=1)\text{ for }h>1\]
\[\boldsymbol{\mu}_{g}|\boldsymbol{\Omega}\sim\mathrm{MVN}(\boldsymbol{0},\xi \boldsymbol{\Omega}^{-1})\qquad\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{G}) \sim\mathrm{Dir}(\nu,\ldots,\nu). \tag{2}\]
Here \(\tau_{g}\) denotes the probability that a node belongs to the \(g\)-th component, so that \(\tau_{g}\geq 0\) (\(g=1,\ldots,G\)) and \(\sum_{g=1}^{G}\tau_{g}=1\). While Handcock et al. (2007) and Ryan et al. (2017) consider a component specific precision matrix in the Gaussian mixture, here, to facilitate simultaneous shrinkage of the variance across higher dimensions for all components, a single precision matrix is assumed. Importantly, for the mean latent position \(\boldsymbol{\mu}_{g}\) of the \(g\)-th component, a multivariate Gaussian prior with zero mean and precision matrix having
a MTGP prior is assumed. Consequently, the component means are increasingly shrunk towards zero, resulting in higher dimensions characterized by increasingly overlapping mixture components, and, as a result, becoming less informative for cluster separation. Further, as in Ryan et al. (2017), the prior covariance on \(\mathbf{\mu}_{g}\) is inflated by a scaling factor, \(\xi=9\), so that component means are modelled as more dispersed than the component members.
While the LPCM assumes a finite mixture of spherical multivariate normal distributions and casts the problem of inferring the number of clusters as one of model selection, the LSPCM considers an overfitted (or sparse) finite mixture model. Here, following Malsiner-Walli et al. (2014); Fruhwirth-Schnatter and Malsiner-Walli (2018), a distinction between the number of mixture components \(G\), the number of non-empty mixture components \(G_{+}\), and the number of clusters in the data \(G^{*}\) is made. In addition, a symmetric Dirichlet prior is placed on the mixture weights \(\mathbf{\tau}\) with the Dirichlet's hyperparameter \(\nu\) playing an influential role in inducing sparsity. To define an overfitting mixture distribution, a large number \(G\) of initial components is specified and small values of \(\nu\) are considered, ensuring that unnecessary components are emptied out during the inferential process. The number of non-empty components \(G_{+}\) serves then as an estimate of the number of clusters \(G^{*}\). Thus, the LSPCM allows for automatic, simultaneous inference of the number of clusters and of the latent space dimension, and it requires only a single model to be fit to the network data.
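To make the generative mechanism in (2) concrete, the sketch below simulates latent positions from the overfitted mixture; the precision values stand in for an MTGP draw and all numbers are illustrative.

```python
import numpy as np

def simulate_lspcm_positions(n, G, nu, omega, xi=9.0, rng=None):
    """Simulate from Eq. (2): tau ~ Dir(nu,...,nu); mu_g ~ MVN(0, xi * Omega^{-1});
    c_i ~ MultiNom(1, tau); z_i ~ MVN(mu_{c_i}, Omega^{-1})."""
    rng = rng or np.random.default_rng()
    p = len(omega)
    tau = rng.dirichlet(np.full(G, nu))
    mu = rng.normal(0.0, np.sqrt(xi / omega), size=(G, p))
    c = rng.choice(G, size=n, p=tau)
    z = mu[c] + rng.normal(0.0, np.sqrt(1.0 / omega), size=(n, p))
    return z, c, tau

omega = np.array([1.0, 2.0, 8.0, 40.0, 200.0])   # MTGP-style increasing precisions
z, c, tau = simulate_lspcm_positions(n=100, G=20, nu=0.01, omega=omega,
                                     rng=np.random.default_rng(2))
print(len(np.unique(c)))  # non-empty components G_+ << G under a sparse prior
```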
## 3 Inference
To facilitate inference, we introduce \(\mathbf{C}=(\mathbf{c}_{1},\ldots,\mathbf{c}_{n})\) which contains the latent component membership vectors \(\mathbf{c}_{i}=(c_{i1},\ldots,c_{iG})\), for \(i=1,\ldots,n\), where \(c_{ig}=1\) if node \(i\) belongs to component \(g\) and \(c_{ig}=0\) otherwise. Thus, \(\mathbf{c}_{i}\sim\text{MultiNom}(1,\mathbf{\tau})\). Denoting by \(\mathbf{\Theta}\) the component means \(\mathbf{\mu}_{1},\ldots,\mathbf{\mu}_{G}\), and the precision matrix \(\mathbf{\Omega}\), the joint posterior distribution of the LSPCM is then
\[\mathbb{P}(\alpha,\mathbf{Z},\mathbf{C},\mathbf{\tau},\mathbf{\Theta}\mid\mathbf{Y}) \propto\mathbb{P}(\mathbf{Y}\mid\alpha,\mathbf{Z})\mathbb{P}(\alpha)\mathbb{ P}(\mathbf{Z}\mid\mathbf{C},\mathbf{\tau},\mathbf{\Theta})\mathbb{P}(\mathbf{C}\mid\mathbf{\tau}) \mathbb{P}(\mathbf{\tau})\mathbb{P}(\mathbf{\Theta})\]
where \(\mathbb{P}(\alpha)\) is the non-informative \(N(\mu_{\alpha}=0,\sigma_{\alpha}^{2}=4)\) prior for the \(\alpha\) parameter, and \(\mathbb{P}(\mathbf{\tau})\) and \(\mathbb{P}(\mathbf{\Theta})\) denote the prior distributions outlined in Section 2.2. For the MTGP priors, similar to Gwee et al. (2023) and Durante (2017), the hyperparameters \(a_{1}=2\) and \(a_{2}=3\). Throughout, we consider sensitivity of inference to the Dirichlet hyperparameter \(\nu\).
### An adaptive Metropolis-within-Gibbs sampler
Markov chain Monte Carlo (MCMC) is employed to draw samples from the joint posterior distribution. After defining necessary notation in Appendix A, derivations of the full conditional distributions of the latent positions, cluster membership vectors and model parameters are given in Appendix B. As the MTGP prior is nonparametric and assumes an infinite number of latent dimensions, in practice setting a finite truncation level, \(p_{0}\), on the number of dimensions fitted is required. Thus, an adaptive Metropolis-within-Gibbs sampler is employed where \(p\) is dynamically shrunk or augmented as the sampler proceeds. Denoting the current iteration by \(s\) (but for clarity \(s\) is omitted where it is unnecessary), a single pass of the sampler proceeds as follows:
1. Sample the component mean \(\mathbf{\mu}_{g}\) for \(g=1,\ldots,G\) from \(\text{MVN}_{p}\left(\frac{\sum_{i=1}^{n}c_{iq}\mathbf{z}_{i}}{\sum_{i=1}^{n} c_{iq}+\xi^{-1}},\left[\mathbf{\Omega}\left(\sum_{i=1}^{n}c_{iq}+\xi^{-1}\right) \right]^{-1}\right).\)
2. Sample the mixing weights \(\mathbf{\tau}\) from \(\text{Dir}(\sum_{i=1}^{n}c_{i1}+\nu,\ldots,\sum_{i=1}^{n}c_{iG}+\nu).\)
3. Sample the latent memberships \(\mathbf{c}_{i}\) for each node \(i=1,\ldots,n\) from a Multinomial with \(G\) categories by drawing the \(g\)th category with probability \(\frac{\tau_{g}\phi_{p}(\mathbf{z}_{i};\mathbf{\mu}_{g},\mathbf{\Omega}^{-1})}{\sum_{g=1}^{G}\tau_{g}\phi_{p}(\mathbf{z}_{i};\mathbf{\mu}_{g},\mathbf{\Omega}^{-1})}\) where \(\phi_{p}(\mathbf{z}_{i};\mathbf{\mu},\mathbf{\Omega}^{-1})\) is the \(p\)-dimensional multivariate normal density (a sketch of this step is given after the list).
4. Sample \(\tilde{\mathbf{Z}}\) from a \(\text{MVN}_{p}(\mathbf{Z}^{(s)},k\mathbf{\Omega}^{-1(s)})\) proposal distribution where \(k\) is a step size factor. Accept \(\tilde{\mathbf{Z}}\) as \(\mathbf{Z}^{(s+1)}\) with probability \(\frac{\mathbb{P}(\mathbf{Y}|\alpha^{(s)},\tilde{\mathbf{Z}})}{\mathbb{P}( \mathbf{Y}|\ \alpha^{(s)},\tilde{\mathbf{Z}}^{(s)})}\frac{\phi_{p}( \mathbf{z}_{i};\tilde{\mathbf{Z}},k\mathbf{\Omega}^{-1(s)})}{\phi_{p}(\mathbf{z}_ {i};\tilde{\mathbf{Z}}^{(s)},k\mathbf{\Omega}^{-1(s)})}\), otherwise set \(\mathbf{Z}^{(s+1)}=\mathbf{Z}^{(s)}\).
5. Sample \(\tilde{\alpha}\) from an informed Gaussian proposal distribution (Gormley and Murphy, 2010) and accept following the Metropolis-Hastings acceptance ratio.
6. Sample \(\delta_{1}\) from \[\operatorname{Gam}\left(\frac{(n+G)p}{2}+a_{1},\right.\] \[\left.\begin{array}{c}\frac{1}{2}\sum_{i=1}^{n}\sum_{g=1}^{G}( \mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{T}\left(\prod_{m=2}^{\ell}\delta_{m} \right)\mathbf{I}_{p}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})c_{ig}+\\ \left.\begin{array}{c}\frac{1}{2}\sum_{g=1}^{G}\boldsymbol{\mu}_{g}^{T}( \xi^{-1})\left(\prod_{m=2}^{\ell}\delta_{m}\right)\mathbf{I}_{p}\boldsymbol{\mu }_{g}+b_{1}\right).\end{array}\right.\]
7. Sample \(\delta_{h}^{(s+1)}\) for \(h=2,\ldots,p\) from \[\operatorname{Gam}^{\mathrm{T}}\left(\frac{(n+G)(p-h+1)}{2}+a_{2},\right.\] \[\left.\begin{array}{c}\frac{1}{2}\sum_{i=1}^{n}(\mathbf{z}_{i}- \boldsymbol{\mu}_{g})^{T}\left(\prod_{m=1,m\neq h}^{\ell}\delta_{m}^{(s^{*})} \right)\mathbf{I}_{p}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})c_{ig}+\\ \left.\begin{array}{c}\frac{1}{2}\sum_{g=1}^{G}\boldsymbol{\mu}_{g}^{T}( \xi^{-1})\left(\prod_{m=1,m\neq h}^{\ell}\delta_{m}^{(s^{*})}\right)\mathbf{I} _{p}\boldsymbol{\mu}_{g}+b_{2},\quad 1\right)\end{array}\right.\] where \(s^{*}=s+1\) for \(m<h\) and \(s^{*}=s\) for \(m>h\).
8. Calculate \(\omega_{\ell}\) by taking the cumulative product of \(\delta_{1}\) to \(\delta_{\ell}\).
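A minimal sketch of step 3 above, vectorised over components and computed on the log scale for numerical stability; the inputs below are illustrative stand-ins for the current state of the chain.

```python
import numpy as np
from scipy.stats import multivariate_normal

def sample_memberships(z, mu, omega, tau, rng):
    """Draw each c_i from its multinomial full conditional,
    with P(c_i = g) proportional to tau_g * N(z_i; mu_g, Omega^{-1})."""
    cov = np.diag(1.0 / omega)
    log_dens = np.column_stack(
        [multivariate_normal.logpdf(z, mean=m, cov=cov) for m in mu]
    )
    log_w = np.log(tau) + log_dens
    log_w -= log_w.max(axis=1, keepdims=True)   # stabilise before exponentiating
    w = np.exp(log_w)
    w /= w.sum(axis=1, keepdims=True)
    return np.array([rng.choice(len(tau), p=wi) for wi in w])

rng = np.random.default_rng(3)
z = rng.normal(size=(10, 2))
mu = np.array([[-1.0, 0.0], [1.0, 0.0]])
print(sample_memberships(z, mu, omega=np.ones(2), tau=np.array([0.5, 0.5]), rng=rng))
```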
### Practical implementation details
When implementing the adaptive Metropolis-within-Gibbs sampler, practical details such as initialisation of latent variables and parameters, adapting the latent dimension and post-processing of the MCMC chain require attention.
#### 3.2.1 Initialisation
While the LSPCM allows for an infinite number of latent dimensions and a potentially large number of components in the overfitted mixture, this is computationally impractical. Setting initial values for both the truncation level of the latent dimensions, \(p_{0}\), and for the number of components in the mixture, \(G\), prevents the sampler from exploring computationally unfeasible models. Here, \(p_{0}=5\) and \(G=20\) are used throughout as these settings were empirically observed to strike a balance between exploring complex models and computational feasibility for the type of networks analysed.
The LSPCM's parameters, cluster allocations and latent positions also require initialisation, achieved here as follows, where \(s=0\) (a sketch of steps 1-3 is given after the list):
1. Calculate the geodesic distances (Kolaczyk and Csardi, 2020) between the nodes in the network. Apply classical multidimensional scaling (Cox and Cox, 2001) to the geodesic distances and set \(\mathbf{Z}^{(s)}\) to be the resulting \(n\times p_{0}\) positions.
2. Fit a standard regression model, with regression coefficients \(\alpha\) and \(\beta\), to the vectorised adjacency matrix where \(\log\)odds\((q_{i,j}=1)=\alpha-\beta\|\mathbf{z}_{i}^{(s)}-\mathbf{z}_{j}^{(s)}\|_{2}^{2}\) to obtain estimates \(\hat{\alpha}\) and \(\hat{\beta}\). Set \(\alpha^{(s)}=\hat{\alpha}\).
3. As the LSPCM model (1) constrains \(\beta=1\), centre and rescale the latent positions by setting \(\mathbf{Z}^{(s)}=\sqrt{|\hat{\beta}|}\tilde{\mathbf{Z}}^{(s)}\), where \(\tilde{\mathbf{Z}}^{(s)}\) are the mean-centred initial latent positions.
4. Apply model-based clustering to \(\mathbf{Z}^{(s)}\) via mclust (Scrucca et al., 2016) with up to \(G\) clusters and the EEI covariance structure to obtain initial cluster allocations.
5. Obtain \(\omega_{\ell}^{(s)}\) for \(\ell=1,\ldots,p_{0}\) by calculating the empirical precision of each column of \(\mathbf{Z}^{(s)}\).
6. Set \(\delta_{1}^{(s)}=\omega_{1}^{(s)}\) and calculate \(\delta_{h}^{(s)}=\frac{\omega_{h}^{(s)}}{\omega_{h-1}^{(s)}}\) where \(h=2,\ldots,p_{0}\).
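A minimal R sketch of initialisation steps 1-6 is given below; it assumes an igraph object `g`, a truncation level `p0` and `G` components, and uses the packages cited above (exact arguments are indicative only, and self-pairs are kept in the regression for brevity).

```r
library(igraph); library(mclust)

D  <- distances(g)                               # 1. geodesic distances
Z0 <- cmdscale(D, k = p0)                        #    classical MDS positions

y   <- as.vector(as_adjacency_matrix(g, sparse = FALSE))
d2  <- as.vector(as.matrix(dist(Z0))^2)          # 2. logistic regression on
fit <- glm(y ~ d2, family = binomial)            #    squared distances
alpha0 <- coef(fit)[1]
beta0  <- -coef(fit)[2]                          #    model is alpha - beta * d^2

Z0 <- sqrt(abs(beta0)) * scale(Z0, scale = FALSE)  # 3. centre and rescale

mc  <- Mclust(Z0, G = 1:G, modelNames = "EEI")   # 4. initial cluster labels
cl0 <- mc$classification

omega0 <- 1 / apply(Z0, 2, var)                  # 5. empirical precisions
delta0 <- c(omega0[1], omega0[-1] / omega0[-p0]) # 6. shrinkage strengths
```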
#### 3.2.2 Adapting the latent dimension
As in Gwee et al. (2023), the LSPCM adapts the latent space dimension \(p\) as the MCMC sampler runs, meaning there is a finite number of effective dimensions at each iteration. Similar to Bhattacharya and Dunson (2011), the probability of adapting the number of latent dimensions at iteration \(s\) is taken as \(\mathbb{P}(s)=\exp(-\kappa_{0}-\kappa_{1}s)\), which decreases exponentially as the MCMC chain evolves. Here, setting \(\kappa_{0}=4\) and \(\kappa_{1}=5\times 10^{-4}\) demonstrated good performance in empirical experiments.
After the burn-in period, at an adaptation step, \(p\) is reduced by 1 if the cumulative proportion of the latent position variance contained in dimensions \(\ell=1,\ldots,p-1\) is greater than \(\epsilon_{1}\). In general, \(\epsilon_{1}=0.8\) was found to work well, but higher \(\epsilon_{1}\) showed better performance in networks with large \(n\). If the criterion for reducing \(p\) is not met, an increase in \(p\) is then considered by examining \(\delta_{p}^{-1}\) and a threshold \(\epsilon_{2}\): if \(\delta_{p}^{-1}>\epsilon_{2}\), \(p\) is increased by 1, with the additional associated parameters drawn from their respective priors. We consider \(\epsilon_{2}=0.9\), which was found to work well in practice. The case where \(p=1\) at an adaptation step requires a different criterion: if the proportion of latent positions whose absolute deviation from their empirical mean exceeds \(\epsilon_{3}\) times the 95% critical value of a standard Normal distribution is sufficiently large, then \(p\) is increased to 2; reducing \(p\) is not considered if \(p=1\). Setting \(\epsilon_{3}=5\) was found to work well in practice.
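The adaptation rule above can be summarised in a few lines of R; the sketch below handles the \(p>1\) case only (the \(\epsilon_{3}\) criterion for \(p=1\) is analogous) and all names are illustrative.

```r
adapt_p <- function(Z, delta, s, kappa0 = 4, kappa1 = 5e-4,
                    eps1 = 0.8, eps2 = 0.9) {
  p <- ncol(Z)
  if (runif(1) > exp(-kappa0 - kappa1 * s)) return(p)  # no adaptation this sweep
  v <- apply(Z, 2, var)                                # per-dimension variance
  if (sum(v[-p]) / sum(v) > eps1) {
    p - 1          # first p - 1 dimensions already hold enough of the variance
  } else if (1 / delta[p] > eps2) {
    p + 1          # last dimension still active, so grow the latent space
  } else p
}
```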
Crucially, the adaptive MCMC sampler provides the posterior distribution of the number of active dimensions \(p\). Here, the posterior mode \(p_{m}\) is used as the estimate of the effective dimension, denoted by \(p^{*}\), with credible intervals quantifying the associated uncertainty.
#### 3.2.3 Post-processing of the MCMC chain
The likelihood function of the LSPCM depends on the Euclidean distances between the latent positions, hence it remains unaffected by rotations, reflections, or translations of these positions, giving rise to identifiability issues. To ensure valid posterior inference, similar to Gormley and Murphy (2010), a Procrustean transformation of the sampled latent positions \(\mathbf{Z}^{(1)},\ldots,\mathbf{Z}^{(S)}\) is considered. The transformation aligns the sampled positions with a reference configuration \(\tilde{\mathbf{Z}}\), which is selected based on the configuration that yields the highest log-likelihood during the burn-in phase of the MCMC chain. Although this choice is arbitrary, it has little effect as the reference configuration solely serves the purpose of addressing identifiability.
Since the MCMC samples of the component allocations will have varying numbers of non-empty components, and because of the label switching problem caused by invariance of the mixture with respect to permutation of the component labels, care must be taken when deriving posterior summaries of cluster labels and parameters. While Fruhwirth-Schnatter et al. (2019) give an overview of many potential approaches, here, as it demonstrated robust and accurate performance for the networks examined, we adopt the procedure of Fritsch and Ickstadt (2009) to estimate the cluster labels of the nodes and subsequently the number of clusters. The method, implemented in the R package mcclust (Fritsch,
2022), first estimates the posterior similarity matrix containing the proportion of times a pair of nodes are placed in the same cluster, and then maximizes the posterior expected adjusted Rand index (PEAR) to obtain the optimal cluster labels. In addition, as in Malsiner-Walli et al. (2014), the posterior distribution of the number of filled components \(G_{+}\) and the corresponding posterior mode \(G_{m}\) are examined to inspect the uncertainty in the number of clusters. In some situations (see Section 5.1 for an example), the posterior mode may differ from the number of clusters estimated by the PEAR method as it tends to aggregate together small clusters. Finally, for summarizing the posterior distributions of the cluster parameters, as in Gormley and Murphy (2010), cluster parameters are obtained after permuting cluster labels to minimise a loss function based on the cluster means.
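The post-processing pipeline can be sketched in R with the packages cited above; `Zs` (a list of sampled position matrices), `Zref` (the reference configuration) and `cls` (an \(S\times n\) matrix of sampled allocations) are assumed inputs.

```r
library(vegan)    # Procrustes alignment
library(mcclust)  # posterior similarity matrix and PEAR maximisation

Zs_aligned <- lapply(Zs, function(Z) procrustes(Zref, Z)$Yrot)

psm    <- comp.psm(cls)    # proportion of draws placing each node pair together
labels <- maxpear(psm)$cl  # labels maximising the posterior expected ARI
```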
## 4 Simulation studies
The performance of the LSPCM is assessed on simulated data scenarios by evaluating its ability to recover the dimension of the latent space and to correctly infer the latent positions, the number of clusters and the cluster allocations. The latent positions are simulated according to (2) where the mixture weights \(\boldsymbol{\tau}\) are generated from a symmetric Dir(10) which gives rise to clusters with similar numbers of nodes. Given the latent positions, a network is then generated as in (1). Two scenarios are considered: Sections 4.1 and 4.2, respectively, assess the performance of LSPCM on networks with small and moderate numbers of nodes, dimensions, and clusters. A total of 10 networks are simulated in each scenario. In Section 4.1, the MCMC chains are run for 500,000 iterations with a burn-in of 50,000 iterations, thinned every 1,000th. In Section 4.2, 1,000,000 iterations are considered, with a burn-in of 100,000 iterations, thinned every 2,000th iteration. The hyperparameter \(\nu\) of the Dirichlet mixing prior has been set at different orders of magnitude from \(10^{-1}\) to \(10^{-5}\) to assess sensitivity when inferring the number of clusters. Also, different values from 2 to 3.5 are used for the step size \(k\) in the proposal distributions to ensure acceptance rates are in the \(20\%-40\%\) range.
### Scenario 1: small networks
Networks with \(n=50\) are generated with effective latent dimension \(p^{*}=2\) and true number of clusters \(G^{*}=3\), with shrinkage strength \(\mathbf{\delta}=(1,1.05)\) and cluster mean positions \(\mathbf{\mu}=\{(0,0),(-4,0),(-4,0)\}\) giving well separated clusters. Across the 10 simulated networks, the smallest cluster contained 9 nodes while the largest cluster contained 31, and fixing \(\alpha=6\) gave rise to network densities between 27% and 38%.
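For concreteness, a network of this kind can be generated in a few lines of base R; the three cluster means below are illustrative placeholders chosen to be well separated, not the exact values used in the simulations.

```r
set.seed(1)
n <- 50; p <- 2; alpha <- 6
delta <- c(1, 1.05); omega <- cumprod(delta)   # per-dimension precisions
mu    <- rbind(c(0, 0), c(-4, 0), c(4, 0))     # illustrative cluster means
tau   <- rgamma(3, 10); tau <- tau / sum(tau)  # symmetric Dir(10) weights

cl <- sample(1:3, n, replace = TRUE, prob = tau)
Z  <- mu[cl, ] + sapply(1:p, function(l) rnorm(n, 0, sqrt(1 / omega[l])))

eta <- alpha - as.matrix(dist(Z))^2            # edge log-odds, as in model (1)
Y   <- matrix(rbinom(n * n, 1, plogis(eta)), n, n)
diag(Y) <- 0                                   # no self-loops
```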
Table 1 indicates accurate inference across the various settings of \(\nu\), with the posterior modal number of dimensions \(p_{m}=2\) and the posterior modal number of clusters \(G_{m}=3\), except when \(\nu=0.1\). When \(\nu=0.1\), while the number of clusters is overestimated with \(G_{m}=6\), nodes are predominantly allocated to 3 clusters while the remaining clusters typically contain only one node. Thus, the relatively large value of \(\nu=0.1\) induces the algorithm to struggle to empty superfluous components in the small network scenario. Overall, the adjusted Rand index (ARI, Hubert and Arabie, 1985), which measures agreement between the inferred cluster memberships and the true cluster labels, shows good correspondence, with values typically around 0.9 across different \(\nu\). However, the lower bound of the 95% credible interval for the ARI drops when \(\nu=0.00001\) as the relatively small value of \(\nu\) at times overshrinks the mixture components, collapsing all the nodes into one cluster. Procrustes correlations (PC, Peres-Neto and Jackson, 2001) between inferred and true latent positions also show that the latent positions are accurately estimated, with average values \(\geq 0.92\) and lower and upper bounds of 95% credible intervals of 0.84 and 1.0, respectively. Additional visualization of the posterior distributions on the number of dimensions, clusters, variances, shrinkage strengths, and \(\alpha\) are available in Appendix C.
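The two summary measures reported in Table 1 can be computed directly in R; here `cl_hat` and `Z_hat` denote the inferred labels and posterior mean positions, and `cl` and `Z` the simulated truth.

```r
library(mclust); library(vegan)

ari <- adjustedRandIndex(cl_hat, cl)  # agreement of inferred and true labels
pc  <- protest(Z, Z_hat)$t0           # Procrustes correlation (PROTEST statistic)
```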
For comparison, the LPCM is fitted on the same simulated data using the R package latentnet (Krivitsky et al., 2022; Krivitsky and Handcock, 2008). Upon fitting 25 LPCMs with different combinations of \(p=\{1,\ldots,5\}\) and \(G=\{1,\ldots,5\}\) to each of the 10 simulated networks, the BIC suggested 2 dimensions and 3 clusters for all of them. In terms of computational cost, fitting the LSPCM on a computer with an i7-10510U CPU and 16GB RAM took on average 33 minutes. The LPCM with the correct \(p\) and \(G\) took 4.5 minutes
on average to run for the same number of iterations. Across the various replicates, the total time taken to fit the 25 LPCMs with different combinations of \(p\) and \(G\) was comparable to or longer than the time taken to fit the LSPCM. However, notably, when fitting the LPCMs no quantification of the uncertainty in \(p\) and \(G\) is provided.
### Scenario 2: moderately sized networks
Networks with \(n=200\) were generated with \(p^{*}=4\) effective latent dimensions and \(\boldsymbol{\delta}=(1,1.1,1.05,1.02)\). The true number of clusters is \(G^{*}=7\), with \(\boldsymbol{\mu}=\{(-5,0,0,0),(-5,5,0,0),(0,-5,5,0),(0,0,-5,5),(2,0,2,-5),(-2,2,-2,0),(0,-2,0,0)\}\). The 7 clusters have different degrees of separation across the various dimensions. Among the 10 simulated networks, the smallest cluster had 11 nodes while the largest had 55. Setting \(\alpha=20\) resulted in networks with density varying between 16% and 22%.
Table 2 shows that the posterior modal number of dimensions \(p_{m}\) tended to underestimate the true number of dimensions, although the truth was contained within the 95% credible interval in all settings of \(\nu\). Inspection of the true latent positions' empirical variances reveals that the variances are similar across dimensions, but a relatively large decrease is seen in the 4th dimension. Using a higher dimension threshold parameter of \(\epsilon_{1}=0.9\) resulted in \(p_{m}=4\). The posterior modal number of clusters \(G_{m}\) corresponded to the true generating
\begin{table}
\begin{tabular}{l l l l l} \hline \(\nu\) & \(p_{m}\) & \(G_{m}\) & ARI & PC \\ \hline
0.1 & 2 (1, 3) & 6 (3, 10) & 0.88 (0.81, 0.99) & 0.92 (0.85, 0.99) \\
0.01 & 2 (2, 3) & 3 (2, 5) & 0.93 (0.83, 1.00) & 0.96 (0.92, 0.99) \\
0.001 & 2 (2, 3) & 3 (1, 4) & 0.92 (0.80, 1.00) & 0.96 (0.86, 0.99) \\
0.0001 & 2 (1, 3) & 3 (1, 3) & 0.92 (0.83, 1.00) & 0.94 (0.84, 1.00) \\
0.00001 & 2 (1, 3) & 3 (1, 3) & 0.79 (0.19, 1.00) & 0.97 (0.88, 1.00) \\ \hline \end{tabular}
\end{table}
Table 1: Sensitivity analysis across various settings of \(\nu\) on 10 simulated networks with \(n=50,p^{*}=2,G^{*}=3\) assessed via \(p_{m}\), \(G_{m}\), ARI, and PC. The 95% credible intervals are given in parentheses.
number of clusters, except again in the case where \(\nu=0.1\) due to insufficient shrinkage. The ARI values indicate very accurate and robust inference of cluster membership with all \(95\%\) credible interval lower bound values \(\geq 0.96\). The average PC values were \(\geq 0.86\) but lower bounds of the \(95\%\) credible intervals were only \(\geq 0.71\). The PC values are calculated using only dimensions up to and including the posterior modal number of dimensions; in cases where \(p_{m}<p^{*}\), the higher dimensions are not included in the comparison resulting in lower PC values. Additional visualization of the posterior distributions on the number of dimensions, clusters, variances, shrinkage strengths, and \(\alpha\) are available in Appendix C.
Compared to the small networks scenario, intuitively, the credible intervals tend to be narrower when \(n=200\). The LSPCM again struggled to eliminate unnecessary components when the mixing prior was not sparse enough i.e., when \(\nu=0.1\). However, unlike the small networks scenario, here the sparse prior when \(\nu=0.00001\) did not cause the components to collapse into a reduced number of large clusters, indicating robust inference under the larger size network scenario.
Upon fitting the 25 LPCMs from all combinations of \(p=\{1,\ldots,5\}\) and \(G=\{5,\ldots,10\}\) to the 10 simulated networks, the BIC suggested 3 dimensions and 7 clusters \(90\%\) of the time. The average time for the completion of one MCMC chain for the LSPCM was 230 minutes. However, fitting the LPCM for multiple combinations of \(p\) and \(G\) incurred a considerably larger computational cost. In fact, solely for the correct pair \(p=4\) and \(G=7\) it took 200 minutes to run the MCMC via latentnet for the same number of iterations, and again no uncertainty quantification is provided.
## 5 Application to Twitter network data
The LSPCM is fitted to two binary Twitter networks with different characteristics: a football players network with a small number of nodes, in which each player is known to belong to one of three football clubs, and a network among Irish politicians with a moderate number of nodes, where each politician is affiliated with one of seven Irish political parties. The data are publicly available online (Greene and Cunningham, 2013). In
the analyses, hyperparameters, initial values, and step sizes are set as in Section 4.
### Football players network
This binary network consists of directed edges indicating the presence of Twitter mentions from one English Premier League football player's Twitter account to another. The data were adapted from those provided in Greene and Cunningham (2013): 55 players playing for 3 different Premier League clubs are considered with 15 players from Stoke City football club (Stoke), 23 players from Tottenham Hotspur (Spurs), and 17 players from West Bromwich Albion (West Brom). In total, there were 497 mentions between players, giving a network density of 16.73%. To fit the LSPCM to these data, 10 MCMC chains are run with \(\nu=0.01\), each for 500,000 iterations with a burn-in period of 50,000 and thinned every 1,000th. The average time for the completion of one MCMC chain was 35 minutes.
Figure 1 illustrates that under the LSPCM the posterior modal number of clusters \(G_{m}=4\) (1, 6) and the posterior modal number of dimensions is 2 (1, 2), with 95% credible intervals in brackets. There is uncertainty in \(G_{+}\) as demonstrated by the wide posterior interval and notable support for \(G_{+}=3\), but there is strong certainty on the number of dimensions \(p\).
Despite \(G_{m}=4\), further inspection showed that, across all the 10 chains, a small
\begin{table}
\begin{tabular}{l l l l l} \hline \(\nu\) & \(p_{m}\) & \(G_{m}\) & ARI & PC \\ \hline
0.1 & 2 (2, 4) & 9 (7, 13) & 0.97 (0.96, 0.99) & 0.86 (0.71, 0.97) \\
0.01 & 3 (2, 4) & 7 (7, 9) & 0.99 (0.97, 1.00) & 0.93 (0.76, 0.98) \\
0.001 & 3 (2, 4) & 7 (7, 8) & 0.99 (0.97, 1.00) & 0.90 (0.72, 0.98) \\
0.0001 & 3 (3, 4) & 7 (7, 7) & 0.99 (0.97, 1.00) & 0.95 (0.93, 0.98) \\
0.00001 & 4 (2, 5) & 7 (7, 7) & 0.98 (0.96, 1.00) & 0.91 (0.74, 0.98) \\ \hline \end{tabular}
\end{table}
Table 2: Sensitivity analysis across various settings of \(\nu\) on 10 simulated networks with \(n=200,p^{*}=4,G^{*}=7\) assessed via \(p_{m}\), \(G_{m}\), ARI, and PC. The 95% credible intervals are given in parentheses.
number of nodes was typically allocated to the 4th cluster. Maximising PEAR resulted in an estimate of the number of clusters of 3, with an ARI of 0.94 between the final clustering and the players' clubs. Figures 2(a) and 2(b) show the Fruchterman-Reingold layouts (Kolaczyk and Csardi, 2020) of the network with the players' clubs and inferred clusters detailed respectively. Only one player was clustered differently to the other players in his club: the West Brom player Romelu Lukaku who was originally from Chelsea football club and was on a season-long loan deal with West Brom. Lukaku mentioned and was mentioned by only one player, the Spurs player Jan Vertonghen and intuitively the LSPCM has clustered them together. Figure 2(c) shows the posterior mean latent positions coloured by the cluster labels that maximises the PEAR across the 10 chains. Cluster 1 (which contains all of the Stoke players only) and cluster 2 (all players from Spurs, and Romelu Lukaku) are separate from each other on the first dimension, while in the second dimension cluster 3 (which captures West Brom players except Romelu Lukaku) is separated from clusters 1 and 2, indicating both dimensions are necessary for representation of the nodes in the three clusters.
Through the posterior similarity matrix, Figure 3 illustrates the uncertainty in the football players' cluster labels where the colour intensity indicates the probability of a player being clustered together with another player. Cluster 3 (predominantly players from West Brom) has the strongest certainty in its cluster labels, while there is some uncertainty
Figure 1: Football players network, (a) the posterior distribution of the number of non-empty components \(G_{+}\), and (b) the posterior distribution of the latent space dimension \(p\).
in the membership of players in clusters 1 and 2.
Upon fitting 16 LPCMs from all combinations of \(p=\{1,\ldots,4\}\) and \(G=\{1,\ldots,4\}\), the BIC suggests 3 clusters and 2 dimensions as optimal. The Procrustes correlation between the posterior mean latent positions under the optimal LPCM and the LSPCM was 0.94 (standard deviation of 0.05 across the 10 LSPCM chains). Additional results regarding the posterior distributions of variance and shrinkage strength parameters can be found in the supplementary material, Appendix D.
### Irish politicians network
The LSPCM is used to analyse a Twitter network between 348 Irish politicians from the year 2012. The network consists of binary directed edges indicating if one politician follows another (Greene and Cunningham, 2013). Each of the politicians is affiliated with one of the seven Irish political parties: 49 are affiliated with Fianna Fáil (FF), 143 with Fine Gael (FG), 7 with the Green Party (Green), 79 with the Labour Party (Lab), 31 with Sinn Féin (SF), 8 with the United Left Alliance (ULA) and 31 are Independent (Ind). There are 16,856 directed relationships between the politicians, giving a network density of 13.96%.
Figure 2: Football players network, (a) the Fruchterman-Reingold layout with players coloured by club, (b) the Fruchterman-Reingold layout with players coloured by inferred cluster label and (c) posterior mean latent positions on the \(p_{m}=2\) dimensions coloured by inferred cluster label.
For inference, 10 MCMC chains are run with \(\nu=0.00001\), each for 2,000,000 iterations with a burn-in period of 100,000 and thinned every 4,500th sample. The average time for the completion of one MCMC chain was 340 minutes.
Figure 4 shows that, under the LSPCM, the posterior modal number of clusters is 5 (4, 6) and the posterior modal number of dimensions is 4 (3, 5), with the 95% credible intervals reported in brackets. Upon fitting multiple LPCMs across all combinations of \(p=\{2,\ldots,8\}\) and \(G=\{2,\ldots,8\}\), the BIC suggests 6 clusters and 6 dimensions. The median Procrustes correlation between the posterior mean positions under the LPCM and LSPCM was 0.71 (standard deviation of 0.03 across the 10 LSPCM chains). Additional results regarding the posterior distributions of variance and shrinkage strength parameters can be found in the supplementary material, Appendix D.
After maximising the PEAR across the 10 chains, the estimated number of clusters is 5 and the ARI between the LSPCM inferred cluster labels and the politicians' political affiliation is 0.92. Table 3 presents the cross-tabulation of the political party membership and the LSPCM cluster labels. Additionally, Figure 5 shows the Fruchterman-Reingold layout of the network with nodes coloured by political party affiliation and LSPCM inferred
Figure 3: Heat map of the posterior similarity matrix of the cluster labels inferred from the football network, ordered by the cluster labels.
cluster label. The inferred clustering structure captures the composition and nature of the 2012 Irish political landscape: a coalition government of Fine Gael and the Labour Party were in power with Fianna Fail in opposition. Cluster 1 captures the entirety of the Labour Party along with one politician from Fine Gael (Catherine Yore), one politician from the Green Party (Ciaran Cuffe), and five Independent politicians (John Crown, David Norris, Marian Harkin, Veronica Cawley, Katherine Zappone). Cluster 2 captures the entirety of Sinn Fein with no other politician present. Cluster 3 captures the entirety of Fine Gael, bar one politician, along with one Green Party politician (Daniel Boyle) and 3 other Independents (Michael Lowry, Sean Gallagher, Jillian van Turnhout). Notably Michael Lowry was previously a member of Fine Gael. Cluster 4 is a combination of the Green Party, Independents, and the United Left Alliance, with the former two parties having the majority of their politicians in this cluster and the latter having its entire set of politicians in this cluster. Cluster 5 captures the entirety of Fianna Fail but has one Green Party politician (Eamon Ryan) and two Independents (Ronan Mullen, Mattie McGrath). There is a lack of overlap in the clustering structure of members of the three main parties at the time of Fine Gael, Labour and Fianna Fail.
Through the posterior similarity matrix, Figure 6 shows that politicians in cluster 2 (Sinn Fein) have the strongest certainty among their labels while other politicians have some uncertainty.
Figure 4: For the Irish politicians network: (a) the posterior distribution of the number of non-empty components \(G_{+}\), and (b) the posterior distribution of the latent space dimension.
Figure 7 shows the politicians' posterior mean latent positions on the \(p_{m}=4\) dimensions with nodes coloured by the maximised PEAR estimated cluster allocation across the 10 chains. On each dimension, one cluster tends to be located separately to the others, indicating that all the dimensions provide relevant information to distinguish the clusters. For example, on dimension 1 the politicians in cluster 3 are located separately from the other politicians. Similar patterns are apparent for politicians in cluster 5 on dimension 2, politicians in cluster 2 on dimension 3 and politicians in cluster 4 on dimension 4. Visually, the combination of all four dimensions allows for separation of the five clusters.
## 6 Discussion
The latent shrinkage position cluster model (LSPCM) enables simultaneous inference on the number of clusters of nodes in a network and the dimension of the latent space required to represent it. This is achieved, respectively, through employing a sparse prior on the weights of a mixture model and a Bayesian nonparametric shrinkage prior on the variance of the nodes' positions in the latent space. The LSPCM eliminates the need for the compu
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{5}{c}{**Cluster**} \\ **Political party** & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) \\ \hline _Fianna Fáil_ & & & & & 49 \\ _Fine Gael_ & 1 & & 142 & & \\ _Green Party_ & 1 & & 1 & 4 & 1 \\ _Independent_ & 5 & & 3 & 21 & 2 \\ _Labour Party_ & 79 & & & & \\ _Sinn Féin_ & & 31 & & & \\ _United Left Alliance_ & & & & 8 & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Cross-tabulation of political party membership and the LSPCM representative cluster labels.
tationally expensive procedure of fitting multiple latent position cluster models (LPCMs) with different numbers of clusters and dimensions, and then choosing the best model using a range of model selection criteria. The performance of the LSPCM was assessed through simulation studies and its application to sporting and political social networks uncovered interesting and intuitive clustering structures.
While the LSPCM demonstrated strong performance, it is important to consider its sensitivity to specification of priors' parameters, especially in small networks. As in Malsiner-Walli et al. (2014), the sensitivity analyses conducted here highlighted the importance of careful specification of the Dirichlet hyperparameter \(\nu\), with the model inherently supporting many mixture weights close to zero when small values of \(\nu\) are employed. Although this can avoid overfitting, careful consideration and robust sensitivity analyses with regard to choosing \(\nu\) are imperative when employing the LSPCM. Placing a gamma hyperprior on \(\nu\)(Fruhwirth-Schnatter and Malsiner-Walli, 2018; Murphy et al., 2020) could allow for
Figure 5: Fruchterman-Reingold layout of the Irish politicians network with nodes coloured by (a) political party affiliation and (b) LSPCM inferred cluster labels.
inference on this influential Dirichlet hyperparameter. Although our focus in this paper is on the clustering solution, the sensitivity of the latent position dimension threshold parameters is also important. In practice, the value of the threshold parameter required to recover the optimal dimension may change according to the size and density of the network. Using a prior may enable more robust inference on this parameter.
Fitting the LSPCM is computationally feasible on networks of the scale considered here, however, it would be computationally burdensome to apply it to larger networks. Further, while adapting the LSPCM to facilitate modelling of networks with more complex edge types is a natural extension, such advances would come with additional computational cost. Addressing these issues could be possible by employing case-control approaches for the likelihood function (Raftery et al., 2012) and/or avoiding MCMC through the use of variational inference methods (Salter-Townshend and Murphy, 2013).
Finally, while here a MGP shrinkage prior was employed to infer the number of dimensions and an overfitted mixture was used to infer the number of clusters, the LSPCM could be viewed as a member of a broader family of such models given the variety of alternative shrinkage priors and clustering approaches available. Grushanina (2023) provides a broad
Figure 6: Heat map of the posterior similarity matrix of the Irish politicians ordered by the cluster labels.
review of approaches to infinite factorisations. For example, the Indian buffet process (IBP) has been employed to penalise increasing dimensionality in latent factor models (Rockova and George, 2016; Knowles and Ghahramani, 2011); however, the sparsity the IBP enforces could be too restrictive here. From the clustering point of view, an infinite mixture model could be used to infer the number of clusters. Fruhwirth-Schnatter and Malsiner-Walli (2018) discuss linkages between infinite and overfitted mixtures, highlighting that the two yield comparable clustering performance when their hyperpriors are matched.
As network data continue to emerge in a broad range of application areas, and the detection of clusters of nodes in such networks is often of interest, the open access provision of R code at the lspm GitLab repository will facilitate the use of the LSPCM by practitioners.
## Acknowledgements
This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6049. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
The authors are grateful for discussions with members of the Working Group in Model-based Clustering which greatly contributed to this work.
Figure 7: LSPCM inferred posterior mean latent positions of the Irish politicians on the \(p_{m}=4\) dimensions with nodes coloured by cluster membership.
## Appendix A Notation and terminology
\(n\): Number of nodes.
\(p\): Truncation level i.e. the number of dimensions in the fitted LSPM.
\(p^{*}\): The true effective dimension of the latent space.
\(p_{0}\): The initial truncation level of the number of dimensions.
\(p_{m}\): The posterior modal number of dimensions.
\(G\): The number of mixture components.
\(G_{+}\): The number of non-empty mixture components.
\(G^{*}\): The true number of clusters.
\(G_{m}\): The posterior modal number of non-empty components.
\(\mathbf{Y}\): \(n\times n\) network adjacency matrix.
\(y_{i,j}\): Edge between node \(i\) and node \(j\).
\(\mathbf{Z}\): \(n\times p\) matrix of latent positions.
\(z_{i\ell}\): The latent position of node \(i\) in dimension \(\ell\).
\(q_{i,j}\): Probability of forming an edge between node \(i\) and node \(j\).
\(\alpha\): Global parameter that captures the overall connectivity level in the network.
\(\tau_{g}\): The mixing weight of component \(g\).
\(c_{ig}\): A binary indicator variable of membership of node \(i\) in component \(g\).
\(\boldsymbol{\mu}\): \(G\times p\) matrix of the mean latent positions.
\(\boldsymbol{\mu}_{g}\): The mean latent position parameter for component \(g\).
\(\boldsymbol{\Omega}\): \(p\times p\) precision matrix of the latent positions.
\(\omega_{\ell}\): Precision/global shrinkage parameter for dimension \(\ell\).
\(\delta_{h}\): Shrinkage strength from dimension \(h\).
\(\xi\): Scaling factor in the prior covariance of the component means.
\(\boldsymbol{\Theta}\): A collective term for \(\boldsymbol{\tau},\boldsymbol{\mu},\boldsymbol{\Omega}\).
\(a_{h}\): Shape parameter of gamma distribution for \(\delta_{h}\) in dimension \(h\).
\(b_{h}\): Rate parameter of the gamma distribution for \(\delta_{h}\) in dimension \(h\).
\(t_{h}\): Left truncation point of the gamma distribution for \(\delta_{h}\).
\(\kappa_{0},\kappa_{1}\): The dimension adaptation probability parameters.
\(\epsilon_{1}\): The threshold for decreasing dimensions in the dimension adaptation step.
\(\epsilon_{2}\): The threshold for adding a dimension in the dimension adaptation step.
\(\epsilon_{3}\): The threshold for adding a dimension in the dimension adaptation step when the number of active dimensions is 1.
## Appendix B Derivation of full conditional distributions
Links between nodes \(i\) and \(j\) are assumed to form probabilistically from a Bernoulli distribution:
\[y_{i,j}\sim\text{Bernoulli}(q_{i,j})\]
For a binary network, the probability \(q_{i,j}\) is expressed in terms of a logistic model i.e.
\[\log\frac{q_{i,j}}{1-q_{i,j}}=\alpha-\|\mathbf{z}_{i}-\mathbf{z}_{j}\|_{2}^{2}.\]
Denoting \(\eta_{i,j}=\alpha-\|\mathbf{z}_{i}-\mathbf{z}_{j}\|_{2}^{2}\), then \(q_{i,j}=\frac{\exp(\eta_{i,j})}{1+\exp(\eta_{i,j})}\).
The likelihood function of the LSPCM is
\[L(\mathbf{Y}|\mathbf{Z},\alpha) =\prod_{i\neq j}P(y_{i,j}|\mathbf{z}_{i},\mathbf{z}_{j},\alpha)\] \[=\prod_{i\neq j}\left[\frac{\exp(\eta_{i,j})}{1+\exp(\eta_{i,j}) }\right]^{y_{i,j}}\left[1-\frac{\exp(\eta_{i,j})}{1+\exp(\eta_{i,j})}\right]^{ 1-y_{i,j}}\] \[=\prod_{i\neq j}\frac{\exp(\eta_{i,j}y_{i,j})}{1+\exp(\eta_{i,j})},\]
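A quick numerical check confirms that the compact form above matches the usual Bernoulli density \(q_{i,j}^{y_{i,j}}(1-q_{i,j})^{1-y_{i,j}}\):

```r
eta <- 0.7
q   <- plogis(eta)                      # q = exp(eta) / (1 + exp(eta))
for (y in c(0, 1)) {
  lhs <- exp(eta * y) / (1 + exp(eta))  # compact likelihood term
  rhs <- q^y * (1 - q)^(1 - y)          # Bernoulli density
  stopifnot(isTRUE(all.equal(lhs, rhs)))
}
```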
The joint posterior distribution of the LSPM is
\[\mathbb{P}(\alpha,\mathbf{Z},\mathbf{C},\mathbf{\Theta}\mid\mathbf{Y}) \propto\mathbb{P}(\mathbf{Y}\mid\alpha,\mathbf{Z})P(\alpha)\mathbb{ P}(\mathbf{Z}\mid\mathbf{C},\mathbf{\Theta})\mathbb{P}(\mathbf{C}\mid\mathbf{\tau}) \mathbb{P}(\mathbf{\Theta})\] \[\mathbb{P}(\alpha,\mathbf{Z},\mathbf{C},\mathbf{\tau},\mathbf{\mu},\mathbf{ \delta}\mid\mathbf{Y}) \propto\mathbb{P}(\mathbf{Y}\mid\alpha,\mathbf{Z})\mathbb{P}( \alpha)\mathbb{P}(\mathbf{Z}\mid\mathbf{C},\mathbf{\mu},\mathbf{\delta})\mathbb{P}(\mathbf{C} \mid\mathbf{\tau})\mathbb{P}(\mathbf{\tau})\mathbb{P}(\mathbf{\mu}\mid\mathbf{\delta}) \mathbb{P}(\mathbf{\delta})\]
\[\mathbb{P}(\alpha,\mathbf{Z},\mathbf{C},\boldsymbol{\tau},\boldsymbol{\mu},\boldsymbol{\delta}\mid\mathbf{Y}) \propto\left\{\prod_{i\neq j}\left[\frac{\exp(\eta_{i,j}y_{i,j})}{1+\exp(\eta_{i,j})}\right]\right\}\] \[\qquad\times\left\{\frac{1}{\sqrt{2\pi\sigma_{\alpha}^{2}}}\exp\left[-\frac{1}{2\sigma_{\alpha}^{2}}(\alpha-\mu_{\alpha})^{2}\right]\right\}\] \[\qquad\times\prod_{i=1}^{n}\prod_{g=1}^{G}\left\{\left(\frac{1}{2\pi}\right)^{\frac{p}{2}}(\det\,\boldsymbol{\Omega})^{\frac{1}{2}}\exp\left[-\frac{1}{2}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{T}\boldsymbol{\Omega}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})\right]\right\}^{c_{ig}}\] \[\qquad\times\prod_{i=1}^{n}\left[\frac{n!}{c_{i1}!\cdots c_{iG}!}\tau_{1}^{c_{i1}}\cdots\tau_{G}^{c_{iG}}\right]\] \[\qquad\times\left[\frac{1}{\beta(\nu)}\prod_{g=1}^{G}\tau_{g}^{\nu-1}\right]\] \[\qquad\times\left\{\prod_{g=1}^{G}\left(\frac{1}{2\pi}\right)^{\frac{p}{2}}[\det\,(\xi^{-1}\boldsymbol{\Omega})]^{\frac{1}{2}}\exp\left[-\frac{1}{2}(\boldsymbol{\mu}_{g}-\boldsymbol{0})^{T}(\xi^{-1})\boldsymbol{\Omega}(\boldsymbol{\mu}_{g}-\boldsymbol{0})\right]\right\}\] \[\qquad\times\left\{\frac{b_{1}^{a_{1}}}{\Gamma(a_{1})}(\delta_{1})^{a_{1}-1}\exp\left[-b_{1}(\delta_{1})\right]\right\}\] \[\qquad\times\left\{\prod_{h=2}^{p}\frac{b_{2}^{a_{2}}}{\Gamma(a_{2})}(\delta_{h})^{a_{2}-1}\exp\left[-b_{2}(\delta_{h})\right]\right\}\]
Indicating with \(-\) the conditioning on all the remaining variables, the full conditional distribution for \(\alpha\) is:
\[\mathbb{P}(\alpha\mid-) \propto\left\{\prod_{i\neq j}\left[\frac{\exp(\eta_{i,j}y_{i,j}) }{1+\exp(\eta_{i,j})}\right]\right\}\times\frac{1}{\sqrt{2\pi\sigma_{\alpha}^ {2}}}\exp\left[-\frac{1}{2}\frac{(\alpha-\mu_{\alpha})^{2}}{\sigma_{\alpha}^{ 2}}\right]\] \[\propto\left\{\prod_{i\neq j}\left[\frac{\exp(\eta_{i,j}y_{i,j}) }{1+\exp(\eta_{i,j})}\right]\right\}\times\exp\left[-\frac{1}{2}\frac{(\alpha -\mu_{\alpha})^{2}}{\sigma_{\alpha}^{2}}\right]\] \[\log\mathbb{P}(\alpha\mid-) \propto\sum_{i\neq j}\,\left\{\eta_{i,j}y_{i,j}-\log\left[1+\exp( \eta_{i,j})\right]\right\}-\frac{1}{2}\frac{(\alpha-\mu_{\alpha})^{2}}{\sigma _{\alpha}^{2}}\] \[\propto\sum_{i\neq j}\,\left\{\eta_{i,j}y_{i,j}-\log\left[1+\exp( \eta_{i,j})\right]\right\}-(\alpha-\mu_{\alpha})^{2}\] \[\propto\sum_{i\neq j}\,\left\{(\alpha-\|\mathbf{z}_{i}-\mathbf{z }_{j}\|_{2}^{2})y_{i,j}-\log\left[1+\exp(\alpha-\|\mathbf{z}_{i}-\mathbf{z}_{j }\|_{2}^{2})\right]\right\}-(\alpha-\mu_{\alpha})^{2}\] \[\log\mathbb{P}(\alpha\mid-) \propto\sum_{i\neq j}\,\left\{\alpha y_{i,j}-\log\left[1+\exp( \alpha-\|\mathbf{z}_{i}-\mathbf{z}_{j}\|_{2}^{2})\right]\right\}-(\alpha-\mu_ {\alpha})^{2}\]
As this is not a recognisable distribution, the Metropolis-Hastings algorithm is employed. The full conditional distribution for \(\mathbf{Z}\) is:
\[\mathbb{P}(\mathbf{Z}\mid-) \propto\left\{\prod_{i\neq j}\left[\frac{\exp(\eta_{i,j}y_{i,j})}{1+\exp(\eta_{i,j})}\right]\right\}\] \[\qquad\times\prod_{i=1}^{n}\prod_{g=1}^{G}\left\{\left(\frac{1}{2\pi}\right)^{\frac{p}{2}}(\det\,\boldsymbol{\Omega})^{\frac{1}{2}}\exp\left[-\frac{1}{2}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{T}\boldsymbol{\Omega}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})\right]\right\}^{c_{ig}}\] \[\log\mathbb{P}(\mathbf{Z}\mid-) \propto\sum_{i\neq j}\,\left\{\eta_{i,j}y_{i,j}-\log\left[1+\exp(\eta_{i,j})\right]\right\}-\sum_{i=1}^{n}\sum_{g=1}^{G}c_{ig}\left[\frac{1}{2}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{T}\boldsymbol{\Omega}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})\right]\] \[\propto\sum_{i\neq j}\,\left\{-\|\mathbf{z}_{i}-\mathbf{z}_{j}\|_{2}^{2}y_{i,j}-\log\left[1+\exp(\alpha-\|\mathbf{z}_{i}-\mathbf{z}_{j}\|_{2}^{2})\right]\right\}\] \[\qquad-\sum_{i=1}^{n}\sum_{g=1}^{G}c_{ig}\left[\frac{1}{2}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{T}\boldsymbol{\Omega}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})\right]\]
As this is not a recognisable distribution, the Metropolis-Hastings algorithm is employed. The full conditional distribution for \(\mathbf{c}_{ig}=1\) is:
\[\mathbb{P}(\mathbf{c}_{ig}=1\mid-) =\frac{\mathbb{P}(-\mid\mathbf{c}_{ig}=1)\mathbb{P}(\mathbf{c}_{ig}=1)}{\mathbb{P}(-)}=\frac{\mathbb{P}(\mathbf{z}_{i}\mid\mathbf{c}_{ig}=1)\mathbb{P}(\mathbf{c}_{ig}=1)}{\sum_{r=1}^{G}\mathbb{P}(\mathbf{z}_{i}\mid\mathbf{c}_{ir}=1)\mathbb{P}(\mathbf{c}_{ir}=1)}\] \[=\frac{\tau_{g}\operatorname{MVN}_{p}(\mathbf{z}_{i};\boldsymbol{\mu}_{g},\mathbf{\Omega}^{-1})}{\sum_{r=1}^{G}\tau_{r}\operatorname{MVN}_{p}(\mathbf{z}_{i};\boldsymbol{\mu}_{r},\mathbf{\Omega}^{-1})}\]
The full conditional distribution for \(\tau_{g}\) is:
\[\mathbb{P}(\tau_{g}\mid-) \propto\left[\frac{1}{\beta(\nu)}\tau_{g}^{\nu-1}\right]\left[ \prod_{i=1}^{n}\tau_{g}^{c_{ig}}\right]\] \[\propto\tau_{g}^{\nu-1}\tau_{g}^{\sum_{i=1}^{n}\mathbf{c}_{ig}}\] \[\propto\tau_{g}^{\sum_{i=1}^{n}\mathbf{c}_{ig}+\nu-1}\] \[\sim\operatorname{Dir}(\sum_{i=1}^{n}\mathbf{c}_{ig}+\nu)\]
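The resulting Gibbs draw for \(\boldsymbol{\tau}\) is therefore a Dirichlet with parameters \(\nu+n_{g}\), which can be sampled via normalised gamma variates in base R (names illustrative):

```r
update_tau <- function(cl, G, nu) {
  counts <- tabulate(cl, nbins = G)    # n_g: nodes allocated to component g
  w <- rgamma(G, shape = nu + counts)  # independent gamma variates, rate 1
  w / sum(w)                           # normalise to obtain a Dirichlet draw
}
```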
The full conditional distribution for \(\mathbf{\mu}_{g}\) is:
\[\mathbb{P}(\mathbf{\mu}_{g}\mid-) \propto\prod_{i=1}^{n}\left\{\left(\frac{1}{2\pi}\right)^{\frac{p}{2 }}(\det\ \mathbf{\Omega})^{\frac{1}{2}}\exp\left[-\frac{1}{2}(\mathbf{z}_{i}-\mathbf{\mu}_{g})^{T }\mathbf{\Omega}(\mathbf{z}_{i}-\mathbf{\mu}_{g})\right]\right\}^{c_{ig}}\] \[\qquad\times\left\{\left(\frac{1}{2\pi}\right)^{\frac{p}{2}}[ \det\ (\xi^{-1}\mathbf{\Omega})]^{\frac{1}{2}}\exp\left[-\frac{1}{2}(\mathbf{\mu}_{g}-\mathbf{0} )^{T}(\xi^{-1})\mathbf{\Omega}(\mathbf{\mu}_{g}-\mathbf{0})\right]\right\}\] \[\propto\exp\left\{\sum_{i=1}^{n}\left[-\frac{1}{2}(\mathbf{z}_{i }-\mathbf{\mu}_{g})^{T}\mathbf{\Omega}(\mathbf{z}_{i}-\mathbf{\mu}_{g})(c_{ig})\right]- \frac{\mathbf{\mu}_{g}^{T}(\xi^{-1})\mathbf{\Omega}\mathbf{\mu}_{g}}{2}\right\}\] \[\propto\exp\left\{-\frac{1}{2}\left[\sum_{i=1}^{n}\left[( \mathbf{z}_{i}^{T}\mathbf{\Omega}\mathbf{z}_{i}c_{ig}-2\mathbf{z}_{i}^{T}\mathbf{\Omega }\mathbf{\mu}_{g}c_{ig}+\mathbf{\mu}_{g}^{T}\mathbf{\Omega}\mathbf{\mu}_{g}c_{ig})\right]+\mathbf{ \mu}_{g}^{T}(\xi^{-1})\mathbf{\Omega}\mathbf{\mu}_{g}\right]\right\}\] \[\propto\exp\left\{-\frac{1}{2}\left[-2\sum_{i=1}^{n}c_{ig} \mathbf{z}_{i}^{T}\mathbf{\Omega}\mathbf{\mu}_{g}+\sum_{i=1}^{n}c_{ig}\mathbf{\mu}_{g}^{T} \mathbf{\Omega}\mathbf{\mu}_{g}+\mathbf{\mu}_{g}^{T}(\xi^{-1})\mathbf{\Omega}\mathbf{\mu}_{g} \right]\right\}\] \[\propto\exp\left\{-\frac{1}{2}\left[\mathbf{\mu}_{g}^{T}\mathbf{\Omega} \left(\sum_{i=1}^{n}c_{ig}+\xi^{-1}\right)\mathbf{\mu}_{g}-2\mathbf{\mu}_{g}^{T}\left( \mathbf{\Omega}\sum_{i=1}^{n}(c_{ig}\mathbf{z}_{i})\right)\right]\right\}\] \[\propto\exp\left\{-\frac{1}{2}\left[\mathbf{\mu}_{g}^{T}-\left\{\mathbf{ \Omega}\sum_{i=1}^{n}(c_{ig}\mathbf{z}_{i})\right\}^{T}\left\{\mathbf{\Omega} \left(\sum_{i=1}^{n}c_{ig}+\xi^{-1}\right)\right\}^{-1}\right]\left[\mathbf{\Omega }\left(\sum_{i=1}^{n}c_{ig}+\xi^{-1}\right)\right]\right.\] \[\qquad\times\left.\left.\left[\mathbf{\mu}_{g}-\left\{\mathbf{\Omega}^{T }\left(\sum_{i=1}^{n}c_{ig}+\xi^{-1}\right)\right\}^{-1}\left\{\mathbf{\Omega} \sum_{i=1}^{n}(c_{ig}\mathbf{z}_{i})\right\}\right]\right\}\]
since \(\mathbf{\Omega}\) is a diagonal matrix, \(\mathbf{\Omega}=\mathbf{\Omega}^{T}\) and \(\mathbf{\Omega}\mathbf{\Omega}^{-1}=\mathbf{I}\), thus,
\[\propto\exp\left\{-\frac{1}{2}\left[\boldsymbol{\mu}_{g}-\frac{\sum_{i=1}^{n}c_{ig}\mathbf{z}_{i}}{\sum_{i=1}^{n}c_{ig}+\xi^{-1}}\right]^{T}\left[\boldsymbol{\Omega}\left(\sum_{i=1}^{n}c_{ig}+\xi^{-1}\right)\right]\left[\boldsymbol{\mu}_{g}-\frac{\sum_{i=1}^{n}c_{ig}\mathbf{z}_{i}}{\sum_{i=1}^{n}c_{ig}+\xi^{-1}}\right]\right\}\] \[\sim\mathrm{MVN}_{p}\left(\frac{\sum_{i=1}^{n}c_{ig}\mathbf{z}_{i}}{\sum_{i=1}^{n}c_{ig}+\xi^{-1}}\quad,\quad\left[\boldsymbol{\Omega}\left(\sum_{i=1}^{n}c_{ig}+\xi^{-1}\right)\right]^{-1}\right)\]
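Since \(\boldsymbol{\Omega}\) is diagonal, this MVN full conditional factorises over dimensions, so a Gibbs draw for \(\boldsymbol{\mu}_{g}\) needs only independent normals; an illustrative base R sketch follows.

```r
update_mu_g <- function(Z, cl, g, Omega, xi) {
  idx  <- which(cl == g)
  ng   <- length(idx)
  m    <- colSums(Z[idx, , drop = FALSE]) / (ng + 1 / xi)  # conditional mean
  prec <- diag(Omega) * (ng + 1 / xi)                      # conditional precision
  m + rnorm(ncol(Z), 0, sqrt(1 / prec))                    # independent draws
}
```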
The full conditional distribution for \(\delta_{1}\) is:
\[\mathbb{P}(\delta_{1}\mid-)\propto\] \[\qquad\times\prod_{g=1}^{G}\left\{\left(\frac{1}{2\pi}\right)^{ \frac{p}{2}}\left[\det\,(\xi^{-1}\boldsymbol{\Omega})\right]^{\frac{1}{2}} \exp\left[-\frac{1}{2}(\boldsymbol{\mu}_{g}-\boldsymbol{0})^{T}(\xi^{-1}) \boldsymbol{\Omega}(\boldsymbol{\mu}_{g}-\boldsymbol{0})\right]\right\}\] \[\qquad\times\left[\frac{b_{1}^{a_{1}}}{\Gamma(a_{1})}(\delta_{1})^ {a_{1}-1}\exp\left(-b_{1}\delta_{1}\right)\right]\] \[\qquad\times\left\{\boldsymbol{\omega}^{\frac{p\sum_{i=1}^{n} \sum_{i=g}^{G}c_{ig}}{2}}\mathbf{I}_{p}\exp\left[-\frac{1}{2}\sum_{i=1}^{n} \sum_{i=g}^{G}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{T}\boldsymbol{\omega} \mathbf{I}_{p}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})c_{ig}\right]\right\}\] \[\qquad\times\left\{\boldsymbol{\omega}^{\frac{Gp}{2}}\mathbf{I}_{p }\exp\left[-\frac{1}{2}\sum_{g=1}^{G}\boldsymbol{\mu}_{g}^{T}(\xi^{-1} \boldsymbol{\omega})\mathbf{I}_{p}\boldsymbol{\mu}_{g}\right]\right\}\times \left[\delta_{1}^{a_{1}-1}\exp(-b_{1}\delta_{1})\right]\] \[\qquad\times\left\{\delta_{1}^{\frac{Gp}{2}}\exp\left[-\frac{1}{2 }\sum_{g=1}^{G}\boldsymbol{\mu}_{g}^{T}(\xi^{-1})\left(\prod_{m=1}^{\ell}\delta _{m}\right)\mathbf{I}_{p}\boldsymbol{\mu}_{g}\right]\right\}\times\left[ \delta_{1}^{a_{1}-1}\exp(-b_{1}\delta_{1})\right]\] \[\qquad\propto\delta_{1}^{\frac{pp}{2}}\delta_{1}^{\frac{Gp}{2}} \delta_{1}^{a_{1}-1}\exp\left[-\frac{1}{2}\sum_{i=1}^{n}\sum_{i=g}^{G}(\mathbf{ z}_{i}-\boldsymbol{\mu}_{g})^{T}\left(\delta_{1}\prod_{m=2}^{\ell}\delta_{m} \right)\mathbf{I}_{p}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})c_{ig}\right.\] \[\qquad\left.-\frac{1}{2}\sum_{g=1}^{G}\boldsymbol{\mu}_{g}^{T}( \xi^{-1})\left(\delta_{1}\prod_{m=2}^{\ell}\delta_{m}\right)\mathbf{I}_{p} \boldsymbol{\mu}_{g}-b_{1}\delta_{1}\right]\] \[\qquad\propto\delta_{1}^{\frac{(n+G)p}{2}+a_{1}-1}\exp\left\{- \left[\frac{1}{2}\sum_{i=1}^{n}\sum_{g=1}^{G}(\mathbf{z}_{i}-\boldsymbol{\mu} _{g})^{T}\left(\prod_{m=2}^{\ell}\delta_{m}\right)\mathbf{I}_{p}(\mathbf{z}_{ i}-\boldsymbol{\mu}_{g})c_{ig}\right.\right.\] \[\qquad\left.\left.+\frac{1}{2}\sum_{g=1}^{G}\boldsymbol{\mu}_{g}^ {T}(\xi^{-1})\left(\prod_{m=2}^{\ell}\delta_{m}\right)\mathbf{I}_{p} \boldsymbol{\mu}_{g}+b_{1}\right]\delta_{1}\right\}\] \[\qquad\sim\mathrm{Gam}\left(\frac{(n+G)\,p}{2}+a_{1}\quad,\right.\] \[\qquad\qquad\left.\frac{1}{2}\sum_{i=1}^{n}\sum_{g=1}^{G}(\mathbf{ z}_{i}-\boldsymbol{\mu}_{g})^{T}\left(\prod_{m=2}^{\ell}\delta_{m}\right)\mathbf{I}_{p}( \mathbf{z}_{i}-\boldsymbol{\mu}_{g})c_{ig}\right.\quad+\] \[\qquad\qquad\left.\frac{1}{2}\sum_{g=1}^{G}\boldsymbol{\mu}_{g}^ {T}(\xi^{-1})\left(\prod_{m=2}^{\ell}\delta_{m}\right)\mathbf{I}_{p} \boldsymbol{\mu}_{g}+b_{1}\right)\]
The full conditional distribution for \(\delta_{h}\), where \(h\geq 2\) is:
\[\mathbb{P}(\delta_{h}\mid-)\propto \prod_{i=1}^{n}\prod_{g=1}^{G}\left\{\left(\frac{1}{2\pi}\right)^{ \frac{p}{2}}(\det\,\boldsymbol{\Omega})^{\frac{1}{2}}\exp\left[-\frac{1}{2}( \mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{T}\boldsymbol{\Omega}(\mathbf{z}_{i}- \boldsymbol{\mu}_{g})\right]\right\}^{c_{ig}}\] \[\quad\times\prod_{g=1}^{G}\left\{\left(\frac{1}{2\pi}\right)^{ \frac{p}{2}}[\det\,(\xi^{-1}\boldsymbol{\Omega})]^{\frac{1}{2}}\exp\left[-\frac{ 1}{2}(\boldsymbol{\mu}_{g}-\boldsymbol{0})^{T}(\xi^{-1}\boldsymbol{\Omega}( \boldsymbol{\mu}_{g}-\boldsymbol{0})\right]\right\}\] \[\quad\times\left[\frac{b_{2}^{a_{2}}}{\Gamma(a_{2})}(\delta_{h})^ {a_{2}-1}\exp(-b_{2}\delta_{h})\right]\] \[\propto \left\{\boldsymbol{\omega}^{\frac{(p-h+1)\sum_{i=1}^{n}\sum_{g=1 }^{G}c_{ig}}{2}}\mathbf{I}_{p}\exp\left[-\frac{1}{2}\sum_{i=1}^{n}\sum_{g=1}^ {G}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{T}\boldsymbol{\omega}\mathbf{I}_{p} (\mathbf{z}_{i}-\boldsymbol{\mu}_{g})c_{ig}\right]\right\}\] \[\quad\times\left\{\boldsymbol{\omega}^{\frac{Gp}{2}}\mathbf{I}_{p} \exp\left[-\frac{1}{2}\sum_{g=1}^{G}\boldsymbol{\mu}_{g}^{T}(\xi^{-1} \boldsymbol{\omega})\mathbf{I}_{p}\boldsymbol{\mu}_{g}\right]\right\}\] \[\quad\times\left[\delta_{h}^{a_{2}-1}\exp(-b_{2}\delta_{h})\right]\] \[\propto \delta_{h}^{\frac{(p-h+1)n}{2}}\exp\left[-\frac{1}{2}\sum_{i=1}^{ n}\sum_{g=1}^{G}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{T}\left(\prod_{m=1}^{\ell} \delta_{m}\right)\mathbf{I}_{p}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})c_{ig}\right]\] \[\quad\times\left[\delta_{h}^{a_{2}-1}\exp(-b_{2}\delta_{h})\right]\] \[\propto \delta_{h}^{\frac{n(p-h+1)}{2}}\delta_{h}^{\frac{G(p-h+1)}{2}} \delta_{h}^{a_{2}-1}\exp\left[-\frac{1}{2}\sum_{i=1}^{n}\sum_{g=1}^{G}( \mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{T}\left(\delta_{h}\prod_{m=1,m\neq h}^{ \ell}\delta_{m}\right)\mathbf{I}_{p}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})c_{ig}\right.\] \[\quad\left.-\frac{1}{2}\sum_{g=1}^{G}\boldsymbol{\mu}_{g}^{T}(\xi ^{-1})\left(\delta_{h}\prod_{m=1,m\neq h}^{\ell}\delta_{m}\right)\mathbf{I}_{ p}\boldsymbol{\mu}_{g}-b_{2}\delta_{h}\right]\] \[\propto \delta_{h}^{\frac{(n+G)(p-h+1)}{2}+a_{2}-1}\exp\left\{-\left[ \frac{1}{2}\sum_{i=1}^{n}\sum_{g=1}^{G}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{ T}\left(\prod_{m=1,m\neq h}^{\ell}\delta_{m}\right)\mathbf{I}_{p}(\mathbf{z}_{i}- \boldsymbol{\mu}_{g})c_{ig}\right.\right.\] \[\quad\left.\left.+\frac{1}{2}\sum_{g=1}^{G}\boldsymbol{\mu}_{g}^{ T}(\xi^{-1})\left(\prod_{m=1,m\neq h}^{\ell}\delta_{m}\right)\mathbf{I}_{p} \boldsymbol{\mu}_{g}+b_{2}\right]\delta_{h}\right\}\]
since \(\delta_{h}\) is constrained to \([1,\infty)\),
\[\sim\mathrm{Gam}^{\mathrm{T}}\left(\frac{(n+G)\left(p-h+1\right)}{2}+a_{2}\quad,\quad\frac{1}{2}\sum_{i=1}^{n}\sum_{g=1}^{G}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})^{T}\left(\prod_{m=1,m\neq h}^{\ell}\delta_{m}\right)\mathbf{I}_{p}(\mathbf{z}_{i}-\boldsymbol{\mu}_{g})c_{ig}\quad+\quad\frac{1}{2}\sum_{g=1}^{G}\boldsymbol{\mu}_{g}^{T}(\xi^{-1})\left(\prod_{m=1,m\neq h}^{\ell}\delta_{m}\right)\mathbf{I}_{p}\boldsymbol{\mu}_{g}+b_{2}\quad,\quad 1\right)\]
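The left-truncated gamma \(\operatorname{Gam}^{\mathrm{T}}\) above can be sampled by inverse-CDF sampling restricted to \([1,\infty)\); a base R sketch:

```r
rtgamma <- function(shape, rate, t = 1) {
  u <- runif(1, pgamma(t, shape, rate), 1)  # uniform over the truncated CDF range
  qgamma(u, shape, rate)                    # inverse CDF: the draw is always >= t
}
```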
## Appendix C Simulation studies' additional posterior distribution plots
The LSPCM performance is assessed via the posterior distributions of the number of non-empty components (the estimate of the number of clusters), the number of dimensions, the latent positions' variances, the shrinkage strengths, and the \(\alpha\) parameter. Included here are supplementary plots to assist in assessing the performance of the LSPCM.
Figure 8 shows an increasing probability that \(G_{+}=3\) when \(\nu\) decreases from 0.1 to 0.001, but \(G_{+}=1\) receives increasing probability when \(\nu\) decreases from 0.001 to 0.00001, indicating the nodes are collapsing into one cluster. Figure 9 indicates that, irrespective of the value of \(\nu\), the number of dimensions is well estimated with \(p_{m}=2\). Figures 10, 11 and 12 demonstrate accurate parameter estimation.
On the other hand, Figure 13 shows increasing posterior probability for the correct \(G_{+}=7\) as \(\nu\) decreases from 0.1 to 0.00001. However, Figure 14 indicates more uncertain inference on the number of dimensions, with notable posterior probability for \(p=2\) to \(p=4\). Figures 15, 16 and 17 also show some bias in the estimation of the parameters associated with the dimensions of the latent space.
Figure 11: For the simulated network with \(p^{*}=2\) and \(G^{*}=3\), the posterior distribution of the variance parameters across different values of \(\nu\).
Figure 10: For the simulated network with \(p^{*}=2\) and \(G^{*}=3\) the posterior distribution of the shrinkage strength parameter across different values of \(\nu\).
Figure 9: Posterior distributions on the number of dimensions for the simulated network with \(p^{*}=2\) and \(G^{*}=3\) across different values of \(\nu\).
Figure 12: For the simulated network with \(p^{*}=2\) and \(G^{*}=3\), the posterior distribution of \(\alpha\) across different values of \(\nu\).
Figure 13: For the simulated network with \(p^{*}=4\) and \(G^{*}=7\), the posterior distribution on the number of clusters across different values of \(\nu\).
Figure 16: For the simulated network with \(p^{*}=4\) and \(G^{*}=7\), the posterior distributions of the variance parameters across dimensions for different values of \(\nu\).
Figure 17: For the simulated network with \(p^{*}=4\) and \(G^{*}=7\), the posterior distribution of \(\alpha\) for different values of \(\nu\).
Figure 15: For the simulated network with \(p^{*}=4\) and \(G^{*}=7\), the posterior distributions of the shrinkage strength parameters across dimensions for different values of \(\nu\).
## Appendix D Additional results from analyses of Twitter networks
Supplementary results from applying the LSPCM to the Twitter networks are provided here, including the posterior distributions of the variance, shrinkage strength, and \(\alpha\) parameters. As noted in Gwee et al. (2023), solutions with different numbers of dimensions give biased parameter estimates; this is particularly evident for the \(\alpha\) parameter, whose violin plots form distinct sets separated from one another.
Figure 18: For the football Twitter network, (a) the posterior distributions of the shrinkage strength parameters across dimensions, (b) the posterior distributions of the variance parameters across dimensions, and (c) the posterior distribution of \(\alpha\).
Figure 19: For the Irish politicians Twitter network, (a) the posterior distribution of the shrinkage strength parameter across dimensions, (b) the posterior distributions of the variance parameters across dimensions, and (c) the posterior distribution of \(\alpha\). |
2305.16935 | Gender Lost In Translation: How Bridging The Gap Between Languages
Affects Gender Bias in Zero-Shot Multilingual Translation | Neural machine translation (NMT) models often suffer from gender biases that
harm users and society at large. In this work, we explore how bridging the gap
between languages for which parallel data is not available affects gender bias
in multilingual NMT, specifically for zero-shot directions. We evaluate
translation between grammatical gender languages which requires preserving the
inherent gender information from the source in the target language. We study
the effect of encouraging language-agnostic hidden representations on models'
ability to preserve gender and compare pivot-based and zero-shot translation
regarding the influence of the bridge language (participating in all language
pairs during training) on gender preservation. We find that language-agnostic
representations mitigate zero-shot models' masculine bias, and with increased
levels of gender inflection in the bridge language, pivoting surpasses
zero-shot translation regarding fairer gender preservation for speaker-related
gender agreement. | Lena Cabrera, Jan Niehues | 2023-05-26T13:51:50Z | http://arxiv.org/abs/2305.16935v1 | Gender Lost In Translation: How Bridging The Gap Between Languages Affects Gender Bias in Zero-Shot Multilingual Translation
###### Abstract
NMT models often suffer from gender biases that harm users and society at large. In this work, we explore how bridging the gap between languages for which parallel data is not available affects gender bias in multilingual NMT, specifically for zero-shot directions. We evaluate translation between grammatical gender languages which requires preserving the inherent gender information from the source in the target language. We study the effect of encouraging language-agnostic hidden representations on models' ability to preserve gender and compare pivot-based and zero-shot translation regarding the influence of the bridge language (participating in all language pairs during training) on gender preservation. We find that language-agnostic representations mitigate zero-shot models' masculine bias, and with increased levels of gender inflection in the bridge language, pivoting surpasses zero-shot translation regarding fairer gender preservation for speaker-related gender agreement.
Footnote †: 2023: The authors. This article is licensed under a Creative Commons 4.0 licence, no derivative works, attribution, CC-BY-ND.
## 1 Introduction
With the rapid proliferation of intelligent systems, machine learning models reflecting patterns of discriminatory behavior found in the training data is a growing concern of practitioners and academics. Neural machine translation (NMT) models have proven notoriously gender-biased, often resulting in harmful gender stereotyping or an under-representation of the feminine gender in their outputs. In recent years, several approaches to debias NMT have been proposed, including debiasing the data before model training, the models during training, or post-processing their outputs. However, to the best of the authors' knowledge, it has yet to be explored how the phenomenon of not observing enough data, if any, to model language accurately affects gender discrimination in MNMT.
To support translation between language pairs never seen during training (i.e., zero-shot directions), two widely-used approaches leverage the language resources (i.e., parallel data) available during training: _Pivot-based_ translation uses an intermediate pivot/bridge language (as in source\(\rightarrow\)pivot\(\rightarrow\)target), whereas _zero-shot_ translation learns to bridge the gap between unseen language pairs using cross-lingual transfer learning.1
Footnote 1: We use “zero-shot _directions_” to refer to language pairs unseen during training, whereas “zero-shot _translation_” is NMT capable of zero-shot inference, relying on a model’s generalizability to conditions unseen during training.
In this work, we analyze gender bias in MNMT in the context of _gender preservation_, where gender information conveyed by the source language sentence needs to be preserved in the target language translation; in our experimental setting, source and target languages are grammatical gender languages that use a noun class system conforming with the _gender binary_, i.e., the classification of gender into the opposite forms of feminine and masculine, considered indicative of a person's biological sex.2 We examine translations
in terms of differences in gender preservation between both genders, which, if found, are evidence of gender-biased MT. More precisely, we focus on the impact that _bridging the gap between unseen language pairs_ has on the MT models' ability to preserve the feminine and masculine gender, unambiguously indicated by the source sentence, equally well in their outputs. Our research questions are:
**RQ1**: How do zero-shot and pivot-based translation compare regarding gender-biased outputs for zero-shot directions?
**RQ2**: Does the bridge language affect the gender biases perpetuated by zero-shot and pivot-based translations?
**RQ3**: Do translation quality improvements of zero-shot models reduce their gender biases?
The remainder of this paper is structured as follows. Section 2 introduces the task of gender preservation in translation with relevant terminology and reviews related work on gender bias in NMT. Section 3 describes our experimental design, tailored toward investigating cause-and-effect relationships of gender bias in MNMT. Section 4 presents the data used and the evaluative procedure followed in our experiments. Section 5 presents the experimental setup and results, and Section 6 concludes with our summarized findings, limitations, and future research directions.
## 2 Terminology & Related Work
In a large-scale analysis of the plethora of existing research addressing gender bias in NMT, Savoldi et al. (2021) categorize them based on two conceptualizations of the problem: research works focusing on the weight of prejudice and stereotypes in NMT, and studies assessing whether gender is preserved in translation. In this paper, we analyze gender bias in MNMT in the context of gender preservation, where for translation into a gender-sensitive target language, the gender information conveyed by the source language needs to be retained in the target language translation.
Gender in Linguistics:In our gender bias evaluation we consider _referential gender_, which, according to Cao and Daume III (2021), only exists when an entity (i.e., a human) is mentioned and their gender (or sex) is realized linguistically. Moreover, we focus on the translation between languages using _grammatical gender_, a way of classifying nouns, assigning them gender categories (e.g., masculine, feminine, neuter, etc.) that may be independent of the real-world biosocial genders associated with referents; however, there is a tendency for languages to correlate grammatical gender with the gender of a referent, especially if human (Corbett, 1991; Ackerman, 2019).
For example, talking about a specific doctor (e.g., "the doctor loves _her\({}_{F}\)_ job"), the word choice of the female anaphoric pronoun is not determined by grammatical gender but only by referential gender. The same sentence translated into German ("_die\({}_{F}\) Ärztin\({}_{F}\)_ liebt _ihren\({}_{F}\)_ Job\({}_{M}\).") requires the article ("die" = the) and pronoun ("ihren" = her) to agree with the feminine grammatical gender category the noun is assigned ("Ärztin" = female doctor).3 On the other hand, the sentence "the doctor helps the nurse" without any further context information does not indicate the gender of either of the two mentioned entities; for the German translation, the gender of both the doctor ("Arzt\({}_{M}\)"/"Ärztin\({}_{F}\)") and the nurse ("Krankenpfleger\({}_{M}\)"/"Krankenschwester\({}_{F}\)") needs to be considered for the correct syntactic build-up of the sentence. For details on the many differences in the manifestation of gender in languages, we refer the interested reader to related works such as that of Cao and Daume III (2021).
Footnote 3: Note, in German, the abstract noun “Job” is assigned the masculine grammatical gender category, while in English, “job” has no grammatical gender.
Gender Preservation:Translation into a gender-sensitive language, e.g., a grammatical gender language, involves gender agreement between nominal properties--e.g. grammatical and referential gender of a (pro)noun--and a determiner, adjective, verb, etc., depending on the target language agreement rules. Whenever the source language is (largely) genderless, i.e., the gender of the noun is unspecified, and context information is unavailable, gender preservation is a non-trivial task for machines and humans alike.
In recent years, several approaches have been proposed to address the challenge of gender preservation. Vanmassenhove et al. (2018) leverage additional gender information by prepending a gender tag to each source sentence, both at training and inference time, to improve the generation of speakers' referential markings. Avoiding the need
for additional context information for training or inference, Basta et al. (2020) concatenate each sentence with its predecessor to achieve slight improvements in gender translation. Moryossef et al. (2019) inject context information as they prepend a short phrase, e.g., "_she_ said to _them_", to the source sentence, translate the sentence with the prefix, and afterward remove the prefix translation from the model's output. Specifying gender inflection in this way improves models' ability to generate feminine target forms, but it relies on (not always available) metadata about speakers and listeners. Furthermore, different gender-specific translations in terms of word choices can be an arguably non-desirable side-effect.
A different approach is to post-process the output using counterfactual data augmentation. Saunders and Byrne (2020) use a lattice rescoring module that maps gender-marked words in the output to all possible inflectional variants and rescores all paths in the lattice corresponding to the different sentences with a model that has been gender-debiased at the cost of lower translation quality. Choosing the sentence with the highest score as the final translation results in increased accuracy of gender selection. A downside is that data augmentation is very demanding for complex sentences with a variety of gender phenomena, such as those typically occurring in natural language scenarios.
## 3 Analyzing Gender Bias in MNMT
In our experimental setting, information necessary to disambiguate gender was _always_ conveyed by the source sentence (cf. Figure 1(a)) and, thus, available to the models. Motivated by our research inquiry, we focused our investigation on the effect of bridging on gender preservation in MNMT between unseen language pairs, as illustrated broadly in Figure 1(b), exploring three influencing factors to learn about the cause-and-effect relationship of gender bias in MNMT: _i)_ the approach taken to bridge unseen language pairs (i.e., using continu
Figure 1: Overview of our investigated translation scenario (here, for the utterance meaning “I felt alienated”): At inference, we translated between unseen gender-inflected source-target language pairs (i.e., Italian\(\leftrightarrow\)French) by bridging, implicitly (zero-shot) and explicitly (pivot-based), using bridge languages with different gender-inflectional systems (e.g., Spanish or English).
ous representations for zero-shot translation or discrete pivot language representations); _ii)_ the choice of bridge language; and _iii)_ language-agnostic model hidden representations.
Zero-Shot Translation Vs. Pivoting:To bridge the gap between an unknown source-target language pair at inference, we took two different approaches using the same trained translation model. For _pivot-based translation_, we cascaded a model to perform source\(\rightarrow\)pivot and pivot\(\rightarrow\)target translation. As such, pivoting used the pivot language as an explicit bridge between the unknown language pair. For _zero-shot translation_, we used the same model to translate directly between the unknown language pair, relying on the model's learned semantic space where sentences with the same meaning are mapped to similar regions regardless of the language. Compared to pivoting, zero-shot translation circumvents error propagation and reduces computation time, but achieving high-quality zero-shot translations is challenging. In light of our inquiry, we analyzed each approach's ability to preserve gender, comparing their performances for the feminine and the masculine gender.4
Footnote 4: In the presentation of our results, we use ZS and PV, short for zero-shot and pivot-based translation when space is limited.
Bridge Language:English often participates in most, if not all, language pairs in a training corpus, making English, a language limited to pronominal gender (with a few exceptions), the most reasonable choice for a bridge language. When translating into a genderless language (e.g., Hungarian), the potential loss of gender information conveyed by the source sentence is unproblematic as it is evidently without detrimental consequence. However, when translating into a language with a _higher_ gender-inflected system than English (e.g., French or Italian), the loss of gender information poses a significant problem since the information necessary to disambiguate gender is virtually no longer existent (cf. bottom in Figure 1(b)).
As preserving non-existent gender information is inherently impossible, also for humans, it is fair to assume that MT models have difficulty when encountering this phenomenon of gender ambiguity; the simplest solution is to resort to _random guessing_, with a 50% chance of choosing one gender over the other. Any other gender distribution (\(\neq\) 50:50%) is not reflective of random guessing but instead indicative of _educated guessing_ based on knowledge or observations _assumed_ to be true that can, however, include biases.
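One way to make this distinction operational is a simple binomial test of a model's gender choices in ambiguous cases against the 50:50 null hypothesis of random guessing. The following Python sketch is our illustration only; the counts are hypothetical placeholders, not results from this paper.

```python
# Sketch: testing whether gender choices under ambiguity deviate from the
# 50:50 split expected under random guessing. Counts are hypothetical.
from scipy.stats import binomtest

masculine_choices = 83   # hypothetical: times the masculine form was chosen
total_ambiguous = 120    # hypothetical: ambiguous cases with a binary choice

result = binomtest(masculine_choices, total_ambiguous, p=0.5)
print(f"P(masculine) = {masculine_choices / total_ambiguous:.2f}, "
      f"p-value vs. 50:50 = {result.pvalue:.4f}")
# A small p-value indicates the choices are unlikely under random guessing,
# i.e., the model is making "educated" (potentially biased) guesses.
```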
Against this background, we studied the role of the bridge language in gender preservation, focusing on the gender bias differences between pivot-based and zero-shot translation, using bridge languages with different gender-inflectional systems, including English (low gender inflection), German and Spanish (high(er) gender inflection). German and English are both Germanic languages. Whereas in German, all noun classes require masculine, feminine, or neuter5 inflection, English lacks a similar grammatical gender system. In German, the gender of the noun is reflected in determiners like articles, possessives, and demonstratives. On the other hand, Spanish is a Romance language with a binary grammatical gender system, differentiating masculine and feminine nouns; from a grammatical point of view, there are no gender-neutral nouns. The gender of nouns agrees with (some) determiners and, more often than in German, adjectives, making gender a pervasive feature in Spanish.
Footnote 5: In German, neuter gender inflection does not apply to nouns identifying people (cf. referential gender).
Language-Agnostic Hidden Representations:Since languages are characterized by different linguistic features, including those related to gender, it is reasonable to assume that language-_specific_ representations, tailored to the language pairs included during training, _impair_ gender preservation for unseen language pairs. Because of this, we explored the effect of three modifications to (the training of) a baseline Transformer (Vaswani et al., 2017) to encourage language-_agnostic_ hidden representations, which have proven to cause performance gains for zero-shot translation. We
* removed a residual connection in a middle Transformer encoder to _lessen positional correspondences to the input tokens_ and, thereby, reduced dependencies to language-specific word order (\(R\)) as proposed by Liu et al. (2021),
* encouraged _similar (i.e., closer) source and target language representations_ through an auxiliary loss (\(AUX\)) similar to Pham et al. (2019) and Arivazhagan et al. (2019), and
* performed joint adversarial training _penalizing recovery of source language signals_ in the
representations (\(ADV\)) as done by Arivazhagan et al. (2019).
In our experiments, we examined the effect of these three modifications in isolation and tested some combinations; in total, we compared five different models to our baseline (\(B\))--which we refer to as \(B+AUX\), \(B+ADV\), \(R\), \(R+AUX\), and \(R+ADV\)--to determine whether they mitigated models' gender biases.
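To make the second modification concrete, the following PyTorch-style sketch shows one possible form of the auxiliary similarity loss (\(AUX\)); the mean-pooling and cosine-distance choices are our assumptions, as Pham et al. (2019) and Arivazhagan et al. (2019) describe related but not necessarily identical formulations.

```python
import torch
import torch.nn.functional as F

def auxiliary_similarity_loss(src_states, tgt_states, src_mask, tgt_mask):
    """Encourage closer source/target encoder representations (AUX).

    src_states/tgt_states: (batch, seq_len, d_model) encoder outputs.
    src_mask/tgt_mask: (batch, seq_len), 1 for real tokens, 0 for padding.
    Mean pooling and cosine distance are illustrative assumptions.
    """
    def mean_pool(states, mask):
        mask = mask.unsqueeze(-1).float()
        return (states * mask).sum(1) / mask.sum(1).clamp(min=1.0)

    src_vec = mean_pool(src_states, src_mask)
    tgt_vec = mean_pool(tgt_states, tgt_mask)
    # 1 - cosine similarity, averaged over the batch
    return (1.0 - F.cosine_similarity(src_vec, tgt_vec, dim=-1)).mean()

# Hypothetical training objective combining translation and AUX terms:
# loss = cross_entropy + aux_weight * auxiliary_similarity_loss(...)
```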
## 4 Evaluation Data & Procedure
For our evaluation, we built on the work of Bentivogli et al. (2020) regarding the data and procedure used for our gender bias evaluation.
### Multilingual Gender Preservation Dataset
In our experiments, we used the publicly available TED-based corpus MuST-C (Di Gangi et al., 2019) for model training (cf. Section 5.1 for details) and evaluated our models on a subset of MuST-SHE (Bentivogli et al., 2020), a gender-annotated benchmark. MuST-SHE is a subset of MuST-C and is available for English\(\rightarrow\)French, English\(\rightarrow\)Italian, and English\(\rightarrow\)Spanish translation, where at least one English gender-neutral word in a sentence needs to be translated into the corresponding masculine/feminine target word(s).
The target languages included in MuST-SHE allowed us to investigate gender preservation for sentences where _the source language always provides enough information to disambiguate gender_; with this research inquiry, two main criteria needed to be met by the evaluation data: First, we wanted to evaluate gender translation _between_ grammatical gender languages. Therefore, we formed a many-to-many subset from MuST-SHE, keeping only true-parallel data and realigning it to support evaluating translation between the three initial target languages. Second, we wanted to investigate the gender biases in translation between language pairs unseen during training (i.e., zero-shot directions). Using training corpora comprising different language pairs, we built models with different supervised translation directions. Accordingly, the models did not share the same zero-shot directions. For instance, a model trained on Spanish-X data had seen examples for language pairs that included Spanish. Therefore, we discarded the Spanish examples and only used French-Italian examples in our evaluation to ensure equal zero-shot directions across all models considered in our experiments.
We obtained 278 sentences with detailed statistics presented in Table 1. The included French\(\leftrightarrow\)Italian directions left us with 556 translations for evaluation.
The composition of this dataset, comprising French-Italian parallel data, provides different evaluative dimensions that can be considered for gender bias evaluation of MT models.
**Referent Gender:** Grammatical gender agreement determines the modification of certain words to express gender congruent with the other words they relate to, which, in our case, were the words designating a _referent_--a person the speaker mentioned. Consequently, the gender of a referent (cf. referential gender) determined the gender of gender-marked words relating to the referent (i.e., for a female referent, feminine inflected words, and for a male referent, masculine inflections). All gender-marked words in a sentence agreed with the same (referent) gender. As MuST-SHE is TED-based data, a referent was either the speaker, or a person not identified as the speaker (nor the addressee(s)/audience in our examples).
**Speaker Gender:** Due to the evaluation data stemming from TED talks, examples are transcribed utterances spoken by different speakers of either feminine or masculine gender. Depending on the type of gender agreement occurring in an utterance, the speaker's gender and referents' gender do or do not correlate.
**Gender Agreement:** Whenever the speaker was the referent, i.e., the speaker was referring to him- or herself, there is speaker-_related_ gender agreement among those gender-marked words referring to the speaker. Languages with a less pronounced inflection of gender, such as English, can encounter syntactic structures that do not indicate a speaker's gender (cf. bottom in Figure 1(b)). In contrast, syntactic structures of languages with rich gender-inflected systems typically encode enough
| | Feminine (Female/Male) | Masculine (Female/Male) | **Total** (Female/Male) |
|---|---|---|---|
| Cat. 1 | 64 (64/0) | 56 (0/56) | 120 (64/56) |
| Cat. 2 | 72 (58/14) | 86 (27/59) | 158 (85/73) |
| **Total** | 136 (122/14) | 142 (27/115) | **278** (149/129) |

Table 1: Statistics of the MuST-SHE data used, broken down by referent gender (Feminine/Masculine), gender agreement (Cat. 1/2: speaker-related/speaker-independent), and speaker gender (Female/Male).
information to unambiguously classify a speaker's gender (cf. top in Figure 1(b)). Consequently, we hypothesized that using English as a bridge language results in the loss of gender information for sentences with speaker-related gender agreement; meanwhile, the higher gender-inflected grammatical gender languages, German and Spanish, were hypothesized to preserve the gender information when used as a bridge language.
Whenever a person other than the speaker was the referent, i.e., the speaker was talking about someone else (e.g., "mi _padre_ se sentía \(\text{alienado}_{M}\)" = "my dad felt alienated" uttered by a _female_ speaker), there is speaker-_independent_ gender agreement among those gender-marked words referring to the referent. For these examples in our data, meaning construction typically does not require the integration of semantic information about the speaker for correct syntactic processing and translation. The gender inflection of words is therefore often purely based on syntactic agreement with a formally marked subject (here, the referent), making the referent's gender identity explicit in those utterances for all three considered bridge languages, English, German, and Spanish.
### Method of Measurement
Similar to Bentivogli et al. (2020), we used the concept of gender-swapping to measure how often a model preserved the gender compared to how often it produced the opposite gender form, thus opting for the wrong instead of the correct gender, which, if frequently done, signaled models' acting on gender biases.
Following this idea, models' generated translations of gender-marked words belonged to one of three categories, which we exemplify using Figure 2. First, the _expected translation_, for which we measured how often the _correct_ translation (ground truth)--specified by a reference translation C-REF--was produced (e.g., "isolée" in the exemplary model output in Figure 2). Second, the _gender-reversed translation_, for which we measured how often the translation was _wrong_, but only regarding the gender inflection of gender-marked words--specified by a reference W-REF--i.e., instead of the required correct gender realization as per ground truth (e.g., the feminine adjective "intimidée"), the model produced the opposite gender form (e.g., the masculine adjective "intimidé"). Third, a _translation different from both reference translations_, e.g., instead of "jugée" (C-REF) or "jugé" (W-REF), the model produced the adjective "condamnée", or any other word not matching C-REF or W-REF; in this case, we had no reference as to whether the gender inflection, regardless of the predicted word base, was correct or wrong, forcing us to exclude these translations from our gender bias evaluation.
We used two metrics to evaluate our models: BLEU (similar to Bentivogli et al. (2020)) and accuracy. For the accuracy on feminine and masculine word forms, we measured how often a model was able to produce the correct gender (\(C\)) for those words that matched either the correct or the wrong reference set (\(C{+}W\)); we refer to this as _gender preservation_ (\(\alpha_{\mathrm{correct}}\)). As we only relied on correct and wrong "matches" (\(C{+}W\))--excluding words that did not match any reference set (\(N\))--the larger in size this set was, i.e., the larger the sample size, the more significant our findings; therefore, we weighted \(\alpha_{\mathrm{correct}}\) by the size of \(C{+}W\) in relation to the number of all translations (\(C{+}W+N\)), matching a reference (\(C{+}W\)
Figure 2: Illustration of the three possible translation outcomes of required gender preservation for Italian\(\rightarrow\)French translation of the utterance "I felt _alienated_, _intimidated_, and _judged_": The translation of a gender-inflected word either matched the correct reference translation _C-REF_ (here, "isolée" = alienated), the wrong reference translation _W-REF_ (here, "intimidé" = intimidated), or neither (here, "condamnée" = condemned).
or not matching any reference (\(N\)); we refer to this weighting factor as _sample size_ (\(\rho\)). Formally, we defined the accuracy \(\gamma\) to measure the _gender preservation performance weighted by the sample size_ as follows:
\[\gamma=\underbrace{\frac{C}{C+W}}_{\alpha_{\mathrm{correct}}}\cdot\underbrace{ \frac{C+W}{C+W+N}}_{\rho}=\frac{C}{C+W+N}\]
To compare the performances for the two genders, we computed the _gender gap_\(\delta\) between results for feminine and the masculine word forms:
\[\delta=1-\frac{\min(\gamma^{\mathrm{F}},\gamma^{\mathrm{M}})}{\max(\gamma^{ \mathrm{F}},\gamma^{\mathrm{M}})}\]
As a reflection of gender biases, gender gaps should be as small as possible and ideally zero due to minimal differences between the results for the feminine and the masculine gender. Furthermore, we analyzed the difference between scores for the correct and the wrong references to determine whether translations were gender-biased.
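The two metrics follow directly from the per-gender counts \(C\), \(W\), and \(N\); the sketch below computes them in Python with illustrative counts that are not results from our evaluation.

```python
def gender_accuracy(correct, wrong, no_match):
    """Gender preservation weighted by sample size: gamma = C / (C + W + N)."""
    return correct / (correct + wrong + no_match)

def gender_gap(gamma_f, gamma_m):
    """delta = 1 - min(gamma_F, gamma_M) / max(gamma_F, gamma_M); 0 is ideal."""
    return 1.0 - min(gamma_f, gamma_m) / max(gamma_f, gamma_m)

# Illustrative counts (C, W, N) per gender -- placeholders only
gamma_f = gender_accuracy(correct=30, wrong=25, no_match=45)  # feminine
gamma_m = gender_accuracy(correct=55, wrong=10, no_match=35)  # masculine
print(f"gamma_F = {gamma_f:.3f}, gamma_M = {gamma_m:.3f}, "
      f"gender gap = {gender_gap(gamma_f, gamma_m):.3f}")
```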
## 5 Experiments & Results
The code and scripts used for our experimental evaluation are available on GitHub.6
Footnote 6: [https://github.com/lenacabrera/gb_mnmt](https://github.com/lenacabrera/gb_mnmt)
### Experimental Setup
Training Data:In our experiments, we used the publicly available corpora MuST-C (Di Gangi et al., 2019) for model training. To investigate the impact of the bridge language, determined by the language pairs included during training, we formed three training corpora that are subsets of MuST-C (X),7 with language pairs en\(\leftrightarrow\)X\(\backslash\)en, de\(\leftrightarrow\)X\(\backslash\)de, and es\(\leftrightarrow\)X\(\backslash\)es, where X\(\backslash\)l denotes the language set X excluding the language l, i.e., English (en), German (de), or Spanish (es). On each of the three corpora, we trained a model and afterward evaluated the three trained models on our evaluation data. Since only a portion (\(\approx\)10%) of MuST-C is true-parallel data, the training corpora differed in size, as specified in Table 2.
Footnote 7: From release version 1.2, we included 10 of the 15 available languages: Czech, Dutch, English, French, German, Italian, Portuguese, Romanian, Russian, and Spanish.
Preprocessing:MuST-C comes with partitioned training and validation sets which we kept unchanged in our experiments, except for the modifications described above. For the training and validation data, we first performed tokenization and truecasing using the Moses8 tokenizer and truecaser. Afterward, we learned BPE using subword-nmt9 (Sennrich et al., 2016). We performed 20 thousand merge operations and only used tokens occurring in the training set with a minimum frequency of 50 times. Our evaluation data was preprocessed in a similar way using the BPE-learned vocabulary.
Footnote 8: [https://github.com/moses-smt/mosesdecoder](https://github.com/moses-smt/mosesdecoder)
Footnote 9: [https://github.com/rsennrich/subword-nmt](https://github.com/rsennrich/subword-nmt)
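For reference, the BPE step can be scripted around the subword-nmt command-line tools as in the sketch below; the file names are placeholders, and the preceding Moses tokenization and truecasing stage is omitted.

```python
# Sketch of the BPE preprocessing step (file names are placeholders).
import subprocess

# Learn a BPE model with 20k merge operations on the tokenized, truecased data.
with open("train.tok.tc.txt") as fin, open("bpe.codes", "w") as fout:
    subprocess.run(["subword-nmt", "learn-bpe", "-s", "20000"],
                   stdin=fin, stdout=fout, check=True)

# Apply the learned codes to a corpus file.
with open("train.tok.tc.txt") as fin, open("train.bpe.txt", "w") as fout:
    subprocess.run(["subword-nmt", "apply-bpe", "-c", "bpe.codes"],
                   stdin=fin, stdout=fout, check=True)

# Restricting the vocabulary to tokens seen at least 50 times, as described
# above, would additionally use the get-vocab and --vocabulary options.
```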
Training & Inference Details:Our baseline (\(B\)) was a Transformer with 5 encoder and 5 decoder layers with 8 attention heads, an embedding size of 512, and an inner size of 2048. For regularization, we used dropout with a rate of 0.2 and performed label smoothing with a rate of 0.1. Moreover, we used the learning rate schedule from Vaswani et al. (2017) with 8,000 warm-up steps (WUS). The source and target word embeddings were shared. To specify the output language, we used a target-language-specific beginning-of-sentence token. As part of our model modifications, we removed a residual connection (\(R\)) in the third encoder layer (Liu et al., 2021). We trained each model for 64 epochs and averaged the weights of the five best checkpoints ordered by the validation loss. For the auxiliary similarity loss (\(AUX\)) and the adversarial language classifier (\(ADV\)), we resumed training of the baseline and the model with removed residual connections for 10 additional epochs (400 WUS). By default, we only included supervised directions in the validation set. To compute BLEU scores, we used sacreBLEU (Post, 2018), which provides a fair and reproducible evaluation, as it operates on detokenized text.
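The learning rate schedule mentioned above is the inverse-square-root rule of Vaswani et al. (2017); a minimal sketch with the embedding size and warm-up steps stated above:

```python
def transformer_lr(step, d_model=512, warmup_steps=8000):
    """Vaswani et al. (2017) schedule:
    lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The rate rises linearly for 8,000 warm-up steps, then decays as step^-0.5.
for s in (1, 4000, 8000, 16000, 64000):
    print(s, f"{transformer_lr(s):.6f}")
```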
### Results
In Figure 3, we present the BLEU scores indicative of the similarity of the generated translations of MuST-SHE utterances to the _Correct_ references and their gender-reversed counterparts (_Wrong_ references) regardless of the referent gen
| Language Pairs | # Sentences per Direction |
|---|---|
| en \(\leftrightarrow\) X\(\backslash\)en | 125,000–267,000 |
| de \(\leftrightarrow\) X\(\backslash\)de | 103,000–223,000 |
| es \(\leftrightarrow\) X\(\backslash\)es | 102,000–258,000 |

Table 2: Overview of the three MuST-C subsets used.
der, as well as the difference (delta) between _Correct_ and _Wrong_ scores for zero-shot models only.10
Footnote 10: Results are for models trained on en\(\leftrightarrow\)X\(\backslash\)en data.
The bar graph illustrates that modifying our baseline \(B\) to encourage language-agnostic representations noticeably improves the poor gender preservation performance of \(B\) when performing zero-shot translation. While the delta between _Correct_ and _Wrong_ scores for \(B\) is zero, we consistently observe positive deltas (cf. green bars) that signal more correct than wrong gender translations. Hence, through more language-agnostic hidden representations, the modified zero-shot models can more often recover the information (conveyed by the source sentence) necessary to preserve the gender in the target language translation, which, in turn, reduces the number of translations that reflect learned gender biases (in response to RQ3). The figure shows that \(R+ADV\), closely followed by \(B+ADV\), yields the highest _Correct_ BLEU scores (higher is better) and one of the largest deltas between _Correct_ and _Wrong_ scores (higher is better); therefore, we take a closer look at the performance of \(R+ADV\).
Complementary to the BLEU-based evaluation, we examine the \(R+ADV\) accuracies (\(\gamma\)), where measured performance differences are reliably attributed to better or worse translation of _gender-inflected words only_. From Figure 4, we can observe very similar performances for zero-shot and pivot-based translation using \(R+ADV\) (RQ1).
While both approaches achieve similar _Correct_ accuracy scores (43.0 for ZS and 42.5 for PV), we observe slightly lower _Wrong_ scores for zero-shot translation (20.8) than for pivoting (22.5). As a result, the delta for zero-shot is higher (better) than for pivot-based translation (22.2 vs. 20.2).
To gain better insight into the difference in gender preservation between both approaches, we break down the accuracies and compare them for the feminine and masculine gender; the corresponding results are depicted in Figure 5. The large differences between the accuracies for feminine and masculine referents clearly show that the model is acting according to a _masculine bias_ that harms feminine and benefits masculine preservation of gender signals conveyed by the source sentence. The _Correct_ accuracies in the masculine case are almost twice as high as their feminine counterparts. Furthermore, comparing the _Wrong_ accuracies, we see an even bigger difference, as masculine _Wrong_ scores are much smaller (by a factor of 5), whereas feminine _Wrong_ scores are almost identical to their _Correct_ counterparts.
In the masculine case, performances by both approaches are very similar, with pivoting achieving slightly higher _Correct_ and _Wrong_ scores (54.5 vs. 53.4 and 10.6 vs. 10.4). In the feminine case, we see that zero-shot translation is more accurate regarding feminine gender preservation: The delta between _Correct_ and _Wrong_ accuracies is small but positive (0.5), whereas for pivoting, we
Figure 4: Average accuracy scores of zero-shot translation (full bars) and pivoting (hatched) for _Correct_ (left bar, higher \(\uparrow\) is better) and _Wrong_ (right bar, lower \(\downarrow\) is better) MuST-SHE references complemented with the delta (green bars, higher \(\uparrow\) is better) between both for the model \(R+ADV\). Results are for the feminine and masculine referent gender.10
Figure 3: Average BLEU scores for _Correct_ (left bar, higher \(\uparrow\) is better) and _Wrong_ (right bar, lower \(\downarrow\) is better) MuST-SHE references of our six evaluated zero-shot models, complemented with the delta (green bar, higher \(\uparrow\) is better) between both. Results are for the feminine and masculine referent gender.10
observe a negative delta (-4.9) that signals more wrong (masculine) than correct (feminine) translations for words where the required gender realization is feminine. Accordingly, it turns out that zero-shot translation performs noticeably better for feminine gender preservation--which is generally poorer than masculine gender preservation--compared to pivoting and, as a consequence, mitigates the masculine bias to a larger extent, producing more balanced gender outputs (RQ1).
As we assumed the bridge language to play an important role in gender preservation, we compare the model's performance for zero-shot and pivot-based translation when trained using different training corpora that enabled the use of different bridge languages, namely English (for the results presented so far) and the grammatical gender languages German and Spanish (in response to RQ2). As we expected to see differences between the three languages regarding sentences with and without speaker-related gender agreement, we present the _Correct_ accuracies broken down by referent gender and complemented with the gender gap (\(\delta\)) between feminine and masculine accuracies for either utterance category in Table 3.
The results show that performances for speaker-independent gender agreement are noticeably better (i.e., higher accuracies and smaller gender gaps) than for speaker-related gender agreement, which can be attributed to reduced gender ambiguity due to the more explicit gender clues provided by source sentences in the former case. The poorer performance for speaker-related gender agreement affects the feminine gender more than the masculine gender: scores for feminine word forms drop significantly, while the corresponding difference for masculine word forms is much smaller (again, this very prominently highlights the model's masculine bias). Consequently, the feminine discrimination found throughout all models' performances is more prominent in cases of high gender ambiguity, confirming the notion of models making "educated" gender guesses that are tainted by gender biases.
Figure 5: Average accuracy scores of zero-shot translation (full bars) and pivoting (hatched) for _Correct_ (left bar, higher \(\uparrow\) is better) and _Wrong_ (right bar, lower \(\downarrow\) is better) MuST-SHE references, complemented with the delta (green [\(\Delta>0\)] and magenta [\(\Delta<0\)] bars, higher \(\uparrow\) is better) between both for the model \(R+ADV\). Results are broken down by referent gender (feminine [left] vs. masculine [right]).\({}^{10}\)
| Bridge Language | Feminine \(\uparrow\) (ZS) | Feminine \(\uparrow\) (PV) | Masculine \(\uparrow\) (ZS) | Masculine \(\uparrow\) (PV) | Gender Gap \(\downarrow\) (ZS) | Gender Gap \(\downarrow\) (PV) |
|---|---|---|---|---|---|---|
| _Speaker-Independent Gender Agreement_ | | | | | | |
| English | 42.8 | 39.8 | 56.7 | **58.3** | 0.25 | 0.32 |
| German | 40.4 | 43.6 | 50.1 | 55.6 | 0.19 | 0.22 |
| Spanish | **49.6** | 45.3 | 57.7 | 55.0 | **0.14** | 0.18 |
| _Speaker-Related Gender Agreement_ | | | | | | |
| English | 20.2 | 19.2 | 48.2 | 48.7 | 0.58 | 0.61 |
| German | 15.1 | 18.4 | **51.1** | 49.8 | 0.70 | 0.63 |
| Spanish | 23.8 | **29.4** | 50.6 | 45.7 | 0.53 | **0.36** |

Table 3: Average accuracy scores for _Correct_ (higher \(\uparrow\) is better) references with speaker-related and speaker-independent gender agreement when bridging via English, German or Spanish using the model \(R+ADV\). Results are broken down by referent gender and complemented with the gender gap (lower \(\downarrow\) is better) between feminine and masculine accuracies. Underlined scores are the best of both approaches, and bold scores are the best across languages.
Moreover, our results reveal clear differences in gender preservation between languages for both types of gender agreement: For speaker-independent gender agreement (e.g., "mi _padre_ se sentía \(\text{alienado}_{M}\)" = "my dad felt alienated"), we find that zero-shot translation produces smaller gender gaps compared to pivoting for all three bridge languages. For the English bridge, the difference between zero-shot translation and pivoting is most pronounced, albeit small. For speaker-related gender agreement (e.g., "me sentí \(\text{alienada}_{F}\)" = "I felt alienated"), it turns out that zero-shot translation achieves a slightly smaller gender gap compared to pivoting using the English bridge language (where gender information is likely lost); for the German and the Spanish bridge languages, we observe better pivoting results regarding smaller gender gaps and, thus, more balanced correct gender outputs. This outcome confirms our hypothesis that for languages where gender inflection is relatively low, zero-shot translation is not as much affected by a loss of gender information (which impairs gender preservation for pivoting using discrete language representations), as it relies on more language-agnostic gender clues likely found in the continuous representations. Moreover, the outcomes suggest that with an increased level of gender inflection in the bridge language, pivoting surpasses zero-shot translation regarding fairly balanced gender preservation for speaker-related gender agreement.
## 6 Conclusion
In this paper, we explored gender bias in MNMT in the context of gender preservation for zero-shot translation directions, i.e., unseen language pairs (French\(\leftrightarrow\)Italian), compared the performances of pivoting and zero-shot translation using discrete and continuous representations respectively, studied the influence the bridge language has on both approaches, and examined the effect language-agnostic representations have on zero-shot models' gender biases. Based on our experimental results, we addressed three research questions.
* How do zero-shot and pivot-based translation compare regarding gender-biased outputs for zero-shot directions?
We find that zero-shot translation and pivoting achieve similar gender preservation performances, but zero-shot translation better preserves the feminine gender, which mitigates the masculine bias--the consistently worse feminine than masculine results across all evaluated models and both approaches--more than pivoting when bridging via English.
* Does the bridge language affect the gender biases perpetuated by zero-shot and pivot-based translations?
Our experiments revealed that the bridge language affects gender biases in MNMT. For English, a language limited to pronominal gender (with a few exceptions), we find that zero-shot translation performs better than pivoting regarding a more fairly balanced preservation of feminine and masculine gender. Using two richer gender-inflected bridge languages, Spanish and German, revealed that with an increased level of gender inflection in the bridge language, pivoting surpasses zero-shot translation regarding fewer gender-biased outputs for utterances with speaker-related gender agreement.
* Do translation quality improvements of zero-shot models reduce their gender biases?
All three evaluated modifications encouraging language-agnostic hidden representations (cf. Section 3) improved zero-shot models' ability to preserve the feminine and masculine gender and reduced the gap between better masculine and worse feminine results; they improved zero-shot models' performances to the point where they outperformed pivoting regarding more fairly balanced preservation of both genders when bridging via English.
Besides our findings, this work also features some limitations that can be addressed in future work. First, the data used in our experimental evaluation limited the scenarios to those examined. Future work can examine the translation of sentences with mixed gender (i.e., sentences including feminine _and_ masculine word forms) and directions, including languages from different language families and with different gender systems, to further study language differences. Second, developing a large-scale gender-annotated corpus suitable for MNMT training could most likely be used to improve models' gender preservation performance. A well-performing gender classifier could be used to annotate the MuST-C dataset with token- or word-level gender labels. Third, we believe that |
2309.01144 | Distributed averaging for accuracy prediction in networked systems | Distributed averaging is among the most relevant cooperative control
problems, with applications in sensor and robotic networks, distributed signal
processing, data fusion, and load balancing. Consensus and gossip algorithms
have been investigated and successfully deployed in multi-agent systems to
perform distributed averaging in synchronous and asynchronous settings. This
study proposes a heuristic approach to estimate the convergence rate of
averaging algorithms in a distributed manner, relying on the computation and
propagation of local graph metrics while entailing simple data elaboration and
small message passing. The protocol enables nodes to predict the time (or the
number of interactions) needed to estimate the global average with the desired
accuracy. Consequently, nodes can make informed decisions on their use of
measured and estimated data while gaining awareness of the global structure of
the network, as well as their role in it. The study presents relevant
applications to outlier identification and performance evaluation in switching
topologies. | Christel Sirocchi, Alessandro Bogliolo | 2023-09-03T11:36:12Z | http://arxiv.org/abs/2309.01144v1 | # Distributed averaging for accuracy prediction in networked systems
###### Abstract
Distributed averaging is among the most relevant cooperative control problems, with applications in sensor and robotic networks, distributed signal processing, data fusion, and load balancing. Consensus and gossip algorithms have been investigated and successfully deployed in multi-agent systems to perform distributed averaging in synchronous and asynchronous settings. This study proposes a heuristic approach to estimate the convergence rate of averaging algorithms in a distributed manner, relying on the computation and propagation of local graph metrics while entailing simple data elaboration and small message passing. The protocol enables nodes to predict the time (or the number of interactions) needed to estimate the global average with the desired accuracy. Consequently, nodes can make informed decisions on their use of measured and estimated data while gaining awareness of the global structure of the network, as well as their role in it. The study presents relevant applications to outlier identification and performance evaluation in switching topologies.
## I Introduction
Distributed averaging is an instance of distributed computation aiming to determine the global average of a set of values by iterating local calculations. It has been extensively studied as the primary tool for solving cooperative control problems in multi-agent systems [1] such as distributed sensor networks, micro-grids, transport networks, power distribution systems, and biological networks [2]. In these settings, the value of an aggregate function over the entire network data is often more relevant than individual data at nodes. Networks of temperature sensors, for instance, are generally deployed to assess the average temperature in a given area. Similarly, peer-to-peer systems are primarily interested in calculating the average size of stored files [3]. Other collective behaviours leveraging distributed averaging include formation control of autonomous vehicles, network synchronisation, automated traffic networks, cooperative output regulation, and containment control [4].
The performance of a distributed averaging protocol is generally evaluated as the resources required to obtain an estimate of the global average with the desired level of accuracy. Performance analysis is fundamental to networked systems, as they typically suffer limitations in terms of communication bandwidth, memory, and computational power [5]. Previous work has focused on deriving theoretical performance guarantees in synchronous [6] and asynchronous [7] models and identified a dependency on the eigenvalues of the matrix characterising the algorithm. However, these results cannot help make local and accurate performance predictions, as they identify wide performance intervals and require knowledge of the entire network structure for eigenvalues calculations. In contrast, this work proposes a strategy to estimate global properties of the graph and the performance of the averaging algorithm in a distributed manner, entailing simple data elaboration and small message passing. As a result, agents can predict the time (or the number of interactions) needed to achieve the desired accuracy, make informed decisions on their use of measured and estimated data, and also gain awareness of the global network structure.
The remainder of the paper is organised as follows. Section II provides some relevant background on averaging algorithms, citing previous results on convergence for synchronous and asynchronous models. The problem is formulated in Section III, and the proposed strategy is outlined in Section IV. Section V presents relevant applications, and finally, Section VI provides conclusions and directions for future work.
## II Background
### _Network Topology_
The communication constraints in networked systems can be conveniently modelled via a graph \(G=(V,E)\), where \(V\) is the vertex set of \(n\) nodes \(v_{i}\), with \(i\in I=\{1,\ldots,n\}\) and \(n\in\mathbb{N}\), and \(E\) is the edge set \(E\subseteq V\times V\) of the pairs \(e_{ij}=(v_{i},v_{j})\), so that there is an edge between nodes \(v_{i}\) and \(v_{j}\) iff \((v_{i},v_{j})\in E\). In undirected graphs, edges \(e_{ij}\in E\) are unordered pairs (\(v_{i}\), \(v_{j}\)) of elements of \(V\). All nodes that can transmit information to node \(v_{i}\) are said to be its neighbours and are represented by the set \(\Omega_{i}=\{v_{j}:(v_{i},v_{j})\in E\}\).
### _Distributed average_
Let \(x_{i}\) denote the value of node \(v_{i}\), representing a physical quantity such as position, temperature, light intensity or voltage, and \(\textbf{x}=(x_{1},...,x_{n})^{T}\) the vector of values so that the \(i^{th}\) component of **x** is the value at node \(v_{i}\). The nodes \(v_{i}\) and \(v_{j}\) are said to agree in a network iff \(x_{i}=x_{j}\). All nodes in \(G\) are in agreement or have reached a consensus iff \(x_{i}=x_{j}\,\forall\,i,j\in I\). This agreement space can also be expressed as \(\textbf{x}=\alpha\textbf{1}\) where \(\textbf{1}=(1,...,1)^{T}\) and \(\alpha\in\mathbb{R}\) is the collective decision value of the nodes. The system is said to reach asymptotic consensus if all nodes asymptotically converge to \(\alpha\), i.e.
\[\lim_{t\rightarrow+\infty}x_{i}(t)=\alpha,\forall i\in I.\]
A consensus algorithm (or protocol) is an interaction rule that specifies the information exchange between an agent and its neighbours to reach a consensus [6], i.e. to asymptotically
converge to the agreement space [8]. One of the benefits of using linear iteration-based schemes is that each node only transmits a single value to each of its neighbours [8]. In discrete time, the consensus protocol is
\[\textbf{x}(k)=\textbf{W}(k)\,\textbf{x}(k-1) \tag{1}\]
where \(\textbf{x}(k)\) is the vector of values at the end of time slot \(k\), and **W** is the \(n\times n\) matrix of averaging weights. When the consensus value corresponds to the average of all initial values, i.e. \(\alpha=\frac{1}{n}\sum_{i=1}^{n}x_{i}(0)\), the system is said to perform distributed averaging. In consensus protocols with time-invariant weight matrix **W**, the linear iteration implies
\[\textbf{x}(k)=\textbf{W}^{k}\,\textbf{x}(0). \tag{2}\]
To achieve asymptotic average consensus regardless of the initial values \(\textbf{x}(0)\)
\[\lim_{t\rightarrow+\infty}\textbf{W}^{t}=\frac{\textbf{1}\textbf{1}^{T}}{n} \tag{3}\]
which follows from
\[\lim_{t\rightarrow+\infty}\textbf{x}(t)=\lim_{t\rightarrow+\infty}\textbf{W} ^{t}\textbf{x}(0)=\frac{\textbf{1}\textbf{1}^{T}}{n}\,\,\textbf{x}(0). \tag{4}\]
Eq. 3 holds iff the following three properties are satisfied:
\[\textbf{1}^{T}\textbf{W}=\textbf{1}^{T} \tag{5}\]
i.e. **1** is a left eigenvector of **W** associated with the eigenvalue 1, implying that \(\textbf{1}^{T}\textbf{x}(k+1)=\textbf{1}^{T}\textbf{x}(k)\,\,\forall\,\,k\), i.e., the sum, and therefore the average, of the vector of node values is preserved at each step;
\[\textbf{W}\,\textbf{1}=\textbf{1} \tag{6}\]
i.e. **1** is also a right eigenvector of **W** associated with the eigenvalue 1, meaning that **1**, or any vector with constant entries, is a fixed point for the linear iteration;
\[\rho(\textbf{W}-\frac{\textbf{1}\textbf{1}^{T}}{n})<1 \tag{7}\]
where \(\rho(\cdot)\) denotes the spectral radius of a matrix, which, combined with the first two conditions, states that 1 is a simple eigenvalue of **W**, and all other eigenvalues have a magnitude strictly less than 1 [9].
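As a concrete illustration (our sketch, not part of the original formulation), the three conditions can be verified numerically for any candidate weight matrix:

```python
import numpy as np

def check_consensus_conditions(W, tol=1e-9):
    """Numerically verify Eq. 5, 6 and 7 for a weight matrix W."""
    n = W.shape[0]
    ones = np.ones(n)
    left = np.allclose(ones @ W, ones, atol=tol)       # 1^T W = 1^T (Eq. 5)
    right = np.allclose(W @ ones, ones, atol=tol)      # W 1 = 1     (Eq. 6)
    J = np.ones((n, n)) / n
    rho = np.abs(np.linalg.eigvals(W - J)).max()       # rho(W - 11^T/n)
    return left, right, rho < 1                        # Eq. 7

# Example: W = I - L/3 for a 3-node path graph (symmetric, doubly stochastic)
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
print(check_consensus_conditions(W))  # (True, True, True)
```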
The disagreement vector \(\boldsymbol{\delta}(k)\) quantifies the distance from consensus at time slot \(k\) and can be computed as
\[\boldsymbol{\delta}(k)=\textbf{x}(k)-\alpha\textbf{1}. \tag{8}\]
Notably, this vector evolves according to the same linear system as the vector of values:
\[\boldsymbol{\delta}(k)=\textbf{W}(k)\boldsymbol{\delta}(k-1). \tag{9}\]
Iterative distributed averaging algorithms are often classified based on the adopted time model. Gossip algorithms realise asynchronous averaging schemes so that a single pair of neighbouring nodes interact at each time \(k\), setting their values to the average of their previous values [10]. In contrast, consensus algorithms implement a synchronous model where time is commonly slotted across nodes, and all nodes simultaneously update their values with a linear combination of the values of their neighbours at discrete times \(k\). Gossip protocols are more suited to model real networks and are generally easier to implement because they lack synchronisation requirements, which are unrealistic for most applications [11], but they are harder to characterise mathematically due to the added randomness of the neighbour selection.
### _Gossip algorithms_
In asynchronous gossip protocols, a single node is active at each time slot \(k\) and selects one of its neighbours for interaction according to a given criterion. The \(n\times n\) probability matrix \(\textbf{P}=[p_{ij}]\) prescribes the probability \(p_{ij}\) that the node \(v_{i}\) selects node \(v_{j}\), with \(p_{ij}=0\) if \((v_{i},v_{j})\notin E\) due to the constraints of only interacting with neighbours. For instance, in random neighbour selection, where all neighbours are equally likely to be chosen, the matrix **P** is \([p_{ij}]\) such that \(p_{ij}=1/|\Omega_{i}|\,\,\forall v_{j}\in\Omega_{i}\) and 0 otherwise. A node \(v_{i}\) interacts with node \(v_{j}\) at time slot \(k\) with probability \(\frac{p_{ij}}{n}\), which is the joint probability that \(v_{i}\) is active at time slot \(k\) (\(p=\frac{1}{n}\)) and selects node \(v_{j}\) for interaction (\(p=p_{ij}\)). The weight matrix \(\textbf{W}_{ij}\) of this averaging scheme has elements
\[w_{kl}(t)=\begin{cases}\frac{1}{2}&\text{if }k,l\in\{i,j\}\\ 1&\text{if }k=l,k\notin\{i,j\}\\ 0&\text{otherwise.}\end{cases} \tag{10}\]
This is equivalent to nodes \(v_{i}\) and \(v_{j}\) setting their values to the average of their current values, leaving the others unchanged. The matrix **W** generally changes over time, as different pairs interact at each time slot. The averaging process is thus defined by the sequence of averaging matrices \(\{\textbf{W}(k)\}_{k}\) and the vector value at time step \(k\) can be computed as
\[\textbf{x}(k)=\textbf{W}(k-1)\textbf{W}(k-2)\,\,..\,\textbf{W}(0)\textbf{x}(0)= \boldsymbol{\phi}(k-1)\textbf{x}(0) \tag{11}\]
Recalling that for independent real-valued random matrices the expected matrix of the product is the product of the expected matrices, then
\[\mathbb{E}(\boldsymbol{\phi}(k))=\prod_{i=0}^{k}\mathbb{E}(\textbf{W}(i))= \bar{\textbf{W}}^{k} \tag{12}\]
where \(\bar{\textbf{W}}\) is the expected weight matrix
\[\bar{\textbf{W}}=\sum_{i,j}\frac{p_{ij}}{n}\textbf{W}_{ij} \tag{13}\]
most commonly written as
\[\bar{\textbf{W}}=\textbf{I}-\frac{1}{2n}\textbf{D}+\frac{\textbf{P}+\textbf{P} ^{T}}{2n}, \tag{14}\]
[10] where **I** is the identity matrix and **D** is the diagonal matrix with entries
\[\textbf{D}_{i}=\sum_{j=1}^{n}[p_{ij}+p_{ji}].\]
By definition, \(\bar{\mathbf{W}}\) is symmetric and doubly stochastic, i.e., all rows and columns sum up to 1. If the underlying graph is connected and non-bipartite, the expected matrix \(\bar{\mathbf{W}}\) fulfils all three conditions for convergence (Eq. 5, 6, 7), so the sequence of averaging matrices \(\{\mathbf{W}(k)\}_{k}\) drawn independently and uniformly and applied to any initial vector \(\mathbf{x}(0)\), converges to the vector average \(\frac{\mathbf{11}^{T}}{m}\mathbf{x}(0)\)[10]. Notably, the second largest eigenvalue of the matrix \(\bar{\mathbf{W}}\) determines the performance of the gossip scheme [7].
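As an illustration of Eq. 14 (our sketch; the example graph is arbitrary), the expected weight matrix for random neighbour selection can be built directly from the probability matrix **P**:

```python
import numpy as np

def expected_gossip_matrix(P):
    """Expected weight matrix of pairwise gossip (Eq. 14):
    W_bar = I - D/(2n) + (P + P^T)/(2n), D_ii = sum_j (p_ij + p_ji)."""
    n = P.shape[0]
    D = np.diag((P + P.T).sum(axis=1))
    return np.eye(n) - D / (2 * n) + (P + P.T) / (2 * n)

def random_neighbor_P(A):
    """Uniform neighbour selection: p_ij = 1/|Omega_i| for each edge (i, j)."""
    return A / A.sum(axis=1, keepdims=True)

# Example: ring of 4 nodes
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
W_bar = expected_gossip_matrix(random_neighbor_P(A))

# The second-largest eigenvalue magnitude governs the gossip convergence rate
eigs = np.sort(np.abs(np.linalg.eigvals(W_bar)))[::-1]
print(f"lambda_2(W_bar) = {eigs[1]:.4f}")
```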
### _Consensus algorithms_
Consensus algorithms perform linear iterations where each node updates its value to a weighted average of its own previous value and those of its neighbours in a synchronous manner. The simple consensus scheme to reach an agreement regarding the state of \(n\) agents with dynamics \(\dot{x}_{i}=u_{i}\) can be expressed as the linear system
\[\dot{x}_{i}(k)=\sum_{v_{j}\in\Omega_{i}}a_{ij}(x_{j}(k)-x_{i}(k)) \tag{15}\]
where \(i\in I\), \(k\) is the discrete time index, and \(a_{ij}\) is a weight associated to the edge \((v_{i},v_{j})\)[12]. Setting \(a_{ij}=0\) for \(v_{j}\notin\Omega_{i}\), this iteration can be written as Eq. 1 and 2. The collective dynamics of the group of agents can be expressed in compact form as
\[\boldsymbol{\dot{x}}=-\boldsymbol{\mathcal{L}}\ \mathbf{x}, \tag{16}\]
so that \(\mathbf{W}=\mathbf{I}-\boldsymbol{\mathcal{L}}\), with identical disagreement dynamics
\[\boldsymbol{\dot{\delta}}=-\boldsymbol{\mathcal{L}}\ \boldsymbol{\delta}. \tag{17}\]
A state in the form \(\alpha\mathbf{1}\), where all nodes agree, is an asymptotically stable equilibrium of the dynamic system in Eq. 16 because
\[-\boldsymbol{\mathcal{L}}\ \alpha\mathbf{1}=\mathbf{0}. \tag{18}\]
The consensus algorithm asymptotically converges to the agreement space provided that \(\boldsymbol{\mathcal{L}}\) is a positive semidefinite matrix and \(\alpha\mathbf{1}\) is the only equilibrium of the system. It was shown that, in connected undirected graphs, the equilibrium is unique and corresponds to the vector average. Moreover, the convergence rate of the consensus algorithm depends on the second smallest eigenvalue of the Laplacian matrix defined on \(G\)[6].
From Eq. 12, it follows that the performance of a gossip protocol with expected weight matrix \(\bar{\mathbf{W}}\), as defined in Eq. 14, converges in expectation with that of a consensus scheme with time-invariant weight matrix \(\bar{\mathbf{W}}\). The agents dynamics of such system is
\[\dot{x}_{i}(t)=\sum_{v_{j}\in\Omega_{i}}\frac{p_{ij}+p_{ji}}{2n}(x_{j}(t)-x_{ i}(t)) \tag{19}\]
while the collective dynamics is given by Eq. 16 where
\[\boldsymbol{\mathcal{L}}=\frac{\mathbf{D}}{2n}-\frac{\mathbf{P}+\mathbf{P}^{ T}}{2n}.\]
So defined, \(\boldsymbol{\mathcal{L}}\) is positive-semidefinite, as it is symmetric and diagonally dominant, and has all row and column sums equal to zero, so \(\boldsymbol{\mathcal{L}}\) always has a zero eigenvalue corresponding to the eigenvector \(\mathbf{1}\). In connected graphs, the vector average is a unique equilibrium for the system, and the algorithm asymptotically converges to the agreement space.
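A short numerical check (our sketch) builds this Laplacian and computes its second-smallest eigenvalue, whose positivity certifies connectivity and whose magnitude relates to the convergence rate:

```python
import numpy as np

def gossip_laplacian(P):
    """Laplacian of the expected dynamics: L = D/(2n) - (P + P^T)/(2n)."""
    n = P.shape[0]
    D = np.diag((P + P.T).sum(axis=1))
    return D / (2 * n) - (P + P.T) / (2 * n)

# Example: complete graph on 5 nodes with uniform neighbour selection
n = 5
A = np.ones((n, n)) - np.eye(n)
P = A / A.sum(axis=1, keepdims=True)

eigvals = np.linalg.eigvalsh(gossip_laplacian(P))  # real, ascending (L symmetric)
print(f"lambda_1 = {eigvals[0]:.2e} (zero), lambda_2 = {eigvals[1]:.4f}")
# lambda_2 > 0 iff the graph is connected; larger values mean faster convergence.
```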
### _Convergence rate and Accuracy_
The convergence of the distributed iteration is governed by the product of matrices, each of which satisfies certain communication constraints imposed by the graph topology and the consensus/gossip criterion. Let \(\hat{\delta}(t)\) be the collective disagreement of the estimates at time \(t\) normalised by the initial values
\[\hat{\delta}(t)=\frac{\|\boldsymbol{\delta}(t)\|}{\|\mathbf{x}(0)\|}, \tag{20}\]
where \(\|.\|\) is the \(l_{2}\) norm of the vector. Notably, the \(l_{2}\) norm of the disagreement vector \(\boldsymbol{\delta}\) corresponds to the standard deviation of the vector entries scaled by the network size
\[\sigma(t)=\frac{\|\boldsymbol{\delta}(t)\|}{\sqrt{n}}. \tag{21}\]
Numerical and theoretical results for synchronous and asynchronous averaging schemes indicate that the logarithm of the collective disagreement \(\hat{\delta}(t)\) decreases linearly after a faster transient phase, and that the decreasing rate is deterministic and independent of the initial measurements \(\mathbf{x}(0)\)[13]. Hence, a contraction rate \(\gamma\) can be defined as the angular coefficient of the linear stationary regime and used to characterise the algorithm performance [14]:
\[Log(\hat{\delta})=-\gamma\ t. \tag{22}\]
The accuracy \(R\) of the estimates at time \(t\) indicates by how many times the collective disagreement is reduced with respect to the initial disagreement
\[R(t)=-Log(\frac{\hat{\delta}(t)}{\hat{\delta}(0)})=-Log(\frac{\|\boldsymbol{ \delta}(t)\|}{\|\boldsymbol{\delta}(0)\|}) \tag{23}\]
so the time taken to achieve the desired level of accuracy can be computed as
\[t=R/\gamma \tag{24}\]
If nodes initiate, on average, one interaction per unit of time, the time parameter \(t\) approximates the number of interactions per node. Figure 1 shows the collective disagreement over time for several realisations of an asynchronous averaging algorithm and the corresponding synchronous scheme, as well as the regression line whose angular coefficient is the chosen metric of convergence rate \(\gamma\).
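The whole procedure can be reproduced with the following self-contained simulation (our sketch; the graph model, size, and seed are arbitrary choices): it runs the pairwise gossip scheme, fits the linear regime of the log-disagreement to estimate \(\gamma\), and predicts the time needed for a desired accuracy via Eq. 24.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example topology: an Erdos-Renyi graph with n = 100, p = 0.1
n, p_edge = 100, 0.1
A = np.triu((rng.random((n, n)) < p_edge).astype(float), 1)
A = A + A.T                                     # symmetric, no self-loops

x = rng.normal(size=n)                          # initial measurements x(0)
alpha = x.mean()                                # global average (ground truth)
x0_norm = np.linalg.norm(x)

log_delta = []
for _ in range(40 * n):                         # ~40 interactions per node
    i = rng.integers(n)                         # active node (Poisson clock proxy)
    nbrs = np.flatnonzero(A[i])
    if nbrs.size == 0:
        continue
    j = rng.choice(nbrs)                        # uniform neighbour selection
    x[i] = x[j] = (x[i] + x[j]) / 2             # pairwise averaging (Eq. 10)
    log_delta.append(np.log10(np.linalg.norm(x - alpha) / x0_norm))

t = np.arange(len(log_delta)) / n               # time in interactions per node
skip = len(log_delta) // 4                      # discard the faster transient
gamma = -np.polyfit(t[skip:], np.array(log_delta)[skip:], 1)[0]   # Eq. 22

R = 3                                           # desired accuracy (Eq. 23)
print(f"gamma ~ {gamma:.4f}; predicted time for R = {R}: t ~ {R / gamma:.1f}")
```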
## III Problem Setup
Let us consider a network of agents, each with some initial value \(x_{i}(0)\) representing an opinion or a measurement, implementing a synchronous or asynchronous averaging strategy for reaching an agreement on the global average \(\alpha\). Each node is endowed by a parameter \(r_{i}\), indicating the accuracy of its estimated average \(x_{i}(t)\) required to confidently use it,
elaborate it, communicate it to a remote server or compare it to its initial measurement. An accuracy parameter \(r_{i}\) equal to \(R\) indicates that the node can only use its estimate when the collective disagreement in the system is smaller than the initial disagreement divided by \(10^{R}\). For instance, sensor nodes might confidently use their estimates when the group disagreement has been significantly reduced (e.g. \(R\) = 4), but nodes with control responsibilities might require estimates of many orders of magnitude more accurate (e.g. \(R\) = 8).
## IV Proposed Strategy and Main Results
In the considered setting, the global structure of the graph is unknown to the individual nodes and can change over time. However, nodes can exchange information with their immediate neighbours to gain awareness of their surroundings and compute local metrics, namely their degree, clustering coefficient and local efficiency. Simulations deployed on over 12000 graphs show that a linear combination of averages of local metrics highly predicts the convergence rate of distributed averaging algorithms in both synchronous and asynchronous time models. Consequently, it is suggested that nodes compute local metrics and estimate their average across the network by distributed averaging, exploiting the same principle used to calculate the average of measured quantities. Nodes then employ averaged local metrics to estimate the algorithm convergence rate and make predictions of the time (or number of interactions) needed to achieve the desired accuracy so that nodes use their estimate only when confident of their quality. Although a combination of more local metrics or more complex models could provide higher explanatory power and better predictions, the limited memory and computational capabilities available to nodes motivate the choice of a linear model of few parameters.
Besides enabling performance predictions, local metric averages offer nodes a glimpse into the global properties of the network. The average node degree, for instance, is known to play a universal role in cooperation and robustness [15]. Furthermore, by comparing its local metrics with the population averages, a node gains awareness of its role in the network without computing costly centrality measures. Nodes having a degree significantly lower than the average degree recognise that they limit the overall performance of the graph and are encouraged, if possible, to create new connections. Highly clustered nodes are known to reduce communication efficiency by propagating redundant information and can be programmed to remove or rewire some of their connections to increase performance [16].
### _Local Graph Metrics_
The degree of a node \(v_{i}\), here denoted by \(k(v_{i})\), is the cardinality of the neighbour set \(\Omega_{i}\) and quantifies its connections within the network. The clustering coefficient \(cl(v_{i})\) is defined as the number of triangles passing through the node \(T(v_{i})\) divided by the number of possible triangles
\[cl(v_{i})=\frac{2T(v_{i})}{k(v_{i})(k(v_{i})-1)},\]
and is a measure of the degree to which nodes tend to cluster together. The local efficiency \(\textit{eff}(v_{i})\) is the average efficiency of all node pairs in the sub-graph induced by the neighbours of \(v_{i}\), where the efficiency of a node pair is the multiplicative inverse of the shortest path distance. It quantifies the resistance to failure on a small scale as it measures how effectively information is exchanged after removing the node [17]. The computational complexity of local metrics largely depends on network density. For very sparse graphs, local metrics can be computed by each node in constant time and in a fully parallel fashion.
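A possible per-node computation of these metrics using networkx (our sketch; the graph and parameters are illustrative) is:

```python
import networkx as nx

def local_metrics(G, v):
    """Degree, clustering coefficient, and local efficiency of node v."""
    degree = G.degree(v)
    clustering = nx.clustering(G, v)
    # Local efficiency: efficiency of the subgraph induced by v's neighbours
    neighborhood = G.subgraph(G.neighbors(v))
    efficiency = nx.global_efficiency(neighborhood) if degree > 1 else 0.0
    return degree, clustering, efficiency

G = nx.erdos_renyi_graph(n=200, p=0.05, seed=1)
k, cl, eff = local_metrics(G, 0)
print(f"node 0: degree={k}, clustering={cl:.3f}, local efficiency={eff:.3f}")

# Network-wide averages, which the regression model below uses as predictors
avg_k = sum(d for _, d in G.degree()) / G.number_of_nodes()
avg_cl = nx.average_clustering(G)
avg_eff = nx.local_efficiency(G)   # average of per-node local efficiencies
```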
### _Regression Model_
The study generated over 12000 sparse connected undirected networks belonging to different graph families (\(\approx\) 1600 Erdős-Rényi graphs [18], 1600 geometric random graphs [19], 4400 small world graphs [20], 4400 scale-free graphs [21] with adjusted clustering [22]) having sizes ranging from 100 to 1000 nodes and average degree up to 60. Averages of local metrics were calculated for all graphs, and the convergence rates of the asynchronous protocol were estimated through simulations. The adopted gossip scheme realises a random neighbour selection and activates each node at the times of a rate 1 Poisson process so that \(n\) interactions take place on average for each unit of time. A regression model in the form
\[\gamma=a<\!\!k\!\!>+b<\!\!cl\!\!>+c<\!\textit{eff}\!\!\!>+d, \tag{25}\]
where \(<\!\!>\) indicates the average of the corresponding local metric, records an r-squared of 0.92, confirming that local metrics alone predict the algorithm convergence rate with high accuracy. The negative coefficient \(b\) suggests that highly clustered areas of the graph are less effective in propagating
Fig. 1: Collective disagreement over time in 100 realisations of an asynchronous averaging protocol with random neighbour selection and the corresponding synchronous scheme, simulated on a geometric random graph of size \(n\) = 200 and average degree \(<\!\!k\!\!>\) = 25. The contraction rate \(\gamma\) is estimated as 0.018.
estimates because nodes are more likely to have shared neighbours passing redundant information. The positive coefficients \(a\) and \(c\) confirm the intuitive notions that more connections and more efficient information flows promote convergence.
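A condensed sketch of this pipeline, assuming `networkx` and `numpy` are available. The contraction rate is estimated here as the slope of the negative log of the collective disagreement (measured, as a simplification, by the standard deviation of the estimates) against time, and a symmetric pairwise-averaging gossip step stands in for the paper's exact update rule; graph families, sizes and probabilities are illustrative only.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def gossip_gamma(G, steps=20000):
    """Estimate the contraction rate gamma from one asynchronous run."""
    n = G.number_of_nodes()
    x = rng.normal(size=n)                        # initial measurements x_i(0)
    nbrs = [list(G[v]) for v in range(n)]
    d0, ts, ds = np.std(x), [], []
    for k in range(steps):
        i = rng.integers(n)                       # uniform activation ~ Poisson clocks
        if not nbrs[i]:
            continue
        j = nbrs[i][rng.integers(len(nbrs[i]))]   # random neighbour selection
        x[i] = x[j] = 0.5 * (x[i] + x[j])         # pairwise average
        if k % n == 0:                            # n interactions per time unit
            ts.append(k / n)
            ds.append(np.std(x) / d0)
    ts, ds = np.array(ts), np.array(ds)
    keep = ds > 1e-12
    return np.polyfit(ts[keep], -np.log(ds[keep]), 1)[0]

rows, gammas = [], []
for p in (0.06, 0.09, 0.12, 0.16, 0.20):          # a few Erdős–Rényi graphs
    G = nx.erdos_renyi_graph(100, p, seed=int(100 * p))
    ks = [G.degree(v) for v in G]
    cls = list(nx.clustering(G).values())
    effs = [nx.global_efficiency(G.subgraph(list(G[v]))) for v in G]
    rows.append([np.mean(ks), np.mean(cls), np.mean(effs), 1.0])
    gammas.append(gossip_gamma(G))

a, b, c, d = np.linalg.lstsq(np.array(rows), np.array(gammas), rcond=None)[0]
print(a, b, c, d)                                 # fitted coefficients of Eq. 25
```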
The proposed approach can be implemented with any probability matrix **P** as long as the corresponding expected weight matrix satisfies the abovementioned convergence criteria. As seen in Section II, results for the asynchronous averaging scheme extend to the synchronous counterpart as the two algorithms converge in expectation for certain choices of **W**. A faster initial phase characterises the consensus protocol, as seen in Figure 1, so regression models based on asynchronous simulations might offer conservative estimates for the synchronous case.
### _Prediction accuracy_
Node predictions of the time needed to achieve the desired accuracy \(R\) are reliable and improve with time, as they are normally distributed around a value very close to the actual time, with variance decreasing over time. Necessarily, the accuracy of predictions, like that of the estimates, depends on the algorithm performance, so faster convergence results in more precise time predictions. However, the numerical experiments found that accurate predictions come in a timely manner, as they stabilise to a value of \(t\) well before that time elapses, even in the least performing graphs. In Figure 2, already at \(t=300\), most predictions fall in a narrow interval around the actual value \(t=650\).
## V Illustrative Examples
### _Topology changes_
Distributed averaging algorithms have been widely investigated in static topologies, characterised by fixed and
Fig. 4: Distribution of node predictions in Erdős-Rényi graphs having fixed size \(n=500\) and decreasing average degree \(<\!\!k\!\!>\) due to link failure (from 50 to 15). The vertical lines represent the average times at which the desired level of accuracy was achieved in 100 realisations of the asynchronous averaging protocol.
Fig. 3: Distribution of node predictions in small world graphs, generated according to the Watts-Strogatz model with size \(n=800\), average degree \(<\!\!k\!\!>\) = 16, and increasing rewiring probability \(p_{r}\). For all graphs, 80% of nodes have \(R\) = 4, while the remaining 20% \(R\) = 8, modelling a network of agents with different accuracy requirements. The vertical lines represent the average times at which the desired level of accuracy was achieved in 100 realisations of the asynchronous averaging protocol.
Fig. 2: Distribution of node predictions of time needed to achieve accuracy \(R\) = 3 in a geometric random graph (size \(n\) = 1000, radius \(r\) = 0.07) at different time points (150, 200, 250, 300), and the corresponding best-fit normal distributions. The vertical black line indicates the average time at which the desired level of accuracy was achieved in 100 runs of the asynchronous averaging protocol (\(t\) = 650). Predictions are accurate and timely as 95% of them are found in the 650\(\pm\)50 interval at time \(t\) = 300.
reliable communication links throughout the observed time. This assumption is generally unrealistic for real networks, where the topology can differ in subsequent executions of the algorithm due to communication interference or agents changing positions [23]. However, agents are generally notified solely of changes in their immediate neighbourhood and are unable to assess topology changes on a large scale. The propagation and averaging of local metrics constitute a valuable tool for nodes to evaluate the convergence rate in the current communication network, especially in the event of rewiring and link failure, which significantly affect performance.
#### V-A1 Rewiring
The convergence rate of averaging protocols can be dramatically increased without adding new links or nodes by means of _random rewiring_[20], leading to the design of small-world networks for ultra-fast consensus [24]. The proposed approach enables nodes to estimate the convergence rate of a network undergoing random rewiring at any given time. Figure 3 shows subsequent time predictions computed on a small world graph with increasing probability of random rewiring.
#### V-A2 Link failure
Erdős–Rényi graphs are generally adopted to model scenarios where edges fail with equal probability \(f\)[25]. If the failure probability varies over time due to changes in the network communication medium, the graph topology is defined by a series of Erdős–Rényi graphs, each with edge probability \(1-f\). The proposed method offers nodes a tool to compute the convergence rate and, thus, assess the communication efficiency at any given time. Figure 4 shows how predictions capture the lower convergence rate of a network that has lost over 70% of its links.
### _Anomaly detection_
Distributed averaging in networked systems allows individual nodes to evaluate how their sensing environment differs from that of the other nodes by comparing their measured value with their estimate of the group average. An anomalous value or outlier for the population can be defined as any measured quantity that is distant from the global average by a given amount \(M\) or by a certain number \(m\) of standard deviations. The corresponding node can then raise an alarm to inform a remote control centre of the unexpected measurement, perform a corrective action, limit its interactions, or signal the neighbouring nodes of the lower reliability of its value.
#### V-B1 Alarm system
In networks of sensing devices, any measurement distant at least \(M\) from the global average \(\alpha\) can be considered anomalous by the system and trigger an alarm. Each node can deploy the proposed approach to evaluate the time \(t\) needed to attain its desired accuracy \(r_{i}\) and only then compare its measured value \(x_{i}(0)\) with its estimated average \(x_{i}(t)\). If \(|x_{i}(0)-x_{i}(t)|>M\), the measurement is labelled as anomalous, and the node activates a response. This procedure is subject to _false negatives_, which do not detect anomalous measurements, and _false positives_, where regular values are erroneously detected as anomalous, because of the differences between the actual global average \(\alpha\) and its local estimate at time \(t\), \(x_{i}(t)\). Notably, nodes face a trade-off between timely detection and classification error, meaning that a lower value of \(r_{i}\) enables faster but less accurate feedback. Figure 5 exemplifies how classification errors decrease as accuracy requirements increase for two initial distributions of \(x_{i}(0)\).
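A minimal sketch of this alarm rule; the names `x0`, `xt` and `t_pred` (the node's predicted time for reaching its desired accuracy \(r_{i}\)) are ours, not from the paper.

```python
def check_alarm(x0, xt, M, t, t_pred):
    """Raise an alarm once the estimate is trusted (t >= t_pred) and the
    measurement deviates from the estimated group average by more than M."""
    return t >= t_pred and abs(x0 - xt) > M
```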
#### V-B2 Outlier identification
In sensing systems where measurements are expected to be normally distributed with standard deviation \(\sigma\), an outlier can be defined as any value distant more than \(m\) standard deviations from the group average \(\alpha\). From Eq. 21, 23 and 24, it follows that
\[-Log(\frac{\sigma(t)}{\sigma(0)})=t\gamma \tag{26}\]
so that, at any time \(t\), the standard deviation of the estimates \(\sigma(t)\) can be derived from the initial deviation \(\sigma(0)\) and the convergence rate of the graph \(\gamma\), which can be estimated using the proposed methods. Each node \(v_{i}\) can then keep track of a confidence interval where the group average \(\alpha\) is likely to be found. Since estimates are normally distributed, \(v_{i}\) has a high probability of finding the group average within three standard deviations from its estimate, i.e. \(\mathbb{P}(\alpha\in ci_{i})\approx 0.997\), where \(ci_{i}=[x_{i}(t)-3\sigma(t),x_{i}(t)+3\sigma(t)]\). If the node initial value \(x_{i}(0)\) is distant at least \(m\) standard deviations from this confidence interval, formally
\[\min_{\forall\,c\,\in\,ci_{i}}|x_{i}(0)-c|>m\;\sigma(0),\]
the measurement is detected as an outlier for the group, and the corresponding node initiates a response. Figure 6 shows that the standard deviation of the estimates evolves according to Eq. 26, allowing nodes to evaluate whether their initial value is an outlier for the population.
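A sketch of this outlier test, assuming the logarithm in Eq. 26 is natural, so that \(\sigma(t)=\sigma(0)e^{-\gamma t}\); all names are ours.

```python
import math

def is_outlier(x0, xt, sigma0, gamma, t, m):
    sigma_t = sigma0 * math.exp(-gamma * t)        # Eq. 26
    lo, hi = xt - 3 * sigma_t, xt + 3 * sigma_t    # ci_i, P(alpha in ci_i) ~ 0.997
    dist = 0.0 if lo <= x0 <= hi else min(abs(x0 - lo), abs(x0 - hi))
    return dist > m * sigma0                       # outlier condition
```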
## VI Conclusions
The study proposes a distributed approach to estimate the convergence rate of an averaging scheme deployed on a network. The key idea is to approximate the graph convergence rate with a linear combination of averages of local metrics. Nodes can then be programmed to estimate these parameters by distributed averaging and to implement the regression model in order to compute the convergence rate and the time needed to achieve the desired level of accuracy. The approach enables nodes to make informed decisions on their use of measured and estimated data and gain awareness of the global structure of the network as well as their role in it. Future efforts will be directed toward identifying models able to provide more accurate predictions without increasing memory requirements or communication costs.
|
2308.09631 | Non existence of closed and bounded null geodesics in Kerr spacetimes | The Kerr-star spacetime is the extension over the horizons and in the negative radial region of the slowly rotating Kerr black hole. It is known that below the inner horizon, there exist both timelike and null (lightlike) closed curves. Nevertheless, we prove that null geodesics can be neither closed nor even contained in a compact subset of the Kerr-star spacetime. | Giulio Sanzeni | 2023-08-18T15:43:58Z | http://arxiv.org/abs/2308.09631v3 |
# Non existence of closed null geodesics in Kerr spacetimes
###### Abstract.
The Kerr-star spacetime is the extension over the horizons and in the negative radial region of the slowly rotating Kerr black hole. It is known that below the inner horizon, there exist both timelike and null (lightlike) closed curves. Nevertheless, we prove that the null geodesics cannot be closed in the Kerr-star spacetime.
###### Contents
* 1 Introduction
* 1.1 Result
* 1.2 Physical motivation
* 1.3 Mathematical motivation: the space of null geodesics
* 1.4 History of null geodesics in Kerr
* 1.5 Organization of the paper
* 2 The Kerr-star spacetime
* 2.1 Time orientation of BL blocks
* 2.2 Kerr spacetimes
* 2.3 The Kerr-star spacetime
* 2.4 Totally geodesic submanifolds of the Kerr-star spacetime
* 2.5 Causal region of the Kerr-star spacetime
* 3 Geodesics in Kerr spacetimes
* 3.1 Constants of motion
* 3.2 Equations of motion
* 3.3 Dynamics of geodesics
* 4 Properties of null geodesics in Kerr spacetimes
* 4.1 Principal geodesics
* 4.2 Null geodesics with \(Q<0\)
* 5 Proof of Theorem 1.1
* 5.1 Strategy of the proof
* 5.2 Horizons and Axis cases
* 5.3 Steps of the proof for other cases
* 5.4 Case \(E=0\)
* 5.5 Case \(E\neq 0\)
## 1. **Introduction**
### Result
Given a spacetime \(\left(\mathcal{M},\mathbf{g}\right)\), _i.e._ a time-oriented connected Lorentzian manifold, and a geodesic curve \(\gamma:I=[a,b]\rightarrow\mathcal{M}\), we say that \(\gamma\) is a _closed geodesic_ if \(\gamma(a)=\gamma(b)\) and \(\gamma^{\prime}(a)=\lambda\gamma^{\prime}(b)\neq 0\), for some real number \(\lambda\neq 0\). The purpose of this paper is to prove the non existence of closed null (lightlike) geodesics in the Kerr-star spacetime, which is the extension of the slowly rotating Kerr black hole over the horizons and in the negative radial region. For a detailed construction of the Kerr-star spacetime, see §2.
**Theorem 1.1**.: _Let \(K^{*}\) be the Kerr-star spacetime. Then there are no closed null (lightlike) geodesics in \(K^{*}\)._
### Physical motivation
Kerr spacetimes model the gravitational field in the presence of a rotating _black hole_ (BH), at least sufficiently far away from it. For a precise definition of a BH (or an attempt at one), see for instance [24]. The recent image of the supermassive black hole at the center of the galaxy M87, obtained by the Event Horizon Telescope Collaboration [18] and constituting the first ever direct detection of a BH, is in fact consistent with the shadow [5] predicted using the Kerr model. Another empirical motivation comes from the gravitational waves signal detected by the LIGO interferometers [6]: the decay of the waveform agrees with the damped oscillations of a BH relaxing to a final Kerr configuration. The Kerr spacetimes are solutions of Einstein's vacuum field equations, found by R. P. Kerr [27], which are stationary, axisymmetric and asymptotically flat (see [45]), parametrized by the _mass parameter_ \(M\) and the _rotation parameter_ \(a\) (angular momentum per mass unit). They are generalizations of the static spherically symmetric solution of Schwarzschild [39]; indeed, if we set the rotation parameter \(a\) to zero, we recover the Schwarzschild spacetime. Notice that the Schwarzschild spacetime is globally hyperbolic (see for instance [29]), hence it is causal, while the slowly rotating Kerr spacetime violates causality, as first noticed by Carter [9] (see Ch. 2 of [35]): both closed timelike and null curves are present. Whether or not causality violating spacetimes can be considered physically reasonable is an open problem. Many classical solutions of the Einstein field equations do violate causality, for instance: van Stockum's spacetime [44], which has closed timelike curves (CTCs) [43] and closed timelike geodesics (CTGs) [40]; Gödel's spacetime [20], which presents CTCs but neither closed null geodesics (CNGs) nor CTGs, as pointed out by Kundt [28], Chandrasekhar and Wright [11] and then by Nolan in [34]; and Gott's spacetime [21], with CTCs. Note that all of these are solutions of the non-vacuum Einstein equations: van Stockum's spacetime in the presence of an infinite rotating dust cylinder and zero cosmological constant, Gödel's solution describes a stationary and homogeneous universe with rotating dust in the presence of a non-vanishing cosmological constant, and Gott's solution contains two moving rotating non-intersecting cosmic strings. Our result shows that the Kerr-star spacetime is a solution of the vacuum Einstein equations containing CTCs but not CNGs, like the solutions found by Li [30] and by Low [31]. Note that the existence of closed null geodesics is also related to the _chronology protection conjecture_ stated by Hawking in [23]. From the physical point of view, closed causal geodesics raise more conceptual problems than closed causal curves, since causal curves correspond to accelerated particles while geodesics are simply free-falling ones. Indeed, it is known for the Gödel spacetime that the acceleration required for such curves is incredibly high (see [34] for a discussion of this). Therefore, the absence of closed causal geodesics seems to be a more physically relevant requirement than general causality in order to get a realistic spacetime model. Notice that we only prove the non existence of closed lightlike geodesics but do not rule out closed timelike geodesics.
### Mathematical motivation: the space of null geodesics
The original motivation which led us to investigate the existence of closed null geodesics in the Kerr spacetime is the study of the space of null (lightlike) geodesics of this spacetime. The _"space of null geodesics"_ is the space of unparametrized inextendible future pointing null geodesics of a given spacetime. Penrose was the first to suggest the importance of the study of the space of null geodesics [36], [38]. First results in this context were obtained by Low [33]. He proved that if the spacetime is globally hyperbolic (we refer to [2] for causal theory definitions), then its space of null geodesics is contactomorphic to the spherical cotangent bundle of any Cauchy hypersurface of the spacetime, as explained in [32, 12]. For this reason, in the case of globally hyperbolic spacetimes, causality can be described in terms of the geometry of the emerging contact manifold, as shown by Chernov and Nemirovski [14], [13]. In the case of causally simple spacetimes, thanks to the result of Hedicke and Suhr [25], a sufficient condition to get a smooth contact manifold for the space of null geodesics is the existence of a conformal open embedding of the spacetime into a globally hyperbolic one. Furthermore, in [33] it is proven that if \(\mathcal{M}\) is a strongly causal spacetime, then its space of null geodesics inherits a smooth structure from the cotangent bundle of \(\mathcal{M}\). Nevertheless, Low [33] also showed that strong causality is not sufficient to get a Hausdorff topological space. If instead the spacetime is not causal, like Kerr, we have no results giving us information about its space of null geodesics, except for Zollfrei spacetimes [22, 41], in which all the null geodesics are closed and the space of null geodesics is well understood. From the study of null geodesic orbits we hope to obtain insights into the structure of the space of null geodesics of Kerr spacetimes.
### History of null geodesics in Kerr
Thanks to the existence of three obvious constants of motion (the energy, associated with the timelike Killing vector field, the angular momentum, associated with the spacelike Killing vector field, and the Lorentzian energy of the geodesic), geodesic motion can be solved in some special submanifolds, since in that case three constants of motion are sufficient to completely integrate the system. First Boyer and Price [4], then Boyer and Lindquist [3], and then de Felice [16] studied geodesic motion in the equatorial hyperplane \(Eq=\{\theta=\pi/2\}\) in Kerr spacetime [27]. For the same reason, Carter [8] was able to study orbits in the symmetry axis \(A=\{\theta=0,\pi\}\). Bounded orbits, namely geodesics which run over a finite interval of radius, were studied by Wilkins [47]. After the maximal analytic extension of the Kerr metric by Boyer and Lindquist [3], Carter [9] found a fourth constant of motion, the Carter constant, which completed the integrability (see for instance [19]) and allowed the study of geodesics in full generality.
The study of the causality of geodesics in the Kerr spacetimes started with the work of Calvani, de Felice, Muchotrzeb and Salmistraro [7]. They considered the fast (\(|a|>M\)) Kerr spacetime and studied timelike geodesics moving on \(\{\theta=\text{const}\}\). They found that these geodesics cannot travel back to an earlier time with respect to the starting point. This seemed to suggest that for geodesic motion there was some kind of obstruction to the violation of causality. For this reason, they conjectured that this would be a general constraint preventing geodesics from violating causality. However, in a following paper [17], again for the fast Kerr spacetime, Calvani and de Felice showed that this is not true for null geodesics: there exist null geodesics that start their journey at \(r=-\infty\), enter the causality violating region, invert their radial motion and go back to the asymptotic regions at an earlier time. Note that both [7] and [17] treat the fast Kerr spacetime, _i.e._ a naked singularity spacetime [37], and not the slow (\(|a|<M\)) Kerr spacetime, which is the object of this work; in any case, they do not prove the existence of closed causal geodesics.
### Organization of the paper
In §2, we introduce the Kerr metric and discuss the definition and properties of the Kerr-star spacetime. In §3, we recall the set of first order differential equations satisfied by geodesic orbits. In §4, we study the properties of null geodesics required to prove the main theorem. In §5, we give the proof of Thm. 1.1 split into several cases. The overall structure of the proof is detailed in 5.1, 5.3 and Fig. 3.
_Acknowledgments._ I would like to thank my PhD supervisors S. Nemirovski and S. Suhr for many fruitful discussions and much valuable advice. I am also grateful to Liang Jin for the interesting and helpful conversations. This research is supported by the SFB/TRR 191 "Symplectic Structures in Geometry, Algebra and Dynamics", funded by the Deutsche Forschungsgemeinschaft.
## 2. **The Kerr-star spacetime**
Consider \(\mathbb{R}^{2}\times S^{2}\) with coordinates \((t,r)\in\mathbb{R}^{2}\) and \((\theta,\phi)\in S^{2}\). Fix two real numbers \(a\in\mathbb{R}\setminus\{0\}\), \(M\in\mathbb{R}_{>0}\) and define the functions
\[\rho(r,\theta):=\sqrt{r^{2}+a^{2}\cos^{2}\theta}\]
and
\[\Delta(r):=r^{2}-2Mr+a^{2}.\]
We study the case \(|a|<M\) called _slow Kerr_, for which \(\Delta(r)\) has two positive roots
\[r_{\pm}=M\pm\sqrt{M^{2}-a^{2}}>0\]
and define two sets
1. the _horizons_\(\mathscr{H}:=\{\Delta(r)=0\}=\{r=r_{\pm}\}=\mathscr{H}_{-}\sqcup\mathscr{H}_{+}\),
2. the _ring singularity_\(\Sigma:=\{\rho(r,\theta)=0\}=\{r=0,\ \theta=\pi/2\}\).
The _Kerr metric_[27] in _Boyer-Lindquist coordinates_ is
\[\mathbf{g}=-dt\otimes dt+\frac{2Mr}{\rho^{2}(r,\theta)}(dt-a\sin^{2}\theta\ d\phi)^{2}+\frac{\rho^{2}(r,\theta)}{\Delta(r)}dr \otimes dr+a^{2}\sin^{4}(\theta)d\phi\otimes d\phi+\rho^{2}(r,\theta)d\sigma ^{2}, \tag{1}\]
where \(d\sigma^{2}=d\theta\otimes d\theta+\sin^{2}\theta d\phi\otimes d\phi\) is the 2-dimensional (Riemannian) metric of constant unit curvature on the unit sphere \(S^{2}\subset\mathbb{R}^{3}\) written in spherical coordinates.
**Remark 2.1**.: _The components of \(\mathbf{g}\) in Boyer-Lindquist coordinates can be read off the common expression_
\[\mathbf{g} =-\bigg{(}1-\frac{2Mr}{\rho^{2}(r,\theta)}\bigg{)}\,dt\otimes dt- \frac{4Mar\sin^{2}\theta}{\rho^{2}(r,\theta)}\,dt\otimes d\phi+ \tag{2}\] \[\qquad+\bigg{(}r^{2}+a^{2}+\frac{2Mra^{2}\sin^{2}\theta}{\rho^{2}( r,\theta)}\bigg{)}\sin^{2}\theta\,d\phi\otimes d\phi+\frac{\rho^{2}(r,\theta)}{ \Delta(r)}\,dr\otimes dr+\rho^{2}(r,\theta)\,d\theta\otimes d\theta.\]
_Nevertheless, this last expression does not cover the set \(\{\theta=0,\pi\}\)._
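As a sanity check, one can verify symbolically that the compact form (1) and the expanded form (2) agree; a sketch using `sympy`, treating the differentials as formal symbols of a quadratic form:

```python
import sympy as sp

r, th, a, M = sp.symbols('r theta a M', real=True)
rho2 = r**2 + a**2 * sp.cos(th)**2
Delta = r**2 - 2*M*r + a**2
dt, dr, dth, dph = sp.symbols('dt dr dtheta dphi')   # formal differentials

g1 = (-dt**2 + (2*M*r/rho2) * (dt - a*sp.sin(th)**2 * dph)**2
      + (rho2/Delta) * dr**2 + a**2 * sp.sin(th)**4 * dph**2
      + rho2 * (dth**2 + sp.sin(th)**2 * dph**2))            # form (1)
g2 = (-(1 - 2*M*r/rho2) * dt**2
      - (4*M*a*r*sp.sin(th)**2/rho2) * dt * dph
      + (r**2 + a**2 + 2*M*r*a**2*sp.sin(th)**2/rho2) * sp.sin(th)**2 * dph**2
      + (rho2/Delta) * dr**2 + rho2 * dth**2)                # form (2)

print(sp.simplify(g1 - g2))   # expected output: 0
```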
**Lemma 2.2**.: _The metric (1) is a Lorentzian metric on \(\mathbb{R}^{2}\times S^{2}\setminus(\Sigma\,\cup\mathscr{H})\)._
The sets on which the Boyer-Lindquist coordinates or the metric tensor fail are:
* the _horizons_\(\mathscr{H}=\{\Delta(r)=0\}=\{r=r_{\pm}\}=\mathscr{H}_{-}\,\sqcup\mathscr{H}_{+}\);
* the _ring singularity_\(\Sigma=\{\rho(r,\theta)=0\}=\{r=0,\;\theta=\pi/2\}\).
In order to extend the metric tensor to the horizons, one has to introduce a new set of coordinates. No change of coordinates can be found in order to extend the metric across the ring singularity. For a detailed study of the nature of the ring singularity, see for instance [15].
**Definition 2.3**.: _The subsets_
\[I:=\{r>r_{+}\},\,\text{II}:=\{r_{-}<r<r_{+}\},\,\text{III}:=\{r<r_{-}\}\subset \{(t,r)\in\mathbb{R}^{2},\;(\theta,\phi)\in S^{2}\}\setminus(\Sigma\,\cup \mathscr{H})\]
_are called the Boyer-Lindquist (BL) blocks._
**Remark 2.4**.: _The BL blocks I, II and III are the connected components of \(\mathbb{R}^{2}\times S^{2}\setminus(\Sigma\,\cup\mathscr{H})\). Each block with the restriction of the metric tensor (1) is a connected Lorentzian \(4\)-manifold. To get spacetimes, one has to choose a time orientation on each block._
### Time orientation of BL blocks
We define a future time-orientation of block I using the timelike gradient vector field \(-\nabla t\). Indeed, the hypersurfaces \(\{t=\text{const}\}\) are spacelike in block I. Notice that the coordinate vector field \(\partial_{t}\) is timelike and future-directed for \(r\gg r_{+}\) on block I, since \(\mathbf{g}(-\nabla t,\partial_{t})=-1\).
We define a time-orientation of block II by declaring the vector field \(-\partial_{r}\), which is timelike in II, to be future-oriented.
We define a time-orientation of block III by declaring the vector field \(V:=(r^{2}+a^{2})\partial_{t}+a\partial_{\phi}\), which is timelike in III, to be future-oriented.
Figure 1. This picture shows a \(t\)-slice \(\{t\}\times\mathbb{R}\times S^{2}\), with the radius drawn as \(e^{r}\), so that \(r=-\infty\) is at the center of the figure. The _Ergoregion_\(\{\mathbf{g}(\partial_{t},\partial_{t})>0\}\) (at fixed time \(t\)) is the region between the purple ellipsoids in which \(\partial_{t}\) becomes spacelike.
With this choice of time orientations, each block is a _spacetime_, _i.e._ a connected time-oriented Lorentzian 4-manifold.
### Kerr spacetimes
**Definition 2.5**.: _A Kerr spacetime is an analytic spacetime \((K,g_{K})\) such that_
1. _there exists a family of open disjoint isometric embeddings_ \(\Phi_{i}\colon\mathcal{B}_{i}\hookrightarrow K\ (i\in\mathbb{N})\) _of BL blocks_ \((\mathcal{B}_{i},\mathcal{B}_{K}|_{\mathcal{B}_{i}})\)_, such that_ \(\cup_{i\in\mathbb{N}}\Phi_{i}(\mathcal{B}_{i})\) _is dense in_ \(K\)_;_
2. _there are analytic functions_ \(r\) _and_ \(C\) _on K such that their restriction on each_ \(\Phi_{i}(\mathcal{B}_{i})\) _of condition_ \((1)\) _is_ \(\Phi_{i}\)_-related to the Boyer-Lindquist functions_ \(r\) _and_ \(C=\cos\theta\) _on_ \(\mathcal{B}_{i}\)_;_
3. _there is an isometry_ \(e:K\to K\) _called the equatorial isometry whose restrictions to each BL block sends_ \(\theta\) _to_ \(\pi-\theta\)_, leaving the other coordinates unchanged;_
4. _there are Killing vector fields_ \(\tilde{\partial}_{t}\) _and_ \(\tilde{\partial}_{\phi}\) _on K that restrict to the Boyer-Lindquist coordinate vector fields_ \(\partial_{t}\) _and_ \(\partial_{\phi}\) _on each BL block._
**Remark 2.6**.: _With abuse of notation, we identify each block \(\mathcal{B}_{i}\) with its image via the isometric embedding \(\Phi_{i}(\mathcal{B}_{i})\subset K\)._
**Lemma 2.7**.: _Each time-oriented BL block is a Kerr spacetime._
**Definition 2.8**.: _In a Kerr spacetime K, on any BL block \(\mathcal{B}_{i}\)_
1. _the axis_ \(A=\{\theta=0,\pi\}\) _is the set of zeroes of the Killing vector field_ \(\tilde{\partial}_{\phi}\) _as in_ \((4)\) _of Def._ 2.5_;_
2. _the equatorial hyperplane_ \(Eq=\{\theta=\pi/2\}\) _is the set of fixed points of the equatorial isometry_ \(e\) _as in_ \((3)\) _of Def._ 2.5_._
### The Kerr-star spacetime
**Definition 2.9**.: _On each BL block, we define the Kerr-star coordinate functions:_
\[t^{*}:=t+\mathcal{T}(r)\in\mathbb{R},\hskip 28.452756pt\phi^{*}:=\phi+ \mathcal{A}(r)\in S^{1}, \tag{3}\]
_with \(d\mathcal{T}/dr:=(r^{2}+a^{2})/\Delta(r)\) and \(d\mathcal{A}/dr:=a/\Delta(r)\)._
**Lemma 2.10** ([35], Lemma 2.5.1).: _For each BL block \(B\), the map \(\xi^{*}=(t^{*},r,\theta,\phi^{*}):B\setminus A\to\xi^{*}(B)\subseteq\mathbb{R} ^{4}\) is a coordinate system on \(B\setminus A\), where \(A\) is the axis. We call \(\xi^{*}\) a Kerr-star coordinate system._
Because the Kerr-star coordinate functions differ from BL coordinates only by additive functions of \(r\), the coordinate vector fields \(\partial_{t},\partial_{\theta},\partial_{\phi}\) are the same in the two systems, except that in \(K^{*}\) they extend over the horizons. Instead, the new radial coordinate vector field is \(\partial_{r}^{*}=\partial_{r}-\Delta(r)^{-1}V\), where \(V\) is one of the canonical vector fields defined in Section 3. Note that if we use Kerr-star coordinates, we get \(\mathbf{g}(\partial_{r}^{*},\partial_{r}^{*})=0\), _i.e._\(\partial_{r}^{*}\) is a null vector field of \(K^{*}\), while in BL coordinates, \(\mathbf{g}(\partial_{r},\partial_{r})=\rho^{2}(r,\theta)/\Delta(r)\), which is singular when \(\Delta(r)=0\).
**Lemma 2.11**.: _The Kerr metric, expressed in Kerr-star coordinates, takes the form_
\[\begin{split}\mathbf{g}=&-\left(1-\frac{2Mr}{\rho^ {2}(r,\theta)}\right)dt^{*}\otimes dt^{*}-\frac{4Mar\sin^{2}\theta}{\rho^{2}(r,\theta)}\,dt^{*}\otimes d\phi^{*}+\\ &+\left(r^{2}+a^{2}+\frac{2Mra^{2}\sin^{2}\theta}{\rho^{2}(r, \theta)}\right)\sin^{2}\theta\,d\phi^{*}\otimes d\phi^{*}+2\,dt^{*}\otimes dr +\\ &-2a\sin^{2}\theta\,d\phi^{*}\otimes dr+\rho^{2}(r,\theta)\,d \theta\otimes d\theta.\end{split} \tag{4}\]
Now all coefficients in \(\mathbf{g}\) are well defined on the horizons \(\mathscr{H}=\{\Delta(r)=0\}\), hence it is a well defined Lorentzian metric on \(\mathbb{R}^{2}\times S^{2}\setminus\Sigma\) and constitutes an analytic extension of (1) over \(\mathscr{H}\).
**Definition 2.12**.: _The Kerr-star spacetime is a Kerr spacetime as defined in 2.5 given by the tuple \((K^{*},\mathbf{g},o)\) with \(K^{*}=\{(t^{*},r)\in\mathbb{R}^{2},\,(\theta,\phi^{*})\in S^{2}\}\setminus\Sigma\), \(\mathbf{g}\) as in Lemma 2.11 (extended over the axis) and \(o\) is the future time-orientation induced by the null vector field \(-\partial_{r}^{*}\)._
**Remark 2.13**.: _Note that the time-orientations on individual BL blocks agree with the ones defined for the Kerr-star spacetime: \(\mathbf{g}(-\partial_{r}^{*},\partial_{t})=-1<0\) on I, \(\mathbf{g}(-\partial_{r}^{*},-\partial_{r})=\mathbf{g}(\partial_{r},\partial_{ r})=\rho^{2}(r,\theta)/\Delta(r)<0\) on II and \(\mathbf{g}(-\partial_{r}^{*},V)=\frac{1}{\Delta(r)}\mathbf{g}(V,V)=-\rho^{2}(r, \theta)<0\) on III._
### Totally geodesic submanifolds of the Kerr-star spacetime
**Lemma 2.14**.: _Let \(K^{*}\) be the Kerr-star spacetime as in Def. 2.12. The axis \(A\) and the equatorial hyperplane \(Eq\) of \(K^{*}\) are closed totally geodesic submanifolds of \(K^{*}\)._
**Proposition 2.15**.: _[_35_]_ _Let \(K^{*}\) be the Kerr-star spacetime. Then the horizon \(\mathscr{H}\) is a closed totally geodesic null hypersurface, with future hemicone on the \(-\partial_{r}^{*}\) side. Moreover, the restriction of \(V:=(r^{2}+a^{2})\partial_{t}+a\partial_{\phi}\) (called canonical vector field in §3) on \(\mathscr{H}\) is the unique null vector field on \(\mathscr{H}\) that is tangent to \(\mathscr{H}\), hence also normal to \(\mathscr{H}\)._
### Causal region of the Kerr-star spacetime
**Proposition 2.16** ([35], Proposition 2.4.6).: _The BL blocks I and II are causal._
**Corollary 2.17**.: _Let \(K^{*}\) be the Kerr-star spacetime. Then the region \(\mathrm{I}\cup\mathrm{II}\cup\{r=r_{\pm}\}=\{t^{*}\in\mathbb{R},r\in[r_{-},+\infty),(\theta,\phi^{*})\in S^{2}\}\setminus\Sigma\subset K^{*}\) is causal._
Proof.: Let \(\gamma\) be a future pointing causal curve. If \(\gamma\) is entirely contained either in I or in II, then by Prop. 2.16, \(\gamma\) cannot be closed. If \(\gamma\) is entirely contained in \(\mathscr{H}=\{r=r_{\pm}\}\) (a closed totally geodesic null hypersurface of \(K^{*}\) by Prop. 2.15), then by Lem. 1.5.11 of [35], except for restphotons, all other curves in \(\mathscr{H}\) are spacelike, but restphotons are integral curves of \(V|_{\mathscr{H}}=(r_{\pm}^{2}+a^{2})\partial_{t}+a\partial_{\phi}\), which cannot be closed. Since the time orientation \(-\partial_{r}^{*}\) is null and transverse to the null hypersurface \(\mathscr{H}\), future directed curves always cross \(\mathscr{H}\) in the direction of \(-\partial_{r}^{*}\), if they hit \(\mathscr{H}\) transversally. Hence, if \(\gamma\) starts in the BL block I (II), crosses \(\mathscr{H}_{+}\) (\(\mathscr{H}_{-}\)) transversally and enters the block II (III), then \(\gamma\) cannot re-intersect \(\mathscr{H}_{+}\) from II to I (\(\mathscr{H}_{-}\) from III to II). The last possibility is the following: \(\gamma\) starts in I (II), becomes tangent to \(\mathscr{H}_{+}\) (\(\mathscr{H}_{-}\)), and hence either lies forever on \(\mathscr{H}_{+}\) (\(\mathscr{H}_{-}\)) or leaves it at some point. In the first case, \(\gamma\) is obviously not closed, while in the second, it cannot be closed because it will necessarily have to enter the region \(\{r<r_{+}\}\) (\(\{r<r_{-}\}\)), according to the time orientation.
## 3. **Geodesics in Kerr spacetimes**
### Constants of motion
Let \((K,\mathbf{g})\) be a Kerr spacetime as in Def. 2.5. Recall that there are two Killing vector fields \(\tilde{\partial}_{t}\) and \(\tilde{\partial}_{\phi}\) on \(K\).
**Definition 3.1** (_Energy and angular momentum_).: _For a geodesic \(\gamma\) of \((K,\mathbf{g})\), the constants of motion_
\[E=E(\gamma):=-\mathbf{g}(\gamma^{\prime},\tilde{\partial}_{t})\]
_and_
\[L=L(\gamma):=\mathbf{g}(\gamma^{\prime},\tilde{\partial}_{\phi})\]
_are called its energy and its angular momentum (around the axis of rotation of the black hole), respectively._
**Definition 3.2**.: _For every BL block \(\mathcal{B}_{i}\) define the canonical vector fields_
\[V:=(r^{2}+a^{2})\partial_{t}+a\partial_{\phi}\quad\text{ and }\quad W:= \partial_{\phi}+a\sin^{2}\theta\,\partial_{t}\]
_via the isometry \(\Phi_{i}\colon\mathcal{B}_{i}\hookrightarrow K\)._
**Remark 3.3**.: \(V\) _and \(W\) are not Killing vectors._
**Definition 3.4**.: _Let \(\gamma\) be a geodesic in \(K\) with energy \(E\) and angular momentum \(L\). Define the functions \(\mathbb{P}\) and \(\mathbb{D}\) along \(\gamma\) by_
\[\mathbb{P}(r):=-\mathbf{g}(\gamma^{\prime},V)=(r^{2}+a^{2})E-La\]
_and_
\[\mathbb{D}(\theta):=\mathbf{g}(\gamma^{\prime},W)=L-Ea\sin^{2}\theta.\]
A geodesic in a Kerr spacetime has two additional constants of motion. First, there is the _Lorentzian energy_\(q:=\mathbf{g}(\gamma^{\prime},\gamma^{\prime})\), which is always constant along every geodesic in any pseudo-Riemannian manifold. The second one is \(K\), which was first found by Carter in [9] using the separability of the Hamilton-Jacobi equation. \(K\) can be defined (see Ch. 7 in [10]) by
\[K:=2\rho^{2}(r,\theta)\mathbf{g}(l,\gamma^{\prime})\mathbf{g}(n,\gamma^{\prime })+r^{2}q,\]
where \(l=\frac{1}{\Delta(r)}V+\partial_{r}\) and \(n=\frac{1}{2\rho^{2}(r,\theta)}V-\frac{\Delta(r)}{2\rho^{2}(r,\theta)}\partial_{r}\). See also [46] for a definition using a Killing tensor for the Kerr metric.
**Definition 3.5** (_Carter constant_).: _On a Kerr spacetime, the constant of motion_
\[Q:=K-(L-aE)^{2}\quad\text{ or }\quad\mathcal{Q}:=Q/E^{2}\ \ \text{if}\ \ \ E\neq 0\]
_is called the Carter constant._
### Equations of motion
**Proposition 3.6** ([35], Proposition 4.1.5, Theorem 4.2.2).: _Let \(B\) be a BL block and \(\gamma\) be a geodesic with initial position in \(B\subset K\) and constants of motion \(E,L,Q,q\). Then the components of \(\gamma\) in the BL coordinates \((t,r,\theta,\phi)\) satisfy the following set of first order differential equations_
\[\begin{cases}\rho^{2}(r,\theta)\phi^{\prime}=\frac{\mathbb{D}(\theta)}{\sin^ {2}\theta}+a\frac{\mathbb{P}(r)}{\Delta(r)}\\ \rho^{2}(r,\theta)t^{\prime}=a\mathbb{D}(\theta)+(r^{2}+a^{2})\frac{\mathbb{P }(r)}{\Delta(r)}\\ \rho^{4}(r,\theta)r^{\prime 2}=R(r)\\ \rho^{4}(r,\theta)\theta^{\prime 2}=\Theta(\theta)\end{cases} \tag{5}\]
_where_
\[R(r):= \Delta(r)\left[qr^{2}-K(E,L,Q)\right]+\mathbb{P}^{2}(r)=\] \[= (E^{2}+q)r^{4}-2Mqr^{3}+\mathfrak{X}(E,L,Q)r^{2}+2MK(E,L,Q)r-a^{2 }Q,\] \[\Theta(\theta):= K(E,L,Q)+qa^{2}\cos^{2}\theta-\frac{\mathbb{D}(\theta)^{2}}{ \sin^{2}\theta}=\] \[= Q+\cos^{2}\theta\left[a^{2}(E^{2}+q)-L^{2}/\sin^{2}\theta\right],\]
_with_
\[\mathfrak{X}(E,L,Q):=a^{2}(E^{2}+q)-L^{2}-Q,\text{ and }K(E,L,Q)=Q+(L-aE)^{2}.\]
**Remark 3.7**.: _Since in the third and in the fourth differential equations of Prop. 3.6 the left-hand sides are clearly non-negative, we see that the polynomials \(R(r)\) and \(\Theta(\theta)\) are non-negative along the geodesics. Hence the geodesic motion can only happen in the \(r,\theta\)-region for which \(R(r),\Theta(\theta)\geq 0\)._
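A small numerical sketch of Remark 3.7: given constants of motion, the real roots of the quartic \(R(r)\) from Prop. 3.6 bound the allowed radial region. The parameter values below are illustrative only.

```python
import numpy as np

def R_coeffs(E, L, Q, a, M, q=0):
    """Coefficients of R(r) from Prop. 3.6, highest degree first."""
    K = Q + (L - a*E)**2
    X = a**2 * (E**2 + q) - L**2 - Q
    return [E**2 + q, -2*M*q, X, 2*M*K, -a**2 * Q]

# a null geodesic (q = 0) with illustrative constants
roots = np.roots(R_coeffs(E=1.0, L=2.0, Q=3.0, a=0.9, M=1.0))
print(np.sort(roots[np.abs(roots.imag) < 1e-9].real))   # turning radii
```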
In order to study geodesics that cross the horizons
\[\mathscr{H}=\{\Delta(r)=0\}=\{r=r_{\pm}\},\]
it is necessary to introduce the Kerr-star coordinate system. Note however that since the change of coordinates modifies only the \(t\) and the \(\phi\) coordinates and the \(r,\theta\)-differential equations do not involve \(t\) and \(\phi\), the last two differential equations do extend over \(\mathscr{H}\). Observe also that the \(r,\theta\)-differential equations are not singular on \(\mathscr{H}\), while the \(t,\phi\)-differential equations are.
Notice that \(\Theta(\theta)\) is also well-defined if the null geodesic crosses \(A=\{\theta=0,\pi\}\). Indeed, \(L=0\) (because \(\partial_{\phi}\equiv 0\) on \(A\)), hence \(\mathbb{D}(\theta)=-Ea\sin^{2}\theta\), and then
\[\Theta(\theta)=K(E,0,Q)-(-Ea\sin^{2}\theta)^{2}/\sin^{2}\theta=Q+a^{2}E^{2}-a^ {2}E^{2}\sin^{2}\theta=Q+a^{2}E^{2}\cos^{2}\theta.\]
Thus the \(r,\theta\)-differential equations can be used to study geodesics on the whole Kerr-star spacetime.
**Remark 3.8**.: _The system (5) is composed of first order differential equations, while the geodesic equation is of second order. There exist solutions of (5), called singular, which do not correspond to geodesics. For example, if \(r_{0}\in\mathbb{R}\) is a multiplicity one zero of \(r\mapsto R(r)\), then the constant function \(r(s)\equiv r_{0}\) solves the radial equation in (5), since in this case \(r^{\prime}(s)=0\) for all \(s\), but it does not arise from a geodesic._
### Dynamics of geodesics
The non-negativity of \(R(r)\) and \(\Theta(\theta)\) in the first order differential equations of motion (5) can be used to study the dynamics of the \(r,\theta\)-coordinates of the geodesics, together with the next proposition.
**Proposition 3.9** ([35], Corollary 4.3.8).: _Suppose \(R(r_{0})=0\). Let \(\gamma\) be a geodesic whose \(r\)-coordinate satisfies the initial conditions \(r(s_{0})=r_{0}\) and \(r^{\prime}(s_{0})=0\)._
1. _If_ \(r_{0}\) _is a multiplicity one zero of_ \(R(r)\)_, i.e._ \(R^{\prime}(r_{0})\neq 0\)_, then_ \(r_{0}\) _is an_ \(r\)_-turning point, namely_ \(r^{\prime}(s)\) _changes sign at_ \(s_{0}\)_._
2. _If_ \(r_{0}\) _is a higher order zero of_ \(R(r)\)_, i.e. at least_ \(R^{\prime}(r_{0})=0\)_, then_ \(\gamma\) _has constant_ \(r(s)=r_{0}\)_. Analogous results hold for_ \(r\) _and_ \(R(r)\) _replaced by_ \(\theta\) _and_ \(\Theta(\theta)\)_._
## 4. **Properties of null geodesics in Kerr spacetimes**
### Principal geodesics
Since the vector fields \(V,W,\partial_{r},\partial_{\theta}\) are mutually orthogonal, the tangent vector to a geodesic \(\gamma\) can be decomposed as \(\gamma^{\prime}=\gamma^{\prime}_{\Pi}+\gamma^{\prime}_{\perp}\) where \(\Pi:=span\{\partial_{r},V\}\) (timelike plane) and \(\Pi^{\perp}:=span\{\partial_{\theta},W\}\) (spacelike plane).
**Definition 4.1**.: _A Kerr geodesic \(\gamma\) is said to be principal if \(\gamma^{\prime}=\gamma^{\prime}_{\Pi}\)._
**Proposition 4.2** ([35], Corollary 4.2.8(2)).: _If \(\gamma\) is a null geodesic, then \(K\geq 0\), and \(K=0\iff\gamma\) is principal._
**Definition 4.3**.: _A null geodesic is called a restphoton if it lies in \(\mathscr{H}\)._
Restphotons are integral curves of \(V|_{\mathscr{H}}\) by Prop. 2.15.
**Proposition 4.4** ([35], Lemma 4.2.9(3,4)).: _For a null geodesic \(\gamma\),_
1. \(K=L=0\) _but_ \(E\neq 0\Leftrightarrow\gamma\in A\setminus\mathscr{H}\)_._
2. \(K=L=E=0\Leftrightarrow\gamma\) _is a restphoton._
### Null geodesics with \(Q<0\)
**Proposition 4.5**.: _Let \(\gamma\) be a null geodesic with \(Q<0\). Then_
1. \(\gamma\) _does not intersect_ \(Eq=\{\theta=\pi/2\}\)_;_
2. \(a^{2}E^{2}>L^{2}\) _and in particular_ \(E\neq 0\)_._
Proof.: If \(\gamma\cap A=\emptyset\), then from the \(\theta\)-equation of (5) we have
\[\cos^{2}\theta[L^{2}/\sin^{2}\theta-a^{2}E^{2}]=Q-\rho^{4}(r,\theta)\theta^{ \prime 2}<0.\]
Hence \(\cos^{2}\theta\neq 0\) and \(L^{2}/\sin^{2}\theta-a^{2}E^{2}<0\), hence \(\gamma\cap Eq=\emptyset\) and \(a^{2}E^{2}>L^{2}\), so \(E\neq 0\).
If \(\gamma\cap A\neq\emptyset\), then by Prop. 4.4, \(L=0\), and
\[-a^{2}E^{2}\cos^{2}\theta=Q-\rho^{4}(r,\theta)\theta^{\prime 2}<0.\]
Therefore \(\cos^{2}\theta\neq 0\) and \(a^{2}E^{2}>0\), so \(\gamma\cap Eq=\emptyset\) and \(E\neq 0\).
**Remark 4.6**.: _If a geodesic \(\gamma\) has \(Q>0\), then its \(\theta\)-motion is an oscillation around \(\theta=\pi/2\), so it cuts repeatedly through \(Eq\) (see Propositions 4.5.4, 4.5.5 in [35])._
**Proposition 4.7**.: _For \(Q<0\) null geodesics, \(R(r)\) has either zero or two negative roots._
Proof.: First we show that \(R(r)\) does not have zeroes in \(r\geq 0\). It is sufficient to show that every coefficient of the polynomial function \(R(r)\) is non-negative, with at least one positive. We know from Prop. 3.6 that for null geodesics
\[R(r)=E^{2}r^{4}+\mathfrak{X}r^{2}+2MKr-a^{2}Q. \tag{6}\]
From Prop. 4.5 and Prop. 4.2, we see that the coefficients are
* \(E^{2}>0\) at \(r^{4}\);
* \(\mathfrak{X}=a^{2}E^{2}-L^{2}-Q>-Q>0\) at \(r^{2}\);
* \(K\geq 0\) at \(r\);
* \(-a^{2}Q>0\) at \(r^{0}\).
We now show that \(R(r)\) has at most two real zeroes. We have \(R^{\prime}=4E^{2}r^{3}+2\mathfrak{X}r+2MK\), hence \(R^{\prime\prime}=12E^{2}r^{2}+2\mathfrak{X}\). Since \(\mathfrak{X}>0\), we have \(R^{\prime\prime}>0\) everywhere, so \(R^{\prime}\) is strictly increasing and has a unique zero. This implies that \(R(r)\) has a unique critical point, a global minimum, and hence either \(0\) or \(2\) real roots, which have to be negative by the first part of the proof.
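A quick randomized check of Prop. 4.7, sampling parameters compatible with Prop. 4.2 (\(K\geq 0\)) and Prop. 4.5 (\(a^{2}E^{2}>L^{2}\)); the sampling scheme is ours and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
a, M = 0.9, 1.0                                    # slow Kerr: |a| < M
for _ in range(1000):
    E = rng.uniform(0.5, 2.0)
    L = 0.9 * rng.uniform(-1.0, 1.0) * abs(a * E)  # enforces a^2 E^2 > L^2
    Q = rng.uniform(-(L - a*E)**2, 0.0)            # Q < 0 with K >= 0
    K = Q + (L - a*E)**2
    X = a**2 * E**2 - L**2 - Q
    rts = np.roots([E**2, 0.0, X, 2*M*K, -a**2 * Q])
    real = rts[np.abs(rts.imag) < 1e-9].real
    assert len(real) in (0, 2) and np.all(real < 0)
print("Prop. 4.7 confirmed on 1000 random samples")
```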
## 5. **Proof of Theorem 1.1**
### Strategy of the proof
Let \(\gamma\colon I\to K^{*}\) be a closed null geodesic (CNG). Since \(\gamma\) is periodic and the radius function \(r\colon K^{*}\to\mathbb{R}\) is everywhere smooth, the composition \(r\circ\gamma\) is a smooth periodic function and therefore has at least two critical points \(s_{0}<s_{1}\) in each period \([a,a+T)\), i.e. \((r\circ\gamma)^{\prime}(s_{0})=(r\circ\gamma)^{\prime}(s_{1})=0\). Since \(\rho\colon K^{*}\to\mathbb{R}\) does not vanish on \(K^{*}\), the differential equation for \(r\circ\gamma\)
\[(\rho\circ\gamma)^{4}[(r\circ\gamma)^{\prime}]^{2}=R(r\circ\gamma)\]
implies that \(R(r\circ\gamma(s_{0,1}))=0\). Because of the differential equation, the geodesic motion must happen in the \(r\)-region on which \(R(r\circ\gamma)\geq 0\). Further, since \(R\) is a polynomial in \(r\), we can distinguish two cases:
1. The zeros \(r\circ\gamma(s_{0,1})\) of \(R\) are simple, i.e. \(dR/dr\neq 0\) at these points. Then \(r\circ\gamma(s_{0,1})\) are turning points of \(r\circ\gamma\), i.e. \((r\circ\gamma)^{\prime}\) changes its sign at \(s_{0}\) and \(s_{1}\).
2. One of the zeros \(r\circ\gamma(s_{0})\) or \(r\circ\gamma(s_{1})\) is a higher order zero of \(R\). Then \(r\circ\gamma\) is constant.
Both facts follow from Proposition 3.9.
Most possible CNGs can be ruled out by comparing the location of the zeros of \(R(r)\) with the following consequence of the causal structure of Kerr:
**Lemma 5.1**.: _Let \(\gamma\colon I\to K^{*}\) be a closed null geodesic. Then the image of \(\gamma\) is contained in \(\{r<r_{-}\}\)._
Proof.: The region
\[\{r\geq r_{-}\}=\{t^{*}\in\mathbb{R},r\in[r_{-},+\infty),(\theta,\phi^{*})\in S ^{2}\}\setminus\Sigma\subset K^{*}\]
is causal by Corollary 2.17 and closed null geodesics cannot intersect \(\{r=r_{-}\}\) by Prop. 5.2.
There are two cases in which we need additional arguments:
1. To exclude CNGs with \(Q>0\) confined in \(\{0<r<r_{-}\}\), we use the fact that this region of the spacetime is foliated by spacelike hypersurfaces.
2. To exclude CNGs with \(Q<0\) and \(r=\mathrm{const}<0\), we show that the \(\theta\)-coordinate of such a geodesic is periodic whereas the \(t\)-coordinate is quasi-periodic with a non-zero increment, see 5.5.3.
### Horizons and Axis cases
First we rule out CNGs entirely contained in the axis \(A=\{\theta=0,\pi\}\) and CNGs contained in or intersecting the horizon \(\mathscr{H}=\{r=r_{\pm}\}\).
_The case of the horizon \(\mathscr{H}=\{r=r_{\pm}\}\)._
**Proposition 5.2**.: _There are no CNGs intersecting \(\mathscr{H}=\{r=r_{\pm}\}\)._
Figure 2. For \(Q<0\) null geodesics, \(R(r)\) has either zero or two negative roots (multiplicity two is allowed).
Proof.: On the submanifolds \(\{r=\text{const}\}\), we have \(\det g^{*}=-\rho^{2}(r,\theta)\Delta(r)\sin^{2}\theta\), where \(g^{*}\) is the Kerr-star coordinate expression of \(\mathbf{g}\) restricted to these submanifolds. Therefore the metric \(g^{*}\) degenerates exactly on the tangent spaces to \(\mathscr{H}=\{\Delta(r)=0\}=\{r=r_{\pm}\}\). Hence \(T_{p}\mathscr{H}\) is a null subspace of \(T_{p}K^{*}\) for every \(p\in\mathscr{H}\), i.e. the submanifolds \(\mathscr{H}=\{\Delta(r)=0\}\) are null hypersurfaces. Then, by Lem. 1.5.11 of [35], every vector in \(T_{p}\mathscr{H}\) is spacelike, except for those in the intersection of \(T_{p}\mathscr{H}\) with the null cone of \(T_{p}K^{*}\). Note that the vector field \(V=(r^{2}+a^{2})\partial_{t}+a\partial_{\phi}\) is tangent to \(\mathscr{H}\) and we have \(\mathbf{g}(V,V)=-\Delta(r)\rho^{2}(r,\theta)\), i.e. \(V\) is null along \(\mathscr{H}\). Hence \(V|_{\mathscr{H}}\) generates the unique null tangent line to \(\mathscr{H}\). Further note that no flowline of \(V\) closes. By Prop. 2.15, the flowlines of \(V|_{\mathscr{H}}\) are null pregeodesics, hence the null geodesics tangent to \(\mathscr{H}\) do not close.
It remains to consider null geodesics intersecting \(\mathscr{H}\) transversally. Since each connected component \(\{r=r_{+}\}\), \(\{r=r_{-}\}\) of \(\mathscr{H}\) is an orientable hypersurface separating the orientable manifold \(K^{*}\), every closed curve transversal to \(\mathscr{H}\) has to intersect each component an even number of times. Further, since \(K^{*}\) is time-oriented by \(-\partial_{r}^{*}\), all tangent vectors to a null geodesic transversal to \(\mathscr{H}\) have to lie on one side of \(\mathscr{H}\). Therefore a null geodesic transversal to \(\mathscr{H}\) can intersect each connected component \(\{r=r_{\pm}\}\) only once. This shows that no null geodesic transversal to \(\mathscr{H}\) can close.
_The case of the axis \(A=\{\theta=0,\pi\}\)._
**Proposition 5.3**.: _There are no CNGs which are tangent at some point to \(A=\{\theta=0,\pi\}\). In particular, there are no CNGs entirely contained in \(A\)._
Proof.: First of all, \(A=\{\theta=0,\pi\}\) is a \(2\)-dimensional closed totally geodesic submanifold by Lem. 2.14. Hence if a geodesic \(\gamma\) is tangent to \(A\) at some point, it will always lie on \(A\). By Prop. 4.4, if \(\gamma\in A\), then \(L=K=0\). Hence there are two possible cases: if \(E=0\), then by Prop. 4.4, \(\gamma\) is a restphoton, _i.e._ an integral curve of \(V|_{\mathscr{H}}=(r_{\pm}^{2}+a^{2})\partial_{t}+a\partial_{\phi}\), which is not closed. If \(E\neq 0\), then using (5), we have
\[R(r)=E^{2}(r^{2}+a^{2})^{2}>0.\]
So \(R(r)\) has no zeroes; hence \(r(s)\) is strictly monotone and the geodesic can be neither bounded nor closed.
### Steps of the proof for other cases
The proof splits into two main cases \(E=0\) and \(E\neq 0\).
If \(E=0\) (5.4), we analyse two subcases, \(L^{2}+Q=0\) and \(L^{2}+Q\neq 0\) (the latter splits further into \(L\neq 0\) and \(L=0\)).
If \(E\neq 0\) (5.5), we analyse three subcases, \(Q=0\) (5.5.1), \(Q>0\) (5.5.2) and \(Q<0\) (5.5.3).
**Remark 5.4**.: _The only case which requires a detailed analysis of the differential equations is the case \(E\neq 0\) and \(Q<0\) (see 5.5.3)._
### Case \(E=0\)
From Prop. 3.6, for null (\(q=0\)) geodesics we have
\[R(r) =\mathfrak{X}(0,L,Q)r^{2}+2MK(0,L,Q)r-a^{2}Q\geq 0, \tag{7}\] \[\Theta(\theta) =Q-\frac{\cos^{2}\theta}{\sin^{2}\theta}L^{2}\geq 0, \tag{8}\]
Figure 3. All the geodesic types which have to be studied.
with \(\mathfrak{X}(0,L,Q)=-(L^{2}+Q)\) and \(K(0,L,Q)=L^{2}+Q\), _i.e._\(\mathfrak{X}=-K\). Notice that we must have \(Q\geq 0\) by (8), hence \(L^{2}+Q\geq 0\).
#### 5.4.1. Subcase \(L^{2}+Q=0\)
We have
\[R(r)=-a^{2}Q\geq 0. \tag{9}\]
Since \(Q\geq 0\), the only possibility is \(Q=0\), hence \(L=0\) by (8). Moreover we must have \(\rho^{4}(r,\theta)\,r^{\prime 2}=R(r)\equiv 0\), hence \(r(s)=\text{const}\). However, by Prop. 4.4, we know that a null geodesic with \(L=E=K=0\) is a restphoton, _i.e._ an integral curve of the canonical vector field \(V|_{\mathscr{H}}=(r_{\pm}^{2}+a^{2})\partial_{t}+a\partial_{\phi}\). Since \(r(s)=r_{\pm}=\text{const}\), this integral curve would be \(\left((r_{\pm}^{2}+a^{2})s+c_{t},\ r_{\pm},\ \theta_{0},\ as+c_{\phi}\right)\) with \(c_{t}\in\mathbb{R},\theta_{0}\in[0,\pi],c_{\phi}\in S^{1}\), which is clearly not closed.
#### 5.4.2. Subcase \(L^{2}+Q\neq 0\)
Then we must have \(L^{2}+Q>0\). Since \(|a|<M\) and \(L^{2}+Q>Q\geq 0\), the discriminant of (7) is \(\text{dis}=4M^{2}(L^{2}+Q)^{2}-4a^{2}Q(L^{2}+Q)>0\). Therefore \(R(r)\) has two roots given by
\[M\pm\sqrt{M^{2}-\frac{a^{2}Q}{L^{2}+Q}}.\]
_If \(L\neq 0\),_ then we have (see Figures 4, 5)
\[M+\sqrt{M^{2}-\frac{a^{2}Q}{L^{2}+Q}}>r_{+}=M+\sqrt{M^{2}-a^{2}}.\]
Figure 4. Plot of \(R(r)\) in the case \(E=0,\ L^{2}+Q\neq 0,\ Q>0\) with \(a=3,M=6,L=2,Q=4\).
So if \(L\neq 0\), the geodesics will have to cross \(\mathscr{H}\) since one of the two zeros is bigger than \(r_{+}\) and the other is smaller than \(r_{-}\), which is impossible for a CNG by Prop. 5.2.
_If \(L=0\),_ then \(Q>0\), hence \(R^{\prime\prime}(r)=-2Q<0\). Moreover \(R(r)=-Q\Delta(r)\) because \(E=L=0\). Therefore the two (multiplicity one) roots are \(r_{-}\) and \(r_{+}\) (see Fig. 6).
This polynomial \(R(r)\) cannot produce a CNG: the hypersurfaces \(\mathscr{H}=\{r=r_{\pm}\}\) are closed totally geodesic submanifolds by Prop. 2.15, so a geodesic with a turning point on them would be tangent to them there, hence entirely contained in \(\mathscr{H}\), which was excluded in Prop. 5.2.
### Case \(E\neq 0\)
#### 5.5.1. Subcase \(Q=0\)
We have
\[R(r)=E^{2}r^{4}+(a^{2}E^{2}-L^{2})r^{2}+2M(L-aE)^{2}r\geq 0. \tag{10}\]
From the \(\theta\)-equation, since \(Q=0\), we get
\[a^{2}E^{2}\geq\frac{L^{2}}{\sin^{2}\theta}\geq L^{2}.\]
Observe that
\[R(0) =0,\] \[R^{\prime}(r) =4E^{2}r^{3}+2(a^{2}E^{2}-L^{2})r+2M(L-aE)^{2},\] \[R^{\prime\prime}(r) =12E^{2}r^{2}+2(a^{2}E^{2}-L^{2})\geq 0.\]
Therefore \(R\) is convex and can only produce bounded radial behaviour, with constant \(r(s)\equiv 0\), if \(r=0\) is a multiple root of \(R(r)\), _i.e._ if \(L=aE\). The polynomial then reduces to \(R(r)=E^{2}r^{4}\). But \(\Theta(\theta)=-\frac{\cos^{4}\theta}{\sin^{2}\theta}L^{2}\geq 0\) and \(L=aE\neq 0\), hence we can only have \(\theta(s)=\pi/2\equiv\text{const}\), which is not possible because the geodesic would then run into the ring singularity (see Fig.7).
#### 5.5.2. Subcase \(Q>0\)
From Prop. 4.2, we know that for null geodesics \(K=Q+(L-aE)^{2}\geq 0\). Hence if \(Q>0\), then \(K>0\). Let us again consider \(R(r)=E^{2}r^{4}+\mathfrak{X}(E,L,Q)r^{2}+2MK(E,L,Q)r-a^{2}Q\). Since \(R(0)=-a^{2}Q<0\) and we must have \(R(r)\geq 0\) along the geodesic, a null geodesic with bounded radial behaviour must be confined either entirely in the negative or entirely in the positive \(r\)-region. Bounded radial behaviour in the negative region is impossible in this subcase. Indeed, the signs of the coefficients of \(R(-r)\) are \((+,\ \operatorname{sign}(\mathfrak{X}),\ -,\ -)\), so for either sign of \(\mathfrak{X}\) there is only one change of sign, hence there can be at most one real negative root by the _"Descartes' rule of signs"_.1
Footnote 1: Wilkins [47] was apparently the first to apply this rule to the polynomial \(R\) from the radial equation of motion.
**Remark 5.5**.: _The same conclusion about the sign of the coefficients of \(R(r)\) also holds in the case \(L=aE\), indeed \(R(r)=E^{2}r^{4}-Qr^{2}+2MQr-a^{2}Q\) in such a case._
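The sign count above can be reproduced in a few lines; a small sketch:

```python
def sign_changes(coeffs):
    """Number of sign changes in a coefficient list, zeros skipped."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

# coefficients of R(-r) = E^2 r^4 + X r^2 - 2MK r - a^2 Q (Q > 0, K > 0),
# highest degree first, for both signs of X; magnitudes are illustrative
for X in (+1.0, -1.0):
    print(sign_changes([1.0, 0.0, X, -2.0, -0.5]))   # prints 1 in both cases
```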
By Lem. 5.1, we could only have bounded (either in a compact \(r\)-interval or constant \(r\)) radial behaviour in the positive region \(\{0<r<r_{-}\}\). However the hypersurfaces \(\mathcal{N}:=\{t=t_{0}\}\cap\{0<r<r_{-}\}\) are spacelike. Indeed, if \(p\in\mathcal{N}\setminus A\), where \(A=\{\theta=0,\pi\}\), then \(T_{p}\mathcal{N}\) is spanned by \(\partial_{r},\partial_{\theta},\partial_{\phi}\), which are spacelike and orthogonal to each other. If \(p\in A\subset\mathcal{N}\), then \(p=(t_{0},r,q)\) with \(q=(0,0,\pm 1)\in S^{2}\), and we may replace \(\partial_{\theta},\partial_{\phi}\) by any basis of \(T_{q}S^{2}\). Hence we cannot have CNGs
Figure 7. Plot of \(R(r)\) in the case \(E\neq 0,\ Q=0,\ L=aE\neq 0\) with \(a=3\), \(M=10,E=5,Q=0,L=15\).
in \(\{0<r<r_{-}\}\): along a closed geodesic the function \(t\) would attain an extremum, so there would be a point on a spacelike hypersurface \(\{t=t_{0}\}\) at which the null tangent vector of the geodesic would be tangent to this hypersurface, a contradiction.
#### 5.5.3. Subcase \(Q<0\)
This is the last remaining case and the most difficult one. By Prop. 4.7, the only possible bounded behaviour is \(r(s)=\mathrm{const}<0\) (see Fig.8). Such geodesics are known in the literature as _spherical geodesics_, see e.g. [42].
By Prop. 4.5, null geodesics with negative Carter constant do not meet \(Eq=\{\theta=\pi/2\}\), hence \(\cos^{2}\theta\neq 0\). Then we may define \(u:=\cos^{2}\theta\in(0,1]\). Since we are in the case \(E\neq 0\), we can re-write the \(\theta\)-equation in (5) as
\[\left(\frac{\rho^{2}(r,u)}{E}\right)^{2}\frac{(u^{\prime})^{2}}{4u}=-a^{2}u^{2 }+(a^{2}-\Phi^{2}-\mathcal{Q})u+\mathcal{Q}=:\tilde{\Theta}(u), \tag{11}\]
where \(\Phi:=L/E\) and \(\mathcal{Q}:=Q/E^{2}\). Since we must have \(\tilde{\Theta}(u)\geq 0\) somewhere in \((0,1]\), \(w:=a^{2}-\Phi^{2}-\mathcal{Q}>0\) because \(\mathcal{Q}<0\) and the coefficient of the second order term is negative. Therefore \(\tilde{\Theta}\) must have roots given by
\[u_{\pm}=\frac{w\pm\sqrt{\mathrm{dis}}}{2a^{2}} \tag{12}\]
where \(\mathrm{dis}:=w^{2}+4a^{2}\mathcal{Q}\), so that
\[\tilde{\Theta}(u)=-a^{2}(u-u_{+})(u-u_{-}). \tag{13}\]
Hence we have
\[0<u_{-}\leq u_{+}.\]
We can write the necessary condition to have roots as
\[\mathrm{dis}=[\mathcal{Q}+(|\Phi|-|a|)^{2}][\mathcal{Q}+(|\Phi|+|a|)^{2}]\geq 0.\]
Notice that \(\mathcal{Q}+(|\Phi|+|a|)^{2}\geq\mathcal{Q}+(\Phi-a)^{2}=K/E^{2}\geq 0\) by Prop. 4.2. If we have \(\mathcal{Q}+(|\Phi|+|a|)^{2}=0\), then \(\tilde{\Theta}(u)=-a^{2}(u-1)^{2}-\Phi^{2}-2|a\Phi|(1-u)\geq 0\) is satisfied only for \(u=u_{+}=u_{-}=1\) and \(\Phi=0\), which is the case covered by Prop. 5.6 below. If instead \(\mathcal{Q}+(|\Phi|+|a|)^{2}>0\), we must have \(\mathcal{Q}+(|\Phi|-|a|)^{2}\geq 0\). Then by the AM-GM inequality we have
\[\sqrt{\mathrm{dis}}\leq\frac{[\mathcal{Q}+(|\Phi|-|a|)^{2}]+[\mathcal{Q}+(| \Phi|+|a|)^{2}]}{2}=a^{2}+\Phi^{2}+\mathcal{Q},\]
and so
\[u_{+}\leq\frac{a^{2}-\Phi^{2}-\mathcal{Q}+a^{2}+\Phi^{2}+\mathcal{Q}}{2a^{2}}=1.\]
Therefore we have
\[0<u_{-}\leq u_{+}\leq 1.\]
**Proposition 5.6**.: _In the Kerr-star spacetime, consider a null geodesic \(\gamma\) with \(\mathcal{Q}<0\) and \(r=\text{const}\). If \(\text{dis}=0\), then \(\theta=\text{const}\) and the geodesic cannot be closed._
Proof.: Since \(\text{dis}=0\), we have \(u_{-}=u_{+}\). Then
\[\tilde{\Theta}(u)=-a^{2}\big{(}u-u_{+}\big{)}^{2}\geq 0.\]
Hence the last inequality is satisfied only if \(u=u_{+}=\text{const}\), therefore \(\theta=\text{const}\). Then the geodesic \(\gamma\) cannot be closed. Indeed, there are two possibilities. First, if \(\gamma\) is entirely contained in \(A\), then it cannot be closed by Prop. 5.3. Second, if \(\gamma\) is not entirely contained in \(A\), by Prop. 3.6 a geodesic of the form \(s\mapsto(t(s),r_{0},\theta_{0},\phi(s))\) has \(t^{\prime}\equiv\text{const}\) and \(\phi^{\prime}\equiv\text{const}\). It follows that \(s\mapsto t(s)\) and \(s\mapsto\phi(s)\) are affine functions. If the geodesic is bounded in \(K^{*}\), then \(t(s)\) must be constant. Note that curves of the kind \(\gamma(s)=(t_{0},r_{0},\theta_{0},b_{0}s+b_{1})\), \(b_{0},b_{1}\in\mathbb{R}\), are geodesics if and only if \(b_{0}=0\) since the geodesic equation can be written in BL coordinates as \(\Gamma^{\alpha}_{\phi\phi}(\gamma(s))b_{0}^{2}=0\) but the Christoffel symbol \(\Gamma^{\theta}_{\phi\phi}\) cannot vanish at points where \(\partial_{\phi}\) is null. Indeed,
\[\Gamma^{\theta}_{\phi\phi} =-\frac{\sin\theta\cos\theta}{\rho^{6}(r,\theta)}\bigg{[}\rho^{4 }(r,\theta)\frac{\mathbf{g}(\partial_{\phi},\partial_{\phi})}{\sin^{2}\theta} +2M(r^{2}+a^{2})a^{2}r\sin^{2}\theta\bigg{]}\] \[=-\frac{\sin\theta\cos\theta}{\rho^{6}(r,\theta)}\bigg{[}2M(r^{2 }+a^{2})a^{2}r\sin^{2}\theta\bigg{]}\neq 0,\]
since \(\theta\neq 0,\pi\) because we have already ruled out closed null geodesics in \(A\), \(\theta\neq\pi/2\) by Prop. 4.5 and \(r<0\). Hence, \(\phi(s)\) is also constant and the geodesic degenerates to a point.
**Remark 5.7**.: _Closed null curves exist in the Kerr-star spacetime: they are given by the integral curves of the vector field \(\partial_{\phi}\), whenever the latter happens to be null, for some negative \(r\). Such curves cannot be geodesics by Prop. 5.6._
We may now assume \(\text{dis}>0\). Therefore we have the following chain of inequalities
\[0<u_{-}<u_{+}\leq 1. \tag{14}\]
We hence define
\[\theta_{1}:=\arccos(\sqrt{u_{+}}),\theta_{2}:=\arccos(\sqrt{u_{-}}),\theta_{ 3}:=\arccos(-\sqrt{u_{-}}),\theta_{4}:=\arccos(-\sqrt{u_{+}}) \tag{15}\]
so that
\[0\leq\theta_{1}<\theta_{2}<\frac{\pi}{2}<\theta_{3}<\theta_{4}\leq\pi. \tag{16}\]
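A sketch computing the turning colatitudes (15) from \((\Phi,\mathcal{Q},a)\), assuming \(\mathrm{dis}>0\); the values below reproduce the setting of Fig. 9, and the function name is ours.

```python
import numpy as np

def theta_turning(Phi, Qc, a):
    """Return theta_1 < theta_2 < theta_3 < theta_4 as in (15)-(16)."""
    w = a**2 - Phi**2 - Qc
    dis = w**2 + 4 * a**2 * Qc          # assumed positive here
    u_m = (w - np.sqrt(dis)) / (2 * a**2)
    u_p = (w + np.sqrt(dis)) / (2 * a**2)
    th1, th2 = np.arccos(np.sqrt(u_p)), np.arccos(np.sqrt(u_m))
    return th1, th2, np.pi - th2, np.pi - th1

print(theta_turning(Phi=np.sqrt(3), Qc=-2.0, a=4.0))
```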
**Proposition 5.8**.: _In the Kerr-star spacetime, null geodesics with \(\mathcal{Q}<0\), \(r=\text{const}\) and \(\theta\neq\text{const}\) can have one of the following \(\theta\)-behaviours:_
* _if_ \(0<u_{-}<u_{+}<1\)_, then the_ \(\theta\)_-coordinate oscillates periodically in one of the following intervals_ \(0<\theta_{1}\leq\theta\leq\theta_{2}<\pi/2\) _or_ \(\pi/2<\theta_{3}\leq\theta\leq\theta_{4}<\pi\)_;_
* _if_ \(0<u_{-}<u_{+}=1\)_, then_ \(\Phi(=L/E)=0\) _and the_ \(\theta\)_-coordinate oscillates periodically in one of the following intervals_ \(0=\theta_{1}\leq\theta\leq\theta_{2}<\pi/2\) _or_ \(\pi/2<\theta_{3}\leq\theta\leq\theta_{4}=\pi\)_,_
_where the \(\theta_{i}\), \(i=1,2,3,4\), are given by (15)._
Proof.: If \(0<u_{-}<u_{+}<1\), the graph of \(\Theta(\theta)/E^{2}\) is shown in Fig. 9.
The \(\theta\)-motion is allowed in the region where \(\tilde{\Theta}(u)\) and hence \(\Theta(\theta)/E^{2}\) are non-negative:
\[0<\theta_{1}\leq\theta\leq\theta_{2}<\pi/2\quad\text{ in the orange region, above the }Eq,\] \[\pi/2<\theta_{3}\leq\theta\leq\theta_{4}<\pi\quad\text{ in the purple region, below the }Eq.\]
This \(\theta\)-motion is shown in Fig. 11 and the corresponding \(\theta\)-\(\phi\)-motion in Fig. 12.
If \(0<u_{-}<u_{+}=1\), the graph of \(\Theta(\theta)/E^{2}\) is shown in Fig. 10.
These geodesics intersect the axis \(A=\{\theta=0,\pi\}\) and hence \(\Phi=0\) because \(\tilde{\partial}_{\phi}=0\) on \(A\). The \(\theta\)-motion is allowed where \(\tilde{\Theta}(u)\) and hence \(\Theta(\theta)/E^{2}\) are non-negative:
\[0=\theta_{1}\leq\theta\leq\theta_{2}<\pi/2\quad\text{ in the orange region, above the }Eq,\] \[\pi/2<\theta_{3}\leq\theta\leq\theta_{4}=\pi\quad\text{ in the purple region, below the }Eq.\]
This \(\theta\)-motion is shown in Fig. 13 and the corresponding \(\theta\)-\(\phi\)-motion in Fig. 14.
There is a difference between the motions of Fig. 9 and Fig. 10. In the first case, the geodesic oscillates between \(\theta_{1}\) and \(\theta_{2}\) (or between \(\theta_{3}\) and \(\theta_{4}\)), corresponding to half a \(\theta\)-oscillation. In the second case, instead, the geodesics complete symmetric oscillations around the axis, either above the equatorial hyperplane, crossing \(\{\theta=0\}\), or below the equatorial hyperplane, crossing \(\{\theta=\pi\}\). However, the motion between \(\theta_{1}=0\) and \(\theta_{2}\) (or between \(\theta_{3}\) and \(\theta_{4}=\pi\)) still corresponds to half a \(\theta\)-oscillation (see Figures 13, 10 and Cor. 4.5.6 in [35]).
Figure 10. Plot of \(\Theta(\theta)/E^{2}\) for \(a=4,\mathcal{Q}=-2,\Phi=0\). The \(\theta\)-motion is allowed in the orange region, oscillating symmetrically around \(\{\theta=0\}\) above the equatorial hyperplane, and in the purple region, oscillating symmetrically around \(\{\theta=\pi\}\) below the equatorial hyperplane.
Figure 9. Plot of \(\Theta(\theta)/E^{2}\) for \(a=4,\mathcal{Q}=-2,|\Phi|=\sqrt{3}\). The \(\theta\)-motion is allowed in the orange region, above the equatorial hyperplane and in the purple region, below it.
In both oscillating cases, since \(r=\text{const}\), (11) implies that the coordinate \(\theta(s)\) oscillates periodically in the corresponding \(\theta\)-interval (see Figures 11 and 13).
Consider the first order equations of motion (with the rescaled constants of motion \(\mathcal{Q}:=Q/E^{2},\Phi:=L/E\)), for a constant \(r<0\):
\[\frac{\rho^{2}(r,\theta)}{E}\frac{d\theta}{ds} =\pm\sqrt{\Theta(\theta)}=\pm\sqrt{\mathcal{Q}+a^{2}\cos^{2}\theta-\Phi^{2}\frac{\cos^{2}\theta}{\sin^{2}\theta}} \tag{17}\] \[\frac{\rho^{2}(r,\theta)}{E}\frac{dt}{ds} =\frac{r^{2}+a^{2}}{\Delta}(r^{2}+a^{2}-a\Phi)+a(\Phi-a\sin^{2}\theta). \tag{18}\]
Because of the \(\theta\)-differential equation, we can restrict to an interval \(\mathcal{U}\subset\theta^{-1}\big{(}(\theta_{1},\theta_{2})\big{)}\) on which \(d\theta/ds\) is either everywhere positive or everywhere negative (depending on the initial condition). Due to the symmetry in (17) and the fact that \(r=\text{const}\), \(\theta(s)\) is periodic over twice the interval \(\mathcal{U}\). For instance, set \(\mathcal{U}=(0,T/2)\) and start from \(\theta(0)=\theta_{1}\); then \(\theta^{\prime}(s)=+\sqrt{\Theta(\theta)}>0\) for \(s\in(0,T/2)\) and \(\theta^{\prime}(s)=-\sqrt{\Theta(\theta)}<0\) for \(s\in(T/2,T)\), with \(\theta^{\prime}(T/2)=0\) because \(\theta(T/2)=\theta_{2}\) (here \({}^{\prime}\equiv d/ds\)). The change of sign of \(\theta^{\prime}(s)\) follows from Prop. 3.9, using the fact that \(\theta_{1},\theta_{2}\) are multiplicity-one zeroes of \(\Theta(\theta)\). Hence \(\theta^{\prime}(s)\) changes sign every \(\Delta s=T/2\).
Figure 11. \(\theta(s)\) obtained numerically from (17), with \(a=3,M=8\), \(E=1,\mathcal{Q}=-1.252,\Phi=1.407\) and \(r=-1\). The open subset \(\mathcal{U}\) (black in the figure) is an interval on which \(\theta(s)\) is strictly monotonic.
Figure 13. \(\theta(s)\) obtained numerically from the geodesic equation with \(a=3,M=8,r=r_{\text{crit}}<0\) where \(r_{\text{crit}}\) is the radius at which \(\mathbf{g}(\partial_{\phi},\partial_{\phi})|_{r=r_{\text{crit}},\theta=\theta _{2}}=0\) and initial conditions \(\gamma(0)=(0,r_{\text{crit}},\theta_{2},0)\) and \(\gamma^{\prime}(0)=(0,0,0,-1)\). Hence, since \(\gamma^{\prime}(0)=-\partial_{\phi}\), the geodesic has \(\Phi=0\) (\(L=0\)). The Carter constant is set to \(\mathcal{Q}=-5.409\). The open subset \(\mathcal{U}\) (black in the figure) is an interval on which \(\theta(s)\) is strictly monotonic.
At parameters where \(\Theta(\theta)\neq 0\) we can combine (17) and (18) to get
\[\frac{dt}{d\theta}=\frac{r^{2}\Delta+2Mr(r^{2}+a^{2}-a\Phi)}{\pm\Delta\sqrt{\Theta(\theta)}}+a^{2}\frac{\cos^{2}\theta}{\pm\sqrt{\Theta(\theta)}}=B(r)\frac{1}{\pm\sqrt{\Theta(\theta)}}+a^{2}\frac{\cos^{2}\theta}{\pm\sqrt{\Theta(\theta)}}, \tag{19}\]
with \(B(r):=\frac{r^{2}\Delta+2Mr(r^{2}+a^{2}-a\Phi)}{\Delta}\).
Figure 14. \(\theta\)-\(\phi\) motion obtained numerically from the geodesic equation with \(a=3,M=8,r=r_{\rm crit}<0,\mathcal{Q}=-5.409,\Phi=0\) where \(r_{\rm crit}\) is the radius at which \(\mathbf{g}(\partial_{\phi},\partial_{\phi})|_{r=r_{\rm crit},\theta=\theta_{2}}=0\) and initial conditions \(\gamma(0)=(0,r_{\rm crit},\theta_{2},0)\) and \(\gamma^{\prime}(0)=(0,0,0,-1)\). Notice that the geodesic crosses \(\theta_{1}=0\) and \(\theta_{4}=\pi\) with non-zero velocity.
**Proposition 5.9** (See [10], [42]).: _In a Kerr spacetime, a null geodesic with negative Carter constant \(\mathcal{Q}\) and constant radial coordinate has the following pair of constants of motion_
\[\Phi=\Phi(r)=\frac{r^{2}(r-3M)+a^{2}(M+r)}{a(M-r)}\,\qquad\quad\mathcal{Q}= \mathcal{Q}(r)=-r^{3}\frac{(r^{3}-6Mr^{2}+9M^{2}r-4a^{2}M)}{a^{2}(M-r)^{2}}. \tag{20}\]
Proof.: Since \(E\neq 0\) by Prop. 4.5, we can divide the \(r\)-equation by \(E^{2}\) to get
\[\left(\frac{\rho^{2}}{E}\right)^{2}(r^{\prime})^{2}=r^{4}+(a^{2}-\Phi^{2}- \mathcal{Q})r^{2}+2M\bigg{(}(a-\Phi)^{2}+\mathcal{Q}\bigg{)}r-a^{2}\mathcal{Q }=:\mathcal{R}(r), \tag{21}\]
where \(\Phi:=L/E\) and \(\mathcal{Q}:=Q/E^{2}\). A geodesic has constant radial behaviour if and only if \(\mathcal{R}(r)=0\) and \(d\mathcal{R}(r)/dr=0\). These two equations can be solved for \(\mathcal{Q}\) and \(\Phi\). The two resulting pairs of constants of motion are (20) and
\[\Phi=\Phi(r)=\frac{r^{2}+a^{2}}{a}\,\qquad\qquad\qquad\qquad\qquad \qquad\mathcal{Q}=\mathcal{Q}(r)=-\frac{r^{4}}{a^{2}}, \tag{22}\]
where \(r\) is the constant radius of the geodesic. Recall that \(u_{-}+u_{+}=w/a^{2}>0\). However (22) implies
\[w=-2r^{2}<0.\]
Hence no null geodesic with constant radial coordinate and \(\mathcal{Q}<0\) satisfies (22).
The first integral \(\Phi=\Phi(r)\) is given by (20). Hence \(B(r)\) reduces to
\[B(r)=\frac{r^{2}(3M+r)}{r-M}. \tag{23}\]
**Remark 5.10**.: _Notice that if \(B(r)\geq 0\), there would be nothing to prove: in this case the \(t\)-coordinate would be non-decreasing, hence non-periodic. However \(B(r)<0\) for negative \(r\) sufficiently close to zero (in fact, this holds for all spherical geodesics with \(Q<0\), see Appendix B). Moreover, numerical simulation shows that the \(t\)-component is not necessarily monotonic for a \(Q<0\) geodesic with constant \(r<0\) close to zero, see Fig. 15. Therefore we shall evaluate \(\Delta t\) on a full \(\theta\)-oscillation._
Figure 15. \(t(s)\) obtained numerically from the geodesic equation with \(a=3,M=8,r=\bar{r}=-1<0\) and initial conditions \(\gamma(0)=(0,\bar{r},\theta_{1},0)\) and \(\gamma^{\prime}(0)=(1,0,0,0.710)\). Here \(\mathcal{Q}=-1.252\) and \(\Phi=1.407\).
We are now finally ready to rule out constant radius geodesics in the subcase \(E\neq 0\), \(Q<0\).
By contradiction, suppose there exists such a closed null geodesic \(\gamma:I\to K^{*}\) with non-constant coordinate functions \(s\mapsto t(s),\theta(s),\phi(s)\) and constant negative radial coordinate such that \(B(r)<0\). The differential equation (19) has the form
\[\frac{dt}{d\theta}=F(\theta),\]
for some function \(F\). The variation of the \(t\)-coordinate on a full \(\theta\)-oscillation is given by
\[\Delta t=2\int_{\theta_{1}}^{\theta_{2}}F(\theta)d\theta.\]
**Remark 5.11**.: _Notice the factor \("2"\) in the last expression. On a full \(\theta\)-oscillation, we have_
\[\int_{\theta_{1}}^{\theta_{2}}F(\theta)d\theta+\int_{\theta_{2}}^{\theta_{1}} -F(\theta)d\theta=2\int_{\theta_{1}}^{\theta_{2}}F(\theta)d\theta.\]
Therefore the variation of the \(t\)-coordinate after \(n\)\(\theta\)-oscillations is \(n\Delta t\) because of the periodicity of the \(\theta\)-coordinate. If the geodesic is closed, \(\Delta t=0\), otherwise the coordinate \(t(s)\) cannot be periodic. Hence it suffices to study what happens on a single \(\theta\)-oscillation.
**Remark 5.12**.: _A motion of the kind \(\pi\geq\theta_{4}\geq\theta\geq\theta_{3}>\pi/2\) produces the same integrals since in this \(\theta\)-interval \(\cos\theta<0\), hence with the substitution \(u=\cos^{2}\theta\) we have \(d\theta=\frac{1}{2}\frac{du}{\sqrt{u}\sqrt{1-u}}\). Therefore_
\[\int_{\theta_{1}}^{\theta_{2}}\frac{d\theta}{\sqrt{\Theta(\theta)}}=\int_{ \theta_{3}}^{\theta_{4}}\frac{d\theta}{\sqrt{\Theta(\theta)}},\qquad\quad\int _{\theta_{1}}^{\theta_{2}}\frac{\cos^{2}\theta\ d\theta}{\sqrt{\Theta(\theta) }}=\int_{\theta_{3}}^{\theta_{4}}\frac{\cos^{2}\theta\ d\theta}{\sqrt{\Theta( \theta)}}.\]
_Hence \(\Delta t\) is the same._
So without any loss of generality, we may consider a motion of the type \(0\leq\theta_{1}\leq\theta\leq\theta_{2}<\pi/2\). Then we can integrate (19) on a full oscillation to get
\[\Delta t=2B(r)\int_{\theta_{1}}^{\theta_{2}}\frac{d\theta}{\sqrt{\Theta(\theta)}}+2a^{2}\int_{\theta_{1}}^{\theta_{2}}\frac{\cos^{2}\theta\ d\theta}{\sqrt{\Theta(\theta)}}. \tag{24}\]
We now have to compute the following integrals
\[I_{1}:= \int_{\theta_{1}}^{\theta_{2}}\frac{d\theta}{\sqrt{\Theta(\theta)}},\] \[I_{2}:= \int_{\theta_{1}}^{\theta_{2}}\frac{\cos^{2}\theta\ d\theta}{ \sqrt{\Theta(\theta)}}.\]
Let us start from the first integral:
\[I_{1}=-\frac{1}{2}\int_{u_{+}}^{u_{-}}\frac{du}{\sqrt{u}\sqrt{\tilde{\Theta}(u )}}, \tag{25}\]
where we have used the substitution \(u:=\cos^{2}\theta\), hence \(d\theta=-\frac{1}{2}\frac{du}{\sqrt{u}\sqrt{1-u}}\) since \(\sin\theta\geq 0\) and \(\cos\theta>0\) if \(\theta_{1}\leq\theta\leq\theta_{2}\). Now we can use (11) and the substitution \(u=:u_{-}+(u_{+}-u_{-})y^{2}\) adopted in [26] to get
\[I_{1}= \frac{1}{2}\int_{u_{-}}^{u_{+}}\frac{du}{\sqrt{u}\sqrt{a^{2}(u_{ +}-u)(u-u_{-})}}\] \[= \frac{1}{2|a|}\int_{0}^{1}\frac{2(u_{+}-u_{-})ydy}{\sqrt{u_{-}+(u _{+}-u_{-})y^{2}}\sqrt{\big{(}u_{+}-u_{-}-(u_{+}-u_{-})y^{2}\big{)}(u_{+}-u_{-} )y^{2}}}\] \[= \frac{1}{|a|}\int_{0}^{1}\frac{dy}{\sqrt{u_{-}+(u_{+}-u_{-})y^{2} }\sqrt{1-y^{2}}}\] \[= \frac{1}{|a|\sqrt{u_{-}}}\int_{0}^{1}\frac{dy}{\sqrt{1-y^{2}} \sqrt{1-\big{(}1-\frac{u_{+}}{u_{-}}\big{)}y^{2}}}.\]
With the same substitutions, we also get
\[I_{2}= -\frac{1}{2}\int_{u_{+}}^{u_{-}}\frac{udu}{\sqrt{u}\sqrt{\tilde{\Theta}(u)}}\] \[= \frac{1}{2}\int_{u_{-}}^{u_{+}}\frac{udu}{\sqrt{u}\sqrt{a^{2}(u_{+}-u)(u-u_{-})}}\] \[= \frac{1}{|a|}\int_{0}^{1}\frac{\sqrt{u_{-}+(u_{+}-u_{-})y^{2}}}{\sqrt{1-y^{2}}}dy\] \[= \frac{\sqrt{u_{-}}}{|a|}\int_{0}^{1}\frac{\sqrt{1-\big{(}1-\frac{u_{+}}{u_{-}}\big{)}y^{2}}}{\sqrt{1-y^{2}}}dy.\]
Then with the definition of the elliptic integrals in Appendix A we have
\[I_{1}= \frac{1}{|a|\sqrt{u_{-}}}\mathcal{K}\bigg{(}1-\frac{u_{+}}{u_{-}}\bigg{)}, \tag{26}\] \[I_{2}= \frac{\sqrt{u_{-}}}{|a|}\mathcal{E}\bigg{(}1-\frac{u_{+}}{u_{-}}\bigg{)}. \tag{27}\]
Hence, we get
\[\Delta t=\frac{2B(r)}{|a|\sqrt{u_{-}}}\mathcal{K}\bigg{(}1-\frac{u_{+}}{u_{-} }\bigg{)}+2|a|\sqrt{u_{-}}\mathcal{E}\bigg{(}1-\frac{u_{+}}{u_{-}}\bigg{)}. \tag{28}\]
Note that, since \(u_{+}>u_{-}>0\), we have \(1-\frac{u_{+}}{u_{-}}<0\), and hence \(\mathcal{E}(1-u_{+}/u_{-})>\mathcal{K}(1-u_{+}/u_{-})>0\) (see Appendix A). However, the prefactor of \(\mathcal{E}\) does not dominate the absolute value of the (negative) prefactor of \(\mathcal{K}\) for every negative \(r\), as one may check by substituting \(\Phi(r)\) and \(\mathcal{Q}(r)\) from (20) into \(u_{-}\) given by (12).
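As a numerical sanity check of (26)–(27), the following minimal Python sketch compares both closed forms against direct quadrature, using the further substitution \(y=\sin t\), which renders the integrands smooth on \([0,\pi/2]\). The values of \(a\) and \(u_{\pm}\) below are illustrative only (they correspond approximately to \(a=3\), \(M=8\), \(r=-1\)); note that SciPy's `ellipk`/`ellipe` follow the parameter convention of Definition A.1 and accept the negative argument \(1-u_{+}/u_{-}\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk, ellipe

# Illustrative values with 0 < u_- < u_+ < 1 (roughly a = 3, M = 8, r = -1).
a, u_m, u_p = 3.0, 0.191177, 0.727890

# After u = cos^2(theta), u = u_- + (u_+ - u_-) y^2 and y = sin(t),
# both integrands are smooth on [0, pi/2]:
I1_quad = quad(lambda t: 1.0 / np.sqrt(u_m + (u_p - u_m) * np.sin(t)**2),
               0.0, np.pi / 2)[0] / abs(a)
I2_quad = quad(lambda t: np.sqrt(u_m + (u_p - u_m) * np.sin(t)**2),
               0.0, np.pi / 2)[0] / abs(a)

# Closed forms (26)-(27) in terms of complete elliptic integrals:
x = 1.0 - u_p / u_m                      # negative, since u_+ > u_-
I1_closed = ellipk(x) / (abs(a) * np.sqrt(u_m))
I2_closed = np.sqrt(u_m) * ellipe(x) / abs(a)

print(I1_quad, I1_closed)   # agree to quadrature precision
print(I2_quad, I2_closed)
```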
From now on set \(x:=1-u_{+}/u_{-}\). The elliptic integral \(\mathcal{K}\) can be written as a hypergeometric function (see A.5):
\[\mathcal{K}(x)=\frac{\pi}{2}F\bigg{(}\frac{1}{2},\frac{1}{2};1;x\bigg{)}.\]
Using the Pfaff transformation (see A.6)
\[F\bigg{(}\alpha,\beta;\gamma;x\bigg{)}=(1-x)^{-\alpha}F\bigg{(}\alpha,\gamma- \beta;\gamma;\frac{x}{x-1}\bigg{)}, \tag{29}\]
we can decrease the modulus of the prefactor in front of the elliptic integral \(\mathcal{K}\):
\[\mathcal{K}(x)=\frac{\sqrt{u_{-}}}{\sqrt{u_{+}}}\mathcal{K}\bigg{(}\frac{x}{ x-1}\bigg{)}. \tag{30}\]
Hence we get
\[\Delta t=2|a|\sqrt{u_{-}}\mathcal{E}(x)+\frac{2B(r)}{|a|\sqrt{u_{+}}}\mathcal{ K}\bigg{(}\frac{x}{x-1}\bigg{)}. \tag{31}\]
Now we compare the elliptic integrals, after the Pfaff transformation. Since \(x<0\), we have
\[\mathcal{E}(x)>\mathcal{K}\bigg{(}\frac{x}{x-1}\bigg{)}>0, \tag{32}\]
by Rmk. A.2. Next we claim that the prefactors of the elliptic integrals in (31) satisfy
\[2|a|\sqrt{u_{-}}>-\frac{2B(r)}{|a|\sqrt{u_{+}}}. \tag{33}\]
Indeed, both sides of the inequality are positive, so we can square them and use that \(u_{+}u_{-}=-\mathcal{Q}(r)/a^{2}\) by (11) to get an equivalent inequality
\[-\mathcal{Q}(r)a^{2}>B^{2}(r),\]
where \(\mathcal{Q}(r)\) is given by (20) and \(B(r)\) by (23), or equivalently
\[(-12Mr^{2}-4a^{2}M)r>0.\]
This last inequality is clearly satisfied in \(r<0\). Combining (31), (32) and (33), we conclude that \(\Delta t>0\) for all \(r<0\), which shows that the spherical geodesics cannot be closed.
In Fig. 16 we see the plot of \(\Delta t\) given by (31) as a function of the fixed radius \(r\), after substituting \(\Phi(r)\) and \(\mathcal{Q}(r)\) from (20) into \(u_{\pm}\) given by (12).
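For readers who wish to reproduce this plot, a minimal Python sketch is given below; it evaluates \(\Delta t\) from (31), with \(\Phi(r)\), \(\mathcal{Q}(r)\) from (20), \(u_{\pm}\) from (12), \(B(r)\) from (23) and \(R_{2}(a,M)\) from (35), and confirms numerically that \(\Delta t>0\) on \(\big{(}R_{2}(a,M),0\big{)}\). The values of \(a\) and \(M\) are merely the illustrative ones used in the figures.

```python
import numpy as np
from scipy.special import ellipk, ellipe

a, M = 3.0, 8.0                      # illustrative spin and mass

def R2(a, M):                        # negative root of k(r), Eq. (35) with j = 2
    return M * np.cos(np.arccos(1 - 2 * a**2 / M**2) / 3 - 4 * np.pi / 3) + M / 2

def delta_t(r):
    Phi = (r**2 * (r - 3 * M) + a**2 * (M + r)) / (a * (M - r))            # Eq. (20)
    Q = -r**3 * (r**3 - 6 * M * r**2 + 9 * M**2 * r - 4 * a**2 * M) / (a**2 * (M - r)**2)
    w = a**2 - Phi**2 - Q
    dis = w**2 + 4 * a**2 * Q
    u_m = (w - np.sqrt(dis)) / (2 * a**2)                                  # Eq. (12)
    u_p = (w + np.sqrt(dis)) / (2 * a**2)
    B = r**2 * (3 * M + r) / (r - M)                                       # Eq. (23)
    x = 1 - u_p / u_m
    return (2 * abs(a) * np.sqrt(u_m) * ellipe(x)
            + 2 * B / (abs(a) * np.sqrt(u_p)) * ellipk(x / (x - 1)))       # Eq. (31)

rs = np.linspace(0.995 * R2(a, M), -1e-3, 400)
print(np.all(delta_t(rs) > 0))       # True: Delta t > 0 throughout (R_2, 0)
```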
We have ruled out all the possibilities in Fig. 3; therefore there are no closed null geodesics in the Kerr-star spacetime.
## Appendix A Elliptic integrals and hypergeometric functions
**Definition A.1**.: _Let \(\phi\in[-\pi/2,\pi/2]\). The elliptic integral of the first kind is_
\[\mathcal{F}(\phi|k):=\int_{0}^{\sin\phi}\frac{ds}{\sqrt{(1-s^{2})(1-ks^{2})}}.\]
_The complete \((\phi=\pi/2)\) elliptic integral of the first kind is_
\[\mathcal{K}(k):=\mathcal{F}(\pi/2|k)=\int_{0}^{1}\frac{ds}{\sqrt{(1-s^{2})(1- ks^{2})}}.\]
_The elliptic integral of the second kind is_
\[\mathcal{E}(\phi|k):=\int_{0}^{\sin\phi}\sqrt{\frac{1-ks^{2}}{1-s^{2}}}ds.\]
_The complete \((\phi=\pi/2)\) elliptic integral of the second kind is_
\[\mathcal{E}(k):=\mathcal{E}(\pi/2|k)=\int_{0}^{1}\sqrt{\frac{1-ks^{2}}{1-s^{2 }}}ds.\]
_We define also_
\[\mathcal{D}(k):=\int_{0}^{1}\frac{s^{2}ds}{\sqrt{(1-s^{2})(1-ks^{2})}}=\frac{ \mathcal{K}(k)-\mathcal{E}(k)}{k}=-2\frac{\partial\mathcal{E}(k)}{\partial k}.\]
**Remark A.2**.: _Let \(0<z,s<1,\;x<0\)._
\[\sqrt{\frac{1-xs^{2}}{1-s^{2}}}>\frac{1}{\sqrt{1-s^{2}}\sqrt{1-zs^{2}}}\quad\iff\quad(1-zs^{2})(1-xs^{2})>1\quad\implies\quad\mathcal{E}(x)>\mathcal{K}(z).\]
_If \(z=x/(x-1)\), it satisfies \(0<z<1\) and we have_
\[(1-zs^{2})(1-xs^{2})>1\quad\iff\quad x+z<xzs^{2}\quad\iff\quad 1>s^{2},\]
_hence \(\mathcal{E}(x)>\mathcal{K}(z)\)._
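A quick numerical confirmation of the remark, for a few arbitrary negative arguments (a minimal SciPy sketch):

```python
from scipy.special import ellipk, ellipe

for x in (-0.5, -3.0, -50.0):        # arbitrary sample points with x < 0
    z = x / (x - 1.0)                # lies in (0, 1) whenever x < 0
    assert ellipe(x) > ellipk(z) > 0
print("E(x) > K(x/(x-1)) > 0 for all sampled x < 0")
```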
**Definition A.3** ([1]).: _The hypergeometric function \(F(\alpha,\beta;\gamma;x)\) is defined by the series_
\[\sum_{n=0}^{\infty}\frac{(\alpha)_{n}(\beta)_{n}}{(\gamma)_{n}n!}x^{n},\]
_where \((\alpha)_{n}:=\alpha(\alpha+1)\cdot...\cdot(\alpha+n-1)\) for \(n>0\), \((\alpha)_{0}\equiv 1\) (analogous for the others), for \(|x|<1\), and by continuation elsewhere._
**Proposition A.4** (_Euler's integral representation_, see [1]).: _If \(\text{Re }\gamma>\text{Re }\beta>0\), then_
\[F(\alpha,\beta;\gamma;x)=\frac{\Gamma(\gamma)}{\Gamma(\beta)\Gamma(\gamma- \beta)}\int_{0}^{1}t^{\beta-1}(1-t)^{\gamma-\beta-1}(1-xt)^{-\alpha}dt\]
_in the complex \(x\)-plane cut along the real axis from \(1\) to \(+\infty\), where \(\Gamma(x):=\int_{0}^{\infty}t^{x-1}e^{-t}dt\) is Euler's gamma function._
**Proposition A.5** ([1]).: _We can write the complete elliptic integral of the first kind as_
\[\mathcal{K}(x)=\frac{\pi}{2}F\bigg{(}\frac{1}{2},\frac{1}{2};1;x\bigg{)}.\]
Proof.: Use the integral representation of the hypergeometric function given in Prop. A.4, the integral substitution \(t=s^{2}\), with \(\Gamma(\frac{1}{2})=\sqrt{\pi}\), \(\Gamma(1)=1\).
**Proposition A.6** (_"Pfaff's formula"_, see Theorem \(2.2.5\) of [1]).: \[F(\alpha,\beta;\gamma;x)=(1-x)^{-\alpha}F\bigg{(}\alpha,\gamma-\beta;\gamma; \frac{x}{x-1}\bigg{)}.\]
Proof.: Use A.4 and the integral substitution \(t=1-s\).
## Appendix B Spherical null geodesics with \(Q<0\)
**Proposition B.1**.: _In the Kerr-star spacetime \(K^{*}\), null geodesics with constant radial coordinate and \(\mathcal{Q}<0\) exist if and only if \(r\in\big{[}R_{2}(a,M),0\big{)}\), where \(R_{2}(a,M)\) is given by (35)._
Proof.: (\(\Rightarrow\)) Consider (11). We must have \(\tilde{\Theta}(u)\geq 0\), hence \(\mathrm{dis}:=w^{2}+4a^{2}\mathcal{Q}\geq 0\), where \(w:=a^{2}-\Phi^{2}-\mathcal{Q}\). We know that for spherical geodesics \(\Phi\) and \(\mathcal{Q}\) are given by (20). Therefore we have
\[\mathrm{dis}=\frac{16Mr^{2}}{(M-r)^{4}}\Delta(r)(2r^{3}-3Mr^{2}+a^{2}M)=:\frac {16Mr^{2}}{(M-r)^{4}}\Delta(r)k(r)\geq 0. \tag{34}\]
Since \(\Delta(r)>0\) for negative \(r\), the last inequality is equivalent to \(k(r)\geq 0\). The signs of the coefficients of \(k(r)\) are \(+\ -\ +\), hence there are either two or zero positive roots. But the sum of the roots is \(3M/2>0\), hence there are two positive roots. Moreover \(k(0)=a^{2}M>0\) and \(\lim_{r\rightarrow-\infty}k(r)=-\infty\), hence the third root is negative. These roots can be expressed in the following way (see [48])
\[R_{j}(a,M)=M\cos\left[\frac{1}{3}\arccos\left(1-2\frac{a^{2}}{M^{2}}\right)- \frac{2}{3}j\pi\right]+\frac{M}{2}\hskip 28.452756ptj=0,1,2, \tag{35}\]
with \(R_{2}(a,M)<0<R_{1}(a,M)<R_{0}(a,M)\).
Hence (34) can only be satisfied for \(r\in\big{[}R_{2}(a,M),0\big{)}\).
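The roots (35) and the ordering above are easily verified numerically; the sketch below uses the illustrative values \(a=3\), \(M=5\) of Fig. 17:

```python
import numpy as np

a, M = 3.0, 5.0                          # illustrative values, as in Fig. 17

def k(r):                                # k(r) = 2 r^3 - 3 M r^2 + a^2 M, cf. (34)
    return 2 * r**3 - 3 * M * r**2 + a**2 * M

R = [M * np.cos(np.arccos(1 - 2 * a**2 / M**2) / 3 - 2 * j * np.pi / 3) + M / 2
     for j in (0, 1, 2)]                 # Eq. (35)
print(R)                                 # R_2 < 0 < R_1 < R_0
print([k(r) for r in R])                 # all ~ 0 up to round-off
```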
(\(\Leftarrow\)) Assume first that \(r=r_{0}\in\big{(}R_{2}(a,M),0\big{)}\) and fix \(\Phi=\Phi(r_{0})\), \(\mathcal{Q}=\mathcal{Q}(r_{0})\) by (20) so that
\[\mathrm{dis}=w^{2}(r_{0})+4a^{2}\mathcal{Q}(r_{0})=[\mathcal{Q}(r_{0})+(|\Phi (r_{0})|-|a|)^{2}][\mathcal{Q}(r_{0})+(|\Phi(r_{0})|+|a|)^{2}]>0, \tag{36}\]
with \(w(r):=a^{2}-\Phi^{2}(r)-\mathcal{Q}(r)\). Notice that \([\mathcal{Q}(r)+(|\Phi(r)|+|a|)^{2}]\geq[\mathcal{Q}(r)+(\Phi(r)-a)^{2}]=4r^{2 }\Delta(r)/(M-r)^{2}>0\) since \(\Delta(r)>0\) for \(r<0\). Therefore (36) implies that \([\mathcal{Q}(r_{0})+(|\Phi(r_{0})|-|a|)^{2}]>0\). Since \(r_{0}\in\big{(}R_{2}(a,M),0\big{)}\), we have \(\mathcal{Q}=\mathcal{Q}(r_{0})<0\) by (20) and \(w(r_{0})>0\). Indeed the inequality takes the form
\[w(r_{0})=a^{2}-\Phi^{2}(r_{0})-\mathcal{Q}(r_{0})=-2r_{0}\frac{r_{0}^{3}-3M^{ 2}r_{0}+2a^{2}M}{(M-r_{0})^{2}}>0.\]
This inequality is automatically satisfied for \(r_{0}\in(R_{2}(a,M),0)\), since \(r_{0}^{3}-3M^{2}r_{0}\geq 0\) for \(r_{0}\in[-\sqrt{3}M,0)\) and \(-\sqrt{3}M<-M/2\leq R_{2}(a,M)\) by (35).
Using (12) we have then
\[0<u_{-}:=\frac{w(r_{0})-\sqrt{w^{2}(r_{0})+4a^{2}\mathcal{Q}(r_{0})}}{2a^{2} }<\frac{w(r_{0})}{2a^{2}}<\frac{a^{2}-\Phi(r_{0})^{2}+(|\Phi(r_{0})|-|a|)^{2} }{2a^{2}}=1-\frac{|\Phi(r_{0})|}{|a|}\leq 1,\]
so that we can define \(\theta_{2}:=\arccos(\sqrt{u_{-}})\in(0,\pi)\). We now fix the initial point \(p_{0}=(0,r_{0},\theta_{0},0)\in K^{*}\) with \(\theta_{0}:=\theta_{2}\), \(r_{0}\in(R_{2}(a,M),0)\) and the following set of constants of motion (\(q=0,E=1,L=\Phi(r_{0}),K=\mathcal{Q}(r_{0})+(\Phi(r_{0})-a)^{2}\)) by (20). Then \(\Theta(\theta_{0})=0\) by (11) and \(\mathcal{R}(r_{0})=0\) by (21).
By Prop. 4.2.6 in [35], there exists a null geodesic starting at \(p_{0}\) with that particular set of constants of motion.
Figure 17. Plot of \(k(r)\) for \(a=3,M=5\).
Assume now \(r=R_{2}(a,M)\). Choose a converging sequence of initial points \(p_{n}=(0,r_{n},\theta_{n},0)\in K^{*}\) with \((R_{2}(a,M),0)\ni r_{n}\to R_{2}(a,M)\), \(\theta_{n}:=\arccos(\sqrt{u_{-}(r_{n})})\in(0,\pi)\) with
\[u_{-}(r):=\frac{w(r)-\sqrt{w^{2}(r)+4a^{2}\mathcal{Q}(r)}}{2a^{2}},\]
and consider the corresponding geodesic \(\gamma_{n}\) constructed above. The constants of motion of \(\gamma_{n}\) converge and hence so do the tangent vectors \(\gamma_{n}^{\prime}(p_{n})\) by (5). By continuity, there exists a geodesic \(\gamma=\lim\gamma_{n}\) with \(\mathcal{Q}=\mathcal{Q}[R_{2}(a,M)]<0\) by (20), starting at the point \(p_{0}=(0,R_{2}(a,M),\arccos\left(\sqrt{u_{-}[R_{2}(a,M)]}\right),0)\) with \((q=0,E=1,L=\Phi[R_{2}(a,M)],K=\mathcal{Q}[R_{2}(a,M)]+[\Phi(R_{2}(a,M))-a]^{2})\) as constants of motion. The limit geodesic has constant \(\theta\)-coordinate and falls in the class described in Prop. 5.6.
**Remark B.2**.: _Note that \(R_{2}(a,M)\geq-M/2\), so \(B(r)<0\) by (23) for all geodesics of this type._
|
2307.16194 | CMB Polarization by the Asymmetric Template of Scalar Perturbations | Inspired by a dipole asymmetric template for the CMB temperature map in the
primordial scalar fluctuations observed by Planck at a large scale, we examine
the contribution of a similar template for power asymmetry in modifying the
linear polarization pattern of CMB. Replacing un-modulated temperature
fluctuation with dipolar modulated one in time evolution equations somehow
breaks linear perturbation in the various components of the CMB map. This
non-linearity allows deflecting CMB polarization in patterns that contain
divergence-free components. The explicit expressions for the angular power
spectra of the electric and magnetic-type parities of linear polarization are
derived in the form of the line of sight integral solutions. Our results
demonstrate that the electric-type polarization is modified and the
magnetic-type polarization would be produced. Such imprints depend on the
linear and square of the asymmetric amplitude for $E$- and $B$-modes power
spectra, respectively. For the observed dipole template, the value of $B$
polarization spectrum at the large scale ($\ell\lesssim 10$) is almost
equivalent to the power spectrum obtained from Compton scattering in the
presence of tensor perturbation with tensor to scalar ratio about
$r\simeq0.005$. | Jafar Khodagholizadeh, Rohoollah Mohammadi, S. M. S. Movahed | 2023-07-30T10:21:14Z | http://arxiv.org/abs/2307.16194v1 | # CMB Polarization by the Asymmetric Template of Scalar Perturbations
###### Abstract
Inspired by a dipole asymmetric template for the CMB temperature map in the primordial scalar fluctuations observed by Planck at a large scale, we examine the contribution of a similar template for power asymmetry in modifying the linear polarization pattern of CMB. Replacing un-modulated temperature fluctuation with dipolar modulated one in time evolution equations somehow breaks linear perturbation in the various components of the CMB map. This non-linearity allows deflecting CMB polarization in patterns that contain divergence-free components. The explicit expressions for the angular power spectra of the electric and magnetic-type parities of linear polarization are derived in the form of the line of sight integral solutions. Our results demonstrate that the electric-type polarization is modified and the magnetic-type polarization would be produced. Such imprints depend on the linear and square of the asymmetric amplitude for \(E\)- and \(B\)-modes power spectra, respectively. For the observed dipole template, the value of \(B\) polarization spectrum at the large scale (\(\ell\lesssim 10\)) is almost equivalent to the power spectrum obtained from Compton scattering in the presence of tensor perturbation with tensor to scalar ratio about \(r\simeq 0.005\).
## I Introduction
In standard cosmological models, the homogeneity and isotropy of the universe are well-quantified assumptions, and they essentially lead to CMB fluctuations behaving as an isotropic random field once the secondary anisotropies are properly removed. These assumptions have a noticeable impact not only on computational pipelines but also rule out scenarios that give rise to anisotropic primordial fluctuations [1; 2; 3; 4; 5; 6]. Due to the vital influence of homogeneity and isotropy, many studies have focused on examining this property in the small- and large-scale fluctuations of the CMB map, in both intensity and polarization [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21, and references therein]. Although previous extensive analyses indicated that the CMB field is consistent with the Gaussian prediction of the \(\Lambda\)CDM scenario with statistically isotropic behavior [17], the improving accuracy and precision of observations of the CMB stochastic field opened new room for footprints of anomalies such as power asymmetry and deviations from statistical isotropy over a range of multipoles [12; 13; 14; 15; 16; 17; 18; 22; 23; 24; 25; 26; 27; 28, and references therein]. Among these anomalies, the hemispherical asymmetry parameterized by a dipolar modulation [24; 29] has been substantially investigated by different methods [15; 30; 31]. A well-known position-space template for the dipolar modulation of the CMB temperature at large scales reads:
\[\tilde{\Delta}_{T}(\hat{n})=\Delta_{T}(\hat{n})[1+A_{T}\hat{P}.\hat{n}] \tag{1}\]
where \(\Delta_{T}(\hat{n})\equiv\delta T(\hat{n})/\langle T\rangle\) is the un-modulated temperature fluctuation in an arbitrary direction on the sky, \(\hat{n}\), and \(\hat{P}\) is the direction of the dipolar modulation. The best-fit values for the amplitude of the temperature dipole modulation and the corresponding preferred direction have been reported as \(A_{T}=0.072^{+0.031}_{-0.015}\) with \(\hat{P}=(218,-19)\pm 29\) for the low multipoles \(\ell\in[2-64]\), and as \(A_{T}=0.023^{+0.008}_{-0.004}\) with \(\hat{P}=(220,-5)\pm 25\) for \(\ell\in[2-220]\), from the joint analysis of \(TT,TE,EE\) and for the Commander observed data, respectively [17]. In addition, the power asymmetry in the CMB polarization map has been examined in [21; 27; 32; 33; 34]. Many phenomenological models have been proposed to elucidate the observed asymmetry, whose significance is at the \(\sim 3\sigma\) level, such as an asymmetry in the initial conditions of perturbations within different scenarios [35; 36; 37; 38; 39; 40; 41; 42], super-horizon perturbations [43; 44; 45; 29], and dipolar asymmetry due to cosmic strings [46]. The primordial dipole asymmetries generated from different types of models could be a source of CMB polarizations [32].
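For concreteness, the modulation of Eq. (1) can be applied to a simulated map in a few lines. The sketch below is only illustrative: it assumes the `healpy` package, a toy input power spectrum (not a fit to any data), and the low-\(\ell\) best-fit amplitude and direction quoted above.

```python
import numpy as np
import healpy as hp

nside = 64
ell = np.arange(2 * nside)
cl = np.zeros(ell.size)
cl[2:] = 1.0 / (ell[2:] * (ell[2:] + 1.0))          # toy spectrum, illustrative only
delta_T = hp.synfast(cl, nside)                     # un-modulated fluctuation map

A_T = 0.072                                         # best-fit low-ell amplitude [17]
l_gal, b_gal = np.radians(218.0), np.radians(-19.0)
P_hat = hp.ang2vec(np.pi / 2 - b_gal, l_gal)        # preferred direction (l, b)
n_hat = np.array(hp.pix2vec(nside, np.arange(hp.nside2npix(nside))))

delta_T_mod = delta_T * (1.0 + A_T * P_hat @ n_hat) # Eq. (1), pixel by pixel
```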
In the standard scenario of cosmology, the contribution of primordial scalar perturbations to the generation of temperature anisotropies and linear polarization (\(E\)-mode) is dominant compared to other kinds of perturbations, while the \(B\)-mode polarization cannot be generated by Compton scattering without considering gravitational wave perturbations or non-linear density perturbations [47; 48; 49]. However, any interaction term which violates Lorentz symmetry could be a nontrivial source of \(B\)-mode polarization. More precisely, even with only asymmetric scalar perturbations, we expect an additional term with respect to the symmetric case, and it leaves a footprint in the \(B\)-mode polarization. As an illustration, taking into account photon-neutrino forward scattering without tensor perturbations yields a \(B\)-mode polarization which can be significant for \(50<\ell<200\) [50; 51]. Majorana dark matter can generate the \(B\)-mode in the presence of primordial scalar perturbations [52], and it also contributes to cosmic birefringence [53]. The non-commutative spacetime framework can also generate the magnetic-type polarization of the CMB radiation [54]. Polarized Compton scattering is another feasible mechanism producing a magnetic-like pattern in the linear polarization of the CMB radiation [55]. In the presence of a homogeneous magnetic field, Faraday rotation produces the CMB \(B\)-mode [56, 57, 58]. Also, any condition which violates Lorentz symmetry in the matter or radiation distributions (like a tensor perturbation of the matter or non-linear perturbations of radiation and matter) can generate \(B\)-mode linear polarization even with Thomson scattering alone [47, 48, 49]. Accordingly, by replacing the un-modulated temperature fluctuation \(\Delta_{T}(\hat{n})\) by the dipolar modulated one \(\tilde{\Delta}_{T}(\hat{n})\) in the time evolution equations, we somehow break the linearity of the perturbations in the radiation, which plays a role similar to non-linear perturbations in radiation. The evolution of cosmological perturbations that break the linear regime, such as our considered dipole asymmetry template, is characterized by mode-mixing; consequently, not only are the different Fourier modes of the temperature perturbations influenced by the dipole mode, but the non-linear perturbations in the CMB intensity (in the presence of Thomson scattering) can also act as a source that deflects the CMB linear polarization into patterns containing divergence-free components.
Motivated by the detection of the asymmetry anomaly in the CMB data, modeled by a viable template, Eq. (1), and by the various sources of magnetic-type CMB polarization, we would like to examine the influence on the CMB polarization of taking into account a similar template for the initial scalar perturbations in Fourier space. We will show that the imprint of such an asymmetric model on the magnetic-type CMB polarization for small \(\ell\) is almost equivalent to the \(B\)-mode produced by primordial tensor perturbations, and that it is proportional to the square of the asymmetry amplitude. The rest of this paper is organized as follows: considering the dipole asymmetry in the initial perturbations, we revisit the Boltzmann equations of the CMB temperature and polarization anisotropies in an exact approach in section II. Section III is devoted to calculating the \(E\)-mode and \(B\)-mode power spectra. We also compare our results with those achieved by considering the primordial tensor perturbation. Summary and conclusions are given in section IV. To make our analysis clearer, we derive some important equations in the Appendices.
## II Methodology
In the standard scenario, symmetric scalar perturbations at the linear order cannot generate the \(B\)-mode [59; 60], while incorporating an asymmetric part in the scalar perturbations may produce nontrivial terms in the evolution of the Stokes parameters, which effectively act at the non-linear order of interactions. In this section, we rely on the well-known asymmetry template for the scalar fluctuations to examine the generation of CMB polarization. We start from scattering theory and calculate the time evolution of the quantum number density of the photons, treating the local interaction as a perturbation term in the Hamiltonian, \(H=H_{0}+H_{I}\). We pursue the second-quantized formalism with creation and annihilation operators for the photons and electrons obeying the canonical (anti)commutation relations:
\[[a_{s}(p),a^{\dagger}_{s^{\prime}}(p^{\prime})]=(2\pi)^{3}2p^{0}\delta^{3}(\mathbf{p}-\mathbf{p}^{\prime})\delta_{ss^{\prime}} \tag{2}\] \[\{b_{r}(q),b^{\dagger}_{r^{\prime}}(q^{\prime})\}=(2\pi)^{3}\frac{q^{0}}{m}\delta^{3}(\mathbf{q}-\mathbf{q}^{\prime})\delta_{rr^{\prime}}\]
where \(s\) and \(s^{\prime}\) labels denote the photon polarization while the \(r\) and \(r^{\prime}\) labels refer to the electron spin. The bold momentum variables represent three-momenta while plain momentum variables represent four-momenta. The photon density matrix incorporating the linear and circular polarizations reads as [59]:
\[\hat{\rho}=\int\frac{d^{3}k}{(2\pi)^{3}}\rho_{ij}(k)D_{ij}(\vec{k}) \tag{3}\]
here \(D_{ij}(\vec{k})\equiv a^{\dagger}_{i}(\vec{k})a_{j}(\vec{k})\) is the photon number operator written in Fourier space and \(\rho_{ij}\) is the density matrix. The expectation value of \(D\) is proportional to the density matrix. A direct calculation shows:
\[\langle D_{ij}\rangle=tr[\hat{\rho}D_{ij}]=\int\frac{d^{3}p}{(2\pi)^{3}}\langle p |\hat{\rho}D_{ij}(\mathbf{k})|p\rangle=(2\pi)^{3}\delta^{(3)}(0)2k^{0}\rho_{ ij}(\mathbf{k}) \tag{4}\]
The right-hand side of the above equation follows from successive applications of the commutation relations in Eq. (2); the infinite delta function results from the infinite quantization volume required with continuous momentum variables, and cancels out of all physical results. According to the Stokes parameters, the photon density matrix also becomes:
\[\rho_{ij}\equiv\frac{1}{2}\begin{pmatrix}T+Q&U-iV\\ U+iV&T-Q\end{pmatrix} \tag{5}\]
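A minimal numerical illustration of Eq. (5) and its inversion (the Stokes values below are arbitrary toy numbers):

```python
import numpy as np

# Toy Stokes parameters packed into the 2x2 density matrix of Eq. (5).
T, Q, U, V = 1.0, 0.1, -0.05, 0.02
rho = 0.5 * np.array([[T + Q, U - 1j * V],
                      [U + 1j * V, T - Q]])

# Inversion: T = tr(rho), Q = rho_11 - rho_22,
# U = rho_12 + rho_21, V = i (rho_12 - rho_21).
print(np.trace(rho).real,
      (rho[0, 0] - rho[1, 1]).real,
      (rho[0, 1] + rho[1, 0]).real,
      (1j * (rho[0, 1] - rho[1, 0])).real)
```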
The evolution of the number density operator in the presence of the perturbation term in the Hamiltonian, assuming free fields, is given by the quantum Boltzmann equation as [59]
\[(2\pi)^{3}\delta^{3}(0)2k^{0}\frac{d}{dt}\rho_{ij}(\vec{k})=i\langle[H^{0}_{I}(t);D^{0}_{ij}(\vec{k})]\rangle-\frac{1}{2}\int_{-\infty}^{+\infty}dt^{\prime}\langle[H^{0}_{I}(t^{\prime});[H^{0}_{I}(t);D^{0}_{ij}(\vec{k})]]\rangle \tag{6}\]
Now, we can calculate the first-order interaction Hamiltonian of QED (\(H^{0}_{I}(t)\)), as a function of the free fields, for photon-electron scattering as (see Fig. (1)):
\[H^{0}_{I}(t) = \int d{\bf q}d{\bf q^{\prime}}d{\bf p}d{\bf p^{\prime}}(2\pi)^{3}\delta^{3}({\bf q^{\prime}}+{\bf p^{\prime}}-{\bf q}-{\bf p})\exp[it(q^{\prime 0}+p^{\prime 0}-q^{0}-p^{0})]\] \[\times [b^{\dagger}_{r^{\prime}}(q^{\prime})a^{\dagger}_{s^{\prime}}(p^{\prime})(M_{1}+M_{2})a_{s}(p)b_{r}(q)],\]
where \({\bf p}\) and \({\bf q}\) are the incoming photon and electron momentum vectors, respectively, with \(|{\bf p}|=p^{0}\) and \(|{\bf q}|=q^{0}\); primed quantities refer to the outgoing photon and electron. Also, \(M_{1}\) and \(M_{2}\) are the scattering amplitudes, given below:
\[M_{1}(q^{\prime}r^{\prime},p^{\prime}s^{\prime},qr,ps)\equiv e^{2}\frac{\bar{u}_{r^{\prime}}(q^{\prime})\slashed{\epsilon}_{s^{\prime}}(p^{\prime})(\slashed{p}+\slashed{q}+m)\slashed{\epsilon}_{s}(p)u_{r}(q)}{2p.q}\] \[M_{2}(q^{\prime}r^{\prime},p^{\prime}s^{\prime},qr,ps)\equiv e^{2}\frac{\bar{u}_{r^{\prime}}(q^{\prime})\slashed{\epsilon}_{s}(p)(\slashed{q}-\slashed{p}^{\prime}+m)\slashed{\epsilon}_{s^{\prime}}(p^{\prime})u_{r}(q)}{2p^{\prime}.q} \tag{8}\]
with the abbreviations
\[d{\bf q}\equiv\frac{d^{3}{\bf q}}{(2\pi)^{3}}\frac{m}{q^{0}}\ \,\ \ d{\bf p}\equiv \frac{d^{3}{\bf p}}{(2\pi)^{3}p^{0}} \tag{9}\]
for electrons and photons, respectively.
Figure 1: Full detail of the Compton scattering Feynman diagrams in the presence of \(H^{0}_{I}(t)\).
Here \(u_{r}\) is a spinor solution to the Dirac equation with spin index \(r=1,2\), and \(\epsilon_{s}\) are the photon polarization four-vectors with index \(s=1,2\). Also, \(r,r^{\prime}\) are the incoming and outgoing electron spin indices, and \(s,s^{\prime}\) are the analogous indices for the photons. In addition, the first and second terms on the right-hand side of Eq. (6) are the forward-scattering and the higher-order collision terms, respectively. We proceed with our calculation by writing the polarization sums explicitly and keeping terms up to first order in the scattering. After a tedious but straightforward calculation, we have [59]:
\[\frac{d}{dt}\rho_{ij}(\vec{x},\vec{k}) = \frac{e^{4}n_{e}(\vec{x})}{16\pi m^{2}k}\int_{0}^{\infty}pdp\int\frac{d\Omega_{p}}{4\pi}\left[\delta(k-p)+(\vec{k}-\vec{p}).\vec{v}(\vec{x})\frac{\partial\delta(k-p)}{\partial p}\right] \tag{10}\] \[\times \big{[}-2\left(\frac{p}{k}+\frac{k}{p}\right)\rho_{ij}(\vec{x},\vec{k})+4\hat{p}.\hat{\varepsilon}_{i}(\vec{k})\,\hat{p}.\hat{\varepsilon}_{1}(\vec{k})\,\rho_{1j}(\vec{x},\vec{k})+4\hat{p}.\hat{\varepsilon}_{i}(\vec{k})\,\hat{p}.\hat{\varepsilon}_{2}(\vec{k})\,\rho_{2j}(\vec{x},\vec{k})\] \[+ \left(\frac{p}{k}+\frac{k}{p}-2\right)\delta_{ij}(\rho_{11}(\vec{x},\vec{p})-\rho_{22}(\vec{x},\vec{p}))\] \[+ \left(\frac{p}{k}+\frac{k}{p}\right)\left(\varepsilon_{i}(\vec{k}).\varepsilon_{1}(\vec{p})\varepsilon_{j}(\vec{k}).\varepsilon_{2}(\vec{p})-\varepsilon_{i}(\vec{k}).\varepsilon_{2}(\vec{p})\varepsilon_{j}(\vec{k}).\varepsilon_{1}(\vec{p})\right)(\rho_{12}(\vec{x},\vec{p})-\rho_{21}(\vec{x},\vec{p}))\] \[+ 2\left(\varepsilon_{i}(\vec{k}).\varepsilon_{1}(\vec{p})\varepsilon_{j}(\vec{k}).\varepsilon_{2}(\vec{p})+\varepsilon_{i}(\vec{k}).\varepsilon_{2}(\vec{p})\varepsilon_{j}(\vec{k}).\varepsilon_{1}(\vec{p})\right)(\rho_{12}(\vec{x},\vec{p})+\rho_{21}(\vec{x},\vec{p}))\] \[+ 4\varepsilon_{i}(\vec{k}).\varepsilon_{1}(\vec{p})\varepsilon_{j}(\vec{k}).\varepsilon_{1}(\vec{p})\rho_{11}(\vec{x},\vec{p})+4\varepsilon_{i}(\vec{k}).\varepsilon_{2}(\vec{p})\varepsilon_{j}(\vec{k}).\varepsilon_{2}(\vec{p})\rho_{22}(\vec{x},\vec{p})\big{]},\]
here \(\vec{v}(\vec{x})\) is the electron bulk velocity. Using Eqs. (3), (5) and (10), the collisional term of the Boltzmann equation for the polarization parts can be written as:
\[\dot{\Delta}_{Q} = \sigma_{T}\int\frac{d\Omega_{p}}{4\pi}[F_{QT}^{p}(\Omega_{p})\Delta_{T}(\vec{p})+F_{QT}^{k}(\Omega_{p})\Delta_{T}(\vec{k})+F_{QQ}^{p}(\Omega_{p})\Delta_{Q}(\vec{p})+F_{QU}^{p}(\Omega_{p})\Delta_{U}(\vec{p})],\] \[\dot{\Delta}_{U} = \sigma_{T}\int\frac{d\Omega_{p}}{4\pi}[F_{UT}^{p}(\Omega_{p})\Delta_{T}(\vec{p})+F_{UT}^{k}(\Omega_{p})\Delta_{T}(\vec{k})+F_{UQ}^{p}(\Omega_{p})\Delta_{Q}(\vec{p})+F_{UU}^{p}(\Omega_{p})\Delta_{U}(\vec{p})], \tag{11}\]
where \(\dot{\Delta}_{Q}\equiv\frac{d}{d\tau}\,\Delta_{Q}\) and \(\dot{\Delta}_{U}\equiv\frac{d}{d\tau}\,\Delta_{U}\). To avoid cluttering the presentation, we defer the detailed definitions of these variables to Appendix A. Now, we rely on the well-known asymmetric template for the scalar fluctuations to examine the CMB polarizations. To this end, we start with the asymmetric template for scalar perturbations in Fourier space, which is used in the hierarchical Boltzmann equation as:
\[\tilde{\Delta}_{T}(\vec{k},K,\tau)=\Delta_{T}(\vec{k},K,\tau)(1+A_{T}\,\hat{P}.\hat{k}) \tag{12}\]
where \(\Delta_{T}(\vec{k},K,\tau)\) is the temperature fluctuation contrast in the direction \(\hat{n}=\vec{k}/k\) along the line of sight, and \(\hat{P}\) is the assumed direction of the dipole asymmetry. Also, \(\tau\) and \(K\) are the conformal time and the magnitude of the comoving Fourier mode, respectively. To take into account the contribution of the dipole asymmetry (Eq. (12)), \(\Delta_{T}(\vec{p},K)\) and \(\Delta_{T}(\vec{k},K)\) in Eq. (11) should be replaced by \(\tilde{\Delta}_{T}(\vec{p},K)=\Delta_{T}(\vec{p},K)(1+A_{T}\hat{P}\cdot\hat{p})\) and \(\tilde{\Delta}_{T}(\vec{k},K)=\Delta_{T}(\vec{k},K)(1+A_{T}\hat{P}.\hat{k})\), respectively. Therefore, the evolution equation for the Stokes parameters \(Q^{(S)}\) and \(U^{(S)}\) would be modified in the presence of the dipole asymmetric scalar perturbation (see Appendix A for more details):
\[\frac{d}{d\tau}(\tilde{Q}^{(\rm S)}\pm i\tilde{U}^{(\rm S)})+iK\mu(\tilde{Q}^{(\rm S)}\pm i\tilde{U}^{(\rm S)}) = -\dot{\kappa}[(\tilde{Q}^{(\rm S)}\pm i\tilde{U}^{(\rm S)})+\frac{1}{2}[1-P_{2}(\mu)]\Pi-\Pi^{\pm(\rm S)}]; \tag{13}\]
where \(\Pi\equiv\Delta_{T2}^{(\rm S)}+\Delta_{P2}^{(\rm S)}+\Delta_{P0}^{(\rm S)}\) and \((S)\) denotes the scalar perturbations of matter. The differential optical depth for Thomson scattering is denoted by \(\dot{\kappa}=an_{e}x_{e}\sigma_{T}\), where \(a(\tau)\) is the scale factor
normalized to unity at the present time, as a function of conformal time (\(\tau\)). The electron density and the ionization fraction are denoted by \(n_{e}\) and \(x_{e}\), respectively. Also, \(\sigma_{T}\) is the Thomson cross-section. The above equation, without the last term on the right side, i.e., \(\Pi^{\pm(\mathrm{S})}(k,K,\tau)\), is the general Boltzmann equation for the linear polarization of the CMB, which cannot generate the \(B\)-mode, as shown in the context of the standard scenario [59; 60; 61]. Note that replacing \(\tilde{\Delta}_{T}(\vec{k},K,\tau)\) in the Boltzmann equation somehow breaks the linear regime of the radiation perturbation, and this non-linearity in the CMB temperature fluctuations generates the nontrivial terms (the last terms on the right-hand side of Eq. (13)) which can deflect the CMB linear polarization into patterns containing divergence-free components. The mentioned nontrivial terms are given by:
\[\Pi^{\pm(\mathrm{S})}(k,K,\tau)=A_{T}\int\frac{d\Omega_{p}}{4\pi}\,(\hat{p} \cdot\hat{P})\,\Delta_{T}(\vec{p},K,\tau)\,(F^{p}_{QT}\pm i\,F^{p}_{UT}), \tag{14}\]
also, by using \(\hat{p}\cdot\hat{P}=\frac{4\pi}{3}\sum_{m=-1}^{1}\,Y^{*}_{1,m}(\hat{P})Y_{1,m}(\hat{p})\), Eq. (14) reads:
\[\Pi^{\pm(\mathrm{S})}(k,K,\tau)=A_{T}\,\sum_{m=-3}^{3}\,\Delta_{T,3m}(k,K, \tau)\,F^{\pm}_{3m}, \tag{15}\]
The \(F^{p}_{QT}\), \(F^{p}_{UT}\) and \(F^{\pm}_{3m}\) are defined by Eq. (25) and Eq. (30) presented in Appendix A, and \(\Delta_{T,3m}(k,K,\tau)\equiv\int d\Omega_{k}Y^{*}_{3m}(\hat{k})\,\Delta_{T}(\vec{k},K,\tau)\). The \(\hat{p}\) and \(\hat{n}\) are the directions of photon propagation and of the line of sight, respectively. Considering \(\Delta^{\pm(\mathrm{S})}_{P}=Q^{(\mathrm{S})}\pm iU^{(\mathrm{S})}\) for the polarization anisotropy in the context of the dipole asymmetry template for the scalar perturbations, we can separate \(\Delta^{\pm(\mathrm{S})}_{P}\) into symmetric and asymmetric parts as:
\[\tilde{\Delta}^{\pm(\mathrm{S})}_{P}(\hat{n})\;=\;\Delta^{\pm(\mathrm{S})}_{P }(\hat{n})\,+\,\Delta^{\pm(\mathrm{S})}_{P}(\hat{n})\Big{|}_{\mathrm{Asymmetry}}, \tag{16}\]
\[\Delta^{\pm(\mathrm{S})}_{P}(\hat{n})\Big{|}_{\mathrm{Asymmetry}}=\int d^{3} \vec{K}\xi(\vec{K})e^{\mp 2i\phi_{K,n}}\Delta^{\pm(\mathrm{S})}_{P}(K,\mu, \tau_{0})\Big{|}_{\mathrm{Asymmetry}} \tag{17}\]
where
\[\Delta^{\pm(\mathrm{S})}_{P}(K,\mu,\tau_{0})\Big{|}_{\mathrm{Asymmetry}}\;= \;\int_{0}^{\tau_{0}}d\tau\,g(\tau)\,e^{ix\mu}\,\,\Pi^{\pm(\mathrm{S})}(K, \tau). \tag{18}\]
where \(\mu\equiv|\hat{K}.\hat{k}|\) and \(g(\tau)=\dot{\kappa}\exp(\kappa)\) is the visibility function, written in terms of the optical depth \(\kappa\). The differential optical depth for Thomson scattering is again \(\dot{\kappa}=an_{e}x_{e}\sigma_{T}\), and \(x=K(\tau_{0}-\tau)\). The angle \(\phi_{K,n}\) rotates \(\vec{K}\) and \(\hat{n}\) to a fixed frame in the sky, and \(\xi(\vec{K})\) is a random variable used to characterize the initial amplitude of the \(\vec{K}\)-mode, with the statistical property \(\langle\xi^{*}(\vec{K}_{1})\xi(\vec{K}_{2})\rangle=\mathcal{P}_{\xi}(K)\delta_{D}(\vec{K}_{1}-\vec{K}_{2})\), where \(\mathcal{P}_{\xi}(K)\) is the initial power spectrum, which depends only on the magnitude \(K\) of the wave vector \(\vec{K}\), and \(\delta_{D}\) is the Dirac delta function.
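To build intuition for the quantities entering Eq. (18), the sketch below constructs a toy visibility function. It assumes the common convention \(g(\tau)=\dot{\kappa}e^{-\kappa(\tau)}\), with \(\kappa(\tau)\) the optical depth integrated from \(\tau\) to \(\tau_{0}\); the ionization history is entirely illustrative.

```python
import numpy as np

tau = np.linspace(1e-4, 1.0, 20000)                          # toy conformal-time grid
kappa_dot = 1e3 * np.exp(-((tau - 0.02) / 0.01)**2) + 1e-2   # toy a n_e x_e sigma_T
dtau = np.gradient(tau)
kappa = np.cumsum((kappa_dot * dtau)[::-1])[::-1]            # integrate from tau to tau_0
g = kappa_dot * np.exp(-kappa)                               # visibility function

print((g * dtau).sum())   # ~1: g is sharply peaked at last scattering
```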
## III Dipole asymmetry impact on the E- and B-modes
In this section, by using our methodology presented in the previous section, we turn to compute the \(E\)- and \(B\)-modes power spectra modified by a dipole asymmetric template for scalar perturbations. One can expand \(\tilde{\Delta}_{P}^{\pm(\mathrm{S})}(\hat{n})\) (Eq. (16)) in the appropriate spin-weighted basis, \(\tilde{a}_{\circ,\ell m}\), resulting in:
\[\tilde{a}_{E,\ell m} = \left[\frac{(\ell+2)!}{(\ell-2)!}\right]^{-\frac{1}{2}}\int d \Omega Y_{\ell m}^{*}[\bar{\eth}^{2}\tilde{\Delta}_{P}^{+(\mathrm{S})}(\hat{n })+\eth^{2}\tilde{\Delta}_{P}^{-(\mathrm{S})}(\hat{n})],\] \[\tilde{a}_{B,\ell m} = \left[\frac{(\ell+2)!}{(\ell-2)!}\right]^{-\frac{1}{2}}\int d \Omega Y_{\ell m}^{*}[\bar{\eth}^{2}\tilde{\Delta}_{P}^{+(\mathrm{S})}(\hat{n })-\eth^{2}\tilde{\Delta}_{P}^{-(\mathrm{S})}(\hat{n})], \tag{19}\]
Separating into the symmetric and asymmetric parts leads to \(\tilde{a}_{\circ,\ell m}=a_{\circ,\ell m}+a_{\circ,\ell m}\Big{|}_{\mathrm{Asymmetry}}\), where \(\circ\) stands for either the \(E\)- or the \(B\)-mode. The asymmetric part reads:
\[a_{\circ,\ell m}\Big{|}_{\mathrm{Asymmetry}}=\left[\frac{(\ell+2)!}{(\ell-2)! }\right]^{-\frac{1}{2}}\int d\Omega Y_{\ell m}^{*}\left[\bar{\eth}^{2}\Delta_{ P}^{+(\mathrm{S})}(\hat{n})\Big{|}_{\mathrm{Asymmetry}}\pm\eth^{2}\Delta_{P}^{-( \mathrm{S})}(\hat{n})\Big{|}_{\mathrm{Asymmetry}}\right], \tag{20}\]
the "\(\pm\)" is replaced by "\(+\)" for \(E\)-mode and by "\(-\)" for \(B\)-mode. The associated power spectra are given by \(\tilde{C}_{\circ,\ell}^{(\mathrm{S})}=\frac{1}{2\ell+1}\sum_{m=-\ell}^{\ell} \langle\tilde{a}_{\circ,\ell m}^{*}\,\tilde{a}_{\circ,\ell m}\rangle\). Finally the leading order terms of the \(E\)- and \(B\)-modes power spectra in the presence of scalar perturbations with a dipole asymmetry are:
\[C_{EE,\ell}^{(\mathrm{S})}\Big{|}_{\mathrm{Asymmetry}}=\frac{8 \pi}{2\ell+1}\frac{(\ell-2)!}{(\ell+2)!}\int K^{2}dK\mathcal{P}_{\xi}(K)\] \[\times\sum_{m=-\ell}^{\ell}\left[\int d\Omega Y_{\ell m}^{*}(\hat {n})\int_{0}^{\tau_{0}}d\tau\,g(\tau)\,[\bar{\eth}^{2}\Pi^{+(\mathrm{S})}+ \eth^{2}\Pi^{-(\mathrm{S})}]e^{ix\mu}\right]^{*} \tag{21}\] \[\left[\int d\Omega Y_{\ell m}^{*}(\hat{n})\int_{0}^{\tau_{0}}d \tau\,g(\tau)\,\Pi\,\partial_{\mu}^{2}[(1-\mu^{2})(1-P_{2})e^{ix\mu}]\right]\]
and
\[C_{BB,\ell}^{(\mathrm{S})}\Big{|}_{\mathrm{Asymmetry}} = \frac{(4\pi)}{2\ell+1}\frac{(\ell-2)!}{(\ell+2)!}\int K^{2}dK \mathcal{P}_{\xi}(K)\sum_{m}\Big{|}\int d\Omega Y_{\ell m}^{*}(\hat{n})\int_{0 }^{\tau_{0}}d\tau\,g(\tau)\,[\bar{\eth}^{2}\Pi^{+(\mathrm{S})}-\eth^{2}\Pi^{- (\mathrm{S})}]e^{ix\mu}\Big{|}^{2}\]
here \(g(\tau)=\dot{\kappa}e^{\kappa}\) is the visibility function. For symmetric scalar perturbations there is no source generating \(B\)-mode polarization, as expected, while in our case, due to the asymmetric template for the scalar perturbations, the \(B\)-mode is generated even without any tensor perturbations. After some mathematical manipulations, Eq. (22) becomes:
\[C_{BB,\ell}^{(\mathrm{S})}\Big{|}_{\mathrm{Asymmetry}} = (4\pi)^{2}\frac{(\ell+2)!}{(\ell-2)!}\int K^{2}dK\mathcal{P}_{\xi}(K)\left[\int_{0}^{\tau_{0}}d\tau\,g(\tau)\,A(K,\tau)\Big{|}_{\mathrm{Asymmetry}}\,\frac{(\ell-2)j_{\ell}(x)-xj_{\ell+1}(x)}{x^{3}}\right]^{2}, \tag{23}\]
Figure 2: The \(B\)-mode power spectra for different components. The \(C^{\rm(T)}_{BB,\ell}\) is for the tensor mode (thick dash-dot line for \(r=0.05\) and thin dash-dot curve for \(r=0.005\)). The thick dashed line indicates the contribution of lensing (\(C^{\rm(L)}_{BB,\ell}\)). The thick solid line corresponds to the summation of the tensor (\(r=0.05\)) and lensing parts (\(C^{\rm(T+L)}_{BB,\ell}\)). The thin curves are devoted to the \(B\)-mode generated by Compton scattering in the presence of dipole asymmetry in the scalar fluctuations for various amplitudes (the upper panel is for \(A_{T}=0.068,0.072\) & \(0.075\) when \(\ell\in[2,100]\), while the lower panel is for \(A_{T}=0.018,0.023\) & \(0.030\) when \(\ell\in[2,250]\)). The filled circle and triangle symbols are associated with BICEP2+Keck Array/Planck and BICEP2+Keck at the 150 GHz band, respectively.
where (see Appendix B for more details):
\[A(K,\tau)\Big{|}_{\rm Asymmetry} = 0.753\,A_{T}\,\Delta_{T,3m}(k,K,\tau) \tag{24}\]
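As a schematic illustration, the line-of-sight kernel appearing in Eq. (23) can be evaluated numerically as below; the Gaussian visibility function, the conformal times, and the treatment of \(A(K,\tau)\big{|}_{\rm Asymmetry}\) as a constant are all toy assumptions, so the sketch only exhibits the sharp support of the kernel near \(\ell\simeq K(\tau_{0}-\tau)\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

tau0, tau_rec, sigma = 1.0, 0.02, 0.002        # toy conformal times
g = lambda tau: np.exp(-0.5 * ((tau - tau_rec) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

def kernel(ell, K):
    """Visibility-weighted Bessel kernel of Eq. (23), with A(K, tau) set to 1."""
    def integrand(tau):
        x = K * (tau0 - tau)
        return g(tau) * ((ell - 2) * spherical_jn(ell, x)
                         - x * spherical_jn(ell + 1, x)) / x**3
    return quad(integrand, tau_rec - 5 * sigma, tau_rec + 5 * sigma)[0]

print(kernel(10, 12.0))   # largest when ell ~ K (tau0 - tau_rec)
```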
The amplitude and the direction of the low-\(\ell\) dipole asymmetry signal have been determined from a quadratic maximum likelihood analysis in the range \(\ell\in[2,65]\) of the joint \(TT,TE,EE\) data and the Commander map observed by \(Planck\). The corresponding values at the \(1\sigma\) confidence interval are \(A_{T}=0.072^{+0.031}_{-0.015}\) and \(\hat{P}=(218,-19)\pm 29\) [17]. On the other hand, for the range \(\ell\in[2,220]\), the observational constraint on the amplitude is \(A_{T}=0.023^{+0.008}_{-0.004}\) for the direction \(\hat{P}=(220,-5)\pm 25\). Our results demonstrate that additional contributions are assigned to the \(E\)- and \(B\)-mode power spectra, sourced by \(\Delta_{3,T}\). In addition, the power spectra of the \(E\)- and \(B\)-modes have a linear and a quadratic dependency on \(A(K,\tau)\Big{|}_{\rm Asymmetry}\), respectively. In other words, the mentioned results imply a lower bound on the \(B\)-mode irrespective of the existence of primordial and secondary sources which generate the CMB \(B\)-mode. Comparing Eq. (23) with the dominant part of the standard linear polarization, represented by \(\bar{C}^{\rm(S)}_{EE,\ell}\), one can deduce an upper limit for the \(B\)-mode generated by asymmetric scalar perturbations such that \(C^{\rm(S)}_{BB,\ell}\Big{|}_{\rm Asymmetry}\lesssim\bigg{(}A(K,\tau)\Big{|}_{\rm Asymmetry}\bar{C}^{\rm(S)}_{EE,\ell}\bigg{)}\).
In Fig. 2, we compute the \(B\)-mode power spectra (\(\mathcal{D}_{BB}\equiv\ell(\ell+1)C_{BB,\ell}/2\pi\)) for the different components. The \(C^{\rm(T)}_{BB,\ell}\) and \(C^{\rm(L)}_{BB,\ell}\) correspond to the tensor (T) and lensed-\(\Lambda\)CDM (L) modes, respectively. The \(C^{\rm(S)}_{BB,\ell}\) is the \(B\)-mode power spectrum generated by the dipole asymmetry in the scalar perturbations with different amplitudes (Eq. (23)). The filled circle symbols are devoted to the dust-subtracted \(B\)-mode power provided by the BICEP2+Keck data and \(Planck\) joint analysis [62][64], while the filled triangle symbols illustrate the BICEP2+Keck auto-correlation at the 150 GHz band [63][65]. Interestingly, for small \(\ell\), the value of the \(B\)-mode spectrum generated by Compton scattering in the presence of dipole asymmetry in the scalar perturbations is almost equivalent to the \(B\)-mode due to tensor perturbations for \(r\simeq 0.005\). The asymmetric template has a dominant contribution at large angular scales (small \(\ell\)); therefore, for \(\ell\in[2,100]\) the observational constraint leads to a higher value of the asymmetry amplitude than when the higher multipoles, \(\ell\in[2,250]\), are taken into account. To describe the various physical influences on the CMB fluctuations, in principle, the Boltzmann-Einstein equations governing the evolution of the anisotropies in the cosmic distribution of photons, as well as the matter inhomogeneities, should be solved. The CMB anisotropies are sourced by primary and secondary signatures. The mentioned framework provides a straightforward approach to tracing the various phenomena generating CMB anisotropies; still, from the observational point of view, it is helpful to rely on the probabilistic approach constructed for quantifying stochastic fields such as the CMB. The CMB
two-point correlation functions, and their expansions in Legendre polynomials and spherical harmonics denoted by power spectra, are practically well-defined observables. The asymmetric part of \(C_{BB,\ell}^{\rm(S)}\) (Eq. (23)) reveals that the initial condition (the power spectrum of initial fluctuations) is convolved with a transfer function determined by the Boltzmann equation, modified due to the presence of the asymmetric template for scalar perturbations. The modified transfer function includes the integral of the generalized amplitude of the asymmetric template (Eq. (24)) multiplied by the visibility function and a functional form of the spherical Bessel function. Comparing the mentioned part with the standard \(E\)- and \(B\)-mode power spectra indicates a significant difference, in addition to the new functional form of the spherical Bessel function, implying the presence of octupole temperature fluctuations instead of the quadrupole. The contribution of the asymmetric part at large scales, \(\ell\lesssim[10-20]\), looks like the \(B\)-mode power spectrum generated by tensor perturbations without any gravitational redshift term. At this scale, the contribution of Thomson scattering plays a crucial role in the polarization spectra. To carry out a more precise evaluation of the \(B\)-mode spectrum at intermediate and small scales due to the given asymmetric template for scalar perturbations, we should improve on the approximation made for \(A(K,\tau)\) (Eq. (24)), which is acceptable for determining the upper limit on the \(B\)-mode at large scales. The oscillatory behavior of the \(B\)-mode is due to the spherical Bessel function, while the visibility function accompanying the generalized amplitude of the asymmetric template of scalar perturbations has a sharp maximum at \(\ell=K(\tau_{0}-\tau)\). The mentioned behavior is illustrated in the upper and lower panels of Fig. 2. The upper panel is for \(A_{T}=0.068,0.072\) & \(0.075\) when \(\ell\in[2,100]\) while the lower panel is for \(A_{T}=0.018,0.023\) & \(0.030\) when \(\ell\in[2,250]\).
## IV Summary and conclusion
In this study, inspired by the observed asymmetric template for the CMB temperature power spectrum on large scales, we considered an asymmetric part for the scalar perturbations. We revisited the quantum Boltzmann equations for the density matrix of the CMB temperature, as well as the CMB Stokes parameters, via Compton scattering in the presence of a dipole asymmetric template for the scalar perturbations. In the Boltzmann equations, we replaced the un-modulated temperature fluctuations \(\Delta_{T}(\hat{n})\) by the dipolar modulated ones \(\tilde{\Delta}_{T}(\hat{n})\), which breaks the linearity of the perturbations in the radiation and plays a role similar to non-linear perturbations in radiation. We derived the explicit expressions for the angular power spectra of the electric- and magnetic-type parities of the linear polarization in the form of line-of-sight integral solutions, and finally we obtained the departure from the results given by considering the symmetric template for the scalar perturbations.
Our results demonstrated that scalar fluctuations with a dipole asymmetric template modify the standard Boltzmann equations for the linear polarization of the CMB map. The \(E\)-mode receives an additional term with a linear dependency on the amplitude of the temperature asymmetric template, according to Eq. (21). We found that the \(B\)-mode power spectrum receives a contribution via Compton scattering in the presence of primordial scalar fluctuations with a dipole asymmetric template, which is non-zero, in contrast with the standard scenario. As shown in Eq. (23), \(C_{BB,\ell}^{\rm(S)}\) depends on the square of the dipole asymmetry amplitude. For comparison, we contrasted the \(B\)-mode generated by asymmetric scalar perturbations with the magnetic-type parity of linear polarization in the presence of tensor perturbations with a tensor-to-scalar ratio of about \(r\simeq 0.005\); the two are almost equivalent for \(\ell\lesssim 10\) (Fig. 2).
It could be interesting to assess the contributions of the \(\ell=3\) multipoles, \(\Delta_{T,3m}(k,K,\tau)\), more precisely, to achieve a more accurate estimation of the \(B\)-mode instead of our current result, which is an upper estimate. Also, applying computational data modeling to put precise constraints on the free parameters of the model would improve our results [33].
## Acknowledgment
This research is supported in part by INSF with Grant No: 95838970. We are grateful to Sorhab Rahvar for his constructive comments.
## Appendix A
Accounting for the scalar mode perturbations and using the Thomson scattering of CMB photons by cosmic electrons, the time evolution of \(\rho_{ij}(\vec{x},\vec{k})\) (Eq. (3)), as well as of the Stokes parameters (Eq. (5)), is given by Eq. (10) [59] (Fig. (1)). More precisely, the evolution of the polarization components of the CMB is given by Eq. (11). All coefficients appearing in the mentioned equations are defined below:
\[F_{QT}^{k} \equiv 2[|\hat{p}.\epsilon_{1}(\vec{k})|^{2}-|\hat{p}.\epsilon_{2}(\vec{k})|^{2}]\] \[F_{QT}^{p} \equiv 4[|\epsilon_{1}.\epsilon_{2}|^{2}-|\epsilon_{2}.\epsilon_{2}|^{2}+|\epsilon_{1}(\vec{k}).\epsilon_{2}|^{2}-|\epsilon_{1}(\vec{k}).\epsilon_{1}(\vec{p})|^{2}]\] \[F_{QQ}^{p} \equiv 4\{[\epsilon_{1}(\vec{k}).\epsilon_{1}(\vec{p})\,\epsilon_{1}(\vec{k}).\epsilon_{2}(\vec{p})+|\epsilon_{1}(\vec{k}).\epsilon_{1}(\vec{p})|^{2}-|\epsilon_{1}(\vec{k}).\epsilon_{2}(\vec{p})|^{2}]\] \[- [\epsilon_{2}(\vec{k}).\epsilon_{2}(\vec{p})\,\epsilon_{2}(\vec{k}).\epsilon_{1}(\vec{p})+|\epsilon_{2}(\vec{k}).\epsilon_{1}(\vec{k})|^{2}-|\epsilon_{2}(\vec{k}).\epsilon_{2}(\vec{p})|^{2}]\},\] \[F_{UT}^{k} \equiv 4[\hat{p}.\epsilon_{1}(\vec{k})\,\hat{p}.\epsilon_{2}(\vec{k})+p.\epsilon_{1}\,p.\epsilon_{2}]\] \[F_{QU}^{p} \equiv 4[p.\epsilon_{1}(k)\,p.\epsilon_{2}(k)-p.\epsilon_{1}(k)\,p.\epsilon_{2}]\] \[F_{UU}^{p} \equiv 4[p.\epsilon_{1}(k)\,p.\epsilon_{1}(k)-p.\epsilon_{2}(k)\,p.\epsilon_{2}]\] \[F_{UT}^{p} \equiv 4\{[\epsilon_{1}.\epsilon_{1}\,\epsilon_{2}(\vec{k}).\epsilon_{1}(\vec{p})+\epsilon_{2}.\epsilon_{2}\,\epsilon_{1}(\vec{k}).\epsilon_{2}(\vec{p})]+[\epsilon_{1}.\epsilon_{1}\,\epsilon_{2}(\vec{k}).\epsilon_{1}(\vec{p})+\epsilon_{2}.\epsilon_{2}\,\epsilon_{1}(\vec{k}).\epsilon_{2}(\vec{p})]\}\] \[F_{UQ}^{p} = 4\{[\epsilon_{1}.\epsilon_{1}\,\epsilon_{2}.\epsilon_{2}+\epsilon_{1}(\vec{k}).\epsilon_{2}(\vec{p})\,\epsilon_{2}(\vec{k}).\epsilon_{1}(\vec{p})+\epsilon_{1}(\vec{k}).\epsilon_{1}(\vec{p})\,\epsilon_{2}(\vec{k}).\epsilon_{1}(\vec{p})-\epsilon_{1}(\vec{k}).\epsilon_{2}(\vec{p})\,\epsilon_{2}(\vec{k}).\epsilon_{2}(\vec{p})] \tag{25}\] \[+ [\epsilon_{1}.\epsilon_{1}\,\epsilon_{2}.\epsilon_{2}+\epsilon_{1}(\vec{k}).\epsilon_{2}(\vec{p})\,\epsilon_{2}(\vec{k}).\epsilon_{1}(\vec{p})+\epsilon_{1}(\vec{k}).\epsilon_{1}(\vec{p})\,\epsilon_{2}(\vec{k}).\epsilon_{1}(\vec{p})-\epsilon_{1}(\vec{k}).\epsilon_{2}(\vec{p})\,\epsilon_{2}(\vec{k}).\epsilon_{2}(\vec{p})]\}.\]
Matter perturbations in Fourier space are characterized by the wave vector \(\vec{K}\) and the coordinate system \(\hat{K}\parallel\hat{z}\), and their amplitudes depend on the angle between the photon direction and the wave vector, \(\mu=\hat{n}\cdot\hat{K}\). In the standard scenario, Eq. (11) is written without considering dipole asymmetry. To take into account the contribution of dipole asymmetry (Eq. (12)), \(\Delta_{T}(\vec{p},K)\) and \(\Delta_{T}(\vec{k},K)\) should be replaced by \(\tilde{\Delta}_{T}(\vec{p},K)=\Delta_{T}(\vec{p},K)(1+A_{T}\hat{P}\cdot\hat{p})\) and \(\tilde{\Delta}_{T}(\vec{k},K)=\Delta_{T}(\vec{k},K)(1+A_{T}\hat{P}\cdot\hat{k})\), respectively, in the Boltzmann equations (Eq. (11)). Therefore, the evolution equations for the Stokes parameters \(Q^{(S)}\) and \(U^{(S)}\) are modified in the presence of dipole asymmetric scalar perturbations as:
\[\begin{aligned}
\frac{d}{d\tau}(\tilde{Q}^{(S)}+i\tilde{U}^{(S)})+iK\mu(\tilde{Q}^{(S)}+i\tilde{U}^{(S)}) &= \dot{\kappa}\Big[-(\tilde{Q}^{(S)}+i\tilde{U}^{(S)})-\frac{1}{2}[1-P_{2}(\mu)]\Pi+\Pi^{+(S)}(K,\tau)\Big]\\
\frac{d}{d\tau}(\tilde{Q}^{(S)}-i\tilde{U}^{(S)})+iK\mu(\tilde{Q}^{(S)}-i\tilde{U}^{(S)}) &= \dot{\kappa}\Big[-(\tilde{Q}^{(S)}-i\tilde{U}^{(S)})-\frac{1}{2}[1-P_{2}(\mu)]\Pi+\Pi^{-(S)}(K,\tau)\Big],
\end{aligned} \tag{26}\]
where the last terms on the right-hand side of the above equations vanish for symmetric scalar perturbations; they are therefore generated purely by the asymmetric part. These terms are as follows:
\[\Pi^{\pm(S)}(K,\tau)=A_{T}\int\frac{d\Omega_{p}}{4\pi}\left(\hat{p}\cdot\hat{P} \right)\Delta_{T}(\vec{p},K,\tau)\left(F_{QT}^{p}\pm i\,F_{UT}^{p}\right), \tag{27}\]
where \(\hat{p}\cdot\hat{P}\) has the general form
\[\hat{p}\cdot\hat{P}=\frac{4\pi}{3}\sum_{m=-1}^{1}Y_{1m}^{*}(\hat{P})Y_{1m}(\hat{p}). \tag{28}\]
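As a quick numerical aside (not part of the original derivation), this \(\ell=1\) addition theorem can be verified with SciPy's spherical harmonics; note that scipy.special.sph_harm takes the azimuthal angle before the polar angle:

```python
# Numerical check of Eq. (28): p_hat . P_hat = (4*pi/3) * sum_m Y*_{1m}(P) Y_{1m}(p)
import numpy as np
from scipy.special import sph_harm

def unit_vector(polar, azimuth):
    return np.array([np.sin(polar) * np.cos(azimuth),
                     np.sin(polar) * np.sin(azimuth),
                     np.cos(polar)])

polar_p, azim_p = 0.7, 1.9   # direction p_hat
polar_P, azim_P = 2.1, 0.4   # direction P_hat

lhs = unit_vector(polar_p, azim_p) @ unit_vector(polar_P, azim_P)
rhs = (4.0 * np.pi / 3.0) * sum(
    np.conj(sph_harm(m, 1, azim_P, polar_P)) * sph_harm(m, 1, azim_p, polar_p)
    for m in (-1, 0, 1))
print(np.isclose(lhs, rhs.real))   # True; the imaginary part vanishes
```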
Plugging Eq. (28) into Eq. (27), we obtain
\[\Pi^{\pm(S)}(K,\tau)=A_{T}\sum_{m^{\prime}=-3}^{3}\,\Delta_{T,3m^{\prime}}\,F_{3 m^{\prime}}^{\pm}(\hat{n},\hat{P}), \tag{29}\]
Therefore, the functions needed to specify the evolution of the Stokes parameters given by Eq. (26) are:
\[F_{30}^{\pm}=A_{1}(\hat{P})Y_{2\,-1}(\hat{n})+A_{2}(\hat{P})Y_{2 \,0}(\hat{n})+A_{3}(\hat{P})Y_{2\,1}(\hat{n})\] \[F_{31}^{\pm}=B_{1}(\hat{P})Y_{2\,-2}(\hat{n})+B_{2}(\hat{P})Y_{2 \,-1}(\hat{n})+B_{3}(\hat{P})Y_{2\,0}(\hat{n})\pm B_{4}(\hat{P})Y_{3\,-2}(\hat {n})+B_{5}(\hat{P})Y_{4\,-2}(\hat{n})\] \[F_{3-1}^{\pm}=C_{1}(\hat{P})Y_{2\,0}(\hat{n})+C_{2}(\hat{P})Y_{2 \,1}(\hat{n})+C_{3}(\hat{P})Y_{2\,2}(\hat{n})\pm C_{4}(\hat{P})Y_{3\,2}(\hat{n })+C_{5}(\hat{P})Y_{4\,2}(\hat{n})\] \[F_{3-2}^{\pm}=D_{1}(\hat{P})Y_{2\,1}(\hat{n})+D_{2}(\hat{P})Y_{2 \,2}(\hat{n})\mp D_{3}(\hat{P})Y_{3\,2}(\hat{n})+D_{4}(\hat{P})Y_{4\,2}(\hat{n})\] \[F_{32}^{\pm}=E_{1}(\hat{P})Y_{2\,-2}(\hat{n})+E_{2}(\hat{P})Y_{2 \,-1}(\hat{n})\pm E_{3}(\hat{P})Y_{3\,-2}(\hat{n})+E_{4}(\hat{P})Y_{4\,-2}( \hat{n})\] \[F_{3-3}^{\pm}=F_{1}(\hat{P})Y_{2\,2}(\hat{n})\mp F_{2}(\hat{P})Y_ {3\,2}(\hat{n})+F_{3}(\hat{P})Y_{4\,2}(\hat{n})\] \[F_{33}^{\pm}=G_{1}(\hat{P})Y_{2\,-2}(\hat{n})\mp G_{2}(\hat{P})Y_ {3\,-2}(\hat{n})+G_{3}(\hat{P})Y_{4\,-2}(\hat{n}) \tag{30}\]
The coefficients \(A_{1},A_{2},...G_{3}\) in the above equations read as:
\[A_{1}(\hat{P}) = -\frac{8\pi}{5}\sqrt{\frac{2}{105}}\sqrt{\frac{2\pi}{3}}Y_{1\,- 1}(\hat{P})\quad,\quad A_{2}(\hat{P})=\frac{16\pi}{5\sqrt{35}}\sqrt{\frac{\pi} {3}}Y_{1\,0}(\hat{P})\quad,\quad A_{3}(\hat{P})=-\frac{8\pi}{5}\sqrt{\frac{2}{ 105}}\sqrt{\frac{2\pi}{3}}Y_{1\,1}(\hat{P})\] \[B_{1}(\hat{P}) = \frac{-2A_{T}}{5\sqrt{70}}\sqrt{\frac{2\pi}{3}}Y_{1\,-1}(\hat{P}) \quad,\quad\quad B_{2}(\hat{P})=\frac{-8}{15}\sqrt{\frac{2}{35}}\sqrt{\frac{2 \pi}{3}}Y_{1\,0}(\hat{P})\;\;,\;B_{3}(\hat{P})=\frac{4}{5\sqrt{105}}\sqrt{\frac {2\pi}{3}}Y_{1\,1}(\hat{P})\] \[B_{4}(\hat{P}) = \frac{2}{30\sqrt{10}}\sqrt{\frac{2\pi}{3}}Y_{1\,-1}(\hat{P}) \quad,\quad\quad B_{5}(\hat{P})=-\frac{2}{5\sqrt{210}}\sqrt{\frac{2\pi}{3}}Y _{1\,-1}(\hat{P})\;\;,\;C_{1}(\hat{P})=\frac{4}{5\sqrt{105}}\sqrt{\frac{2\pi} {3}}Y_{1\,-1}(\hat{P})\] \[C_{2}(\hat{P}) = \frac{-8}{15}\sqrt{\frac{2}{35}}\sqrt{\frac{2\pi}{3}}Y_{1\,0}(\hat {P})\quad,\quad\quad C_{3}(\hat{P})=-\frac{2}{5\sqrt{70}}\sqrt{\frac{2\pi}{3}}Y _{1\,1}(\hat{P})\quad,\quad C_{4}(\hat{P})=-\frac{2}{30\sqrt{10}}\sqrt{\frac{2 \pi}{3}}Y_{1\,1}(\hat{P})\] \[C_{5}(\hat{P}) = -\frac{2}{5\sqrt{210}}\sqrt{\frac{2\pi}{3}}Y_{1\,1}(\hat{P})\quad,\quad\quad D_{1}(\hat{P})=-\frac{4}{15\sqrt{7}}\sqrt{\frac{2\pi}{3}}Y_{1\,-1}( \hat{P})\quad,\;\;D_{2}(\hat{P})=-\frac{2}{5\sqrt{7}}\sqrt{\frac{2\pi}{3}}Y_{1 0}(\hat{P})\] \[D_{3}(\hat{P}) = \frac{2}{30}\sqrt{\frac{2\pi}{3}}Y_{1\,0}(\hat{P})\quad\quad\quad,\quad D_{4}(\hat{P})=-\frac{2}{5\sqrt{21}}\sqrt{\frac{2\pi}{3}}Y_{1\,0}(\hat {P})\quad,\quad\;\;E_{1}(\hat{P})=-\frac{2}{5\sqrt{7}}\sqrt{\frac{2\pi}{3}}Y_{ 1\,0}(\hat{P})\] \[E_{2}(\hat{P}) = -\frac{4}{15\sqrt{7}}\sqrt{\frac{2\pi}{3}}Y_{1\,1}(\hat{P})\quad,\quad\quad E_{3}(\hat{P})=\frac{2}{30}\sqrt{\frac{2\pi}{3}}Y_{1\,0}(\hat{P}) \quad,\quad\quad\quad E_{4}(\hat{P})=-\frac{2}{5\sqrt{21}}\sqrt{\frac{2\pi}{3}}Y _{1\,0}(\hat{P})\] \[F_{1}(\hat{P}) = -\frac{2}{5}\sqrt{\frac{3}{14}}\sqrt{\frac{2\pi}{3}}Y_{1\,-1}(\hat {P})\quad,\quad F_{2}(\hat{P})=\frac{2}{10\sqrt{6}}\sqrt{\frac{2\pi}{3}}Y_{1\,- 1}(\hat{P})\quad,\quad\quad\quad F_{3}(\hat{P})=-\frac{2}{5\sqrt{14}}\sqrt{ \frac{2\pi}{3}}Y_{1\,-1}(\hat{P})\] \[G_{1}(\hat{P}) = -\frac{2}{5}\sqrt{\frac{3}{14}}\sqrt{\frac{2\pi}{3}}Y_{1\,1}(\hat {P})\quad,\quad\quad G_{2}(\hat{P})=-\frac{2}{10\sqrt{6}}\sqrt{\frac{2\pi}{3}}Y _{1\,1}(\hat{P})\quad,\quad\quad G_{3}(\hat{P})=-\frac{2}{5\sqrt{14}}\sqrt{ \frac{2\pi}{3}}Y_{1\,1}(\hat{P}) \tag{31}\]
## Appendix B
To compute the \(B\)-mode power spectrum and to arrive at Eq. (23), we need to evaluate the following term:
\[\begin{aligned}
\bar{\eth}^{2}\Pi^{+(\rm S)}(K,\tau)e^{ix\mu}-\eth^{2}\Pi^{-(\rm S)}(K,\tau)e^{ix\mu} &= A_{T}\Delta_{30}\big[\bar{\eth}^{2}F_{30}^{+}e^{ix\mu}-\eth^{2}F_{30}^{-}e^{ix\mu}\big]+A_{T}\Delta_{31}\big[\bar{\eth}^{2}F_{31}^{+}e^{ix\mu}-\eth^{2}F_{31}^{-}e^{ix\mu}\big]\\
&\quad+A_{T}\Delta_{3-1}\big[\bar{\eth}^{2}F_{3-1}^{+}e^{ix\mu}-\eth^{2}F_{3-1}^{-}e^{ix\mu}\big]+A_{T}\Delta_{3-2}\big[\bar{\eth}^{2}F_{3-2}^{+}e^{ix\mu}-\eth^{2}F_{3-2}^{-}e^{ix\mu}\big]\\
&\quad+A_{T}\Delta_{32}\big[\bar{\eth}^{2}F_{32}^{+}e^{ix\mu}-\eth^{2}F_{32}^{-}e^{ix\mu}\big]+A_{T}\Delta_{3-3}\big[\bar{\eth}^{2}F_{3-3}^{+}e^{ix\mu}-\eth^{2}F_{3-3}^{-}e^{ix\mu}\big]\\
&\quad+A_{T}\Delta_{33}\big[\bar{\eth}^{2}F_{33}^{+}e^{ix\mu}-\eth^{2}F_{33}^{-}e^{ix\mu}\big]\\
&= A_{T}\Delta_{30}\big[\bar{\eth}^{2}F_{30}^{+}-\eth^{2}F_{30}^{-}\big]e^{ix\mu}+A_{T}\Delta_{30}\big[F_{30}^{+}\,\bar{\eth}^{2}e^{ix\mu}-F_{30}^{-}\,\eth^{2}e^{ix\mu}\big]\\
&\quad+A_{T}\Delta_{31}\big[\bar{\eth}^{2}F_{31}^{+}-\eth^{2}F_{31}^{-}\big]e^{ix\mu}+A_{T}\Delta_{31}\big[F_{31}^{+}\,\bar{\eth}^{2}e^{ix\mu}-F_{31}^{-}\,\eth^{2}e^{ix\mu}\big]\\
&\quad+A_{T}\Delta_{3-1}\big[\bar{\eth}^{2}F_{3-1}^{+}-\eth^{2}F_{3-1}^{-}\big]e^{ix\mu}+A_{T}\Delta_{3-1}\big[F_{3-1}^{+}\,\bar{\eth}^{2}e^{ix\mu}-F_{3-1}^{-}\,\eth^{2}e^{ix\mu}\big]\\
&\quad+A_{T}\Delta_{3-2}\big[\bar{\eth}^{2}F_{3-2}^{+}-\eth^{2}F_{3-2}^{-}\big]e^{ix\mu}+A_{T}\Delta_{3-2}\big[F_{3-2}^{+}\,\bar{\eth}^{2}e^{ix\mu}-F_{3-2}^{-}\,\eth^{2}e^{ix\mu}\big]\\
&\quad+A_{T}\Delta_{32}\big[\bar{\eth}^{2}F_{32}^{+}-\eth^{2}F_{32}^{-}\big]e^{ix\mu}+A_{T}\Delta_{32}\big[F_{32}^{+}\,\bar{\eth}^{2}e^{ix\mu}-F_{32}^{-}\,\eth^{2}e^{ix\mu}\big]\\
&\quad+A_{T}\Delta_{33}\big[\bar{\eth}^{2}F_{33}^{+}-\eth^{2}F_{33}^{-}\big]e^{ix\mu}+A_{T}\Delta_{33}\big[F_{33}^{+}\,\bar{\eth}^{2}e^{ix\mu}-F_{33}^{-}\,\eth^{2}e^{ix\mu}\big]\\
&\quad+A_{T}\Delta_{3-3}\big[\bar{\eth}^{2}F_{3-3}^{+}-\eth^{2}F_{3-3}^{-}\big]e^{ix\mu}+A_{T}\Delta_{3-3}\big[F_{3-3}^{+}\,\bar{\eth}^{2}e^{ix\mu}-F_{3-3}^{-}\,\eth^{2}e^{ix\mu}\big]
\end{aligned} \tag{32}\]
Because in the \(\vec{K}\parallel\hat{z}\) coordinate frame \(U\) and \(Q\) are functions of \(\mu\) only, we have \(\bar{\eth}^{2}=\eth^{2}\). By using the identity \(\exp(i\vec{k}_{1}\cdot\vec{x})=\sum_{\ell}(-i)^{\ell}\sqrt{4\pi(2\ell+1)}\,j_{\ell}(k_{1}r)Y_{\ell}^{0}(\hat{n})\), and
\[\begin{aligned}
{}_{s}Y_{\ell m} &= \left[\frac{(\ell-s)!}{(\ell+s)!}\right]^{1/2}\eth^{s}\,Y_{\ell m}\,, &\qquad (0\leq s\leq\ell)\\
{}_{s}Y_{\ell m} &= \left[\frac{(\ell+s)!}{(\ell-s)!}\right]^{1/2}(-1)^{s}\,\bar{\eth}^{-s}\,Y_{\ell m}\,, &\qquad (-\ell\leq s\leq 0)
\end{aligned} \tag{33}\]
according to Eq. (A3) of reference [60], Eq. (32) becomes:
\[\begin{aligned}
\bar{\eth}^{2}\Pi^{+(\rm S)}(K,\tau)e^{ix\mu}-\eth^{2}\Pi^{-(\rm S)}(K,\tau)e^{ix\mu} &= (5!)^{1/2}A_{T}(B_{4}\Delta_{31}-E_{3}\Delta_{32}-G_{2}\Delta_{33})\big[{}_{-2}Y_{32}+{}_{2}Y_{32}\big]\\
&\quad+(5!)^{1/2}A_{T}(C_{4}\Delta_{3-1}-D_{3}\Delta_{3-2}-F_{2}\Delta_{3-3})\big[{}_{-2}Y_{3-2}+{}_{2}Y_{3-2}\big]\\
&\quad+A_{T}(B_{4}\Delta_{31}+E_{3}\Delta_{32}-G_{2}\Delta_{33})Y_{3-2}(\hat{n})\big[\bar{\eth}^{2}e^{ix\mu}+\eth^{2}e^{ix\mu}\big]\\
&\quad+A_{T}(C_{4}\Delta_{3-1}-D_{3}\Delta_{3-2}-F_{2}\Delta_{3-3})Y_{32}(\hat{n})\big[\bar{\eth}^{2}e^{ix\mu}+\eth^{2}e^{ix\mu}\big]\\
&= H_{5}(\hat{P})\big[{}_{-2}Y_{32}+{}_{2}Y_{32}\big]+H_{6}(\hat{P})\big[{}_{-2}Y_{3-2}+{}_{2}Y_{3-2}\big]+H_{9}(\hat{P})\big[\bar{\eth}^{2}e^{ix\mu}+\eth^{2}e^{ix\mu}\big]
\end{aligned} \tag{34}\]
where
\[H_{5}(\hat{P}) = (5!)^{1/2}A_{T}(B_{4}\Delta_{31}-E_{3}\Delta_{32}-G_{2}\Delta_{33})\] \[H_{6}(\hat{P}) = (5!)^{1/2}A_{T}(C_{4}\Delta_{3-1}-D_{3}\Delta_{3-2}-F_{2}\Delta_{3 -3})\] \[H_{9}(\hat{P}) = A_{T}(B_{4}\Delta_{31}+E_{3}\Delta_{32}-G_{2}\Delta_{33})Y_{3-2} (\hat{n})+A_{T}(C_{4}\Delta_{3-1}-D_{3}\Delta_{3-2}-F_{2}\Delta_{3-3})Y_{32}( \hat{n}) \tag{35}\]
We also use the following relations:
\[[\bar{\eth}^{2}+\eth^{2}]e^{ix\mu}=\sum_{\ell=2}(-i)^{\ell}\sqrt{4\pi(2\ell+1)\frac{(\ell+2)!}{(\ell-2)!}}\,j_{\ell}(x)\big[{}_{-2}Y_{\ell 0}+{}_{2}Y_{\ell 0}\big] \tag{36}\]
\[{}_{2}Y_{3\pm 2}+\ _{-2}Y_{3\pm 2} = \frac{1}{4}\sqrt{\frac{7}{\pi}}e^{\pm 2i\varphi}(2\cos^{2}\theta-1)\cos\theta \tag{37}\]
Finally, Eq. (34) reads:
\[\bar{\eth}^{2}\Pi^{+(\rm S)}(K,\tau)e^{ix\mu}-\eth^{2}\Pi^{-( \rm S)}(K,\tau)e^{ix\mu} = \frac{1}{4}\sqrt{\frac{7}{\pi}}e^{-2i\phi}(2\mu^{2}-1)\mu H_{5}( \hat{P})+\frac{\sqrt{7}}{4\sqrt{\pi}}e^{2i\phi}(2\mu^{2}-1)\mu H_{6}(\hat{P}) \tag{38}\] \[+H_{9}(\hat{P})\big{[}\bar{\eth}^{2}e^{ix\mu}+\eth^{2}e^{ix\mu} \big{]}\]
Therefore, the asymmetric \(B\)-mode power spectrum is written as:
\[C^{(\rm S)}_{BB,\ell}\Big|_{\rm Asymmetry} = \frac{4\pi}{2\ell+1}\frac{(\ell-2)!}{(\ell+2)!}\int k^{2}dk\,P_{\varphi}(k)\sum_{m}\Big|\int d\Omega\,Y_{\ell m}^{*}(\hat{n})\int_{0}^{\tau_{0}}d\tau\,g(\tau)\,[\hat{\zeta}(\hat{P},x)-i\hat{\rho}(\hat{P},x)]e^{ix\mu}\Big|^{2} \tag{39}\]
where \(\hat{\zeta}(\hat{P},x)\) and \(\hat{\rho}(\hat{P},x)\) are:
\[\begin{aligned}
\hat{\zeta}(\hat{P},x) &= \sum_{\ell=2}(-i)^{\ell}\sqrt{4\pi(2\ell+1)\frac{(\ell+2)!}{(\ell-2)!}}\,j_{\ell}(x)\big[{}_{-2}Y_{\ell 0}+{}_{2}Y_{\ell 0}\big]\\
\hat{\rho}(\hat{P},x) &= -\left[\frac{\sqrt{7}}{4\sqrt{\pi}}H_{5}(\hat{P})(2\partial_{x}^{3}+\partial_{x})\right]e^{-2i\varphi}-\left[\frac{\sqrt{7}}{4\sqrt{\pi}}H_{6}(\hat{P})(2\partial_{x}^{3}+\partial_{x})\right]e^{2i\varphi}
\end{aligned} \tag{40}\]
It turns out that \(\hat{\zeta}(\hat{P},x)\) does not contribute to the asymmetric \(B\)-mode power spectrum. Eq. (39) is then given by:
\[C^{(\rm S)}_{BB,\ell}\Big|_{\rm Asymmetry} = (4\pi)^{2}\frac{(\ell+2)!}{(\ell-2)!}\int K^{2}dK\,P_{\varphi}(K)\left[\int_{0}^{\tau_{0}}d\tau\,g(\tau)\,\hat{\beta}(\hat{P},x)\right]^{2} \tag{41}\]
in which \(\hat{\beta}(\hat{P},x)\equiv\frac{1}{4}\sqrt{\frac{7}{\pi}}\left[H_{5}(\hat{P})+H_{6}(\hat{P})\right]\left[\ell(1-\ell)(1+\ell)(2+\ell)\right]\frac{(\ell-2)j_{\ell}(x)-xj_{\ell+1}(x)}{x^{3}}\). By using the recurrence relation \(j_{\ell+1}(x)=\frac{\ell}{x}j_{\ell}(x)-j_{\ell}^{\prime}(x)\), we finally obtain:
\[C_{BB,\ell}^{(\mathrm{S})}\Big{|}_{\mathrm{Asymmetry}} = (4\pi)^{2}\frac{(\ell+2)!}{(\ell-2)!}\int k^{2}dkP_{\varphi}(k) \left[\int_{0}^{\tau_{0}}d\tau\,g(\tau)\,A(K,\tau)\Big{|}_{\mathrm{Asymmetry}} \frac{(\ell-2)j_{\ell}(x)-xj_{\ell+1}(x)}{x^{3}}\right]^{2}, \tag{42}\]
where
\[A(K,\tau)\Big{|}_{\mathrm{Asymmetry}}=\frac{1}{4}\sqrt{\frac{7}{\pi}}A_{T} \left[B_{4}\Delta_{31,T}-E_{3}\Delta_{32,T}-G_{2}\Delta_{33,T}+C_{4}\Delta_{3 -1,T}-D_{3}\Delta_{3-2,T}-F_{2}\Delta_{3-3,T}\right]. \tag{43}\]
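As a quick numerical aside (not part of the original derivation), the spherical Bessel recurrence \(j_{\ell+1}(x)=\frac{\ell}{x}j_{\ell}(x)-j_{\ell}^{\prime}(x)\) used to obtain Eq. (42) can be checked with SciPy:

```python
# Check j_{l+1}(x) = (l/x) j_l(x) - j_l'(x) at a few sample points.
import numpy as np
from scipy.special import spherical_jn

x = np.linspace(0.5, 20.0, 7)
for l in (2, 3, 5):
    lhs = spherical_jn(l + 1, x)
    rhs = (l / x) * spherical_jn(l, x) - spherical_jn(l, x, derivative=True)
    assert np.allclose(lhs, rhs)
```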
We assume \(\Delta_{31,T}=\Delta_{32,T}=\Delta_{33,T}=\Delta_{3-1,T}=\Delta_{3-2,T}=\Delta_{3-3,T}\equiv\Delta_{3,T}\); therefore, considering the dipole direction \(\hat{P}=(218^{\circ},-19^{\circ})\pm 29^{\circ}\), we have:
\[A(K,\tau)\Big{|}_{\mathrm{Asymmetry}} = A_{T}\Delta_{3,T}\frac{16\pi}{4}\sqrt{\frac{14}{3}}\left[0.060 \left(Y_{1\,1}(\hat{P})-Y_{1\,-1}(\hat{P})\right)-\frac{2}{15}Y_{1\,0}(\hat{P} )\right] \tag{44}\] \[= A_{T}\Delta_{3,T}(4\pi)\sqrt{\frac{14}{3}}(-0.0239+0.0517)\] \[= 0.753A_{T}\Delta_{3,T}\]
We use this result to estimate the order of magnitude of the \(E\)- and \(B\)-modes under the assumption of dipole asymmetry.
|
2303.04067 | Where intermediate-mass black holes could hide in the Galactic Centre: A
full parameter study with the S2 orbit | In the Milky Way the central massive black hole, SgrA*, coexists with a
compact nuclear star cluster that contains a sub-parsec concentration of
fast-moving young stars called S-stars. Their location and age are not easily
explained by current star formation models, and in several scenarios the
presence of an intermediate-mass black hole (IMBH) has been invoked. We use
GRAVITY astrometric and SINFONI, KECK, and GNIRS spectroscopic data of S2 to
investigate whether a second massive object could be present deep in the
Galactic Centre (GC) in the form of an IMBH binary companion to SgrA*. To solve
the three-body problem, we used a post-Newtonian framework and consider two
types of settings: (i) a hierarchical set-up where the star S2 orbits the SgrA*
- IMBH binary and (ii) a non-hierarchical set-up where the IMBH trajectory lies
outside the S2 orbit. In both cases we explore the full 20-dimensional
parameter space by employing a Bayesian dynamic nested sampling method. For the
hierarchical case we find: IMBH masses > 2000 Msun on orbits with smaller
semi-major axes than S2 are largely excluded. For the non-hierarchical case the
parameter space contains several pockets of valid IMBH solutions. However, a
closer analysis of their impact on the resident stars reveals that IMBHs on
semi-major axes larger than S2 tend to disrupt the S-star cluster in less than
a million years. This makes the existence of an IMBH among the S-stars highly
unlikely. The current S2 data do not formally require the presence of an IMBH.
If an IMBH hides in the GC, it has to be either a low-mass IMBH inside the S2
orbit that moves on a short and significantly inclined trajectory or an IMBH
with a semi-major axis >1". We provide the parameter maps of valid IMBH
solutions in the GC and discuss the general structure of our results.
(abridged) | The GRAVITY Collaboration, O. Straub, M. Bauböck, R. Abuter, N. Aimar, P. Amaro Seoane, A. Amorim, J. P. Berger, H. Bonnet, G. Bourdarot, W. Brandner, V. Cardoso, Y. Clénet, Y. Dallilar, R. Davies, P. T. de Zeeuw, J. Dexter, A. Drescher, F. Eisenhauer, N. M. Förster Schreiber, A. Foschi, P. Garcia, F. Gao, E. Gendron, R. Genzel, S. Gillessen, M. Habibi, X. Haubois, G. Heißel, T. Henning, S. Hippler, M. Horrobin, L. Jochum, L. Jocou, A. Kaufer, P. Kervella, S. Lacour, V. Lapeyrère, J. -B. Le Bouquin, P. Léna, D. Lutz, T. Ott, T. Paumard, K. Perraut, G. Perrin, O. Pfuhl, S. Rabien, D. C. Ribeiro, M. Sadun Bordoni, S. Scheithauer, J. Shangguan, T. Shimizu, J. Stadler, C. Straubmeier, E. Sturm, L. J. Tacconi, F. Vincent, S. von Fellenberg, F. Widmann, E. Wieprecht, E. Wiezorrek, J. Woillez | 2023-03-07T17:25:52Z | http://arxiv.org/abs/2303.04067v2 | # Where intermediate-mass black holes could hide
###### Abstract
Context:In the Milky Way the central massive black hole, Sgr A\({}^{*}\), coexists with a compact nuclear star cluster that contains a sub-parsec concentration of fast-moving young stars called S-stars. Their location and age are not easily explained by current star formation models, and in several scenarios the presence of an intermediate-mass black hole (IMBH) has been invoked.
Aims:We use GRAVITY astrometric and SINFONI, KECK, and GNIRS spectroscopic data of S2, the best known S-star, to investigate whether a second massive object could be present deep in the Galactic Centre (GC) in the form of an IMBH binary companion to Sgr A\({}^{*}\).
Methods:To solve the three-body problem, we used a post-Newtonian framework and consider two types of settings: (i) a hierarchical set-up where the star S2 orbits the Sgr A\({}^{*}\) - IMBH binary and (ii) a non-hierarchical set-up where the IMBH trajectory lies outside the S2 orbit. In both cases we explore the full 20-dimensional parameter space by employing a Bayesian dynamic nested sampling method.
Results:For the hierarchical case we find the strongest constraints: IMBH masses \(>2000\)\(M_{\odot}\) on orbits with smaller semi-major axes than S2 are largely excluded. For the non-hierarchical case, the chaotic nature of the problem becomes significant: the parameter space contains several pockets of valid IMBH solutions. However, a closer analysis of their impact on the resident stars reveals that IMBHs on semi-major axes larger than S2 tend to disrupt the S-star cluster in less than a million years. This makes the existence of an IMBH among the S-stars highly unlikely.
Conclusions:The current S2 data do not formally require the presence of an IMBH. If an IMBH hides in the GC, it has to be either a low-mass IMBH inside the S2 orbit that moves on a short and significantly inclined trajectory or an IMBH with a semi-major axis \(>1\arcsec\). We provide the parameter maps of valid IMBH solutions in the GC and discuss the general structure of our results and how future observations can help to put even stronger constraints on the properties of IMBHs in the GC.
## 1 Introduction
The nuclear star cluster in the Milky Way can, due to its proximity to Earth (\(R_{0}=8.28\) kpc, GRAVITY Collaboration et al., 2019, 2021; Do et al., 2019), be resolved into individual stars. In its entirety, it has an oblate shape and extends in the K-band to about \(178\arcsec\) (i.e. 7.2 pc at \(R_{0}\), Becklin & Neugebauer, 1968; Schodel et al., 2014; Fritz et al., 2016) around the central massive black hole, Sagittarius A\({}^{*}\) (Sgr A\({}^{*}\), Eckart & Genzel, 1996; Ghez et al., 1998; Schodel et al., 2002; Ghez et al., 2008; GRAVITY Collaboration et al., 2018) and consists predominantly of old and evolved stars. However, in its innermost region, the central \(12\arcsec\) (0.5 pc), it contains a dense and diverse population of stars with a surprising accumulation of young and massive O and B stars. They are found in the stellar disc of WR/O stars that extends from \(0.8\arcsec-12\arcsec\) and shows a clockwise motion (Paumard et al., 2006; Bartko et al., 2009; Lu et al., 2009; Bartko et al., 2010; Yelda et al., 2014), and in the S-star cluster that resides inside the disc's truncation radius and can have ages as young as \(3-15\times 10^{6}\) years (Ghez et al., 2003; Eisenhauer et al., 2005; Pfuhl et al., 2011; Lu et al., 2013; Habibi et al., 2017; von Fellenberg et al., 2022).
Accompanying the morphology of the Galactic core region are two puzzling observations. On the one hand, there are the isotropically oriented orbital planes and the approximately thermal distribution of the orbital eccentricities of the S-stars. With only a few million years of age, the early B-type stars thus appear too young to be that thermally relaxed in such close proximity to Sgr A\({}^{*}\) (the paradox of youth; Ghez et al., 2003). On the other hand, in dynamically relaxed systems, one would expect mass segregation where more massive bodies like the WR/O stars are
located closer to the centre than the less massive S-stars (Alexander & Hopman, 2009).
Over the past decades, many models have been proposed to explain the age and location of the S-stars. Hansen & Milosavljevic (2003) were the first to suggest that an intermediate-mass black hole (IMBH) is present in the Galactic Centre (GC). They argue that an IMBH could have dragged the S-stars from a greater, more star formation friendly distance inwards. However, the telltale trail of young stars outside 0.5 pc, which would support a collective inward migration of such a cluster, is not observed (Feldmeier-Krause et al., 2015). Nonetheless, the idea that an IMBH is associated with the location and the distribution of orbital elements of the S-star has been picked up in a variety of studies, and is still a matter of debate today.
Not all scenarios require an IMBH, though. Chen & Amaro-Seoane (2014, 2015) resolve the paradox of youth and mass segregation problem with a rapid redistribution of stellar orbits based on a Kozai-Lidov-like resonance induced by a stellar disc that was more massive and extended in the past. Generozov & Madigan (2020) argue that if the S-stars are sourced by the WR/O disc via the Hills mechanism (stellar binary disruption by a massive third body; Hills 1988), an additional relaxation mechanism is needed to reproduce their present-day distribution on the short timescale given by their ages. They conclude that within a few million years either scalar resonant relaxation from the observed isotropic star cluster or an IMBH of \(\sim 10^{3}\)\(M_{\odot}\) at 250 mas could achieve the observed eccentricities. Employing a cluster of stellar black holes (SBHs) as relaxation agent, Perets et al. (2009) found in N-body simulations running over 20 Myrs that a thermal eccentricity distribution is a natural consequence of random gravitational encounters of stars with a population of SBHs with a total mass of \(\sim 10^{4}\)\(M_{\odot}\) in the inner 0.1 pc. Assuming a cluster of more massive SBHs, Tep et al. (2021) arrive at the same conclusion, but on a shorter timescale. This is consistent with the upper limit on the dark mass distribution of about 15000 \(M_{\odot}\) within 0.1-0.2 pc derived by GRAVITY Collaboration et al. (2022).
The paper is structured as follows. Sections 2 and 3 discuss whether or not it is realistic to expect an IMBH in the GC and what constraints on its mass and location have been found in previous studies. In Section 4 we describe the data set used, and in Section 5 we present the model and methodology we used to fit them. Our results follow in Section 6. In Section 7 we discuss the stability analysis. Finally, in Section 8 we add concluding remarks and an outlook on the future.
## 2 Possibility of IMBHs in the Galactic Centre
Theoretically, black holes can have any mass upwards of the Planck mass.1 Astrophysical black holes, however, essentially only come in two 'flavours'.
Footnote 1: \(m_{p}=(\frac{\hbar c}{G})^{1/2}=2.2\times 10^{-5}\) g
The first is stellar black holes, with masses ranging from about \(3-100\)\(M_{\odot}\), where \(M_{\odot}=2\times 10^{33}\) g, which form via gravitational collapse of massive stars that depleted their nuclear energy source (e.g. Oppenheimer & Snyder, 1939; Penrose, 1965; Mirabel, 2017).
The second flavour is massive black holes (MBHs), with masses higher than \(10^{6}\)\(M_{\odot}\), which are thought to form via direct or indirect gravitational collapse of an initial massive gas cloud and to co-evolve symbiotically with their host galaxies (e.g. Rees, 1978). Although there is an emerging consensus regarding the growth of supermassive BHs thanks to Soltan's argument (Soltan, 1982), the evolution of MBHs with masses up to \(10^{7}\)\(M_{\odot}\), such as our own MBH in the Galactic Centre (with a mass of \(\sim 4.2\times 10^{6}\)\(M_{\odot}\) ), is enigmatic.
There is compelling evidence for the existence of SBHs from both electromagnetic observations (e.g. Narayan & McClintock, 2013; Casares & Jonker, 2014; Corral-Santana et al., 2016) and gravitational wave detection (Abbott et al., 2021). Equally well established is the occurrence of MBHs at the centres of massive galaxies (e.g. Magorrian et al., 1998; Volonteri, 2010; Kormendy & Ho, 2013). Moreover, the increasing number of observations of luminous quasars at very high redshift indicates that some supermassive BHs with masses \(>10^{8}\)\(M_{\odot}\) already existed when the Universe was less than a billion years old (Mortlock et al., 2011; Wu et al., 2015; Banados et al., 2018; Yang et al., 2020).
Intermediate-mass black holes are thought to bridge the gap between these two BH populations and, more importantly, to be the building blocks in the formation process of MBHs. Understanding them is crucial to answering the question of how the young and supermassive quasars could develop into behemoths on such short timescales.
The following three MBH formation channels predict the appearance of IMBHs at different times and in different numbers. There are two early formation mechanisms that rely on the properties of zero-metallicity gas and can therefore only operate at redshift \(\mathrm{z}>10\). In the young Universe, the pristine hydrogen gas could have either coagulated into the first generation of massive Population III stars (Madau & Rees, 2001) or it could have contracted uniformly to directly form a single supermassive star that then collapsed into an intermediate-mass seed BH (Loeb & Rasio, 1994; Begelman et al., 2006), possibly via an accreting quasi-star phase (Hoyle & Fowler, 1963; Begelman, 2010; Wise et al., 2019). The inefficient cooling due to the presence of primordial hydrogen inhibits premature fragmentation and pair-instability supernovae such that the Population III stars and the supermassive star could have reached masses significantly higher than 100 \(M_{\odot}\) and led to early intermediate-mass seed BHs (Ohkubo et al., 2009).
Quite distinct from the two early seeding mechanisms is the third dynamical formation channel where gravitational runaway and hierarchical black hole mergers in dense nuclear star clusters can form many IMBH kernels (Quinlan & Shapiro, 1990; Portegies Zwart & McMillan, 2002; Freitag et al., 2006; Stone et al., 2017). Antonini et al. (2019) have calculated that IMBHs can indeed form via hierarchical mergers in star clusters with high enough escape velocities and densities. Rizzuto et al. (2021) have pointed out that IMBHs could form in \(\lesssim 15\) Myr, in particular in young and compact star clusters. While the two early seeding mechanisms produce at most one IMBH per galaxy halo at high redshift, this latter process can operate throughout cosmic time and could provide a channel to create an IMBH in any dense stellar system (for comprehensive reviews, see Miller & Colbert, 2004; Mezcua, 2017; Greene et al., 2020). Recently, a mass-gap SBH (or low-mass IMBH) of around 150 \(M_{\odot}\) has been identified as the product of a coalescence of two SBHs via gravitational wave detection (GW190521, Abbott et al., 2020; Abbott et al., 2020; Nitz & Capano, 2021), supporting scenarios with dynamical hierarchical mergers.
Today, intermediate-mass black holes that formed via the early seeding processes are thus expected to populate the centres
of low-mass dwarf2 and satellite galaxies (e.g. Mezcua et al., 2016, 2018), whereas IMBHs formed via dynamical mergers are thought to be found rather in globular clusters (Miller & Hamilton, 2002; Baumgardt et al., 2005) and nuclear star clusters (Miller & Lauburg, 2009; Neumayer et al., 2020). The most convincing IMBH candidates are indeed found in low-mass galaxies and have masses \(10^{4}\lesssim M<10^{6}\)\(M_{\odot}\), for example HLX-1 (Farrell et al., 2009; Webb et al., 2017) and the Large Magellanic Cloud (LMC, Erkal et al., 2019). During their evolution, galaxies may accrete nearby satellite or dwarf galaxies, which could deposit a substantial number of wandering IMBHs, each surrounded by a stellar system, in the galactic halos (e.g. the Milky Way, Rashkov & Madau, 2014). Moreover, centres of galaxies have in principle deep enough potential wells to retain SBH merger products in their nuclear star clusters (see Hailey et al., 2018; Fragione et al., 2021; Rose et al., 2022). Therefore, it seems conceivable that the centre of the Milky Way could host an IMBH, although the question arises of whether it could hide among the S-stars.
Footnote 2: Some dwarf galaxies can have surprisingly massive central BHs (e.g. Bustamante-Rosell et al., 2021), possibly due to dynamical mergers of IMBHs in complexes of young stellar clusters (Amaro-Seoane et al., 2014).
## 3 Constraints on IMBH mass and location in the Galactic Centre
The first constraints on the mass and location of an IMBH in the GC came from a study of dynamical processes that can eject hyper-velocity stars from the GC at average speeds of 400-2000 km s\({}^{-1}\) (Yu & Tremaine, 2003) and the measurement of the proper motion of Sgr A\({}^{*}\) that is consistent with no acceleration (Reid & Brunthaler, 2004, 2020). These studies exclude in essence IMBH masses of \(M\gtrsim 3\times 10^{4}\)\(M_{\odot}\) within the S-star cluster and the WR/O disc.
Merritt et al. (2009) employed long-term N-body simulations to show that the presence of an IMBH can randomise the orbital planes of 19 S-stars in one million years if the IMBH mass exceeds 1500 \(M_{\odot}\) and its pericentre distance is smaller than 250 mas. N-body simulations of the orbits of S-stars around Sgr A\({}^{*}\) in the presence of an IMBH have been used to study the effects of an IMBH on the orbit of S2 in particular. These codes typically solve the N-body problem numerically (e.g. with a post-Newtonian approximation) up to order 2.5 and with 21 S-stars in addition to the MBH and the IMBH (Gualandris & Merritt, 2009; Gualandris et al., 2010).
Many-body systems are chaotic in nature, and in order to make the orbital fitting procedure manageable the N-body codes traditionally rely on a discrete but serviceable set of reasonable IMBH orbital parameters, for instance three different eccentricity values paired with a range of interesting IMBH masses and a fixed set of inclinations and orbital angles. Another way to tackle the chaotic nature of the three-body problem is used by Naoz et al. (2020) who studied a high-order analytic approximation of the inverse Kozai-Lidov equations. Considering the stability of the S2 orbit, they could rule out a \(10^{5}\)\(M_{\odot}\) companion on a circular orbit with a semi-major axis greater than 20 mas.
In GRAVITY Collaboration et al. (2020) we collected the available constraints on the IMBH mass and semi-major axis in the literature and presented them together with an estimate of the constraints that could be achieved by the GRAVITY instrument. In this work we show the actual IMBH constraints based on GRAVITY (and SINFONI/KECK/GNIRS) data of S2. In terms of simulation and fitting technique, in this work we go a step further than previous N-body simulations and explore not only a few selected sets of IMBH orbits, but the full-dimensional parameter space. In this way we obtain the most realistic constraints based on current high angular resolution interferometric and spectroscopic infrared observations.
## 4 Data
The star S2 moves on a highly elliptical 16-year orbit around Sgr A\({}^{*}\) and has been monitored since 1992. The resulting high-precision data of nearly 2.5 orbits have not only led to the direct measurement of the compact mass in the GC, \(M_{0}\approx 4.30\times 10^{6}\)\(M_{\odot}\), and its distance, \(R_{0}\approx 8.28\) kpc (GRAVITY Collaboration et al., 2019, 2022), but have also delivered evidence for relativistic effects such as the gravitational redshift (GRAVITY Collaboration et al., 2018; Do et al., 2019) and the Schwarzschild precession (GRAVITY Collaboration et al., 2020), as well as the local position invariance (Amorim et al., 2019).
In this work, we use the astrometry data taken from 2017-2021 by the GRAVITY beam combiner, a K-band infrared interferometer at the European Southern Observatory's Very Large Telescope (ESO's VLT) together with spectroscopic data collected from 2000-2021 by NIRC2 at the Keck Observatory, SINFONI at the VLT, and GNIRS at the Gemini Observatory (see GRAVITY Collaboration et al., 2022, for a more detailed description).
All GRAVITY data have been recorded in low resolution and split (linear) polarisation. Each exposure consists of a total integration time of 320 seconds, comprised of 32 consecutive frames every 10 seconds. One VLT observation block contains two different targets, the star S2 and the black hole Sgr A\({}^{*}\). During the pericentre passage of S2 from 2017 to 2018, both objects were detected simultaneously in the same fibre field of view (FoV = 60 mas). In all epochs from 2019 onwards, the separation between S2 and Sgr A\({}^{*}\) has been larger than the FoV and the objects have been targeted individually. In this dual-beam mode we first take an exposure with the fibre centred on S2 and then dither to Sgr A\({}^{*}\) and take a sequence of four exposures. We repeat this 1+4 pattern throughout the available night. We then use the latest version of the standard GRAVITY data reduction pipeline to reduce all data. The interferometric observables, the closure phase and visibility, of the star S2 are consistent with a single point source such that we can use it as a phase reference to calibrate the Sgr A\({}^{*}\) exposures. In this way we can calculate the separation vector between S2 and Sgr A\({}^{*}\) from the fitted phase offsets (see Appendix A in GRAVITY Collaboration et al., 2022). The resulting GRAVITY astrometry has a root mean square (rms) uncertainty of \(\approx 50\)\(\mu\)as; SINFONI and KECK - GNIRS data have a rms uncertainty of \(\approx 12\) km/s and \(\approx 45\) km/s, respectively.
In this work, we are not using any adaptive optics (AO) astrometric data collected by NACO/VLT. The reason we omit about 75 a priori perfectly valid AO imaging measurements between 2003 and 2019 is that the calibration of the reference frame between NACO and GRAVITY is largely degenerate with adding an IMBH. In sampling such a posterior, the solutions run away towards an arbitrary calibration factor and arbitrarily high IMBH masses. We avoid the problem by excluding the AO measurements and using only the GRAVITY high-resolution interferometric astrometry, which is internally self-consistent and of a
much higher precision than the NACO data (rms of about 1.7 mas).
## 5 Methodology
We consider two scenarios. In the hierarchical set-up, Sgr A\({}^{*}\) has a close IMBH binary companion with a small semi-major axis \(0.01^{\prime\prime}\leq a_{\rm i}\leq 0.1^{\prime\prime}\). The star S2 with \(a=0.125^{\prime\prime}\) orbits around this binary's centre of mass. The IMBH orbit lies in this case inside the S2 orbit. In the non-hierarchical set-up the IMBH has a semi-major axis \(0.1^{\prime\prime}\leq a_{\rm i}\leq 1^{\prime\prime}\), which crosses the S2 orbit or lies entirely outside of the S2 orbit but still within the S-star cluster. In this set-up the centre of mass is Sgr A\({}^{*}\). We treat these two distinct cases separately.
### 5.1 Orbital integration
To simulate the orbits of a three-body system consisting of Sgr A\({}^{*}\), the star S2, and an IMBH, we adapted the publicly available REBOUND N-body code (Rein & Liu, 2012). We used REBOUND in combination with the REBOUNDx package (Tamayo et al., 2020), which incorporates the first-order post-Newtonian effects from all massive bodies in the system. The simulations were integrated using a 15th order Gauss-Radau integrator (IAS15; Rein & Spiegel, 2015).
We first add Sgr A\({}^{*}\) at the origin of the coordinate system. In order to minimise the error introduced to the S2 orbital parameters due to the transformation between a flat Cartesian coordinate system and the relativistic spacetime around the black hole, we add the star S2 near the apocentre of its orbit (i.e. we set the initial timestamp of the osculating Keplerian orbit to \(t_{0}=2010.0\)). We then integrate the orbit forward to the date of the last GRAVITY observation used in this work: \(t=2021.570283\). Here we convert the orbital elements of S2 (\(a_{\rm S2}\), \(e_{\rm S2}\), \(i_{\rm S2}\), \(\Omega_{\rm S2}\), \(a_{\rm S2}\), \(T_{\rm peri,S2}\)) into a state vector consisting of the position and velocity. This ensures the correct starting position with regard to the observational data. We then remove the star S2 and add the IMBH, and redefine the coordinate system so that the origin is now at the centre of mass. Finally, we add the star S2 with the starting position and velocity vectors calculated previously.
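A condensed sketch of this set-up with REBOUND and REBOUNDx is shown below. The element values are illustrative conversions (e.g. \(a_{\rm S2}=0.125^{\prime\prime}\approx 1035\) AU at \(R_{0}=8.28\) kpc, angles in radians), and the trial IMBH is an assumption for demonstration; the actual pipeline follows the multi-step initialisation described above.

```python
import rebound
import reboundx

sim = rebound.Simulation()
sim.units = ('AU', 'yr', 'Msun')
sim.integrator = 'ias15'                  # 15th order Gauss-Radau (IAS15)
sim.t = 2010.0                            # epoch t0 of the osculating elements

sim.add(m=4.3e6)                          # Sgr A* at the origin
sim.add(m=11.0, a=1035.0, e=0.884,        # S2 (approximate elements, cf. Table 1)
        inc=2.34, Omega=3.98, omega=1.15,
        T=2018.38)                        # time of pericentre passage
sim.add(m=2000.0, a=250.0, e=0.5,         # trial IMBH (illustrative values)
        primary=sim.particles[0])
sim.move_to_com()                         # origin at the centre of mass

rebx = reboundx.Extras(sim)
gr = rebx.load_force('gr_full')           # 1PN terms from all massive bodies
rebx.add_force(gr)
gr.params['c'] = 63241.077                # speed of light in AU/yr

sim.integrate(2021.570283)                # forward to the last GRAVITY epoch
```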
Once we have initialised the simulation, we integrate the orbits of all three masses backwards in time to the earliest velocity data point, at \(t=2000.476\). Given the larger uncertainties of the early data points, we integrate backwards in time to make sure S2 is on the correct orbit in the present day. We take into account the Romer delay arising from the change in the light travel time at various points along the S2 orbit. We approximate the delay following GRAVITY Collaboration et al. (2018) as
\[t_{\rm em}=t_{\rm obs}-\frac{z(t_{\rm obs})}{c}\left(1-\frac{v_{z}(t_{\rm obs })}{c}\right), \tag{1}\]
where \(t_{\rm em}\) is the time at which a photon is emitted, \(t_{\rm obs}\) is the time at which it is observed, and \(z\) and \(v_{z}\) represent the line-of-sight distance and velocity, respectively. For each observation we therefore first calculate the position and velocity at the observed time and then use these values to approximate the emitted time. We then integrate the orbit of S2 to \(t=t_{\rm em}\) to compare with the data.
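A minimal sketch of this correction is given below; the orbit_state helper is a hypothetical stand-in for querying the line-of-sight position and velocity of S2 from the N-body integration.

```python
C_KMS = 299792.458  # speed of light [km/s]

def emitted_time(t_obs, orbit_state):
    """First-order Roemer-delay correction of Eq. (1).

    orbit_state(t) must return the line-of-sight distance z [km] and
    velocity v_z [km/s] of S2 at the observed time t [s].
    """
    z, v_z = orbit_state(t_obs)
    return t_obs - (z / C_KMS) * (1.0 - v_z / C_KMS)
```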
The REBOUNDx module includes general relativistic effects up to first order in the post-Newtonian approximation in the calculation of the orbits of all three masses, but it does not account for the relativistic effects experienced by the photons emitted by those masses. We therefore explicitly account for the transverse Doppler shift and the gravitational redshift for the star S2 when calculating the observed radial velocity. Specifically, we assume a Schwarzschild geometry for the MBH Sgr A\({}^{*}\) and an observer at infinity. This allows us to calculate the approximate observed radial velocity by combining the two correction terms, which to first order leads to
\[v_{\rm obs}=v_{z}+c\,(\gamma-1)+c\left(1-\sqrt{1-\frac{r_{S}}{r}}\right), \tag{2}\]
where \(\gamma\) is the Lorentz factor and \(r_{S}\) is the Schwarzschild radius.
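A sketch of this correction, following the first-order form of Eq. (2) as reconstructed above (the function name and argument list are our own), reads:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def observed_velocity(v_z, v_tot, r, r_s):
    """Line-of-sight velocity with first-order relativistic corrections (Eq. (2)).

    v_z   : kinematic line-of-sight velocity [km/s]
    v_tot : total orbital speed [km/s], setting the Lorentz factor
    r     : distance from Sgr A* (same units as r_s)
    r_s   : Schwarzschild radius of Sgr A*
    """
    gamma = 1.0 / np.sqrt(1.0 - (v_tot / C_KMS) ** 2)   # transverse Doppler term
    z_grav = 1.0 - np.sqrt(1.0 - r_s / r)               # gravitational redshift term
    return v_z + C_KMS * (gamma - 1.0) + C_KMS * z_grav
```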
We can then calculate a \(\chi^{2}\) value by comparing the model orbital motion of S2 to the observed data. For the spectral velocity measurement, the measured quantity is simply \(v_{\rm obs}\) calculated above. For the astrometric position, the relevant quantities to compare to the GRAVITY measured separation between S2 and the emission from Sgr A\({}^{*}\) are the modelled differences in right ascension and declination \(\left(RA_{\rm S2}-RA_{\rm SgrA^{*}}\,,\;DEC_{\rm S2}-DEC_{\rm SgrA^{*}}\right)\).
### 5.2 Posterior sampling
Once we are able to calculate a \(\chi^{2}\) value for any point in the parameter space, we turn to sampling methods to evaluate the posterior. Since the general three-body problem is chaotic, the orbit of S2 can depend very sensitively on the IMBH orbital parameters. If the two masses interact significantly, S2 will deviate widely from the observed orbit. This leads to a complex posterior distribution that features many local maxima and degeneracies between parameters.
We use dynamic nested sampling (Skilling, 2004, 2006; Higson et al., 2019) as implemented by the dynesty code (Speagle, 2020) to calculate both the posterior distribution and the model evidence. Dynamic nested sampling is a generalisation of the nested sampling algorithm, which dynamically adjusts the number of samples taken in different regions of the parameter space in order to maximise calculation accuracy. We use this approach for two principal reasons. First, nested sampling is better able to capture multi-modal posterior distributions than more traditional Markov chain Monte Carlo methods (see e.g. Ashton et al., 2022). Second, nested sampling directly calculates the evidence, allowing for model comparison (in this case between scenarios with and without an IMBH) as well as parameter constraints.
After some experimentation, we have found that a nested sampling run with at least 8000 live points is needed to reproducibly converge on the posterior distribution. We have found the best numerical performance using the 'rwalk' sampling method and the'multi' bounding distribution (see Feroz et al., 2009; Skilling, 2006, for details). In order to ensure that we have explored the full parameter space, we explored independent runs with different initialisation parameters or negligibly different boundaries as well as a run with 16000 initial live points. We find that all produce a nearly identical posterior distribution.
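A minimal dynesty sketch of this set-up is shown below; the two-parameter chi2 is a toy stand-in for the full orbital model of Section 5.1, and the prior bounds are illustrative.

```python
import numpy as np
from dynesty import DynamicNestedSampler

def chi2(theta):
    # toy stand-in for the orbital chi^2: a Gaussian well
    return float(np.sum(((theta - 1.0) / 0.1) ** 2))

lo = np.array([0.0, 0.0])          # flat-prior lower bounds (cf. Table 1)
hi = np.array([2.0, 2.0])          # flat-prior upper bounds

def prior_transform(u):
    # map the unit hypercube onto the flat priors
    return lo + u * (hi - lo)

def log_likelihood(theta):
    return -0.5 * chi2(theta)      # Gaussian measurement errors

sampler = DynamicNestedSampler(log_likelihood, prior_transform, len(lo),
                               bound='multi', sample='rwalk')
sampler.run_nested(nlive_init=500)  # the production runs use >= 8000 live points
results = sampler.results           # weighted samples; log-evidence in results.logz
```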
To confirm the accuracy of our set-up, we compare the posterior distributions of the S2 orbital parameters as well as the mass and distance of Sgr A\({}^{*}\) with the published values. We recover the published values to within the error bars in both a fiducial run without an IMBH as well as a full run with free IMBH orbital parameters. We also recover the expected degeneracies between the mass and distance of Sgr A\({}^{*}\).
The parameters of our simulation are summarised in Table 1. Along with the mass and the six orbital parameters of the IMBH, we allow the orbital parameters of S2, the mass and distance of Sgr A\({}^{*}\), and a global velocity offset to vary. We chose to
parametrise the initial position of the S2 orbit with the time of pericentre passage \(T_{\rm peri}\), which is well constrained from observations. In order to limit the duplication of IMBH orbits, however, we use the mean anomaly at \(t_{0}\) to parametrise its initial position, which naturally confines the initial conditions to a single orbital period.
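For reference, a small helper (a sketch; the Keplerian period \(P_{\rm orb}\) is assumed to follow from the semi-major axis) converting a pericentre passage time into the mean anomaly at epoch \(t_{0}\):

```python
import numpy as np

def mean_anomaly_at_epoch(t0, T_peri, P_orb):
    """Mean anomaly at epoch t0 given pericentre time T_peri and period P_orb."""
    M = 2.0 * np.pi * (t0 - T_peri) / P_orb
    # restricting M to [0, 2*pi) confines the initial condition
    # to a single orbital period, avoiding duplicated orbits
    return M % (2.0 * np.pi)
```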
Table 1 also shows the initial value and allowed range for each parameter. We use a flat prior across the space (-range, +range). The S2 orbital parameters and Sgr A\({}^{*}\) mass and distance are already tightly constrained by previous fits to (partly) the same data as used here. We adopt values close to GRAVITY Collaboration et al. (2018), with a range scaled from the errors quoted therein. For the IMBH we allow the angular orbital parameters to vary between \(0^{\circ}\) and \(360^{\circ}\). We expect to have the greatest discriminating power for IMBHs that lie within or close to the S2 orbit, as the potential for three-body interactions is thus maximised. However, the minimum time step to accurately calculate orbits decreases as the closest approach distance decreases. This decreased time step increases the computational time for each likelihood evaluation. Given that our results depend on a robust exploration of the parameter space, we therefore choose an initial range of semi-major axes between \(0.01^{\prime\prime}\) and \(0.1^{\prime\prime}\) and limit eccentricities to be less than 0.95. With this set-up, a complete run of the parameter estimation can be completed on a moderately sized cluster (60 cores) within approximately one week. We additionally explore the non-hierarchical scenario in a second run where we allow the IMBH semi-major axis to extend out to \(1^{\prime\prime}\).
## 6 Constraints on IMBHs in the GC from the S2 orbit
From the posterior sampling we obtain the full set of IMBH orbital parameters (see Appendix A). The left panel of Fig. 1 shows the posterior distribution of the IMBH mass and the semi-major axis of its orbit for a prior range of \(a_{\rm i}<0.1^{\prime\prime}\). For all IMBH semi-major axes inside the S2 orbit, we exclude IMBH masses greater than \(4010M_{\odot}\) at the 86% level. At small semi-major axes \(\lesssim 0.05^{\prime\prime}\), these limits are considerably stronger, and IMBHs with a mass greater than \(\approx 2000M_{\odot}\) are very strongly excluded.
We find a global minimum \(\chi^{2}\) value of 219.53 for an IMBH with a mass of 1904 \(M_{\odot}\) and a semi-major axis of 0.031\({}^{\prime\prime}\), compared to a minimum \(\chi^{2}\) of 224.1 for an S2-only model. Since the IMBH model formally fits the data better than the S2-only model, we calculate the evidence for each model by integrating over the posterior distribution. We find that the log-evidence for the two models are essentially identical: log(\(z\)) = 124.80 for S2-only, and log(\(z\)) = 124.79 for the IMBH. We therefore conclude that we cannot distinguish between these models and that our constraints quoted above are indeed upper limits.
The right panel of Fig. 1 shows the posterior distribution of the IMBH mass and the semi-major axis of its orbit for a prior range of \(0.1^{\prime\prime}<a_{\rm i}<1^{\prime\prime}\). Here we find a minimum \(\chi^{2}\) value of 220.54 for an IMBH with a mass of 5842 \(M_{\odot}\) and a semi-major axis of 0.164\({}^{\prime\prime}\). However, the posterior peaks at the upper edge of the prior mass range, implying that we do not generate a valid upper limit on the mass or a constraint on the semi-major axis. These peaks in the posterior correspond to an IMBH on a large orbit that essentially does not interact with S2 over the \(\approx\)20-year timescale probed here, rendering it undetectable with our current method.
We find the shape of the allowed region in the \(M_{\rm i}-a_{\rm i}\) parameter space to be roughly consistent with previous work by Gualandris et al. (2010), with the combination of high mass and small semi-major axis most strongly ruled out. However, we find higher upper mass limits than previous studies. This difference almost certainly stems from the increased sampling density of the parameter space. We find that the level of perturbation of the S2 orbit is extremely sensitive to even those parameters traditionally considered to be nuisance parameters, such as the initial mean anomaly of the orbit.
We also find a larger allowed region of the parameter space than Naoz et al. (2020). We attribute this discrepancy to the fact that the authors in that study approximate the perturbation of the S2 orbit by averaging over the orbital periods of both S2 and the IMBH. As shown in Fig. 2, the relative location of the IMBH along its orbit can play a crucial role in determining to what extent it perturbs the path of S2.
## 7 Constraints on IMBHs in the GC from the S-stars
In the previous section we report that certain IMBHs with specific orbital properties cannot be excluded given the current GRAVITY and SINFONI/KECK/GNIRS data of S2. In order to understand the long-term effects of such an IMBH, we place it among the 40 S-stars with known orbital parameters (see Gillessen et al., 2009, 2017) and evolve the entire system backwards in time. We essentially run the same simulation as defined in Section 5.1, but without the posterior sampling. The question we pose is whether the presence of an intermediate-mass perturber destabilises or even disrupts the S-stars within one million years.
We extract for each of the two scenarios 60 random IMBH orbits from within the 98.8% likelihood contours shown in Fig. 1. Then we evolve the entire system of S2, the 40 S-stars, the IMBH and Sgr A\({}^{*}\) with REBOUND/REBOUNDx for \(10^{6}\) years backwards in time. The stars are considered to be 'active particles' in the simulation (i.e. they have masses): eight early-type stars have precisely determined masses that lie between 7 and 14 \(M_{\odot}\)(see Habibi et al., 2017); the mass of the lesser-known early-type stars were set to 10 \(M_{\odot}\) ; the inferred mass of the population of late-type stars lies between 0.5 and 2 \(M_{\odot}\)(see Habibi et al., 2019), and accordingly we set the known late-type stars to 1 \(M_{\odot}\) ; as the majority of the stellar sample are early-type stars,
\begin{table}
\begin{tabular}{|l|c|l|} \hline Parameter & Starting Point & Boundaries \\ \hline \hline \(M_{0}\) (\(M_{\odot}\) ) & \(4.2\times 10^{6}\) & \(\pm 5\times 10^{5}\) \\ \hline \(R_{0}\) (kpc) & 8.25 & \(\pm\) 1.0 \\ \hline \(v_{\rm s,0}\) (km/s) & 0 & \(\pm\) 5 \\ \hline \(a_{\rm S2}\) (\({}^{\prime\prime}\)) & 0.125 & \(\pm\) 0.02 \\ \hline \(e_{\rm S2}\) & 0.87 & \(\pm\) 0.05 \\ \hline \(i_{\rm S2}\) (\({}^{\circ}\)) & 134 & \(\pm\) 5 \\ \hline \(\Omega_{\rm S2}\) (\({}^{\circ}\)) & 228 & \(\pm\) 5 \\ \hline \(\omega_{\rm S2}\) (\({}^{\circ}\)) & 66 & \(\pm\) 5 \\ \hline \(T_{\rm peri,S2}\) (y) & 2018.4 & \(\pm\) 0.2 \\ \hline \(M_{\rm i}\) (\(M_{\odot}\)) & 5010 & \(\pm\) 5000 \\ \hline \(a_{\rm i}\) (\({}^{\prime\prime}\)) & 0.51 & \(\pm\) 0.5 \\ \hline \(e_{\rm i}\) & 0.48 & \(\pm\) 0.47 \\ \hline \(i_{\rm i}\) (\({}^{\circ}\)) & 180 & \(\pm\) 180 \\ \hline \(\Omega_{\rm i}\) (\({}^{\circ}\)) & 180 & \(\pm\) 180 \\ \hline \(\omega_{\rm i}\) (\({}^{\circ}\)) & 180 & \(\pm\) 180 \\ \hline \(\mu_{\rm i}\) (\({}^{\circ}\)) & 180 & \(\pm\) 180 \\ \hline \end{tabular}
\end{table}
Table 1: Fitting parameters, their initial values, and the boundaries. Not listed are the Sgr A\({}^{*}\) position and proper motion parameters (\(x_{0},y_{0},z_{0},v_{x,0},v_{y,0}\)), which are also allowed to vary.
we also assume a mass of 10 \(M_{\odot}\) for the two stars (S39 and S55) of unidentified spectral type.
Our criterion to define an unstable system is that within 1 Myr at least one star is ejected and moves past the stellar WR/O disc to reach a separation \(r>3000^{\prime\prime}\) (about 120 pc) from Sgr A\({}^{*}\). At this distance the stars are far outside the sphere of influence, which has for Sgr A\({}^{*}\) a radius of about 3 pc, and appear completely dissociated from the S-star cluster. Depending on the strength of interaction with the IMBH, some of the ejected stars may return to Sgr A\({}^{*}\) after increasingly long intervals of time (and on severely modified orbits) which are, however, not covered by our simulation.
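A sketch of this check is given below; it assumes a rebound.Simulation prepared as in Section 5.1, with Sgr A* as the first particle and the S-stars and trial IMBH added after it.

```python
import numpy as np

AU_PER_ARCSEC = 8280.0               # 1'' at R0 = 8.28 kpc, in AU
R_EJECT = 3000.0 * AU_PER_ARCSEC     # ejection threshold, about 120 pc

def is_disrupted(sim, lookback=1.0e6, n_checks=1000):
    """Integrate backwards for `lookback` years and apply the ejection criterion."""
    for t in np.linspace(sim.t, sim.t - lookback, n_checks):
        sim.integrate(t)
        sgr = sim.particles[0]
        for p in sim.particles[1:]:
            dx, dy, dz = p.x - sgr.x, p.y - sgr.y, p.z - sgr.z
            if dx * dx + dy * dy + dz * dz > R_EJECT ** 2:
                return True          # at least one body moved past r > 3000''
    return False
```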
We find that _all_ of our IMBH solutions introduce some degree of instability among the S-stars such that their orbits deviate substantially from the non-IMBH case. Furthermore, the majority of our IMBH solutions fulfil our instability criterion: at least one star (but typically several stars) becomes unbound and is ejected well before one million years have passed. The S-stars that strongly interact with an IMBH are in particular the highly eccentric stars such as S9, S14, and S29 with \(e>0.9\). Only a small fraction of about 5% and 1.6% of the inner and outer IMBH solutions, respectively, does not disrupt the S-stars in 1 Myr. The stability of the S-star cluster thus gives a more stringent constraint than the best-fitting S2 orbit alone.
In our sample of 60 inner IMBH configurations, the only three non-disruptive inner solutions for semi-major axes \(0.01^{\prime\prime}<a_{\rm i}<0.1^{\prime\prime}\) (labelled IMBH\({}_{\rm i1}\), IMBH\({}_{\rm i2}\), IMBH\({}_{\rm i3}\)) have similar orbital parameters: masses below 2000 \(M_{\odot}\), moderate to high eccentricities, and a significant inclination towards the S-plane of at least 60\({}^{\circ}\). We show their orbital properties together with the only non-disruptive outer solution (IMBH\({}_{\rm o1}\)) in Table 2. Interestingly, the only valid outer solution we find has a mass and semi-major axis that falls into the parameter range proposed by Merritt et al. (2009) (i.e. at first glance an IMBH that could potentially thermalise the S-stars in a sufficiently short time).
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Parameter & IMBH\({}_{\rm i1}\) & IMBH\({}_{\rm i2}\) & IMBH\({}_{\rm i3}\) & IMBH\({}_{\rm o1}\) \\ \hline \hline \(M_{\rm i}\) (\(M_{\odot}\)) & 1282 & 1321 & 1130 & 3226 \\ \hline \(a_{\rm i}\) (\({}^{\prime\prime}\)) & 0.032 & 0.033 & 0.075 & 0.435 \\ \hline \(e_{\rm i}\) & 0.73 & 0.69 & 0.49 & 0.56 \\ \hline \(i_{\rm i}\) (\({}^{\circ}\)) & 52.29 & 63.85 & 75.31 & 274.03 \\ \hline \(\Omega_{\rm i}\) (\({}^{\circ}\)) & 155.42 & 161.59 & 291.45 & 95.95 \\ \hline \(\omega_{\rm i}\) (\({}^{\circ}\)) & 195.74 & 171.54 & 156.02 & 180.71 \\ \hline \end{tabular}
\end{table}
Table 2: Example solutions of allowed IMBH parameters that do not disrupt the S-star cluster in 1 Myr. The IMBH\({}_{\rm i1}\) to IMBH\({}_{\rm i3}\) solutions lie inside the S2 orbit, while IMBH\({}_{\rm o1}\) is outside.
Figure 1: Posterior distributions of IMBH orbits. Left: Posterior distribution over the mass and semi-major axis of the IMBH for orbits with a semi-major axis smaller than \(0.1^{\prime\prime}\). The contours correspond to 39%, 86%, and 98.8% (from dark blue to light blue) enclosed likelihood. Right: Same as left, but for orbits extending out to a semi-major axis of \(1^{\prime\prime}\).
Figure 2: \(\chi^{2}\) vs. the initial mean anomaly of the IMBH. All other IMBH parameters are fixed to the values shown in Table 2 and used in Fig. 3.
The three stable inner IMBH orbits are shown in Fig. 3. We note that we have included the adaptive optics positions measured with the NACO instrument in the plot, although these data points were not used for fitting. The three IMBH orbits shown in blue correspond to the IMBH orbital properties given in Table 2 and their residuals are given in Fig. 1. They demonstrate where and how an IMBH could hide in the GC based on the current GRAVITY and SINFONI/KECK/GNIRS data: the IMBH must have a rather low mass and be on a short orbit around Sgr A\({}^{*}\) that is sufficiently inclined towards the orbital plane of S2.
## 8 Discussion
Intermediate-mass black holes are thought to play a vital role in the growth of massive and supermassive BHs. They are thus closely linked to the formation and evolution of their host galaxies and are predicted to be abundant in the local universe (e.g. in young dense stellar clusters and dwarf galaxies). However, IMBHs are notoriously difficult to find and unambiguously identify.4 The presence or absence of an IMBH in the centre of the Milky Way could give important hints to constrain their formation channel and provide valuable input for future electromagnetic and gravitational wave observations with the Extremely Large Telescope (ELT, e.g. Davies et al., 2018) and
Figure 3: Example orbits of allowed IMBHs in the Galactic Centre. The left panel shows the on-sky orbits of S2 and three IMBH solutions around Sgr A\({}^{*}\) (indicated by the cross). The right panels show the time evolution of the RA, DEC, and radial velocity. The solid grey and dashed black curves show the orbit of S2 with and without an IMBH, respectively. The IMBHs are shown in blue and correspond to the parameters given in Table 2. The data points show the last 30 years of observations of S2. The black points correspond to adaptive optics measurements with NACO and early speckle imagery with SHARP. The red points correspond to GRAVITY interferometric measurements. The black and red radial velocity observations correspond, respectively, to SINFONI – KECK and GNIRS spectral measurements of the line-of-sight velocity.
the Laser Interferometer Space Antenna (LISA, planned launch date in 2037; see e.g. Amaro-Seoane et al. 2017), respectively.
In this paper we used the high angular resolution astrometric and spectroscopic data of the star S2 from GRAVITY and SINFONI/KECK/GNIRS, respectively, to assess where in the GC an IMBH could hide. We had a fresh look at the dynamical search for IMBHs in the GC by exploring the full 16-dimensional parameter space of the chaotic three-body problem comprising Sgr A\({}^{*}\), an IMBH, and the star S2. We specifically considered two scenarios, one where the IMBH trajectory lies inside the S2 orbit with a semi-major axis \(0.01^{\prime\prime}<a_{\rm i}<0.1^{\prime\prime}\) and the other where the IMBH trajectory crosses the S2 orbit or lies outside, \(0.1^{\prime\prime}<a_{\rm i}<1^{\prime\prime}\), and calculated for both cases the resulting modified orbital properties for S2. Using dynamic nested sampling, we explored the full set of parameters and found for each scenario the most likely locations for an IMBH (see Fig. 1).
We found that for very specific combinations of orbital parameters, in particular for certain IMBH orientations and pericentre passage times, high IMBH masses could be located among the S-stars. This happens for IMBHs that stay sufficiently far from S2 during their closest approach to Sgr A\({}^{*}\) so as not to measurably affect the orbit at all. We therefore analysed our valid solutions further and selected for each scenario 60 random solutions from within the 98.8% likelihood contours (see Fig. 1). These IMBHs were placed among the 40 stars of the S-star cluster and evolved backwards in time for one million years. Moreover, we calculated for each set of 60 solutions the residuals between the data and the models.
Based on the results from the optimisation, the stability analysis and the residual calculation, we arrive at the conclusion that although we find viable fits to the data that suggest an IMBH could be present for specific parameter combinations, the majority of these solutions do not withstand the reality check and would disrupt the S-star cluster in less than a million years or induce a precession in the orbit of S2 beyond the observed one. We conclude the following:
* Current GRAVITY and SINFONI/KECK/GNIRS data do not formally require the presence of an IMBH.
* IMBHs on orbits that cross the S2 orbit or lie outside the S2 orbit among the other S-stars are disfavoured as they typically disrupt the S-star cluster in less than one million years (only 1.6% of the solutions with \(0.1^{\prime\prime}<a_{\rm i}<1^{\prime\prime}\) are stable).
* A low-mass IMBH with M \(<2000\)\(M_{\odot}\) could hide inside the S2 orbit if its orbit is sufficiently inclined relative to S2 (only 5% of the solutions with \(0.01^{\prime\prime}<a_{\rm i}<0.1^{\prime\prime}\) are stable, all of them low-mass IMBHs).
We conclude the following from the IMBH constraints on the population(s) of stars and stellar remnants in the GC: A spherical distribution of stellar-mass black holes (SBHs), neutron stars (NSs), and/or white dwarfs (WDs) located as a dark cluster among the S-stars would be affected by an IMBH in a very similar way to the S-stars. The bodies on eccentric orbits would most likely be ejected, leaving preferentially the compact objects on low-eccentricity orbits behind. The total mass of such a dark (extended) cluster has been constrained to about 15000 \(M_{\odot}\) within the S-star cluster. Conversely, in the absence of an IMBH among the S-stars, which, based on our analysis, is the preferred case, a dark cluster of SBHs, NSs, and/or WDs could show a wide range of eccentricities and orbital inclinations, and thus exhibit morphological similarities to the S-star cluster.
The high-precision GRAVITY astrometric measurements span at present about half of the S2 orbit. Much stronger constraints on the properties of IMBHs can be obtained once GRAVITY has measured a full S2 orbit. Already the current SINFONI/KECK/GNIRS data, which cover, albeit sparsely, the 2002 pericentre passage, hint that two consecutive pericentre passages will be invaluable to assess the likelihood of a low-mass IMBH on a \(<0.1^{\prime\prime}\) orbit. After the upcoming closest approach of S2 to Sgr A\({}^{*}\) in 2034, the data will allow us to put stronger constraints on a single IMBH companion of Sgr A\({}^{*}\) as well as its extended mass (see GRAVITY Collaboration et al. 2022; Heissel et al. 2022; Rubilar and Eckart 2001). Moreover, there are now several other stars with complete or near-complete orbits that can already serve in the coming few years as additional precision probes. Knowing whether or not the nuclear cluster in the GC hosts an IMBH will in turn put constraints on the formation processes of IMBHs. Moreover, constraints on the mass distribution in the GC will also be of value to LISA, which will be able to measure gravitational waves of moving masses in the GC.
###### Acknowledgements.
We are very grateful to our funding agencies (MPG, ERC, CNRS [PNCG, PNGRAM], DFG, BMBF, Paris Observatory [CS, PhyFOG], Observatoire des Sciences de l'Univers de Grenoble, and the Fundação para a Ciência e a Tecnologia) and to ESO. We especially thank the excellent and in every way amazing ESO/Paranal staff, as well as the scientific and technical staff members in our institutions who helped to make GRAVITY and SINFONI a reality and the observations a success. PG and VC were supported by Fundação para a Ciência e a Tecnologia, with grants reference SFRB/BSA/14/24940/2018, UIDB/00099/2020 and PTDC/FIS-AST/7002/2020. S.G. acknowledges the support from ERC starting grant No. 306311. F.E. acknowledges the support from ERC synergy grant No. 610058. The GNIRS spectra were obtained at the international Gemini Observatory, a program of NSF's NOIRLab, managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation (NSF) on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigación y Desarrollo (Chile), Ministerio de Ciencia, Tecnología e Innovación (Argentina), Ministério da Ciência, Tecnologia, Inovações e Comunicações (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). This work was enabled by observations made from the Gemini North telescope, located within the Maunakea Science Reserve and adjacent to the summit of Maunakea. We are grateful for the privilege of observing the Universe from a place that is unique in both its astronomical quality and its cultural significance.
|
2308.06502 | Three Ways of Using Large Language Models to Evaluate Chat | This paper describes the systems submitted by team6 for ChatEval, the DSTC 11
Track 4 competition. We present three different approaches to predicting
turn-level qualities of chatbot responses based on large language models
(LLMs). We report improvement over the baseline using dynamic few-shot examples
from a vector store for the prompts for ChatGPT. We also analyze the
performance of the other two approaches and report needed improvements for
future work. We developed the three systems over just two weeks, showing the
potential of LLMs for this task. An ablation study conducted after the
challenge deadline shows that the new Llama 2 models are closing the
performance gap between ChatGPT and open-source LLMs. However, we find that the
Llama 2 models do not benefit from few-shot examples in the same way as
ChatGPT. | Ondřej Plátek, Vojtěch Hudeček, Patricia Schmidtová, Mateusz Lango, Ondřej Dušek | 2023-08-12T08:34:15Z | http://arxiv.org/abs/2308.06502v1 | # Three Ways of Using Large Language Models to Evaluate Chat
###### Abstract
This paper describes the systems submitted by _team6_ for ChatEval, the DSTC 11 Track 4 competition. We present three different approaches to predicting turn-level qualities of chatbot responses based on large language models (LLMs). We report improvement over the baseline using dynamic few-shot examples from a vector store for the prompts for ChatGPT. We also analyze the performance of the other two approaches and report needed improvements for future work. We developed the three systems over just two weeks, showing the potential of LLMs for this task. An ablation study conducted after the challenge deadline shows that the new Llama 2 models are closing the performance gap between ChatGPT and open-source LLMs. However, we find that the Llama 2 models do not benefit from few-shot examples in the same way as ChatGPT.
## 1 Introduction
This paper describes the systems submitted by _team6_ for ChatEval, the DSTC 11 Track 4 competition aimed at evaluating open-domain chat.1 We participated in Task 2, which focuses on evaluating multiple criteria on the level of individual dialogue turns. The task of evaluating responses in a chat is challenging because it requires an understanding of the interlocutor's roles (pragmatics), the conversation's context, and the response's meaning (semantics). At the same time, the conversations are often ungrammatical (Rodriguez-Cantelar et al., 2023) and vary in style (Zhang et al., 2018). The commonly used metrics, such as BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), or BERTScore (Zhang et al., 2019), are based on comparison to human references and thus correlate poorly with human judgments on the turn-level, as they penalize many correct responses for a given chat context (Zhao et al., 2017). At the same time, human evaluation is expensive and time-consuming. Previous referenceless metrics based on neural networks and language models still do not reach sufficient correlations with human judgements (Zhang et al., 2020; Lowe et al., 2017).
Footnote 1: Results & task description at chateval.org/dstc11. Our experimental code is available at github.com/oplatek/chateval.llm.
In our work, we followed up on the recent development of pretrained Large Language Models (LLMs) with instruction finetuning (Brown et al., 2020; Raffel et al., 2020), which have been found to be capable evaluators in machine translation, summarization as well as dialogue (Kocmi and Federmann, 2023; Liu et al., 2023). Therefore, we applied LLMs and specific prompting to elicit ratings for the multiple qualities evaluated in DSTC11 Track 4 Task 2: appropriateness, content richness, grammatical correctness, and relevance. We present three different systems used for our three submissions, all of which are based on LLMs and few-shot prompting: (1) We evaluate a straightforward approach with manually designed fixed prompts for off-the-shelf open LLM checkpoints. (2) We train a simple feed-forward regression neural network (FFN)
Figure 1: The architecture of the vector store approach with a LLM. During training, we construct the vector store from embedded annotated dialogues. At inference time, the input dialogue is embedded, and most similar examples from the vector store are retrieved to be included in the prompt.
on top of frozen LLM embeddings to predict the turn-level metric scores. (3) We used the ChatGPT API and few-shot examples retrieved dynamically from the development set to improve the prompting performance. As no data annotated with the target metrics were available for the challenge, we heuristically mapped existing annotations from the development set to the target metrics, and we manually annotated a small rehearsal dataset for hyperparameter search.
Based on the human annotations released after the challenge finished, our _team6_ achieved second place thanks to our third method, dynamically prompted ChatGPT with few-shot examples. This approach showed that LLM prompting is a viable option for prototyping chat evaluation. However, the two other methods we explored scored worse: open LLMs with fixed prompts generally showed poor performance, and the regression FFN worked well on the development set but did not generalize well to the test set.
## 2 Task & Data
The goal of the DSTC11 Track 4 Task 2 was to predict several turn-level metrics automatically on the test set. For each dialogue turn, considering the preceding dialogue history, the participants were to submit a system to predict the score of the target metrics, defined by the organizers as:
* _Appropriateness_ – The response is appropriate given the preceding dialogue.
* _Content richness_ – The response is informative, with long sentences including multiple entities and conceptual or emotional words.
* _Grammatical correctness_ – Responses are free of grammatical and semantic errors.
* _Relevance_ – Responses are on-topic with the immediate dialogue history.
Table 1 shows chat conversations from the rehearsal dataset with the turn-level metric annotations.
The organizers provided the participants with training, development, and test sets (Rodriguez-Cantelar et al., 2023), each coming from different domains and annotated with different metrics:
* _Training set_ – consists of 390k dialogues, annotated with sentiment and toxicity labels. This set was not used in our experiments at all, since our goal was to select LLMs that already perform well with no finetuning.
* _Development set_ – consists of 24 datasets, some annotated with dataset-specific metrics. For our experiments, we created a heuristic mapping to the target metrics on a subset of the development set (see Section 3).
* _Test set_ – consists of 3,470 dialogues and 130k turns, annotated with the target metrics. The data was only published in an anonymized form and at the end of the challenge, with no annotations or metadata, so that challenge participants could produce their model outputs. The annotations were published after the challenge was finished.
* _Rehearsal set_ – a set of 156 turns collected in the same way as the test set, released earlier than the test set. We manually annotated this set with the target metrics (see Section 3) and used the result for hyperparameter search.
The submitted systems were benchmarked for the quality of their ranking using the Spearman correlation coefficients (SCC) (Zar, 2005) computed between the predicted scores and the human judgments. As a secondary measure, the Pearson correlation coefficients (PCC) (Freedman et al., 2007) were used to evaluate the correlation. The measures were computed for each of the target metrics separately. The overall submissions' ranking was determined using the average of the four SCCs.
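For reference, the benchmark computation reduces to a few library calls; the sketch below (on synthetic dummy scores) shows how the per-metric SCC and PCC and the averaged ranking score could be obtained with SciPy.

```python
# Sketch of the challenge scoring: per-metric Spearman (primary) and Pearson
# (secondary) correlations between predictions and human judgments, plus the
# average SCC used for the overall ranking. Scores here are synthetic dummies.
import numpy as np
from scipy.stats import pearsonr, spearmanr

metrics = ["appropriateness", "content_richness", "grammar", "relevance"]
rng = np.random.default_rng(0)
human = {m: rng.integers(1, 6, size=200).astype(float) for m in metrics}
predicted = {m: human[m] + rng.normal(0, 1, size=200) for m in metrics}

sccs = []
for m in metrics:
    scc, _ = spearmanr(predicted[m], human[m])
    pcc, _ = pearsonr(predicted[m], human[m])
    sccs.append(scc)
    print(f"{m}: SCC={scc:.3f} PCC={pcc:.3f}")
print("overall ranking score:", np.mean(sccs))
```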
## 3 Data Preprocessing
Since no information was provided on how the individual development dataset metrics relate to the target dialogue metrics, we built a heuristic to obtain target metric scores. The heuristic maps one or more dataset-specific metrics onto each target metric using a linear combination, chosen based on individual descriptions from the literature.2 Using the development set and the heuristic, we created a supervised dataset and split it into training and development splits. We used this _development dataset_ for model selection and supervised training when developing the three systems described in Section 4.
Footnote 2: See the line 354 for the turn metric mapping for different datasets.
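To illustrate the shape of this heuristic, the sketch below linearly combines whatever source annotations are available into target scores; the annotation names and weights shown are illustrative assumptions, not the exact mapping we used.

```python
# Sketch of a heuristic mapping from dataset-specific annotations to the
# four target metrics. Names and weights below are placeholders.
MAPPING = {
    # target metric: [(source annotation, weight), ...]
    "appropriateness": [("overall", 0.5), ("making_sense", 0.5)],
    "content_richness": [("interesting", 1.0)],
    "grammar": [("understandable", 1.0)],
    "relevance": [("relevant", 1.0)],
}

def map_to_target(turn_annotations: dict) -> dict:
    """Linearly combine the available source annotations into target scores."""
    scores = {}
    for target, combo in MAPPING.items():
        available = [(turn_annotations[src], w) for src, w in combo
                     if src in turn_annotations]
        if available:
            total_w = sum(w for _, w in available)
            scores[target] = sum(v * w for v, w in available) / total_w
    return scores

print(map_to_target({"overall": 4, "making_sense": 5, "relevant": 3}))
```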
During our experiments, we struggled to find representative labels and input data which could be used as a development set. Therefore, we decided to annotate the additional 156 turns from the _rehearsal set_ with the target metrics described in Section 2. We used this data to find our submitted systems' optimal hyperparameters. We assumed that this data came from the same distribution as the test set, but this later proved clearly not to be the case, as seen in Figure 2.
Note that we did not use the training set at all.
## 4 Submitted Systems
Inspired by Kocmi and Federmann (2023), we used pre-trained LLMs with prompts for predicting the individual metrics. We started with the simplest approach possible and manually designed the prompts.
### Method 1: Simple Prompting
We experimented with prompting GPT-NeoX-20B Black et al. (2022), OPT-30B Zhang et al. (2022), and TK-Instruct-11B Wang et al. (2022).3 We tried several prompt templates for each model and selected the best-performing one on the development set and the manually annotated rehearsal set. The templates were slightly adapted for each model to control for the deviations in model pretraining or instruction finetuning procedures, i.e., the wording of instructions or tags denoting a user-system interaction.
Footnote 3: The numbers identify each exact model checkpoint by the number of parameters.
We used templates evaluating a single quality of each turn (i.e., calling the LLM four times to predict all metrics). We focus on a single-metric template because most of the open-source models
| **Dialogue Turns** | **Appr.** | **Rich.** | **Gram.** | **Rel.** |
|---|---|---|---|---|
| My boss gave me a 10 raise just last month And it was a nice surprise | 5 | 5 | 5 | - |
| It’s great and he might think you’re doing a great job | 5 | 5 | 5 | 5 |
| We have always been very nice He has always been very supportive of me | 4 | 5 | 5 | 5 |
| That’s a good thing | 4 | 3 | 5 | 4 |
| do you have any pets? | 5 | 4 | 3 | - |
| I am retired so I love to travel so pets would slow me down | 4 | 4 | 3 | 4 |
| I understand that my idea of traveling is a hot hot bubble bath | 3 | 4 | 2 | 2 |
| Yes I have dogs and cats I like to take them with me on trips | 2 | 4 | 2 | 2 |

Table 1: Two examples of complete conversations from the rehearsal set, annotated with turn-level metrics: appropriateness, content richness, grammatical correctness, and relevance. The context for each turn consists of the previous turns (rows) in the conversation. The second conversation at the bottom of the table shows an inappropriate response in the last turn, because it contradicts previous responses of the system.
Figure 2: The histogram of predicted and human-annotated scores for _appropriateness_ of a reply, on the test set (above) and on our manually annotated rehearsal set (below). Predicted scores are from ChatGPT with dynamic few-shot examples (see Section 4.3). Note that the rehearsal set is not representative of the test set – compare the blue bars representing the human-annotated scores. Interestingly, ChatGPT-predicted scores on the test set are not concentrated at the extremes, unlike on the rehearsal set.
have trouble sticking to the desired output format when asked to generate a structured response with all four quality scores. Our templates included two hardcoded examples from the DailyDialog set (Li et al., 2017), one of the provided development datasets.
We developed the prompt templates iteratively. Every time we rephrased the prompt templates, we evaluated them on the DailyDialog dev set, which is part of the challenge dev set.
### Method 2: Feed-Forward Regressor on Top of LLMs
Our second method attempts to solve the problem that the prompted LLMs sometimes produce malformed output. We assumed that LLMs extract relevant features even when the decoder produces a malformed one-best hypothesis. Therefore, we aimed to use LLM contextual embeddings as features for a simple regressor. However, instead of using the LLM's output directly, we implemented a simple embedding extractor on top of the LLM, and we trained a regression model to predict all four scores based on the embeddings. We use global max and average pooling over decoder layers and time steps of the decoded output to obtain the prompted response embedding.
We designed the prompts so the LLMs' replies contain information about all four metrics, so a single call to the LLM is sufficient to obtain all four scores. At the same time, we designed the prompt so the LLM replies are as short as possible. To train the regressor, we used our heuristically mapped development data (see Section 3). We trained four simple feed-forward networks (FFNs), each modeling one of the target metrics using the same input embeddings. See Figure 3 for the architecture of the FFN.
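A minimal PyTorch sketch of this regression head is given below; the hidden sizes match the setup in Section 5.2, while the exact pooling layout is an assumption consistent with the description above.

```python
# Sketch of one FFN head: global max- and average-pooling over the decoder
# layers and time steps are concatenated and fed to a small feed-forward
# network that regresses a single turn-level quality score.
import torch
import torch.nn as nn

class MetricFFN(nn.Module):
    def __init__(self, emb_dim: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (batch, layers, time, emb_dim) cached decoder hidden states
        pooled = torch.cat([states.amax(dim=(1, 2)),
                            states.mean(dim=(1, 2))], dim=-1)
        return self.net(pooled).squeeze(-1)  # one scalar score per example

scores = MetricFFN(emb_dim=512)(torch.randn(8, 4, 16, 512))  # dummy input
```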
### Method 3: Dynamic Few-Shot Examples from a Vector Store
The previous two approaches used fixed few-shot examples. However, the performance of the in-context LLM learning can be improved by providing examples that are contextually similar to the instance being evaluated (Brown et al., 2020). We, therefore, implement a vector store with a dynamic few-shot example selection. First, we take dialogues from the development set relevant to a given metric (based on our mapping described in Section 3), and compute turn-level embeddings. These are then used as keys in a vector store optimized for similarity search. At runtime, we retrieve a set of examples based on their similarity to the input and include them in the LLM prompt. See Figure 1 for a detailed overview of the vector store architecture.
## 5 Experiments
We experimented with the three methods described in Section 4. First, we experimented with the Simple Prompting method using the open-source LLMs (Section 4.1). Based on the results, we started two independent experiments: Section 5.2 describes the FFN training, and Section 5.3 describes the development of the vector store which we used with the ChatGPT API. For all three methods, we used the _rehearsal_ set to select the best-performing model-template combination and hyperparameters.
### Simple Prompting Submission
For our baseline submission, we selected the best-performing model-template combinations for each quality separately and then combined the results. Appropriateness and Relevance were generated by OPT-30b (Zhang et al., 2022). Content Richness was generated by TK-Instruct (Wang et al., 2022). As the outputs for "Grammatical Correctness" were malformed in most cases, we replaced the outputs with randomly generated scores.
### FFN Fine-Tuning setup
We trained the FFN using two layers with 1024 hidden units and ReLU activation with batch size 2048 and learning rate 5e-5. We used the log-cosh (Saleh and Saleh, 2022) loss function. We split the original development set into training and validation sets. We trained until early stopping based on the validation set using SCC for _appropriateness_ as a stopping criterion. We extracted the embeddings from the prompted LLMs on the
Figure 3: The architecture of the FFN trained on top of embeddings of LLM responses.
training and validation sets and cached them. We used the same LLM checkpoints as in the simple prompting method. We only used dev datasets whose annotations mapped to all four target metrics (see Section 3): _DailyDialog_ (Li et al., 2017), _Fed-Turn_ (Mehri and Eskenazi, 2020), _Persona-See_ (See et al., 2019), and _Persona-Usr_ (Mehri and Eskenazi, 2020).
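For reference, the log-cosh objective used above can be implemented in a numerically stable way; the following sketch (PyTorch is our assumed framework here) uses the identity \(\log\cosh x = x + \mathrm{softplus}(-2x) - \log 2\).

```python
# Sketch of a numerically stable log-cosh regression loss, a drop-in
# replacement for MSE when training the FFN heads described above.
import math
import torch
import torch.nn.functional as F

def log_cosh_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    x = pred - target
    # log(cosh(x)) = x + softplus(-2x) - log(2), stable for large |x|
    return (x + F.softplus(-2.0 * x) - math.log(2.0)).mean()

loss = log_cosh_loss(torch.randn(32), torch.randn(32))  # dummy usage
```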
### Vector Store Implementation
We use FAISS Johnson et al. (2019) to implement vector storage that can perform effective similarity-based retrieval. To convert the dialogues into embeddings that are saved to the vector store, we used the MPNet Song et al. (2020) pretrained sentence representation model Reimers and Gurevych (2019). We store the same development datasets in the vector store that we used for FFN training (Section 5.2), with the heuristically mapped scores for all four metrics.
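A sketch of this retrieval pipeline is given below; the concrete index type (inner product over L2-normalised embeddings, i.e. cosine similarity) and the all-mpnet-base-v2 checkpoint name are assumptions, as the text above only specifies FAISS and MPNet sentence embeddings.

```python
# Sketch of the vector store: embed annotated turns with an MPNet sentence
# encoder, index them with FAISS, and retrieve the k most similar turns at
# inference time to use as few-shot examples in the prompt.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-mpnet-base-v2")

def build_store(annotated_turns: list[str]) -> faiss.IndexFlatIP:
    emb = encoder.encode(annotated_turns, normalize_embeddings=True)
    index = faiss.IndexFlatIP(emb.shape[1])
    index.add(np.asarray(emb, dtype=np.float32))
    return index

def retrieve(index, annotated_turns: list[str], query: str, k: int = 2) -> list[str]:
    q = encoder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype=np.float32), k)
    return [annotated_turns[i] for i in ids[0]]  # few-shot examples for the prompt
```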
We used the prompt template in Figure 4 with dynamically retrieved examples using vector store for the prompt and ChatGPT as the prompted LLM.4
Footnote 4: We used the gpt-3.5-turbo-0301 API version.
## 6 Results & Discussion
We report positive findings related to Method 3 (Section 4.3), but we also report lessons learned from implementing the other two methods and, in general, using the data provided for the challenge. First, we summarize observations from our use of the data (Section 6.1). Then we report negative results from the simple prompting and FFN fine-tuning (Sections 6.2 and 6.3, respectively). We also report our best results from the vector store (Section 6.4) and discuss what our best model in the challenge is capable of evaluating. Finally, we add an ablation study in Section 6.5 performed after the challenge was complete, comparing few-shot capabilities of ChatGPT with the newly released Llama 2 model.
We are aware that LLMs are trained on large datasets, some of which (e.g., ChatGPT) are not public. However, due to the novelty of the test set Rodriguez-Cantelar et al. (2023), we believe that the test set has not leaked to their training set.
### Dataset Analysis
The test set contains dialogue samples from various datasets unseen in the development and rehearsal sets: _BlenderBot3, ChatGPT, DSTC10Persona, DSTC10Topical, ESL, GPT3, NCM_. The distribution of the test set was unknown to the participants, and most of the data comes from the unseen _BlenderBot3_ and _ChatGPT_ datasets. We observed that scores for individual metrics were not normalized across the datasets, as the _ESL_ and _NCM_ datasets had a range of 0-1, while the other datasets had a range of 1-5.
This discrepancy in data distributions most likely resulted in our model selection and hyperparameter search on the rehearsal dataset being detrimental to the final performance of our systems. See the mismatch in the distribution of our own manual annotations on the rehearsal set and human annotation on the test set in Figure 2. Furthermore, we argue that we could have achieved better results if we ran our model selection not only on the appropriateness metric but optimized for all four metrics.
### Simple Prompting is Fragile
In our informal experiments with simple prompting, we noticed that instruction-tuned LLM checkpoints produce results with intended formats more reliably. We also experimented with templates evaluating all four metrics using a single prompt. However, single-quality templates were generally more reliable and yielded outputs adhering to the expected formats more often. We consistently observed that adding examples to the templates improved the reliability of the outputs.
Manual development of prompts, which relies on observing a small set of examples, was impractical for a diverse development dataset. We frequently developed a promising prompt only to discover that the model produces malformed outputs when run on conversations from a different system. The typical problem was that LLMs would interpret part of
Figure 4: The prompt template used with few-shot dynamic example retrieval and ChatGPT has a placeholder for the _examples_. Each _example_ contains the turn _response_ together with its _dialogue context_ and the ground truth _appropriateness score_. The other methods used a similar template, with only a slight rewording.
the input conversation as an instruction. Consequently, instead of replying with the metric score, the model replied with a next turn fitting the conversation prefix. Whenever the model did not respond in the desired format, we used an uninformed response score of 3. The number of uninformed responses was the largest factor in the overall lower score of the simple prompting method.
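The fallback described above amounts to a small parsing routine, sketched below; the regular expression is illustrative, while the default score of 3 is the one we used.

```python
# Sketch of the score-parsing step: extract an integer rating 1-5 from the
# LLM reply, falling back to the uninformed score 3 on malformed output.
import re

def parse_score(reply: str, fallback: int = 3) -> int:
    match = re.search(r"\b([1-5])\b", reply.strip())
    return int(match.group(1)) if match else fallback

assert parse_score("Score: 4") == 4
assert parse_score("Sure! Here is my reply to the user...") == 3
```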
### FFN is Fast but Lacks Normalization
The training of the FFN is very efficient, because we run the LLMs only once, in inference mode. Note that the training was faster than extracting the embeddings from the LLMs, and a single FFN adds negligible computational and memory costs at inference time. The FFN regression model solved the problem of LLMs producing malformed outputs. However, our submission suffered from unnormalized scores in the different development dataset splits, and the model performed poorly on the test set. The results of our FFN training in Method 2 were thus influenced by incorrect scaling of the target metric values: For example, the _FedTurn_ scores lie in the range \([0,2.2]\) instead of \([1,5]\).
### Are we Comparing Systems or Turns?
Method 3 (Section 4.3) was the most successful in our experiments. We argue that we could have achieved even better results had we performed model selection not only on the appropriateness metric but optimized for all four metrics. We also argue that the data mismatch between the rehearsal and test sets (see Figure 2) was detrimental to the performance of the systems. Despite that, we placed second as a team, improved upon the baseline, and are relatively close to the best system in terms of the overall ranking. See Table 2 for a comparison of the systems based on the average of the SCC over the four metrics.
We observed that our best system easily contrasts responses from different datasets, but does not distinguish well among turns coming from the same dialogue system and the same dataset. The SCC scores in Table 3 show that the score for the whole test set is better than for most of the individual subsets based on different source datasets.
### Revisiting Few-Shot Prompts in Ablation
We present an additional ablation study, which we ran after the challenge was completed and evaluated on the _Appropriateness_ quality. Using both ChatGPT and the newly released Llama 2 models (Touvron et al., 2023), we investigate the influence of the few-shot examples on the performance of the models.5 In order to do so, we made two changes to the prompts: (1) we designed a single prompt template that can be used both with and without few-shot examples, (2) we normalized the use of newlines at the end of the prompt and in the few-shot examples, which improved performance. We also (3) further improved the prompt by iterative experiments on the DailyDialog development set.
Footnote 5: We used the Llama2-7b-chat-hf checkpoint ([https://huggingface.co/meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)) and the gpt-3.5-turbo-0613 version of the ChatGPT API. The gpt-3.5-turbo-0301 version was used for the _Porig_ experiments with the original prompts from our submission.
We label the improved prompt (with changes 1+2+3) as _Pimpr_; we compare to a prompt closer to the original (with only changes 1+2 applied) as _Pnorm_. We then compared both ChatGPT and Llama 2 using both prompts _Pimpr_ and _Pnorm_ in three variants: (a) base without few-shot examples, (b) with two static examples (labeled _-fix-2egs_),
| **System** | **Avg. Spearman** |
|---|---|
| Baseline (Zhang et al., 2020) | 0.3387 |
| Winning submission (_team4_) | 0.4890 |
| Ours: Simple Prompting | 0.0807 |
| Ours: FFN Regressor | 0.1742 |
| Ours: ChatGPT + Vector Store | 0.4190 |

Table 2: The overall performance of the baseline, the challenge-winning submission, and our three submissions.
| **Dataset** | **Appropriateness** | **Relevance** | **Content richness** |
|---|---|---|---|
| test-all | 0.488 | 0.361 | 0.452 |
| blenderbot3 | 0.383 | 0.287 | 0.303 |
| chatGPT | 0.122 | 0.060 | 0.181 |
| dstc10persona | 0.803 | 0.968 | 0.216 |
| dstc10topical | 0.300 | 0.401 | 0.200 |
| esl | 0.199 | - | - |
| GPT3 | 0.091 | 0.007 | 0.242 |
| NCM | 0.061 | - | - |

Table 3: The performance of our best system as Spearman correlation coefficients on the test set splits for the metrics _appropriateness_, _relevance_, and _content richness_. The first row (test-all) reports the results on the whole dataset. For brevity, we do not report _grammatical correctness_ per split; it is 0.402 for the whole test set. The test set contains conversations from different systems, including ChatGPT and GPT3.
and (c) with two dynamically retrieved examples using the vector store (labeled _dyn-2egs_; cf. Section 4.3). We also include a comparison to the original ChatGPT with the prompt used in our model submitted to the challenge (labeled as _Porig_, see Section 5.3). Finally, we ran an experiment with variants of _Porig_/_Pnorm_ where we prompted the model to evaluate all four qualities in a single prompt (labeled as _-All_).
Our results in Table 4 suggest that it pays off to design the prompt carefully, and that it is beneficial to use few-shot examples in the prompts. However, using dynamic examples from the vector store instead of fixed ones does not bring further improvements. We can see from the ChatGPT results that our prompt improvements had an effect, and we were able to improve substantially over our challenge submission. There is a notable gap between ChatGPT and Llama 2; on the other hand, the Llama 2 results are much better than any of our previous results with open models (see Sections 6.2 and 6.3). We observe that predicting four qualities at once is not as good as predicting appropriateness only. However, it still seems an attractive alternative, since such a template is roughly four times cheaper to run than predicting the four qualities individually. The percentage of failures for all reported systems is lower than 1% and thus does not play a significant role in the evaluation.
## 7 Related Work
Recent works in chat evaluation focus on referenceless approaches, as these do not suffer from penalizing appropriate responses based on surface dissimilarity to a single human-written reference response Liu et al. (2017); Lowe et al. (2017). Here, Lowe et al. (2017) trained a neural network from scratch on relatively large annotated data to predict a single score, but this approach was later found to generalize poorly, even to basic data perturbations, let alone other datasets Sai et al. (2019); Lowe (2019).
Later works leveraged pretrained language models for better generalization abilities, such as BERT Zhang et al. (2020); Gao et al. (2020), RoBERTa Mehri and Eskenazi (2020), GPT-2 Sinha et al. (2020) or DialoGPT Mehri and Eskenazi (2020). These metrics are trained on human-labeled sets
| **System** | **Prompt** | **Spearman Appr.** | **(%fail)** |
|---|---|---|---|
| Llama 2 | _Pimpr_ | 0.3310 | (0.04%) |
| Llama 2 | _Pimpr_-fix-2egs | 0.3756 | (0.56%) |
| Llama 2 | _Pimpr_-dyn-2egs | 0.3683 | (0.36%) |
| ChatGPT 3.5-turbo-0613 | _Pimpr_ | 0.4536 | (0.01%) |
| ChatGPT 3.5-turbo-0613 | _Pimpr_-fix-2egs | 0.6136 | (0.00%) |
| ChatGPT 3.5-turbo-0613 | _Pimpr_-dyn-2egs | 0.5962 | (0.00%) |
| Llama 2 | _Pnorm_ | 0.3914 | (0.98%) |
| Llama 2 | _Pnorm_-fix-2egs | 0.3551 | (0.06%) |
| Llama 2 | _Pnorm_-dyn-2egs | 0.3756 | (0.65%) |
| Llama 2 | _Pnorm_-All | 0.3710 | (0.01%) |
| ChatGPT 3.5-turbo-0613 | _Pnorm_-dyn-2egs | 0.5462 | - |
| ChatGPT 3.5-turbo-0613 | _Pnorm_-fix-All | 0.5334 | - |
| ChatGPT 3.5-turbo-0301 | _Porig_-dyn-2egs | 0.4880 | - |
| ChatGPT 3.5-turbo-0301 | _Porig_-fix-All | 0.3616 | - |

Table 4: Ablation study with the ChatGPT and Llama 2 7B Chat models for the _Appropriateness_ quality (see Section 6.5 for an explanation of the prompt variants). "%fail" indicates the percentage of LLM outputs that failed to parse due to incorrect format.
Figure 5: The two scatterplots show the correlation of ground truth turn-level scores for appropriateness and the prediction of our best system, on the whole test set (left) and for the test set turns generated by ChatGPT (right). Our system shows a relatively good correlation over the whole test set and evaluates the ChatGPT results correctly as high-quality, but it fails at distinguishing the quality of individual ChatGPT turns.
of system outputs based on popular open-domain datasets, similar to the ChatEval development data. Some of them use additional data augmentation techniques, such as self-training Zhang et al. (2022). While they do achieve good correlations on some datasets, generalization with respect to unseen datasets is still not guaranteed Yeh et al. (2021).
Sai et al. (2021) stressed the importance of predicting multiple qualities, such as fluency and appropriateness, in dialogue evaluation. At the same time, they asserted that metrics should be sensitive enough to distinguish between similar responses. Using simple text perturbations targeting the individual qualities, they showed that most existing metrics are not robust enough.
Two very recent works, closely related to ours, propose the usage of instruction-tuned LLMs to evaluate generated text in various tasks like summarization and dialogue response generation Liu et al. (2023), or machine translation Kocmi and Federmann (2023). Both approaches use in-context learning and multiple prompting techniques to obtain scalar metric predictions or candidate rankings. They achieved good results and correlations with human judgments. However, they used only closed models for the evaluation and did not experiment with few-shot prompting using relevant examples.
## 8 Conclusion
We presented three simple approaches to using LLMs for turn-level chat evaluation. We achieved promising results using ChatGPT prompting with few-shot example retrieval from a vector store, and ranked as the second-best team. Based on the results of our best system, we argue that chat turn evaluation systems based on current state-of-the-art LLMs are usable only for system-level evaluation but not for segment-level evaluation, i.e., they cannot distinguish between the quality of individual turns, especially for outputs of high-quality latest systems based on LLMs such as ChatGPT and GPT3.
We observed that LLMs are sensitive to the prompts and few-shot examples and cannot be used out-of-the-box for chat evaluation. We also reported on implementing a simple regressor on top of embeddings obtained from the prompted LLM decoder. We attribute its poor performance to an error in our data preparation, namely the incorrect scaling of the target metric values.
We also presented an ablation study that investigated the influence of the few-shot examples on the performance of LLMs. We found that few-shot examples help the LLMs to generalize better to unseen data, especially with respect to fitting the desired output format. However, using examples dynamically obtained from the vector store instead of hand-picked fixed examples did not bring any additional improvements.
We reached a new best Spearman correlation coefficient of 0.6136 for appropriateness with ChatGPT and fixed few-shot examples in our ablation study. In addition, the Llama 2 open model used in our ablation showed significant improvements over the challenge baseline.
## 9 Acknowledgements
This research was supported by Charles University projects GAUK 40222 and SVV 260575 and by the European Research Council (Grant agreement No. 101039303 NG-NLG). It used resources provided by the LINDAT/CLARIAH-CZ Research Infrastructure (Czech Ministry of Education, Youth, and Sports project No. LM2018101). The authors thank the anonymous reviewers for their valuable feedback, Milan Fucik and Mateusz Krubinski for their suggestions and technical support.
|
2306.11539 | A Design Framework for the Simulation of Distributed Quantum Computing | The growing demand for large-scale quantum computers is pushing research on
Distributed Quantum Computing (DQC). Recent experimental efforts have
demonstrated some of the building blocks for such a design. DQC systems are
clusters of quantum processing units (QPUs) connected by means of quantum
network infrastructures. Their extension ranges from the single box to the
geographical scale. Furthermore, they can be integrated with classical High
Performance Computing systems. Simulation modeling of DQC architectures
provides a safe way to test and explore different what-if scenarios. Many
simulation tools have been developed to support the research community in
designing and evaluating quantum computer and quantum network technologies,
including hardware, protocols, and applications. However, a framework for DQC
simulation putting equal emphasis on computational and networking aspects has
never been proposed, so far. In this paper, a design framework for DQC
simulation is presented, whose core component is an Execution Manager that
schedules DQC jobs for running on networked quantum computers. Two metrics are
proposed for evaluating the impact of the job scheduling algorithms with
respect to QPU utilization and quantum network utilization, beyond the
traditional concept of makespan. The discussion is supported by a DQC job
scheduling example, where two different strategies are compared in terms of the
proposed metrics. | Davide Ferrari, Michele Amoretti | 2023-06-20T13:52:05Z | http://arxiv.org/abs/2306.11539v3 | # A Simulation Framework for Distributed Quantum Computing
###### Abstract
**Current quantum processors are characterized by a few hundred qubits with non-uniform quality and highly constrained physical connectivity. Hence, the increasing demand for large-scale quantum computers is pushing research on Distributed Quantum Computing (DQC) architectures as a scalable approach for increasing the number of available qubits for computational tasks. Recent experimental efforts have demonstrated some of the building blocks for such a design. Indeed, network and communications functionalities provided by the Quantum Internet allow remote quantum processing units (QPUs) to communicate and cooperate for executing computational tasks that each single device cannot handle by itself. Simulation plays a major role in this field. Many simulation tools have been recently developed to support the research community in the design and evaluation of quantum computing and quantum network technologies, including hardware, protocols and applications. However, a framework for DQC simulation putting equal emphasis on computational and networking aspects has never been proposed, so far. In this paper, we contribute to filling this gap.**
_Index terms_-- distributed quantum computing; simulation framework; performance indicators
## 1 Introduction
The number of qubits that can be embedded in a single quantum chip is limited by the emergence of noise, which is caused, for example, by changes in the environment, crosstalk, quantum decoherence and implementation errors [1]. This hard technological limitation affects all major quantum computing technologies, such as superconducting Josephson junctions, ion traps, quantum dots, etc. As a consequence, both the academic and industry communities agree on the need for a quantum computing paradigm shift in order to realize large-scale quantum processors. Such a change of approach consists in clustering together modular and small quantum chips, by means of a quantum network infrastructure, with the purpose of scaling the number of qubits [2].
The realization of the aforementioned vision has already started. For example, IBM is working on a 1386-qubit multi-chip processor, denoted as _Kookaburra_, to be released in 2025 [3]. As a demonstration of the quantum communication links supported by this new device, IBM will connect three Kookaburra chips into a 4158-qubit system. Such modular systems enable the partitioning of monolithic quantum computations for their execution on multiple inter-connected processors, according to the distributed quantum computing (DQC) paradigm [4, 5, 6]. DQC among geographically-distributed quantum devices is also expected further into the future, supported by metropolitan-area and wide-area quantum networks that are also under research and development [7, 8]. In Fig. 1, the DQC principle is illustrated.
Many simulation tools have been recently developed to support the research community in the design and evaluation of quantum computing and quantum network technologies, including hardware, protocols and applications. However, a framework for DQC simulation putting equal emphasis on computational and networking aspects has never been proposed, so far.
Figure 1: A distributed quantum computing scenario in which a client submits an abstract (that is, platform-independent) circuit to the system, where the computation gets split into quantum programs that are spread for execution over networked QPUs – of which some are directly connected, while others are linked by means of quantum repeaters.
## 2 Motivations
Compared to analytical tools, which are well suited to predict the performance of simplified versions of the scenarios of interest, simulation tools find their role in complex use cases. This general concept is particularly true in the quantum computing and networking domain. Indeed, simulation tools enable the definition of hardware requirements using a top-down approach, that is, starting from applications and protocols. Therefore, high-level key performance indicators (KPIs) drive hardware design, which is much more convenient than trial and error. Another advantage of simulation is related to network sizing. Different network topologies and entanglement routing schemes can be devised, given the number of potential users and the number of available quantum processors. This results in saving time and money.
Regarding DQC, simulation is crucial for establishing the correctness of the compiled distributed quantum programs, and evaluating the quality of their execution against different network configurations, hardware platforms and scheduling algorithms. In a recent survey dedicated to DQC, Caleffi et al. [9] compared some prominent simulation tools that can be used for designing and evaluating DQC systems. Each tool is classified as belonging to one of three possible classes:
* hardware-oriented (HW) simulation tools, allowing the user to model the physical entities with the desired degree of detail, including noise models;
* protocol-oriented (PR) simulation tools, focused on the design and evaluation of quantum protocols – such as quantum state teleportation, quantum leader election, etc. – with the possibility to model hardware-agnostic networked quantum processors, with very limited (if not missing) support for noise modeling;
* application-oriented (AP) simulation tools, which are tailored to the design and implementation of quantum network applications, relying on simulated backends offered by other packages that are not directly accessible to the user.
## 3 Simulation Framework for DQC
The DQC domain is in its infancy, which is why it is mostly studied by means of vertical simulation tools rather than general simulation frameworks. In general, simulation frameworks encourage defining abstractions for the domain of interest, which allow multiple specific implementations of models at varying levels of accuracy, resolution and/or detail to be produced and assembled.
To the best of our knowledge, the first and - so far - only attempt to specify a simulation framework for DQC was made by Parekh et al. [5], introducing Interlin-q. This discrete event simulation framework includes three components, namely the Client Node, the Controller Node and the set of Computing Nodes. The purpose of the Client Node is to allow the user to define a monolithic circuit and the merging function (that is, the function that will be used to merge the partial results of the distributed computation). The Controller Node compiles the monolithic circuit and produces a schedule of quantum programs (that is, subprograms of the monolithic one), taking into account the network topology and the architectures of the quantum processors. Computing Nodes are simulated quantum processors that mimic the execution of the scheduled quantum programs. Such a scheme is effective for the simulation of controlled quantum computing environments, while it lacks the flexibility that is required for simulating DQC over complex quantum networks.
In Fig. 2, our simulation framework for DQC is illustrated. There is a clear separation between quantum compilation and classical simulation of the distributed quantum computation.
Figure 2: The proposed simulation framework for DQC.
A _Quantum Compiler_ for DQC is a software tool whose purpose is to partition monolithic quantum algorithms, trying to find the best breakdown, that is, the one that minimizes the number of gates that are applied to qubits stored at different devices [9, 10]. In our vision, the input to the quantum compiler includes the monolithic circuit description, a network topology description and high-level QPU feature descriptions. The monolithic circuit is denoted as abstract circuit, as it does not take into account platform-specific constraints. The network topology is just a description of how the QPUs are connected, and it may be totally ideal or instead it may refer to a very specific network (with node names corresponding to actual IP addresses). The QPU features that are taken into account at this stage are the coupling maps (that is, the adjacency matrix specifying how the physical qubits are connected, in a QPU) and simple noise models [11].
The simulation framework includes four components, namely: _Execution Manager_, _Analytics_, _Simulated Networks_ and _Simulated Nodes_. The compiled circuit is the main input to the Execution Manager, whose purpose is to schedule the subprograms of multiple compiled circuits that should be concurrently executed on the _Simulated Network_. The latter is characterized by a detailed (not abstract) network description. It should be noted that, in the Quantum Internet, any QPU can be connected to any other QPU, as quantum links can be created (continuously or on-demand) between any two nodes, leveraging quantum repeaters [12]. Therefore, the network topology description may encompass both QPUs and quantum repeaters, with detailed characterization of physical channels, including noise models and quantum link quality indexes. Simulated Nodes mimic the behavior of QPUs, based on low-level descriptions that include detailed qubit and gate models. Finally, the component denoted as Analytics calculates performance indicators from simulation logs collected by the Execution Manager.
## 4 Performance Indicators
Simulation allows evaluating several different aspects of DQC systems. First of all, the correctness of a compiled circuit. Indeed, by simulating its execution on an ideal quantum network, it is possible to analyze the output of the distributed computation, consisting of a quantum state and a classical binary string, respectively before and after the final measurement of the qubits. By repeating the same simulation several times, it is possible to estimate the probability mass function of the classical output. The obtained output can be compared to the desired one by means of two performance indicators, namely the _classical fidelity_ and the _quantum fidelity_.
Classical fidelity (usually denoted as Hellinger fidelity and related to, although different from, the Bhattacharyya coefficient) is a measure of the amount of overlap between two statistical samples or populations. For two probability mass functions \(p\) and \(q\), it reads \(F_{H}(p,q)=\left(\sum_{i}\sqrt{p_{i}q_{i}}\right)^{2}\). It thus allows comparing the probability mass function of the classical output to the desired one.
Quantum fidelity is a popular measure of distance between density operators, which are matrices that describe quantum states more generally than state vectors or wavefunctions. For density operators \(\rho\) and \(\sigma\), it is defined as \(F(\rho,\sigma)=\left(\mathrm{Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right)^{2}\). Quantum fidelity is not a distance measure in the strict mathematical sense, but it has some useful properties. In the case of multipartite quantum systems (like DQC ones), calculating the partial trace of the global density operator results in a reduced density operator characterizing the quantum state of the subsystem of interest (for example, the quantum state of the qubits in one specific QPU). In this way, if the simulation tool does not allow evaluating the fidelity of the global quantum state spread across the network (which may be computationally expensive), it is possible at least to evaluate the fidelities of its pieces.
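Both indicators can be computed directly from simulation outputs; the following NumPy/SciPy sketch implements the two definitions given above.

```python
# Sketch of the two performance indicators: the Hellinger (classical)
# fidelity F_H = (sum_i sqrt(p_i q_i))^2 between outcome distributions, and
# the quantum fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2
# between density operators.
import numpy as np
from scipy.linalg import sqrtm

def hellinger_fidelity(p: np.ndarray, q: np.ndarray) -> float:
    return float(np.sum(np.sqrt(p * q)) ** 2)

def quantum_fidelity(rho: np.ndarray, sigma: np.ndarray) -> float:
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2)

# Example: fidelity of a slightly depolarized |+> state against the ideal one.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
noisy = 0.9 * plus + 0.1 * np.eye(2) / 2
print(quantum_fidelity(plus, noisy))  # close to, but below, 1
```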
Other relevant DQC aspects that can be evaluated by means of simulation are the consequences of the non-ideal quantum network over which the distributed quantum computation is performed. Indeed, using realistic models for the classical and quantum channels of a given quantum network, it is possible to thoroughly compare different DQC strategies (concerning compilation, program scheduling, output merging) and try to push DQC to the limit. Clearly, there is a trade-off between the rate of executed computations and the quality of their outputs. On the other hand, simulated DQC is also a great opportunity for quantum network designers to learn what hardware components and protocols need to be improved first.
Hereafter, we illustrate a simulation example concerning the evaluation of the correctness of compiled circuits, using a quantum circuit that is quite simple in the abstract design, while not trivial in the compiled distributed version, even for a 2-node network topology.
## 5 Simulation Example
Following the general framework described above, we use a Quantum Compiler [10] to produce the input for the simulations, starting from the abstract circuit illustrated in Fig. 3(a). We do not use an Execution Manager - a choice that binds us to the case of one single client submitting one compiled circuit at a time, but is beneficial to the clarity of this example. Simulated Nodes and the Simulated Network are provided by means of the NetQASM SDK [13], a Python-based AP simulation environment relying on NetSquid [14], which is a C++-based HW simulation engine. The NetQASM SDK provides methods for calculating the quantum fidelity. Using an external tool, we also calculated the classical Hellinger fidelity.
The simulation of the distributed circuit is shown in Fig. 3(b). The simulated network consists of two QPUs, each holding 3 qubits for computation and one for communication purposes. The first two _CZ_ gates from the circuit in Fig. 3(a) are executed by exploiting entangled EPR pairs shared between the QPUs. Operations across different QPUs require the measurement of the EPR pairs, which then need to be re-generated and shared. For the sake of simplicity, \(\mathit{QPU}0\) is in charge of creating the EPR pair and sending it to \(\mathit{QPU}1\) by means of a quantum link.
NetQASM gives the user the ability to set various parameters, including noise models, both for the QPUs and the links. For this simulation example, we used a configuration based on Appendix A.6 of [15] as a reference point, and studied the Hellinger and quantum fidelities of the output state (with respect to ideal states calculated analytically) while varying the gate fidelity and link fidelity independently from 1 to 0.9. The results, averaged over 100 different execution rounds, are reported in Fig. 4. It may be observed that gate fidelity has a greater impact on the quality of the computation result than link fidelity.
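The sweep behind Fig. 4 can be organized as in the sketch below; `run_distributed_circuit` is a hypothetical wrapper around the simulated backend (it is not part of the NetQASM or NetSquid APIs) that executes the compiled two-QPU circuit once and returns the pair of fidelity indicators.

```python
# Sketch of the parameter sweep: vary gate and link fidelities independently
# from 1 to 0.9, run many rounds per configuration, and average the Hellinger
# and quantum fidelities. run_distributed_circuit is a hypothetical helper.
import numpy as np

gate_fids = np.linspace(1.0, 0.9, 6)
link_fids = np.linspace(1.0, 0.9, 6)
ROUNDS = 100

results = {}
for gf in gate_fids:
    for lf in link_fids:
        h, q = zip(*(run_distributed_circuit(gate_fidelity=gf, link_fidelity=lf)
                     for _ in range(ROUNDS)))
        results[(gf, lf)] = (np.mean(h), np.mean(q))  # averages over 100 rounds
```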
## 6 Conclusion and Future Work
In this work, we proposed a general simulation framework for DQC, and illustrated it by means of a simulation example. There is a quite variegated choice of simulation tools for quantum networks and quantum computers to support DQC research, specialized on applications, protocols, or hardware. However, full-stack simulation of large networks is still unsupported. To be practical, such a tool should support multiprocessing and multithreading, as well as seamless deployment of DQC simulations on high performance computing facilities. Currently we are studying and developing modules that allow for bridging existing simulation tools, especially those that support seamless replacement of simulated hardware with real devices. For example, we plan to develop tools for orchestrating DQC simulations, with automated instantiation of simulation objects representing QPUs and quantum network components.
## 7 Acknowledgment
We acknowledge financial support from the European Union - NextGenerationEU, PNRR MUR project PE0000023-NQSTI.
|
2306.06045 | Global solution and blow-up for the SKT model in Population Dynamics | In this paper, we prove the existence and uniqueness of the global solution
to the reaction diffusion system SKT with homogeneous Neumann boundary
conditions. We use the lower and upper solution method and its associated
monotone iterations where the reaction functions are locally Lipschitz. We
study the blowing-up property of the solution; we give a sufficient condition
on the reaction parameters of the model to ensure the blow-up of the solution
in continuous function spaces. | Ichraf Belkhamsa, Messaoud Souilah | 2023-06-09T17:12:55Z | http://arxiv.org/abs/2306.06045v4 | # Global solution and blow-up for the SKT model in Population Dynamics
**Ichraf Belkhamsa**
Department of Mathematics, Faculty of Science, University of Blida 1, Blida, Algeria
ORCID iD: [https://orcid.org/0000-0003-0603-5741](https://orcid.org/0000-0003-0603-5741)
**Messaoud Souilah**
Department of Analysis, Faculty of Mathematics, University of Science
and Technology Houari Boumediene (USTHB), Algiers, Algeria
ORCID iD: [https://orcid.org/0000-0002-2918-3395](https://orcid.org/0000-0002-2918-3395)
**Abstract:** In this paper, we prove the existence and uniqueness of the global solution to the reaction diffusion system SKT with homogeneous Neumann boundary conditions. We use the lower and upper solution method and its associated monotone iterations, where the reaction functions are locally Lipschitz. We study the blowing-up property of the solution; we give a sufficient condition on the reaction parameters of the model to ensure the blow-up of the solution in continuous function spaces.
**Keywords:** Reaction diffusion, population dynamics, SKT model, upper and lower solutions, global solution, blow-up.
## 1 Introduction
Various mathematical models from population dynamics are translated into reaction-diffusion systems posed in a bounded domain of \(\mathbb{R}^{n}\). An example is the Shigesada-Kawasaki-Teramoto (SKT) model (see [20]), proposed in 1978, which includes the following problem:
\[\left\{\begin{array}{l}u_{1t}-\Delta[(d_{1}+\alpha_{1}u_{1}+\beta_{1}u_{2}) u_{1}]=u_{1}(a_{1}-b_{1}u_{1}+c_{1}u_{2})\mbox{ in }Q_{T}\\ u_{2t}-\Delta[(d_{2}+\alpha_{2}u_{2}+\beta_{2}u_{1})u_{2}]=u_{2}(a_{2}+b_{2}u_{1 }-c_{2}u_{2})\mbox{ in }Q_{T}\\ \frac{\partial u_{1}}{\partial\eta}=\frac{\partial u_{2}}{\partial\eta}=0 \mbox{ on }S_{T}=(0,T]\times\partial\Omega\\ u_{1}(0,x)=u_{1,0}(x),\ u_{2}(0,x)=u_{2,0}(x)\mbox{ on }\Omega\end{array}\right. \tag{1}\]
where \(\Omega\) is a bounded domain in \(\mathbb{R}^{n},\ n\geq 1\), \(Q_{T}=(0,T]\times\Omega\), \(S_{T}=(0,T]\times\partial\Omega\) is the lateral boundary, \(\overline{\Omega}\) is the closure of \(\Omega\), and \(d_{i},\alpha_{i},\beta_{i},a_{i},b_{i},c_{i}\) are positive constants,
\(\Delta=\sum_{i=1}^{n}\frac{\partial^{2}}{\partial x_{i}^{2}}\) is the Laplace operator, and \(\frac{\partial}{\partial\eta}\) denotes the directional derivative along the outward normal on \(\partial\Omega\). Problem (1) has been treated by many researchers; most of these works are devoted to the blow-up of the solution and to its global existence, with either Neumann or Dirichlet boundary conditions, by various methods (cf. [21, 5, 6]). If \(d_{i}=\alpha_{i}=\beta_{i}=0\), \(i=1,2\), problem (1) is the historical Volterra model. For \(\alpha_{i}=\beta_{i}=0\), \(i=1,2\), (1) is the Lotka-Volterra system; in this case, C.V. Pao in [18] proved that for \(b_{1}c_{2}<c_{1}b_{2}\) the problem admits a unique solution and that for \(b_{1}c_{2}>c_{1}b_{2}\) the solution blows up. In the same case, Lou, Nagylaki and Ni [10] studied the effect of diffusion on the blow-up of the solutions in finite time \(T^{*}\). In particular, for \(\beta_{i}=0,\ i=1,2\), R. Hoyer and M. Souilah [16] investigated the existence of a global solution to the reaction diffusion system (1); they gave a sufficient condition on the reaction parameters to ensure the global existence of the solution to the problem in the Sobolev spaces \(W^{2,p}\). Linling Z., Zhi Li and Zhigui Li in [13] studied the global existence and blow-up of the solution to system (1) for \(\beta_{i}=0,\ i=1,2\), with Dirichlet boundary conditions; the global solution is obtained by using the upper and lower solutions method, and sufficient conditions are given for the solution to blow up.
In this paper, we study the global existence and blow-up of the solution to the reaction-diffusion system
\[\left\{\begin{array}{l}u_{1t}-\Delta[(d_{1}+\alpha_{1}u_{1})u_{1}]=f_{1}(u_{1},u_{2})\mbox{ in }Q_{T}=(0,T]\times\Omega\\ u_{2t}-\Delta[(d_{2}+\alpha_{2}u_{2})u_{2}]=f_{2}(u_{1},u_{2})\mbox{ in }Q_{T}=(0,T]\times\Omega\\ \frac{\partial u_{1}}{\partial\eta}=\frac{\partial u_{2}}{\partial\eta}=0\mbox{ on }S_{T}=(0,T]\times\partial\Omega\\ u_{1}(0,x)=u_{1,0}(x),\ u_{2}(0,x)=u_{2,0}(x)\mbox{ on }\Omega\end{array}\right. \tag{2}\]
where \(f_{1}(u_{1},u_{2})=u_{1}(-a_{1}+b_{1}u_{1}-c_{1}u_{2})\), \(f_{2}(u_{1},u_{2})=u_{2}(-a_{2}-b_{2}u_{1}+c_{2}u_{2})\), and \(f_{1},f_{2}\) are quasimonotone decreasing.
This paper is organized as follows. In Section 2, we prove the global existence of the solution to the studied SKT model in continuous spaces by using upper and lower solutions and their associated monotone iterations. In Section 3, we give a sufficient condition for the solution to blow up.
## 2 Existence of global solution
In this section we study the existence and uniqueness of the solution to the system (2). We first give the following definitions.
**Definition 1**: _The pair \(w^{(0)}=(w^{(0)}_{1},w^{(0)}_{2}),v^{(0)}=(v^{(0)}_{1},v^{(0)}_{2})\) in \(C(\overline{Q}_{T})\cap C^{1,2}(Q_{T})\) are called ordered upper and lower solutions of (2) if \(w^{(0)}\geq v^{(0)}\) and \(w^{(0)},v^{(0)}\) satisfy the following relations_
\[\left\{\begin{array}{l}(w^{(0)}_{1})_{t}-\Delta[(d_{1}+\alpha_{1}w^{(0)}_{1})w^{(0)}_{1}]\geq w^{(0)}_{1}(-a_{1}+b_{1}w^{(0)}_{1}-c_{1}v^{(0)}_{2})=f_{1}(w^{(0)}_{1},v^{(0)}_{2})\\ (w^{(0)}_{2})_{t}-\Delta[(d_{2}+\alpha_{2}w^{(0)}_{2})w^{(0)}_{2}]\geq w^{(0)}_{2}(-a_{2}-b_{2}v^{(0)}_{1}+c_{2}w^{(0)}_{2})=f_{2}(v^{(0)}_{1},w^{(0)}_{2})\\ \frac{\partial w^{(0)}_{i}}{\partial\eta}\geq 0\\ w^{(0)}_{i}(x,0)\geq u_{i,0}(x)\end{array}\right. \tag{3}\]
_and_
\[\left\{\begin{array}{l}(v^{(0)}_{1})_{t}-\Delta[(d_{1}+\alpha_{1}v^{(0)}_{1})v^{(0)}_{1}]\leq v^{(0)}_{1}(-a_{1}+b_{1}v^{(0)}_{1}-c_{1}w^{(0)}_{2})=f_{1}(v^{(0)}_{1},w^{(0)}_{2})\\ (v^{(0)}_{2})_{t}-\Delta[(d_{2}+\alpha_{2}v^{(0)}_{2})v^{(0)}_{2}]\leq v^{(0)}_{2}(-a_{2}-b_{2}w^{(0)}_{1}+c_{2}v^{(0)}_{2})=f_{2}(w^{(0)}_{1},v^{(0)}_{2})\\ \frac{\partial v^{(0)}_{i}}{\partial\eta}\leq 0\\ v^{(0)}_{i}(x,0)\leq u_{i,0}(x)\end{array}\right. \tag{4}\]
Let us define
\(J\times J=\{(u_{1},u_{2})\in(C(\overline{Q}))^{2}:(v_{1},v_{2}){\leq}(u_{1},u_ {2}){\leq}(w_{1},w_{2})\}\)
We denote by \(C(\overline{Q}_{T})=C((0,T]\times\overline{\Omega})\) the space of all bounded continuous functions on \((0,T]\times\overline{\Omega}\), and let \((u_{1},u_{2}),(v_{1},v_{2})\) be in \(J\times J\).

\((f_{1},f_{2})\) satisfy the Lipschitz condition:
\[-\varphi_{1}(u_{1}-v_{1}){\leq}f_{1}(u_{1},u_{2})-f_{1}(v_{1},u_{2}){\leq} \varphi_{1}(u_{1}-v_{1}),\ u_{1}{\geq}v_{1} \tag{5}\]
\[-\varphi_{2}(u_{2}-v_{2}){\leq}f_{2}(u_{1},u_{2})-f_{2}(u_{1},v_{2}){\leq} \varphi_{2}(u_{2}-v_{2}),\ u_{2}{\geq}v_{2} \tag{6}\]
where \(\varphi_{i}\in C^{\alpha}(\overline{Q}),\ i=1,2.\)
We can write (2) in the form:
\[\left\{\begin{array}{l}\frac{\partial u_{1}}{\partial t}-[(d_{1}+2\alpha_{1}u_{1})\Delta u_{1}+2\alpha_{1}\nabla u_{1}\cdot\nabla u_{1}]=f_{1}(u_{1},u_{2})\ \mbox{in}\ Q_{T}=(0,T]\times\Omega\\ \frac{\partial u_{2}}{\partial t}-[(d_{2}+2\alpha_{2}u_{2})\Delta u_{2}+2\alpha_{2}\nabla u_{2}\cdot\nabla u_{2}]=f_{2}(u_{1},u_{2})\ \mbox{in}\ Q_{T}=(0,T]\times\Omega\\ \frac{\partial u_{1}}{\partial\eta}=\frac{\partial u_{2}}{\partial\eta}=0\ \mbox{on}\ S_{T}=(0,T]\times\partial\Omega\\ u_{1}(0,x)=u_{1,0}(x),\ u_{2}(0,x)=u_{2,0}(x)\ \mbox{on}\ \Omega\end{array}\right. \tag{7}\]
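The expansion used in (7) follows from \(\Delta[(d+\alpha u)u]=(d+2\alpha u)\Delta u+2\alpha|\nabla u|^{2}\). As an aside, this identity can be checked symbolically; the short Python sketch below (using sympy, in two space variables for a generic smooth \(u\)) is purely illustrative and not part of the original argument.

```python
# Symbolic check of the identity behind (7):
# Delta[(d + alpha*u) u] = (d + 2*alpha*u) Delta(u) + 2*alpha*|grad u|^2.
import sympy as sp

x, y, d, alpha = sp.symbols('x y d alpha')
u = sp.Function('u')(x, y)

def lap(f):
    # Two-dimensional Laplace operator.
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

lhs = lap((d + alpha * u) * u)
rhs = (d + 2 * alpha * u) * lap(u) + 2 * alpha * (sp.diff(u, x) ** 2 + sp.diff(u, y) ** 2)
print(sp.simplify(sp.expand(lhs - rhs)))  # expected output: 0
```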
Denote the operator \(D_{i}(u_{i})=d_{i}+2\alpha_{i}u_{i}\) and define
\[\begin{array}{l}P_{i}(u_{i})=h_{i}=\int_{0}^{u_{i}}D_{i}(s)ds=\int_{0}^{u_{ i}}(d_{i}+2\alpha_{i}s)ds=(d_{i}+\alpha_{i}u_{i})u_{i}\,\ i=1,2\\ h_{it}=(d_{i}+2\alpha_{i}u_{i})u_{it}\end{array},\ i=1,2 \tag{8}\]
Then the problem (2) takes the equivalent form
\[\left\{\begin{array}{l}(d_{i}+2\alpha_{i}u_{i})^{-1}h_{it}-\Delta h_{i}=f_{i}(u_{1},u_{2})\ \mbox{in}\ Q_{T}\\ \frac{\partial h_{i}}{\partial\eta}=0\ \mbox{on}\ S_{T}\\ h_{i}(x,0)=h_{i,0}(x)\ \mbox{in}\ \Omega\\ u_{i}=q(h_{i})\ \mbox{in}\ \overline{Q_{T}}\end{array}\right. \tag{9}\]
Define
\(Lh_{i}=\Delta h_{i}-\varphi_{i}h_{i}\) where \(\varphi_{i}>0\) and \(F_{i}(u_{1},u_{2})=f_{i}(u_{1},u_{2})+\varphi_{i}h_{i},\ i=1,2\)
For any \(u_{1}>0,\ u_{2}>0\), the problem (9) is equivalent to
\[\left\{\begin{array}{l}(d_{i}+2\alpha_{i}u_{i})^{-1}h_{it}-Lh_{i}=F_{i}(u_{1},u_{2})\ \mbox{in}\ Q_{T}\\ \frac{\partial h_{i}}{\partial\eta}=0\ \mbox{on}\ S_{T}\\ h_{i}(x,0)=h_{i,0}(x)\ \mbox{in}\ \Omega\end{array}\right. \tag{10}\]
First, we present the following positivity lemma.
**Lemma 1**
Let \(\sigma(x,t)>0\) in \(Q_{T}\), \(\beta\geq 0\) on \(S_{T}\) and let either
\((i)\)\(e(t,x)>0\) in \(Q_{T}\) or
\((ii)\)\((-\frac{e}{\sigma(x,t)})\) be bounded in \(\overline{Q_{T}}\).
If \(z\in C^{1,2}(Q_{T})\cap C(\overline{Q_{T}})\) satisfies the following inequalities:
\(\left\{\begin{array}{l}\sigma(t,x)z_{t}-\Delta z+e(x,t)z\geq 0\ \mbox{in}\ Q_{T}\\ \frac{\partial z}{\partial\eta}+\beta(x,t)z\geq 0\ \mbox{on}\ S_{T}\\ z(x,0)\geq 0\ \mbox{in}\ \Omega\end{array}\right.\)
then \(z\geq 0\) in \(Q_{T}\)
**Proof.** See the proof of Lemma 1.2 in [18]. \(\blacksquare\)
Using either \(w^{(0)}\) or \(v^{(0)}\) as the initial iteration, we can construct a sequence \(\left(u^{(k)},h^{(k)}\right)\) from the iteration process
\[\left\{\begin{array}{l}(d_{i}+2\alpha_{i}u_{i}^{(k)})^{-1}h_{it}^{(k)}-Lh_{i}^{(k)}=F_{i}(u_{1}^{(k-1)},u_{2}^{(k-1)})\mbox{ in }Q_{T}\\ \frac{\partial h_{i}^{(k)}}{\partial\eta}=0\mbox{ on }S_{T}\\ h_{i}^{(k)}(x,0)=h_{i,0}(x)\mbox{ in }\Omega\\ u_{i}^{(k)}=q(h_{i}^{(k)})\mbox{ in }\overline{Q_{T}}\end{array}\right. \tag{11}\]
for any \(k\) and \(i=1,2\).
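To make the iteration (11) concrete, the following minimal Python sketch runs the coupled monotone iterations in the spatially homogeneous case, where the diffusion terms vanish and (2) reduces to the ODE system \(u_{i}'=f_{i}(u_{1},u_{2})\); it is a simplified semilinear analogue of (11), and the parameter values, the constant initial iterations \(w^{(0)}=(a_{1}/b_{1},a_{2}/c_{2})\), \(v^{(0)}=(0,0)\), and the implicit Euler discretization are all illustrative assumptions.

```python
import numpy as np

# Assumed illustrative parameters of (2).
a1, b1, c1 = 1.0, 2.0, 0.5
a2, b2, c2 = 1.0, 0.5, 2.0
phi = 5.0                      # Lipschitz shift, cf. (5)-(6)
T, n = 1.0, 4000
dt = T / n

def f1(u1, u2): return u1 * (-a1 + b1 * u1 - c1 * u2)
def f2(u1, u2): return u2 * (-a2 - b2 * u1 + c2 * u2)

def solve_shifted(g, u0):
    # Implicit Euler for u' + phi*u = g(t), with g given on the time grid.
    u = np.empty(n + 1)
    u[0] = u0
    for j in range(n):
        u[j + 1] = (u[j] + dt * g[j + 1]) / (1.0 + dt * phi)
    return u

# Initial data and initial iterations: constant upper solution, zero lower solution.
u10 = u20 = 0.3
w1 = np.full(n + 1, a1 / b1); w2 = np.full(n + 1, a2 / c2)
v1 = np.zeros(n + 1); v2 = np.zeros(n + 1)

for k in range(6):
    # The mixing of iterates follows Definition 1
    # (f1 is decreasing in u2, f2 is decreasing in u1).
    w1n = solve_shifted(f1(w1, v2) + phi * w1, u10)
    v1n = solve_shifted(f1(v1, w2) + phi * v1, u10)
    w2n = solve_shifted(f2(v1, w2) + phi * w2, u20)
    v2n = solve_shifted(f2(w1, v2) + phi * v2, u20)
    w1, v1, w2, v2 = w1n, v1n, w2n, v2n
    # The upper iterates decrease and the lower ones increase (Lemma 2),
    # so up to discretization error the printed gaps are nonincreasing.
    print(f"k={k+1}: gap_1={np.max(w1 - v1):.4f}, gap_2={np.max(w2 - v2):.4f}")
```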
Denote the sequences by \(\left(w^{(k)},\overline{h}^{(k)}\right)\) and \((v^{(k)},\underline{h}^{(k)})\) respectively, where \(\left(w^{(0)},\overline{h}^{(0)}\right)\) and \((v^{(0)},\underline{h}^{(0)})\) are the initial iterations of the sequences.
### Monotone sequences
The following lemma gives the monotonicity of the sequences \(\left(w^{(k)},\overline{h}^{(k)}\right)\) and \((v^{(k)},\underline{h}^{(k)})\).

**Lemma 2**: The sequences \(\left(w^{(k)},\overline{h}^{(k)}\right)\) and \((v^{(k)},\underline{h}^{(k)})\) governed by (11) possess the monotone property
\[(v^{(0)},\underline{h}^{(0)})\leq(v^{(k)},\underline{h}^{(k)})\leq(v^{(k+1)},\underline{h}^{(k+1)})\leq(w^{(k+1)},\overline{h}^{(k+1)})\leq(w^{(k)}, \overline{h}^{(k)})\leq(w^{(0)},\overline{h}^{(0)}) \tag{12}\]
where \(\underline{h}\) and \(\overline{h}\) are the lower and upper bounds of \(h\), respectively.
**Proof.**

#### 2.1.1 Step 1: Initialization

Let \(\underline{z}_{1}^{(1)}=\underline{h}_{1}^{(1)}-\underline{h}_{1}^{(0)}\). By (11) and the definition of the lower solution, we obtain
\[(d_{1}+2\alpha_{1}v_{1}^{(1)})^{-1}\underline{z}_{1t}^{(1)}-L_{1} \underline{z}_{1}^{(1)}+\gamma_{1}^{(0)}\underline{z}_{1}^{(1)}\geq 0\mbox{ in }\ \Omega\times(0,T],\mbox{ \ where }\gamma_{1}^{(0)}=\frac{-2\alpha_{1}}{(d_{1}+2\alpha_{1}\xi_{1}^{(0)})^{3}} \underline{h}_{1t}^{(0)}\]
On the other hand,
\[\frac{\partial\underline{z}_{1}^{(1)}}{\partial\eta} = \frac{\partial(\underline{h}_{1}^{(1)}-\underline{h}_{1}^{(0)})} {\partial\eta}=\frac{\partial\underline{h}_{1}^{(1)}}{\partial\eta}-\frac{ \partial\underline{h}_{1}^{(0)}}{\partial\eta}=0\mbox{ on }\partial\Omega\times(0,T]\] \[\underline{z}_{1}^{(1)}(x,0) = \underline{h}_{1}^{(1)}(x,0)-\underline{h}_{1}^{(0)}(x,0)\geq 0 \mbox{ in }\Omega\]
It follows again from Lemma 1 that \(\underline{z}_{1}^{(1)}\geq 0\), and thus \(v_{1}^{(1)}\geq v_{1}^{(0)}\).

We now replace \(\underline{z}_{2}^{(1)}\) in (11):
\[(d_{2}+2\alpha_{2}v_{2}^{(1)})^{-1}\underline{z}_{2t}^{(1)}-L_{ 2}\underline{z}_{2}\] \[= F_{2}(w_{1}^{(0)},v_{2}^{(0)})-[(d_{2}+2\alpha_{2}v_{2}^{(1)})^ {-1}-(d_{2}+2\alpha_{2}v_{2}^{(0)})^{-1}]\underline{h}_{2t}^{(0)}+L_{2} \underline{h}_{2}^{(0)}-(d_{2}+2\alpha_{2}v_{2}^{(0)})^{-1}\underline{h}_{2t} ^{(0)}\] \[= f_{2}(w_{1}^{(0)},v_{2}^{(0)})+\Delta\underline{h}_{2}^{(0)}-(d_ {2}+2\alpha_{2}v_{2}^{(0)})^{-1}\underline{h}_{2t}^{(0)}-\gamma_{2}^{(0)}(v_{ 2}^{(1)}-v_{2}^{(0)})\] \[= -v_{2t}^{(0)}+\Delta(d_{2}+2\alpha_{2}v_{2}^{(0)})v_{2}^{(0)}+f_{ 2}(w_{1}^{(0)},v_{2}^{(0)})-\gamma_{2}^{(0)}\underline{z}_{2}^{(1)}\]
Using the definition of the lower solution, this yields
\[(d_{2}+2\alpha_{2}v_{2}^{(1)})^{-1}\underline{z}_{2t}^{(1)}-L_{2}\underline{z }_{2}+\gamma_{2}^{(0)}\underline{z}_{2}^{(1)}\geq 0\]
where \(\gamma_{2}^{(0)}=\frac{-2\alpha_{2}\underline{h}_{2t}^{(0)}}{(d_{2}+2\alpha_{ 2}\xi_{2}^{(0)})^{3}}\) and \(\xi_{2}^{(0)}\) is the intermediate value between \(v_{2}^{(1)}\) and \(v_{2}^{(0)}\).
At the same time,
\[\frac{\partial\underline{z}_{2}^{(1)}}{\partial\eta}=\frac{ \partial(\underline{h}_{2}^{(1)}-\underline{h}_{2}^{(0)})}{\partial\eta}= \frac{\partial\underline{h}_{2}^{(1)}}{\partial\eta}-\frac{\partial\underline{ h}_{2}^{(0)}}{\partial\eta}=0\mbox{ on }\partial\Omega\times(0,T]\] \[\underline{z}_{2}^{(1)}(x,0)=\ \underline{h}_{2}^{(1)}(x,0)- \underline{h}_{2}^{(0)}(x,0)\geq 0\mbox{ in }\Omega\]
Again, Lemma 1 indicates that \(\underline{z}_{2}^{(1)}\geq 0\) and thus \(v_{2}^{(1)}\geq v_{2}^{(0)}\).

By an analogous reasoning, we show that \(\overline{z}_{i}\geq 0\) and \(w_{i}^{(0)}\geq w_{i}^{(1)},\ i=1,2\).
Let \(y_{i}^{(1)}=\overline{h}_{i}^{(1)}-\underline{h}_{i}^{(1)},\ i=1,2\)
By (11), for \(i=1\):
\[(d_{1}+2\alpha_{1}w_{1}^{(1)})^{-1}y_{1t}^{(1)}-L_{1}y_{1}^{(1)}\] \[= F_{1}(w_{1}^{(0)},v_{2}^{(0)})-F_{1}(v_{1}^{(0)},w_{2}^{(0)})-[ \frac{-2\alpha_{1}\underline{h}_{1t}^{(1)}}{(d_{1}+2\alpha_{1}\xi_{1}^{(1)})^ {3}}(\overline{h}_{1}^{(1)}-\underline{h}_{1}^{(1)})]\]
\[(d_{1}+2\alpha_{1}w_{1}^{(1)})^{-1}y_{1t}^{(1)}-L_{1}y_{1}^{(1)}+\delta_{1}^{( 1)}y_{1}^{(1)}\geq 0\]
where \(\delta_{1}^{(1)}=\frac{-2\alpha_{1}\underline{h}_{1t}^{(1)}}{(d_{1}+2\alpha_{1}\xi_{1}^{(1)})^{3}}\), with \(\xi_{1}^{(1)}\) the intermediate value between \(w_{1}^{(1)}\) and \(v_{1}^{(1)}\)
\[\frac{\partial y_{1}^{(1)}}{\partial\eta}=0\] \[y_{1}^{(1)}(x,0)=\overline{h}_{1}^{(1)}(x,0)-\underline{h}_{1}^ {(1)}(x,0)=0\]
Using Lemma 1, \(y_{1}^{(1)}\geq 0\), which implies that \(\overline{h}_{1}^{(1)}\geq\underline{h}_{1}^{(1)}\) and \(w_{1}^{(1)}\geq v_{1}^{(1)}\); by (11) and the same reasoning as for \(y_{1}^{(1)}\) we get
\(y_{2}^{(1)}\geq 0\), \(\overline{h}_{2}^{(1)}\geq\underline{h}_{2}^{(1)}\) and \(w_{2}^{(1)}\geq v_{2}^{(1)}\).
The above conclusions show that
\[(v_{i}^{(0)},\underline{h}_{i}^{(0)})\leq(v_{i}^{(1)},\underline{h}_{i}^{(1) })\leq(w_{i}^{(1)},\overline{h}_{i}^{(1)})\leq(w_{i}^{(0)},\overline{h}_{i}^{( 0)}),\ i=1,2\]
#### 2.1.2 Step 2: Iteration
Assume by induction that
\[(v_{i}^{(k)},\underline{h}_{i}^{(k)})\leq(v_{i}^{(k+1)},\underline{h}_{i}^{(k +1)})\leq(w_{i}^{(k+1)},\overline{h}_{i}^{(k+1)})\leq(w_{i}^{(k)},\overline{h} _{i}^{(k)}),\ k>1,\ i=1,2\]
Let \(\underline{z}_{1}^{(k+1)}=\underline{h}_{1}^{(k+1)}-\underline{h}_{1}^{(k)}\). By (11) and the quasimonotone nonincreasing property of \((f_{1},f_{2})\),
\[\left\{\begin{array}{l}(d_{1}+2\alpha_{1}v_{1}^{(k+1)})^{-1}\underline{z}_{ 1t}^{(k+1)}-L_{1}\underline{z}_{1}^{(k+1)}+\gamma_{1}^{(k)}\underline{z}_{1}^ {(k+1)}=F_{1}(v_{1}^{(k)},w_{2}^{(k)})-F_{1}(v_{1}^{(k-1)},w_{2}^{(k-1)})\geq 0 \\ \dfrac{\partial\underline{z}_{1}^{(k+1)}}{\partial\eta}=0\ \mbox{on}\ \partial\Omega\times(0,T]\\ \underline{z}_{1}^{(k+1)}(x,0)=0\ \mbox{in}\ \Omega\end{array}\right.\]
where \(\gamma_{1}^{(k)}=\dfrac{-2\alpha_{1}}{(d_{1}+2\alpha_{1}\xi_{1}^{(k)})^{3}}\underline{h}_{1t}^{(k)}\), and \(\xi_{1}^{(k)}\) is the intermediate value between \(v_{1}^{(k)}\) and \(v_{1}^{(k+1)}\),
and \(\ \underline{z}_{2}^{(k+1)}=\underline{h}_{2}^{(k+1)}-\underline{h}_{2}^{(k)}\) satisfies
\[\left\{\begin{array}{l}(d_{2}+2\alpha_{2}v_{2}^{(k+1)})^{-1}\underline{z}_{2t}^{(k+1)}-L_{2}\underline{z}_{2}^{(k+1)}+\gamma_{2}^{(k)}\underline{z}_{2}^{(k+1)}=F_{2}(w_{1}^{(k)},v_{2}^{(k)})-F_{2}(w_{1}^{(k-1)},v_{2}^{(k-1)})\geq 0\\ \dfrac{\partial\underline{z}_{2}^{(k+1)}}{\partial\eta}=0\ \mbox{on}\ \partial\Omega\times(0,T]\\ \underline{z}_{2}^{(k+1)}(x,0)=0\ \mbox{in}\ \Omega\end{array}\right.\]

where \(\gamma_{2}^{(k)}=\dfrac{-2\alpha_{2}}{(d_{2}+2\alpha_{2}\xi_{2}^{(k)})^{3}}\underline{h}_{2t}^{(k)}\), and \(\xi_{2}^{(k)}\) is the intermediate value between \(v_{2}^{(k)}\) and \(v_{2}^{(k+1)}\).

By Lemma 1, we have \(\underline{z}_{i}^{(k+1)}\geq 0\) for \(i=1,2\), which leads to \(\underline{h}_{i}^{(k+1)}\geq\underline{h}_{i}^{(k)}\) and \(v_{i}^{(k+1)}\geq v_{i}^{(k)},\ i=1,2\).
For \(\overline{z}_{i}^{(k+1)},\ i=1,2\) we use the same procedure to get \(w_{i}^{(k)}\geq w_{i}^{(k+1)}\).
Let \(y_{i}^{(k+1)}=\overline{h}_{i}^{(k+1)}-\underline{h}_{i}^{(k+1)},\ i=1,2\)
for \(i=1\), \(y_{1}^{(k+1)}\) satisfies
\[\left\{\begin{array}{l}(d_{1}+2\alpha_{1}w_{1}^{(k+1)})^{-1}y_{1t}^{(k+1)}-L _{1}y_{1}^{(k+1)}+\gamma_{1}^{(k+1)}y_{1}^{(k+1)}=F_{1}(w_{1}^{(k)},v_{2}^{(k )})-F_{1}(v_{1}^{(k)},w_{2}^{(k)})\geq 0\\ \dfrac{\partial y_{1}^{(k+1)}}{\partial\eta}=0\ \mbox{on}\ S_{T}\\ y_{1}^{(k+1)}\left(x,0\right)=0\ \mbox{in}\ \Omega\end{array}\right.\]
where \(\gamma_{1}^{(k+1)}=\dfrac{-2\alpha_{1}}{(d_{1}+2\alpha_{1}\xi_{1}^{(k+1)})^{3}}\bar{h}_{1t}^{(k+1)}\), and \(\xi_{1}^{(k+1)}\) is the intermediate value between \(w_{1}^{(k+1)}\) and \(v_{1}^{(k+1)}\).

The same result holds for \(i=2\), and using Lemma 1, the two cases lead to \(\overline{h}_{i}^{(k+1)}\geq\underline{h}_{i}^{(k+1)}\) and \(w_{i}^{(k+1)}\geq v_{i}^{(k+1)}\) for \(i=1,2\).
### Convergence of the two sequences
The result (12) for the two sequences follows by induction. The monotone property (12) also implies that the pointwise limits exist:

\[\lim_{k\rightarrow+\infty}(w^{(k)},\overline{h}^{(k)})=(w,\overline{h}),\ \ \lim_{k\rightarrow+\infty}(v^{(k)},\underline{h}^{(k)})=(v,\underline{h})\]

In the following, we show that \((w,\overline{h})=(v,\underline{h})=(u^{*},h^{*})\), where \(u^{*}\) is the unique solution of (2).
### Uniqueness of the solution
**Theorem 3**
Let \((w_{1}^{(0)},w_{2}^{(0)}),(v_{1}^{(0)},v_{2}^{(0)})\) be the upper and lower solutions of (2). Then the sequences \((w^{(k)},\overline{h}^{(k)})\) and \((v^{(k)},\underline{h}^{(k)})\) obtained from (11) converge monotonically to a unique solution \((u^{*},h^{*})\) of (10) and satisfy the relation
\[(v^{(0)},\underline{h}^{(0)})\leq(v^{(k)},\underline{h}^{(k)})\leq(v^{(k+1)}, \underline{h}^{(k+1)})\leq(u^{*},h^{*})\leq(w^{(k+1)},\overline{h}^{(k+1)}) \leq(w^{(k)},\overline{h}^{(k)})\leq(w^{(0)},\overline{h}^{(0)}),\,k\geq 1 \tag{13}\]
**Proof.**\(\quad\blacksquare\)
By (11) and the equivalence between (7) and (10), \(u_{i}^{(k)}\) is a solution of the (scalar) quasilinear system for \(i=1,2:\)
\[\left\{\begin{array}{l}\frac{\partial u_{i}^{(k)}}{\partial t}-\nabla[(d_{i }+2\alpha_{i}u_{i}^{(k)})\nabla u_{i}^{(k)}]=F(x,t,u_{i}^{(k-1)})-\varphi_{i}P _{i}(u_{i}^{(k-1)})\ \mbox{in}\ Q_{T}\\ \frac{\partial u_{i}^{(k)}}{\partial\eta}=0\ \ \mbox{on}\ S_{T}\\ u_{i}^{(k)}(0,x)=u_{i,0}(x)\ \mbox{in}\ \Omega\end{array}\right. \tag{14}\]
We first prove that the limit \(u_{i}^{*}\) of the sequence \(u_{i}^{(k)}\) satisfies the first equation of the studied system in \(Q_{T}\). Secondly, we show that \(u_{i}^{*}\) satisfies the boundary and initial conditions of (2). This implies that \(u_{i}^{*}\) satisfies the system (2) and that \(w,v\) are both solutions of (2). Finally, since both \((w,\overline{h})\) and \((v,\underline{h})\) are solutions of (10), we conclude that \((w,\overline{h})=(v,\underline{h})\).
### Conditions on the parameters of the model
**Theorem 4**
Let \((w_{1}^{(0)},w_{2}^{(0)}),(v_{1}^{(0)},v_{2}^{(0)})\) be the upper and lower solutions of (2). If the constants of the system satisfy the following conditions
\[\left\{\begin{array}{l}b_{1}\geq 2\alpha_{1}\lambda_{0}\\ c_{2}\geq 2\alpha_{2}\lambda_{0}\\ a_{1}\leq(b_{1}-2\alpha_{1}\lambda_{0})\rho_{1}\\ a_{2}\leq(c_{2}-2\alpha_{2}\lambda_{0})\rho_{2}\end{array}\right.,\]
then, the upper solution \((w_{1}^{(0)},w_{2}^{(0)})\) satisfies the relation
\[w_{1}^{(0)} \leq \min\left\{\frac{a_{1}}{b_{1}},\frac{(c_{2}-2\alpha_{2}\lambda_{0}) \rho_{2}-a_{2}}{b_{2}}\right\},\] \[w_{2}^{(0)} \leq \min\left\{\frac{a_{2}}{c_{2}},\frac{(b_{1}-2\alpha_{1}\lambda_{0}) \rho_{1}-a_{1}}{c_{1}}\right\}\]
and the problem (2) admits a unique global solution \((u_{1},u_{2})\) in \(\overline{\Omega}\times[0,T]\), where \(\lambda_{0}\) is the first eigenvalue of the Laplacian and \(\rho_{i},\ i=1,2\), are some positive constants.
**Proof.** Under the conditions
\[a_{1}\leq(b_{1}-2\alpha_{1}\lambda_{0})\rho_{1}\mbox{ and }a_{2}\leq(c_{2}-2\alpha_{2} \lambda_{0})\rho_{2}\]
we have
\[N_{1} \leq \min\left\{\frac{a_{1}}{b_{1}},\frac{(c_{2}-2\alpha_{2}\lambda_{0} )\rho_{2}-a_{2}}{b_{2}}\right\}, \tag{20}\] \[N_{2} \leq \min\left\{\frac{a_{2}}{c_{2}},\frac{(b_{1}-2\alpha_{1}\lambda_{0 })\rho_{1}-a_{1}}{c_{1}}\right\} \tag{21}\]
Hence, under the condition (20), the pair (15) are coupled upper and lower solutions. We conclude from Lemma 2 that the sequences \(\{w^{(k)},\overline{h}^{(k)}\},\{v^{(k)},\underline{h}^{(k)}\}\) converge monotonically to some \((w,\overline{h}),(v,\underline{h})\). It follows from Theorem 3 that \((w,\overline{h})=(v,\underline{h})=(u^{*},h^{*})\) is the unique solution of (2) in \(Q_{T}\).
## 3 The Blow-up of the solution
We state the following general result on the blow-up of solutions of partial differential equations.
**Theorem 5**
Let \((f_{1},f_{2})\) be locally Lipschitz functions in \(\mathbb{R}^{+}\times\mathbb{R}^{+}\) and let \((v_{1}^{(0)},v_{2}^{(0)})\) be a pair of positive functions defined in \([0,T_{0})\times\overline{\Omega}\) for a finite \(T_{0}\) and unbounded at some point of \(\overline{\Omega}\) as \(t\longrightarrow T_{0}\). If \((v_{1}^{(0)},v_{2}^{(0)})\) is a lower solution of (2) for every \(T<T_{0}\), then there exists \(T^{*}\leq T_{0}\) such that a unique positive solution \((u_{1},u_{2})\) to (2) exists in \(\overline{Q}\) and satisfies the relation:

\[\lim_{t\to T^{*}}\max_{x\in\overline{\Omega}}\left[u_{1}(t,x)+u_{2}(t,x)\right]=+\infty \tag{22}\]
The proof of Theorem 5 is similar to that of Theorem 11.5.1 in [18].
Based on the result of this theorem, the blowing-up property of the solution to (2) is ensured if there exists a lower solution \((v_{1}^{(0)},v_{2}^{(0)})\) that is unbounded on \(\overline{\Omega}\) at a finite time.
The construction of such a lower solution depends on the type of boundary conditions.
For Neumann boundary conditions, if \((f_{1},f_{2})\) satisfy a growth condition, the lower solution can often be constructed.
### Conditions on the reaction terms
**Lemma 6**
If \((f_{1},f_{2})\) satisfy the local Lipschitz continuity property and
\[\psi_{1}=\min\left\{\mu_{1}b_{1},\mu_{2}c_{2}\right\},\ \psi_{2}=\left(\mu_{1}c_{1}+\mu_{2}b_{2}\right)/2,\ c=\max\left\{\mu_{1}a_{1},\mu_{2}a_{2}\right\},\ \psi_{1}>\psi_{2}>0 \tag{23}\]
If \(\psi=(\psi_{1}-\psi_{2})/2\), then \((f_{1},f_{2})\) satisfy the growth condition:
\[\mu_{1}f_{1}(u_{1},u_{2})+\mu_{2}f_{2}(u_{1},u_{2})\geq\psi(u_{1}+u_{2})^{2}-c (u_{1}+u_{2}),\ u_{1}\geq 0,\ u_{2}\geq 0 \tag{24}\]
where \(\mu_{i},\ i=1,2\) are positive constants.
**Proof.** By the definitions of \(f_{1}\) and \(f_{2}\), a direct computation gives

\[\mu_{1}f_{1}(u_{1},u_{2})+\mu_{2}f_{2}(u_{1},u_{2})=\mu_{1}b_{1}u_{1}^{2}+\mu_{2}c_{2}u_{2}^{2}-(\mu_{1}c_{1}+\mu_{2}b_{2})u_{1}u_{2}-\mu_{1}a_{1}u_{1}-\mu_{2}a_{2}u_{2} \tag{25}\]
By (23), the relation (25) becomes

\[\mu_{1}f_{1}(u_{1},u_{2})+\mu_{2}f_{2}(u_{1},u_{2})\geq\psi_{1}(u_{1}^{2}+u_{2}^{2})-2\psi_{2}u_{1}u_{2}-c(u_{1}+u_{2}) \tag{26}\]

Moreover, we have the identity
\[\psi_{1}(u_{1}^{2}+u_{2}^{2})-2\psi_{2}u_{1}u_{2}=\psi(u_{1}+u_{2})^{2}-\psi(u_{1}^{2}+u_{2}^{2})+\psi_{1}(u_{1}^{2}+u_{2}^{2})-2(\psi+\psi_{2})u_{1}u_{2}\]
\[\psi_{1}(u_{1}^{2}+u_{2}^{2})-2\psi_{2}u_{1}u_{2}\geq\psi(u_{1}+u_{2})^{2}+2( \psi_{1}-2\psi-\psi_{2})u_{1}u_{2}\]
If \(\psi=(\psi_{1}-\psi_{2})/2\), then
\[\psi_{1}(u_{1}^{2}+u_{2}^{2})-2\psi_{2}u_{1}u_{2}\geq\psi(u_{1}+u_{2})^{2}.\]
The relation (26) becomes
\[\mu_{1}f_{1}(u_{1},u_{2})+\mu_{2}f_{2}(u_{1},u_{2})\geq\psi(u_{1}+u_{2})^{2}-c (u_{1}+u_{2}),\ u_{1}\geq 0,\ u_{2}\geq 0\]
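As a quick numerical sanity check of the growth condition (24) (an illustration only, not a proof), one can draw random nonnegative samples under assumed parameter values satisfying \(\psi_{1}>\psi_{2}>0\):

```python
import numpy as np

rng = np.random.default_rng(0)
a1, b1, c1 = 1.0, 2.0, 0.5     # assumed illustrative parameters
a2, b2, c2 = 1.0, 0.5, 2.0
mu1, mu2 = 1.0, 1.0

psi1 = min(mu1 * b1, mu2 * c2)
psi2 = (mu1 * c1 + mu2 * b2) / 2.0
c = max(mu1 * a1, mu2 * a2)
assert psi1 > psi2 > 0         # hypothesis (23)
psi = (psi1 - psi2) / 2.0

u1, u2 = rng.uniform(0.0, 10.0, size=(2, 100_000))
lhs = mu1 * u1 * (-a1 + b1 * u1 - c1 * u2) + mu2 * u2 * (-a2 - b2 * u1 + c2 * u2)
rhs = psi * (u1 + u2) ** 2 - c * (u1 + u2)
print("inequality (24) holds on all samples:", bool(np.all(lhs >= rhs - 1e-9)))
```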
Denote \(\widehat{p}(t)=\mu_{1}\widehat{p}_{1}(t)+\mu_{2}\widehat{p}_{2}(t)\), where \(\widehat{p}_{i}(t)=\left|\Omega\right|^{-1}\int_{\Omega}u_{i}\Phi_{0}dx,\ i=1,2.\) In the following, the lower solution can be constructed by using the solution
of the Cauchy problem
\[\left\{\begin{array}{l}\widehat{p}^{\prime}+\overline{\tau}\widehat{p}=\underline{\psi}\widehat{p}^{2}\\ \widehat{p}(0)=\widehat{p}_{0}\end{array}\right.\]
according to the notations of [18], where \(\overline{\tau}\) is an arbitrary nonzero constant. The corresponding blow-up time is

\[T_{0}=\frac{1}{\overline{\tau}}\ln\Big{[}\frac{\underline{\psi}\widehat{p}_{0}}{\underline{\psi}\widehat{p}_{0}-\overline{\tau}}\Big{]}. \tag{27}\]
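The blow-up time (27) can be read off from the explicit solution of the Bernoulli problem above. The short sketch below, with assumed constants \(\overline{\tau},\underline{\psi},\widehat{p}_{0}\) satisfying \(\widehat{p}_{0}>\overline{\tau}/\underline{\psi}\), illustrates the unbounded growth as \(t\to T_{0}\).

```python
import numpy as np

tau, psi, p0 = 1.0, 2.0, 1.5                     # assumed; p0 > tau/psi = 0.5
T0 = np.log(psi * p0 / (psi * p0 - tau)) / tau   # blow-up time, cf. (27)

def p(t):
    # Explicit solution of p' + tau*p = psi*p^2, p(0) = p0; cf. (40) with equality.
    return np.exp(-tau * t) / (1.0 / p0 - (psi / tau) * (1.0 - np.exp(-tau * t)))

for s in (0.5, 0.9, 0.99, 0.999):
    t = s * T0
    print(f"t = {t:.4f}, p(t) = {p(t):.2f}")  # grows without bound as t -> T0
print("T0 =", T0)
```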
**Theorem 7**
Let \((f_{1},f_{2})\) be locally Lipschitz functions in \(\mathbb{R}^{+}\times\mathbb{R}^{+}\) satisfying the condition (23) and the relation (24), and let \((u_{1},u_{2})\) be the local positive solution of (2).

Then for any nonnegative \((u_{1}(0,x),u_{2}(0,x))\), the condition

\[\widehat{p}_{0}>\frac{\overline{\tau}}{\underline{\psi}} \tag{28}\]

where \(\overline{\tau},\underline{\psi}\) are positive constants, ensures the existence of a finite \(T^{*}\leq T_{0}\) such that \((u_{1},u_{2})\) possesses the blowing-up property (22), where \(T_{0}\) is given by (27).
**Proof.** Let \(\Phi_{0}>0\) be the eigenfunction corresponding to the eigenvalue \(\lambda_{0}\). Define
\[\widehat{p}_{1}(t)=\left|\Omega\right|^{-1}\int_{\Omega}\Phi_{0}(x)u_{1}(x,t) dx,\ \ \ \widehat{p}_{2}(t)=\left|\Omega\right|^{-1}\int_{\Omega}\Phi_{0}(x)u_{2}(x,t)dx\ \mbox{and}\]
\[\widehat{p}(t)=\left|\Omega\right|^{-1}\int_{\Omega}\Phi_{0}(x)[\mu_{1}u_{1}( x,t)+\mu_{2}u_{2}(x,t)]dx=\mu_{1}\widehat{p}_{1}(t)+\mu_{2}\widehat{p}_{2}(t) \tag{29}\]
Multiplying the first equation in (2) by the first eigenfunction \(\Phi_{0}\) of the Laplacian, integrating over \(\Omega\), and using Green's theorem implies
\[\int_{\Omega}u_{1t}\Phi_{0}dx\geq\int_{\Omega}\Delta\Phi_{0}[(d_{1}+\alpha_{1 }u_{1})u_{1}]dx+\int_{\Omega}f_{1}(u_{1},u_{2})\Phi_{0}dx\]
\[\int_{\Omega}u_{1t}\Phi_{0}dx\geq-\int_{\Omega}\lambda_{0}\Phi_{0}d_{1}u_{1}(x,t) dx-\int_{\Omega}\lambda_{0}\alpha_{1}u_{1}^{2}(x,t)\Phi_{0}dx+\int_{\Omega}f_{1}(u_{1},u_ {2})\Phi_{0}dx \tag{30}\]
Multiply (30) by \(\mu_{1}\),
\[\mu_{1}\left|\Omega\right|\widehat{p}_{1}^{\prime}(t)\geq-\mu_{1}\lambda_{0}d_ {1}\left|\Omega\right|\widehat{p}_{1}(t)-\lambda_{0}\alpha_{1}\mu_{1}\left| \Omega\right|\widehat{p}_{1}^{2}(t)+\int_{\Omega}\mu_{1}f_{1}(u_{1},u_{2}) \Phi_{0}dx \tag{31}\]
An analogous reasoning for the second equation in (2) gives
\[\mu_{2}\left|\Omega\right|\widehat{p}_{2}^{\prime}(t)\geq-\mu_{2}\lambda_{0}d_ {2}\left|\Omega\right|\widehat{p}_{2}(t)-\lambda_{0}\alpha_{2}\mu_{2}\left| \Omega\right|\widehat{p}_{2}^{2}(t)+\int_{\Omega}\mu_{2}f_{2}(u_{1},u_{2}) \Phi_{0}dx \tag{32}\]
Combining the above inequalities (31) and (32) gives
\[\left|\Omega\right|\left[\mu_{1}\widehat{p}_{1}^{\prime}(t)+\mu_{2}\widehat{p}_{2}^{\prime}(t)\right]\geq-\lambda_{0}\left|\Omega\right|\left[\mu_{1}d_{1}\widehat{p}_{1}(t)+\mu_{2}d_{2}\widehat{p}_{2}(t)\right]-\lambda_{0}\left|\Omega\right|\left[\mu_{1}\alpha_{1}\widehat{p}_{1}^{2}(t)+\mu_{2}\alpha_{2}\widehat{p}_{2}^{2}(t)\right]+\int_{\Omega}[\mu_{1}f_{1}(u_{1},u_{2})+\mu_{2}f_{2}(u_{1},u_{2})]\Phi_{0}dx \tag{33}\]
We notice that,
\[\widehat{p}^{2}(t)>\mu_{1}^{2}\widehat{p}_{1}^{2}(t)\mbox{ and }\widehat{p}^{2}(t)>\mu_{2}^{2}\widehat{p}_{2}^{2}(t)\]
Multiplying the two inequalities by \(-\lambda_{0}\left|\Omega\right|\alpha_{1},\ -\lambda_{0}\left|\Omega\right|\alpha_{2}\) respectively, we get
\[-\lambda_{0}\left|\Omega\right|\alpha_{1}\widehat{p}^{2}(t)<-\lambda_{0}\left| \Omega\right|\mu_{1}^{2}\alpha_{1}\widehat{p}_{1}^{2}(t)\mbox{ and }-\lambda_{0}\left|\Omega\right|\alpha_{2}\widehat{p}^{2}(t)<-\lambda_{0} \left|\Omega\right|\mu_{2}^{2}\alpha_{2}\widehat{p}_{2}^{2}(t)\]
Taking \(\alpha=\alpha_{1}+\alpha_{2}\) and \(\overline{\mu}=\max\left\{\mu_{1},\mu_{2}\right\}\) and substituting into the above inequalities,
\[-\lambda_{0}\left|\Omega\right|\alpha\widehat{p}^{2}(t)<-\lambda_{0}\left| \Omega\right|\overline{\mu}[\mu_{1}\alpha_{1}\widehat{p}_{1}^{2}(t)+\mu_{2} \alpha_{2}\widehat{p}_{2}^{2}(t)] \tag{34}\]
For \(\overline{d}=\max\left\{d_{1},d_{2}\right\}\), by (33), (34) and the growth condition on \((f_{1},f_{2})\), we obtain
\[\left|\Omega\right|\widehat{p}^{\prime}(t)+\lambda_{0}\left|\Omega\right| \overline{d}\widehat{p}(t)+\lambda_{0}\left|\Omega\right|\alpha\overline{\mu }^{-1}\widehat{p}^{2}(t)>\int_{\Omega}\psi(u_{1}+u_{2})^{2}\Phi_{0}dx-\int_{ \Omega}c(u_{1}+u_{2})\Phi_{0}dx. \tag{35}\]
We use the relation
\[\overline{\mu}^{-1}(\mu_{1}u_{1}+\mu_{2}u_{2})\leq(u_{1}+u_{2})\leq\underline{ \mu}^{-1}(\mu_{1}u_{1}+\mu_{2}u_{2}),\ \underline{\mu}=\min(\mu_{1},\mu_{2})\]
\[-\int_{\Omega}c(u_{1}+u_{2})\Phi_{0}dx\geq-\int_{\Omega}c\underline{\mu}^{-1}(\mu_{1}u_{1}+\mu_{2}u_{2})\Phi_{0}dx\geq-c\underline{\mu}^{-1}\left|\Omega\right|\widehat{p}(t) \tag{36}\]
\[\int_{\Omega}\psi(u_{1}+u_{2})^{2}\Phi_{0}dx\geq\psi\overline{\mu}^{-2}\int_{ \Omega}(\mu_{1}u_{1}+\mu_{2}u_{2})^{2}\Phi_{0}dx \tag{37}\]
\[\int_{\Omega}\left[(\mu_{1}u_{1}+\mu_{2}u_{2})\Phi_{0}\right]^{2}dx\geq\int_{ \Omega}(\mu_{1}u_{1}+\mu_{2}u_{2})^{2}\Phi_{0}dx\geq\left|\Omega\right| \widehat{p}^{2}(t) \tag{38}\]
We replace (36), (37) and (38) in the inequality (35):
\[\widehat{p}^{\prime}(t)+(\lambda_{0}\overline{d}+c\underline{\mu}^{-1}) \widehat{p}(t)\geq(\lambda_{0}\alpha\overline{\mu}^{-1}+\psi\overline{\mu}^{- 2})\widehat{p}^{2}(t) \tag{39}\]
Taking \(\overline{\tau}=\lambda_{0}\overline{d}+c\underline{\mu}^{-1}\) and \(\underline{\psi}=\lambda_{0}\alpha\overline{\mu}^{-1}+\psi\overline{\mu}^{-2}\), this gives

\[\widehat{p}^{\prime}(t)+\overline{\tau}\widehat{p}(t)\geq\underline{\psi}\widehat{p}^{2}(t)\]
Integrating the above inequality, we get

\[\widehat{p}(t)\geq\frac{\exp(-\overline{\tau}t)}{\frac{1}{\widehat{p}_{0}}-\frac{\underline{\psi}}{\overline{\tau}}(1-\exp(-\overline{\tau}t))},\ t\leq T_{0} \tag{40}\]
where \(\widehat{p}_{0}=\mu_{1}\widehat{p}_{1}(0)+\mu_{2}\widehat{p}_{2}(0),\) and \(T_{0}\) is given by the right-hand side of (27).
Since for \(\widehat{p}_{0}>\overline{\tau}/\underline{\psi}\) the function \(\widehat{p}(t)\) grows and is unbounded as \(t\to T_{0},\) there exists \(T^{*}\leq T_{0}\) such that \(\widehat{p}(t)\rightarrow+\infty\) as \(t\to T^{*}.\) This shows that \(\mu_{1}\widehat{p}_{1}(t)+\mu_{2}\widehat{p}_{2}(t)\rightarrow+\infty\) as \(t\to T^{*}\) when the condition (28) is satisfied.
By relation (29), \(u_{1}\) or \(u_{2}\) must be unbounded in \([0,T^{*}]\times\overline{\Omega}\), which implies that \((u_{1},u_{2})\) possesses the blowing-up property (22). \(\blacksquare\)
### Discussion on the blow-up parameters
Following the notations in (23), we obtain the two conditions on the blow-up parameters:
* If \(0<\mu_{1}<\mu_{2}\) we choose \(\psi_{1}=\mu_{1}b_{1}\) then \[\mu_{1}b_{2}<\mu_{2}b_{2}\Leftrightarrow\frac{\mu_{1}c_{1}+\mu_{1}b_{2}}{2}< \frac{\mu_{1}c_{1}+\mu_{2}b_{2}}{2}\] \[\frac{\mu_{1}c_{1}+\mu_{1}b_{2}}{2}<\psi_{2}<\psi_{1}\] we obtain the first condition \[c_{1}+b_{2}<2b_{1}\] (41)
* If \(\mu_{1}>\mu_{2}>0,\) we choose \(\psi_{1}=\mu_{2}c_{2}\) then \[\frac{\mu_{1}c_{1}+\mu_{2}b_{2}}{2}>\frac{\mu_{2}c_{1}+\mu_{2}b_{2}}{2}\] \[\psi_{1}>\frac{\mu_{1}c_{1}+\mu_{2}b_{2}}{2}>\frac{\mu_{2}c_{1}+\mu_{2}b_{2}}{2}\] \[\mu_{2}c_{2}>\frac{\mu_{2}c_{1}+\mu_{2}b_{2}}{2}\] we obtain the second condition : \[2c_{2}>c_{1}+b_{2}\] (42)
## 4 Conclusion
We have obtained the following conditions for the global existence of the solution to the SKT problem:
If \(b_{1}\geq 2\alpha_{1}\lambda_{0}\), \(c_{2}\geq 2\alpha_{2}\lambda_{0},\ a_{1}\leq(b_{1}-2\alpha_{1}\lambda_{0})\rho_{1}\), \(a_{2}\leq(c_{2}-2\alpha_{2}\lambda_{0})\rho_{2}\) then, the initial upper solution \((w_{1}^{(0)},w_{2}^{(0)})\) satisfies the relation
\[w_{1}^{(0)}\leq\min\left\{\frac{a_{1}}{b_{1}},\frac{(c_{2}-2\alpha_{2}\lambda_ {0})\rho_{2}-a_{2}}{b_{2}}\right\},\ w_{2}^{(0)}\leq\min\left\{\frac{a_{2}}{c_{2 }},\frac{(b_{1}-2\alpha_{1}\lambda_{0})\rho_{1}-a_{1}}{c_{1}}\right\}\]
and the problem (2) admits a unique global solution \((u_{1},u_{2})\) in \(\overline{\Omega}\times[0,+\infty)\).
For the blow-up parameter conditions, we have obtained:
if \(\psi_{1}=\mu_{1}b_{1},\ \psi_{2}=\frac{(\mu_{1}c_{1}+\mu_{2}b_{2})}{2},\ \psi_{2}<\psi_{1}\) and \(0<\mu_{1}<\mu_{2}\) then \(c_{1}+b_{2}<2b_{1}\).
if \(\psi_{1}=\mu_{2}c_{2},\ \psi_{2}=\frac{(\mu_{1}c_{1}+\mu_{2}b_{2})}{2},\psi_{2}< \psi_{1}\) and \(\mu_{1}>\mu_{2}>0\) then \(c_{1}+b_{2}<2c_{2}\).
Under these conditions the solution of problem (2) blows up as \(t\to T^{*}\).
|
2307.11610 | CausE: Towards Causal Knowledge Graph Embedding | Knowledge graph embedding (KGE) focuses on representing the entities and
relations of a knowledge graph (KG) into the continuous vector spaces, which
can be employed to predict the missing triples to achieve knowledge graph
completion (KGC). However, KGE models often only briefly learn structural
correlations of triple data and embeddings would be misled by the trivial
patterns and noisy links in real-world KGs. To address this issue, we build the
new paradigm of KGE in the context of causality and embedding disentanglement.
We further propose a Causality-enhanced knowledge graph Embedding (CausE)
framework. CausE employs causal intervention to estimate the causal effect of
the confounder embeddings and design new training objectives to make stable
predictions. Experimental results demonstrate that CausE could outperform the
baseline models and achieve state-of-the-art KGC performance. We release our
code in https://github.com/zjukg/CausE. | Yichi Zhang, Wen Zhang | 2023-07-21T14:25:39Z | http://arxiv.org/abs/2307.11610v2 | # CausE: Towards Causal Knowledge Graph Embedding
###### Abstract
Knowledge graph embedding (KGE) focuses on representing the entities and relations of a knowledge graph (KG) into the continuous vector spaces, which can be employed to predict the missing triples to achieve knowledge graph completion (KGC). However, KGE models often only briefly learn structural correlations of triple data and embeddings would be misled by the trivial patterns and noisy links in real-world KGs. To address this issue, we build the new paradigm of KGE in the context of causality and embedding disentanglement. We further propose a **Caus**ality-enhanced knowledge graph **E**mbedding (**CausE**) framework. CausE employs causal intervention to estimate the causal effect of the confounder embeddings and design new training objectives to make stable predictions. Experimental results demonstrate that CausE could outperform the baseline models and achieve state-of-the-art KGC performance. We release our code in [https://github.com/zjukg/CausE](https://github.com/zjukg/CausE).
Keywords: Knowledge Graph Embedding, Knowledge Graph Completion, Causal Inference.
## 1 Introduction
Knowledge graphs (KGs) [2] model the world knowledge with structural triples in the form of _(head entity, relation, tail entity)_, which portray the relation between the head and tail entity. Expressive KGs have become the new infrastructure of artificial intelligence (AI) and have been widely used in question answering [18], recommender systems [20], and fault analysis [6].
KGs are usually inherently incomplete due to their vast diversity and complexity. To address this issue, knowledge graph completion (KGC) has become a popular research topic, aimed at identifying undiscovered triples in KGs. A mainstream solution to KGC is knowledge graph embedding (KGE), which utilizes low-dimensional continuous space to embed entities and relations from the KG. The triple structure is modeled through a score function [3, 17, 12] that measures the plausibility of each triple, forming the basis for predictions in KGC tasks.
However, in KGs, various confounding factors (such as trivial structural patterns, noisy links, etc.) may mislead KGE models, resulting in spurious correlations [11] being learned and non-causal predictions being made. Figure 1
provides an intuitive view of such a situation. While many existing methods propose scoring functions to model different relationship patterns, they overlook the possibility that the knowledge graph data itself may contain information that could mislead the model.
To address the mentioned problem, We decouple the embeddings of entities and relations into causal and confounder embeddings. Then we introduce the theory of causal inference [9] to model and analyze this problem. We construct the structural causal model (SCM) [10] to analyze the KGE task in the context of causality. Meanwhile, we propose a **Caus**ality-enhanced knowledge graph **E**mbedding (CausE) framework to guide the KGE models to learn causal features in the KG. In CausE, we design the intervention operator to implement the backdoor adjustment [10], which would combine the two kinds of embeddings to estimate the effect of the causal and confounder embeddings. Besides, we design two auxiliary training objectives to enhance the model. We conduct comprehensive experiments on two public benchmarks with the link prediction task to demonstrate the effectiveness of CausE on KGC and make further explorations. The main contribution of this paper can be summarized as follows:
* We are the first work to introduce causality theory into the field of KGE.
* We propose a new learning paradigm for KGE in the context of causality and design a **Caus**ality-enhanced knowledge graph **E**mbedding (**CausE** for short) framework to learn causal embeddings for KGE models.
* We conduct comprehensive experiments on public benchmarks to demonstrate the effectiveness of CausE. We also make further exploration to understand it deeply.
## 2 Related Works
### Knowledge Graph Embedding
Knowledge graph embedding [15] usually represents the entities and relations of a KG in a low-dimensional continuous space to learn the structural features
Figure 1: A simple example to explain that the confounding factors like noisy links e.g. (Human, prey_on, Mouse) and trivial patterns (Both Tiger and Cat are in the family of Felidae) might mislead the link prediction. In this case, the prediction result of (Tiger, prey_on,?) would be misled to Mouse.
in the KG. A score function is defined in the KGE model to model the triple structure and discriminate the plausibility of triples.
Existing KGE methods [3, 17, 14, 12, 4, 5] focus on designing elegant and expressive score functions to model the triples. Translation-based methods [3, 12, 5] model the relation as a translation from head to tail in the representation space. TransE [3] treats the translation as a vector addition. RotatE [12] represents the relation as a rotation in the complex space. PairRE [5] employs two vectors for relation representation and designs a more complicated score function in the Euclidean space. Besides, semantic matching models [17, 14] employ latent semantic matching to score the triples, which could be regarded as implicit tensor factorization. DistMult [17] treats the process as 3D tensor factorization and ComplEx [14] further extends it to the complex space. Although various KGE methods have been proposed and achieve state-of-the-art knowledge graph completion results, no existing method is concerned with learning the causality of the triple structure to make knowledge graph completion better.
### Causal Inference-Enhanced Graph Learning
Causal inference [10, 9] is a popular statistical research topic which aims at discovering the causality between data. In recent years, it has become increasingly common to combine causal inference and machine learning to learn the causality from data rather than the correlation, for stable and robust prediction. As for graph learning (GL), causal inference also brings a different perspective to the learning paradigm of graphs. CGI [8] employs causal theory to select trustworthy neighbors for graph convolution networks. CAL [11] proposes a causal attention learning framework to learn the causal features of graphs to enhance the graph classification task. However, there is no existing work that introduces causal theory into the knowledge graph community.
## 3 Preliminary
A knowledge graph can be denoted as \(\mathcal{G}=(\mathcal{E},\mathcal{R},\mathcal{T})\), where \(\mathcal{E}\) is the entity set, \(\mathcal{R}\) is the relation set, and \(\mathcal{T}=\{(h,r,t)|h,t\in\mathcal{E},r\in\mathcal{R}\}\) is the triple set.
A KGE model would embed each entity \(e\in\mathcal{E}\) and each relation \(r\in\mathcal{R}\) into the continuous vector space and represent each of them with an embedding. We denote \(\mathbf{E}\in\mathbb{R}^{|\mathcal{E}|\times d_{e}}\) and \(\mathbf{R}\in\mathbb{R}^{|\mathcal{R}|\times d_{r}}\) as the embedding matrices of entities and relations respectively, where \(d_{e},d_{r}\) are the dimensions of the entity embeddings and the relation embeddings. Besides, a score function \(\mathcal{F}(h,r,t)\) is defined to measure the triple plausibility. The overall target of the KGE model is to give positive triples higher scores and give negative triples lower scores. During training, negative triples are generated by randomly replacing the head or tail entity for positive-negative contrast. We denote the negative triple set as \(\mathcal{T}^{\prime}=\{(h^{\prime},r,t)|(h,r,t)\in\mathcal{T},h^{\prime}\in\mathcal{E},h^{\prime}\neq h\}\cup\{(h,r,t^{\prime})|(h,r,t)\in\mathcal{T},t^{\prime}\in\mathcal{E},t^{\prime}\neq t\}\). The sigmoid loss proposed by [12] is widely used by recent state-of-the-art KGE methods, which could be
denoted as:
\[\mathcal{L}=\frac{1}{|\mathcal{T}|}\sum_{(h,r,t)\in\mathcal{T}}\Big{(}-\log\sigma( \gamma-\mathcal{F}(h,r,t))-\sum_{i=1}^{K}p_{i}\log\sigma(\mathcal{F}(h_{i}^{ \prime},r_{i}^{\prime},t_{i}^{\prime})-\gamma)\Big{)} \tag{1}\]
where \(\sigma\) is the sigmoid function, \(\gamma\) is the margin, and \(K\) is the number of negative triples generated for each positive triple. The negative triples for \((h,r,t)\) is denoted as \((h_{i}^{\prime},r_{i}^{\prime},t_{i}^{\prime}),i=1,2,\ldots,K\). Besides, \(p_{i}\) is the self-adversarial weight [12] for each negative triple \((h_{i}^{\prime},r_{i}^{\prime},t_{i}^{\prime})\). It could be denoted as \(p_{i}=\frac{\exp(\alpha\mathcal{F}(h_{i}^{\prime},r_{i}^{\prime},t_{i}^{ \prime}))}{\sum_{j=1}^{K}\exp(\alpha\mathcal{F}(h_{j}^{\prime},r_{j}^{\prime},t_{j}^{\prime}))}\), where \(\alpha\) is the temperature of self-adversarial weight.
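For concreteness, a minimal PyTorch sketch of the loss (1) could look as follows; it mirrors the formula as printed, the self-adversarial weights \(p_{i}\) are treated as constants (detached from the gradient, as is common in implementations of [12]), and the shapes and default hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def self_adversarial_sigmoid_loss(pos_scores, neg_scores, gamma=6.0, alpha=1.0):
    # pos_scores: (B,) scores of positive triples; neg_scores: (B, K) scores
    # of the K negatives per positive. Implements equation (1) term by term.
    p = torch.softmax(alpha * neg_scores, dim=-1).detach()  # weights p_i
    pos_term = -F.logsigmoid(gamma - pos_scores)            # -log sigma(gamma - F(h,r,t))
    neg_term = -(p * F.logsigmoid(neg_scores - gamma)).sum(dim=-1)
    return (pos_term + neg_term).mean()

# Tiny smoke test with random scores (batch of 4, 8 negatives each).
print(self_adversarial_sigmoid_loss(torch.randn(4), torch.randn(4, 8)).item())
```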
## 4 Methodology
In this section, we first present the structural causal model (SCM) for the KGE task. Then we further propose our causality-enhanced KGE framework CausE to learn causal and confounder embeddings with carefully designed objectives.
### SCM for KGE task
In KGE models described in Section 3, each entity and relation has a single embedding that encodes both the useful (causal) and harmful (confounder) features. However, as discussed in Section 1, this approach is not robust enough since some local structural information in the KG (e.g. trivial patterns, noisy links) can mislead embedding learning. To develop better embeddings that account for structural causality and make accurate predictions, we introduce the structural causal model (SCM) [10] for KGE, as shown in Figure 2.
The SCM defines variables: the triple data \(T\), the confounder embeddings \(F\), the causal embeddings \(C\), the triple score \(S\), and the prediction result \(Y\). Besides, the SCM demonstrates several causal relations among those variables:
* \(F\gets T\to C\). The causal embeddings \(C\) encode the implicit knowledge about the triple structure. The confounder embeddings \(F\), however, have no contribution to the prediction. As both of them could be learned from the KG data \(T\), such causal relations exist in the SCM.
* \(F\to S\gets C\). \(S\) represents the score of a triple, which is based on both the causal embeddings and confounder embeddings.
* \(S\to Y\). We denote \(Y\) as the prediction results. The overall target of a KGE model is to predict the proper results \(Y\) based on the triple scores \(S\) in the inference stage.
Figure 2: Our SCM for KGE models.
In the original KGE paradigm, the causal and confounder embeddings of each entity or relation co-exist in one embedding. With the SCM, we explicitly disentangle the structural embeddings into causal and confounder embeddings and analyze their effects on the prediction results \(Y\). The next question is how to mitigate the impact of \(F\) on the final prediction \(Y\) to make causal predictions.
### Causal Intervention
According to the SCM, both the causal embeddings \(C\) and the confounder embeddings \(F\) could be learned from the triple data, and both are considered in the triple score \(S\). Thus, \(F\gets T\to C\to S\to Y\) is a backdoor path [9] and \(F\) is the confounder between \(C\) and \(Y\).
To make causal predictions based on causal embeddings \(C\), we need to model \(P(Y|C)\). However, the backdoor path creates a confounding effect of \(F\) on the probability distribution \(P(Y|C)\), opening a backdoor from \(F\) to \(Y\). Therefore, it is crucial to block the backdoor path and reduce the impact of confounder embeddings. This will allow KGE models to make predictions by utilizing the causal embeddings fully. Causality theory [10, 9] provides powerful tools to solve the backdoor path problem.
We employ do-calculus [10, 9] to make the causal intervention on the variable \(C\), which could **cut off the backdoor path**\(F\gets T\to C\to S\to Y\). With the help of do-calculus, the influence from the confounder \(F\) to \(C\) is manually cut off, which means \(C,F\) are independent. Our target turns to estimate \(P(Y|do(C))\) instead of the confounded \(P(Y|C)\). Combined with Bayes Rule and the causal assumptions [10, 9], we could deduce as follows:
\[P(Y|do(C))=P(Y|S)\sum_{d\in\mathcal{D}}P(S|C,d)P(d) \tag{2}\]
The above derivation shows that to estimate the causal effect of \(C\) on \(Y\), it is necessary to consider the scores with both causal and confounder embeddings. This can be understood as re-coupling the decoupled embeddings and using them to calculate the score of the triple. In the next section, we would propose our **Caus**ality-enhanced knowledge graph **E**mbedding (CausE) framework and implement the backdoor adjustments mentioned above.
### CausE Framework
In this section, we would demonstrate our **Caus**ality-enhanced knowledge graph **E**mbedding (CausE) framework. We would first describe the basic settings of CausE and emphasize how we implement the backdoor adjustment in the CausE.
#### 4.3.1 Basic Definition
The overall framework of CausE is shown in Figure 3. In the embedding layer, we define two embeddings called causal embedding and confounder embedding for each entity and relation in the KG, aiming to achieve the disentanglement of causal and confounder features. Specifically, for each
entity \(e\in\mathcal{E}\), we define a causal embedding \(\mathbf{e}_{caus}\) and a confounder embedding \(\mathbf{e}_{conf}\) for it. Similarly, for each relation \(r\in\mathcal{R}\), the two embeddings are \(\mathbf{r}_{caus}\) and \(\mathbf{r}_{conf}\). Such design is consistent with the SCM in Figure 2.
As for the score function, we employ three score functions \(\mathcal{F}_{caus},\mathcal{F}_{conf},\mathcal{F}_{inter}\), which are called causal score, confounder score, and intervention score respectively. The three score functions are in the same form but can be any general score functions proposed by the existing KGE models. Besides, we design several loss functions to guide the training process of CausE. We would describe the details of the score functions and their corresponding loss functions.
#### 4.3.2 Causal and Confounder Scores
The causal score function \(\mathcal{F}_{caus}(h,r,t)\) takes the causal embeddings \(\mathbf{h}_{caus},\mathbf{r}_{caus},\mathbf{t}_{caus}\) of \(h,r,t\) as input and calculates the causal score of the triple. According to our assumption, the causal embeddings are expected to make reasonable and causal predictions. Thus, the causal score \(\mathcal{F}_{caus}(h,r,t)\) should still follow the general rule of KGE models: positive triples should have higher scores. We apply the sigmoid loss with self-adversarial negative sampling to train the causal embeddings. The causal loss \(\mathcal{L}_{caus}\) has the same form as Equation 1, which is based on \(\mathcal{F}_{caus}\).
Meanwhile, the confounder score function \(\mathcal{F}_{conf}(h,r,t)\) would calculate the confounder score of the confounder embeddings \(\mathbf{h}_{conf}\), \(\mathbf{r}_{conf}\), \(\mathbf{t}_{conf}\). Different from the causal embeddings, we assume that confounder embeddings learn the harmful features from the KGs and they make no positive contribution to the reasonable prediction. Hence, the confounder score \(\mathcal{F}_{caus}(h,r,t)\) should be close to the confounder score of negative triples, which means the KGE model is misled by the harmful features and could not distinguish the positive triple from high plausibility from the negative triples. Therefore, we apply the mean squared error (MSE) loss to train the confounder embeddings. The training objective can
Figure 3: The overall architecture of CausE. We disentangle the embeddings into two parts called causal and confounder embeddings respectively while applying three score functions. We also design five loss functions to train these embeddings while the causal intervention is integrated into them.
be denoted as:
\[\mathcal{L}_{conf}=\frac{1}{|\mathcal{T}|}\sum_{(h,r,t)\in\mathcal{T}}\Big{(}\mathcal{F}_{conf}(h,r,t)-\sum_{i=1}^{K}p_{i}\mathcal{F}_{conf}(h_{i}^{\prime},r_{i}^{\prime},t_{i}^{\prime})\Big{)}^{2} \tag{3}\]
By the two loss functions proposed above, we could achieve the disentanglement of the causal and confounder embeddings.
#### 4.3.3 Intervention Scores
As shown in Equation 2, we need to implement the backdoor adjustment. As we mentioned above, the formula for the backdoor adjustment can be understood as jointly measuring the triple plausibility with both causal and confounder embeddings, while considering all possible confounder embeddings. This is equivalent to recombining the two decoupled embeddings into the original embeddings and computing the score.
We call this score the intervention score \(\mathcal{F}_{inter}(h,r,t)\). Besides, we propose an **intervention operator** \(\Phi\) to recombine the two embeddings and output the intervention embeddings. This process can be denoted as:
\[\mathbf{e}_{inter}=\Phi(\mathbf{e}_{caus},\mathbf{e}_{conf}),\mathbf{e}\in\{\mathbf{h},\mathbf{t}\} \quad\mathbf{r}_{inter}=\Phi(\mathbf{r}_{caus},\mathbf{r}_{conf}) \tag{4}\]
We employ the addition operation as the intervention operation. Hence, we can calculate the intervention score \(\mathcal{F}_{inter}(h,r,t)\) with the intervention embeddings \(\mathbf{h}_{inter},\mathbf{r}_{inter},\mathbf{t}_{inter}\). From another perspective, causal intervention is a process that employs the confounder embeddings to disrupt the prediction of the causal embeddings in order to estimate the causal effect of the confounder embeddings. We expect the intervention scores to still lead to reasonable predictions. Thus, the training objective \(\mathcal{L}_{inter}\) is also a sigmoid loss like (1), based on \(\mathcal{F}_{inter}\).
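A condensed sketch of the disentangled embedding tables and the intervention operator \(\Phi\) of (4) might look as follows; the TransE-style score used here is just one possible instantiation (CausE plugs in several score functions), and all names and shapes are assumptions.

```python
import torch
import torch.nn as nn

class CausEEmbeddings(nn.Module):
    def __init__(self, n_ent, n_rel, dim):
        super().__init__()
        self.ent_caus = nn.Embedding(n_ent, dim)  # causal entity embeddings
        self.ent_conf = nn.Embedding(n_ent, dim)  # confounder entity embeddings
        self.rel_caus = nn.Embedding(n_rel, dim)
        self.rel_conf = nn.Embedding(n_rel, dim)

    @staticmethod
    def score(h, r, t):
        # TransE-style plausibility: negative L1 translation distance.
        return -(h + r - t).norm(p=1, dim=-1)

    def forward(self, h_idx, r_idx, t_idx):
        hc, rc, tc = self.ent_caus(h_idx), self.rel_caus(r_idx), self.ent_caus(t_idx)
        hf, rf, tf = self.ent_conf(h_idx), self.rel_conf(r_idx), self.ent_conf(t_idx)
        # Intervention operator Phi: addition of causal and confounder parts.
        hi, ri, ti = hc + hf, rc + rf, tc + tf
        return (self.score(hc, rc, tc),   # F_caus
                self.score(hf, rf, tf),   # F_conf
                self.score(hi, ri, ti))   # F_inter
```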
#### 4.3.4 Auxiliary Objectives
To further improve the performance of CausE, we utilize the intervention score and propose two auxiliary training objectives.
As we mentioned above, the intervention embeddings can be regarded as the causal embeddings perturbed by the confounder embeddings. Therefore, the effectiveness of the intervention scores should be worse than that of the causal scores but better than that of the confounder scores. Based on this assumption, we design two auxiliary training objectives. The first auxiliary objective is between the causal and intervention scores. We apply the sigmoid loss function to contrast them and push the causal scores higher than the intervention scores:
\[\mathcal{L}_{aux1}=\frac{1}{|\mathcal{T}|}\sum_{(h,r,t)\in\mathcal{T}}\Big{(} -\log\sigma(\gamma-\mathcal{F}_{caus}(h,r,t))-\log\sigma(\mathcal{F}_{inter} (h,r,t)-\gamma)\Big{)} \tag{5}\]
The second auxiliary objective \(\mathcal{L}_{aux2}\) is similarly designed as \(\mathcal{L}_{aux1}\) to push the intervention scores higher than the confounder scores. In summary, the overall training objective of CausE is:
\[\mathcal{L}=\mathcal{L}_{caus}+\mathcal{L}_{conf}+\mathcal{L}_{inter}+ \mathcal{L}_{aux1}+\mathcal{L}_{aux2} \tag{6}\]
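Putting the pieces together, a sketch of the five-term objective (6) could combine the score triplets above; here `caus`, `conf` and `inter` are (positive, negative) score pairs of shapes (B,) and (B, K), the sigmoid and MSE terms follow (1) and (3), and reusing the self-adversarial weights inside (3) follows the \(p_{i}\) notation there.

```python
import torch
import torch.nn.functional as F

def sigmoid_loss(pos, neg, gamma, alpha):        # cf. equation (1)
    p = torch.softmax(alpha * neg, dim=-1).detach()
    return (-F.logsigmoid(gamma - pos) - (p * F.logsigmoid(neg - gamma)).sum(-1)).mean()

def conf_loss(pos, neg, alpha):                  # cf. equation (3)
    p = torch.softmax(alpha * neg, dim=-1).detach()
    return ((pos - (p * neg).sum(-1)) ** 2).mean()

def aux_loss(better, worse, gamma):              # contrast pattern of equation (5)
    return (-F.logsigmoid(gamma - better) - F.logsigmoid(worse - gamma)).mean()

def cause_total_loss(caus, conf, inter, gamma=6.0, alpha=1.0):
    # caus/conf/inter: (positive_scores, negative_scores) pairs.
    return (sigmoid_loss(*caus, gamma, alpha)        # L_caus
            + conf_loss(*conf, alpha)                # L_conf
            + sigmoid_loss(*inter, gamma, alpha)     # L_inter
            + aux_loss(caus[0], inter[0], gamma)     # L_aux1: causal vs. intervention
            + aux_loss(inter[0], conf[0], gamma))    # L_aux2: intervention vs. confounder
```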
## 5 Experiments
In this section, we demonstrate the effectiveness of our method with comprehensive experiments. We first introduce our experimental settings in detail in Section 5.1. Then we present our results to answer the following questions:
* **RQ1**: Could CausE outperform the existing baseline methods in the knowledge graph completion task?
* **RQ2**: How does CausE perform in the noisy KGs?
* **RQ3**: How much does each module of CausE contribute to the performance?
* **RQ4**: Do the learned embeddings achieve our intended goal?
### Experiment Settings
#### 5.1.1 Datasets / Tasks / Evaluation Protocols.
In the experiments, we use two benchmark datasets FB15K-237 [13] and WN18RR [7].
We evaluate our method with the link prediction task, which is the main task of KGC. The link prediction task aims to predict the missing entities for a given query \((h,r,?)\) or \((?,r,t)\). We evaluate our method with mean reciprocal rank (MRR) and Hit@K (K=1,3,10), following [12]. Besides, we follow the filter setting [3], which removes candidate triples that have already appeared in the training data to avoid their interference.
#### 5.1.2 Baselines.
As for the link prediction task, we select several state-of-the-art KGE methods, including translation-based methods (TransE [3], RotatE [12], PairRE [5]), semantic matching methods (DistMult [17], ComplEx [14]), quaternion-based methods (QuatE [19], DualE [4]), and neural network based methods (ConvE [7], MurP [1]). We report the baseline results from the original paper.
#### 5.1.3 Parameter Settings.
We implement the CausE framework with five representative score functions: TransE [3], DistMult [16], ComplEx [14], PairRE [5], and DualE [4]. We apply grid search to tune the hyper-parameters to find the best results of CausE. We search the embedding dimension of the KGE model \(d_{e},d_{r}\in\{256,512,1024\}\), the margin \(\gamma\in\{0,4,6,8\}\), the training batch size \(\in\{512,1024\}\), the temperature \(\alpha\in\{1.0,2.0\}\), the negative sample number \(N_{k}\in\{64,128,256\}\), and the learning rate \(\eta\in\{1e^{-3},1e^{-4},2e^{-5}\}\). We conduct all the experiments on Nvidia GeForce 3090 GPUs with 24 GB RAM.
### Main Results (RQ1)
Our main experiment results are in Table 1. From the results, we find that CausE outperforms the baseline methods on both benchmarks. For example, CausE achieves a relative 1.4% Hit@1 improvement on the WN18RR dataset. These results establish CausE as a new state-of-the-art KGE method.
Meanwhile, CausE is a universal framework that can be applied to various KGE models. The results in Table 1 also show that CausE enhances the performance of various KGE models compared with the corresponding baselines trained without CausE. For example, the MRR results of the TransE/DistMult/ComplEx models on the FB15K-237 dataset improve by a relative 18.9%, 13.5%, and 16.5%, respectively. We attribute this to design defects in the early KGE models, which mislead them into learning confounder features in the KG and making non-causal predictions at inference time. Overall, CausE outperforms the baseline methods across various score functions, which answers **RQ1**.
### Link Prediction on Noisy KG (RQ2)
To answer **RQ2**, we further explore the noisy link prediction task, aiming to validate the robustness of CausE on noisy KGs. We define the noise rate \(\lambda=\frac{|\mathcal{T}_{noisy}|}{|\mathcal{T}_{train}|}\), where \(\mathcal{T}_{noisy}\subset\mathcal{T}_{train}\) is the set of noisy links in the training set. We generate noisy KGs by randomly replacing the positive triples, varying the noise rate \(\lambda\) from 1% to 10%. We conduct experiments on these noisy datasets with DistMult [17] and ComplEx [14]. The results are shown in Figure 4.
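A sketch of how such noisy training sets can be generated, assuming `train` is a list of \((h,r,t)\) triples and `entities` a list of entity ids; this is illustrative, not the authors' exact corruption scheme.

```python
import random

def make_noisy(train, entities, lam, seed=0):
    rng = random.Random(seed)
    noisy = list(train)
    # Replace a fraction lam of the positive triples with corrupted ones.
    for i in rng.sample(range(len(noisy)), int(lam * len(noisy))):
        h, r, t = noisy[i]
        e = rng.choice(entities)
        noisy[i] = (e, r, t) if rng.random() < 0.5 else (h, r, e)
    return noisy
```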
According to the noisy link prediction results, we first observe that the performance of the KGE models gradually declines as the number of noisy links in the training data increases. Further, the models enhanced with CausE outperform the baseline models across the different benchmarks and score functions. These experimental results show that our design is effective in countering noise in the dataset and achieves better link prediction performance.
| Model | FB15K-237 MRR | FB15K-237 Hit@10 | FB15K-237 Hit@3 | FB15K-237 Hit@1 | WN18RR MRR | WN18RR Hit@10 | WN18RR Hit@3 | WN18RR Hit@1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TransE [3] | 0.279 | 0.441 | 0.376 | 0.198 | 0.224 | 0.520 | 0.390 | 0.022 |
| DistMult [17] | 0.281 | 0.446 | 0.301 | 0.199 | 0.444 | 0.504 | 0.470 | 0.412 |
| ComplEx [14] | 0.278 | 0.450 | 0.297 | 0.194 | 0.449 | 0.530 | 0.469 | 0.409 |
| ConvE [7] | 0.312 | 0.497 | 0.341 | 0.225 | 0.456 | 0.531 | 0.470 | 0.419 |
| RotatE [12] | 0.338 | 0.533 | 0.375 | 0.241 | 0.476 | 0.571 | 0.492 | 0.428 |
| MurP [1] | 0.336 | 0.521 | 0.370 | 0.245 | 0.475 | 0.554 | 0.487 | 0.436 |
| QuatE [19] | 0.311 | 0.495 | 0.342 | 0.221 | 0.481 | **0.564** | 0.500 | 0.436 |
| DualE [4] | 0.330 | 0.518 | 0.363 | 0.237 | 0.482 | 0.561 | 0.500 | 0.440 |
| PairRE [5] | 0.351 | 0.544 | 0.387 | 0.256 | - | - | - | - |
| CausE (TransE) | 0.332 | 0.517 | 0.368 | 0.234 | 0.227 | 0.536 | 0.391 | 0.023 |
| CausE (DistMult) | 0.298 | 0.473 | 0.327 | 0.212 | 0.447 | 0.517 | 0.452 | 0.415 |
| CausE (ComplEx) | 0.324 | 0.504 | 0.357 | 0.234 | 0.467 | 0.527 | 0.482 | 0.436 |
| CausE (SOTA) | **0.355** | **0.547** | **0.392** | **0.259** | **0.486** | 0.562 | **0.502** | **0.446** |

Table 1: Link prediction results on FB15K-237 and WN18RR. The best results are **bold** and the second best results are underlined for each metric.
### Ablation Study (RQ3)
To explore **RQ3**, we conduct ablation studies on the different components of CausE in this section. We verify the effectiveness and necessity of the module design from two aspects.
First, we remove each of the five training objectives and conduct link prediction experiments. Second, we validate the effectiveness of the intervention operator by replacing the addition operation \(\Phi\) with other common operators.
Our ablation studies are conducted in the settings mentioned above with the ComplEx score function on the WN18RR dataset, while keeping the other hyper-parameters the same. The results are shown in Table 2. The experiment results show that all five parts of the training objective are of great significance, as the model performs worse when any of them is removed. The performance of the model degrades most when \(\mathcal{L}_{inter}\) is removed. Hence, the results emphasize that causal intervention plays a very important role in CausE. Meanwhile, when the intervention operator is changed to other settings, the performance of the model also decreases. Thus, we conclude that the addition operation is a good choice, as it is simple yet effective.
| Model | MRR | Hit@10 | Hit@3 | Hit@1 |
| --- | --- | --- | --- | --- |
| CausE-ComplEx | 0.467 | 0.527 | 0.482 | 0.436 |
| w/o \(\mathcal{L}_{caus}\) | 0.458 | 0.525 | 0.479 | 0.421 |
| w/o \(\mathcal{L}_{conf}\) | 0.453 | 0.509 | 0.467 | 0.424 |
| w/o \(\mathcal{L}_{inter}\) | 0.427 | 0.494 | 0.452 | 0.407 |
| w/o \(\mathcal{L}_{aux1}\) | 0.454 | 0.508 | 0.466 | 0.426 |
| w/o \(\mathcal{L}_{aux2}\) | 0.446 | 0.497 | 0.460 | 0.419 |
| \(\Phi\) = subtraction | 0.454 | 0.507 | 0.464 | 0.426 |
| \(\Phi\) = multiplication | 0.439 | 0.494 | 0.454 | 0.409 |
| \(\Phi\) = concatenation | 0.433 | 0.482 | 0.442 | 0.409 |

Table 2: Ablation study results on the WN18RR dataset with the ComplEx score function.
Figure 4: The noisy link prediction results. We report the Hit@1 and MRR results for different experiment settings. The x-axis represents the noisy rate (%) of the training dataset.
### Visualization (RQ4)
To answer **RQ4** and to illustrate the effectiveness of CausE intuitively, we select entities of several different types and visualize their embeddings with t-SNE, as shown in Figure 5. We find that the causal embedding distributions of the different types can be clearly distinguished, while the confounder embeddings are relatively mixed and closer together. The distribution of the intervention embeddings, which represent the original embeddings without disentanglement, lies between the two. This shows that our approach makes the causal embeddings more distinguishable and achieves the designed goal.
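A sketch of this visualization, with random placeholders standing in for the learned entity embeddings and their type labels.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

emb = np.random.randn(300, 64)             # placeholder entity embeddings
types = np.random.randint(0, 5, size=300)  # placeholder type labels
xy = TSNE(n_components=2, random_state=0).fit_transform(emb)
plt.scatter(xy[:, 0], xy[:, 1], c=types, s=8, cmap="tab10")
plt.show()
```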
## 6 Conclusion
In this paper, we emphasize that learning correlations in knowledge graph embedding models might mislead the models into making wrong predictions. We resort to causal inference and propose a new paradigm for knowledge graph embedding. Further, we propose a novel framework called CausE to enhance knowledge graph embedding models. CausE disentangles the causal and confounder features into different embeddings and trains those embeddings guided by causal intervention. Comprehensive experiments demonstrate that CausE outperforms the baseline methods and achieves new state-of-the-art results. In the future, we plan to introduce more causality theory into knowledge graph embeddings and to apply causal theory in more complex scenarios such as multi-modal knowledge graphs and temporal knowledge graphs.
## Acknowledgements
This work is funded by Zhejiang Provincial Natural Science Foundation of China (No. LQ23F020017), Yongjiang Talent Introduction Programme (2022A-238-G), and NSFC91846204/U19B2027.
Figure 5: Embedding visualization results with t-SNE, we assign different colors for the entities with different types. |
2303.12836 | Photogalvanic response in multi-Weyl semimetals | We investigate the dependence of the photogalvanic response of a multi-Weyl
semimetal on its topological charge, tilt, and chemical potential. We derive
analytical expressions for the shift and injection conductivities for tilted
charge-$n$ Weyl points $(n=1,2,3)$ using a low energy two-band effective
Hamiltonian. For double-Weyl semimetals, we also compute the response from
two-band and four-band tight-binding models with broken time-reversal symmetry
to study the effect of band bending and the contributions from higher bands. We
find a significant deviation in the responses obtained from the effective
low-energy continuum model and more realistic four-band continuum and
tight-binding models. We analyze several different limits of these models. We
describe the nature of the deviations and provide estimates of their dependence
on the frequency and other model parameters. Our analysis provides a simple
explanation for the first-principle calculation based frequency dependence of
the injection current in SrSi$_2$. Additionally, we find interesting parameter
regimes where the frequency dependence of the non-linear optical response can
be directly used to probe the type-I/type-II nature of the Weyl cone. We obtain
analytical results for the charge-4 Weyl semimetal by reducing the original
problem involving a triple $k$-space integral to one with only a double
integral. This simplification allows us to extract all relevant information
about the nature of its second-order dc response and the precise condition for
observing circular photogalvanic effect quantization. The semi-analytical
approach presented here can also be extended to a systematic study of second
harmonic generation and first-order optical conductivity in charge-4 Weyl
semimetals. | Arpit Raj, Swati Chaudhary, Gregory A. Fiete | 2023-03-22T18:00:05Z | http://arxiv.org/abs/2303.12836v2 | # Photogalvanic response in multi-Weyl semimetals
###### Abstract
We investigate the dependence of the photogalvanic response of a multi-Weyl semimetal on its topological charge, tilt, and chemical potential. We derive analytical expressions for the shift and injection conductivities for tilted charge-\(n\) Weyl points (\(n=1,2,3\)) using a low energy two-band effective Hamiltonian. For double-Weyl semimetals, we also compute the response from two-band and four-band tight-binding models with broken time-reversal symmetry to study the effect of band bending and the contributions from higher bands. We find a significant deviation in the responses obtained from the effective low-energy continuum model and more realistic four-band continuum and tight-binding models. We analyze several different limits of these models. We describe the nature of the deviations and provide estimates of their dependence on the frequency and other model parameters. Our analysis provides a simple explanation for the first-principle calculation based frequency dependence of the injection current in SrSi\({}_{2}\). Additionally, we find interesting parameter regimes where the frequency dependence of the non-linear optical response can be directly used to probe the type-I/type-II nature of the Weyl cone. We obtain analytical results for the charge-4 Weyl semimetal by reducing the original problem involving a triple \(k\)-space integral to one with only a double integral. This simplification allows us to extract all relevant information about the nature of its second-order dc response and the precise condition for observing circular photogalvanic effect quantization. The semi-analytical approach presented here can also be extended to a systematic study of second harmonic generation and first-order optical conductivity in charge-4 Weyl semimetals.
## I Introduction
The quantum geometry (QG) of Bloch wavefunctions can significantly influence the electronic properties and response functions of a material [1]. The quantum anomalous Hall effect in the absence of a magnetic field is the seminal example of such QG effects, originating in this case from the Chern number [2]. More generally, the anomalous contribution from band topology can overcome limitations in non-topological systems on many physical properties like superfluid weight [3], exciton stability [4], transport coefficients [5], and optical responses [6; 7]. The bulk photovoltaic effect (BPVE) is one such effect where quantum geometry contributions have been shown to be of immense importance [8; 9]. The BPVE is a second-order optical response where a DC current is produced in response to an AC electric field. It has been shown that in many non-centrosymmetric materials, the non-trivial structure of Bloch wavefunctions engenders a BPVE without creating any macroscopic electric field or carrier concentration gradient in the sample [10]. This allows one to overcome the Shockley-Queisser limit [11] present in traditional p-n junctions.
Based on the mechanism of generation, the bulk photovoltaic effects can be divided into shift and injection currents [8]. The shift current results from the real-space shift in the electron wavepacket due to inter-band photoexcitation [12], and the injection current is caused by change in electron velocity upon inter-band transition [8]. The properties of these responses are determined by the polarization of light, and presence of time-reversal and space-inversion symmetries [13]. The shift current response occurs for linearly-polarized light even when time-reversal symmetry is present. On the other hand, the injection current requires circularly polarized light and is also known as the circular photogalvanic effect (CPGE). However, when time-reversal symmetry is broken, both the shift current and injection current can occur for circularly and linearly polarized light, respectively [14].
These mechanisms for a BPVE are intimately related to the quantum geometry of the electronic wavefunction [15; 16], and thus are proving to be reliable tools to probe and utilize the band topology [6; 7]. The bulk photovoltaic effects in Weyl semimetals have attracted enormous research interest as they provide a mechanism to generate photocurrents in the infrared and THz regime [17; 18]. It was shown in the seminal work [19] that the CPGE contribution from a Weyl node would exhibit quantization proportional to the charge of the Weyl node. Following these theory works, CPGE was measured in many different Weyl semimetals including TaAs [20], RhSi [21], and TaIrTe\({}_{4}\) [22] which showed interesting helicity-dependent behavior arising from the chirality of Weyl nodes. These experimental works also highlighted the importance of using more realistic models: the tilt and higher bands were shown to play an important role in determining the CPGE response [23].
In recent years, many different kinds of Weyl semimetals have been discovered [24; 25; 26; 27]. In certain
materials, it has been shown that the Weyl node carrying a charge higher than \(n=1\) can be stabilized by crystal symmetries [28]. These semimetals, also known as multi-Weyl semimetals (MWSMs) have been proposed in SrSi\({}_{2}\)[29; 30], Cu\({}_{2}\)Se, and RhAs\({}_{3}\)[31] which can host Weyl nodes with charge \(n=2\). It was shown in Ref. [32; 33] that such double-Weyl nodes can also be engineered in Luttinger semimetals like \(\alpha\)-Sn by applying strain and magnetic fields or via Floquet engineering.
Materials that can host Weyl nodes with Berry monopole charge higher than two are not known, but triple-WSMs can possibly be obtained from cubic Dirac semimetals [34] by applying a magnetic field or by Floquet engineering [35]. Another interesting feature of these multi-Weyl semimetals is that the dispersion around the Weyl node is no longer linear in all directions but becomes quadratic (cubic) in two directions for charge two (three). This leads to a strong anisotropy in the velocity matrix and also modifies the density of states, which is known to affect the transport coefficients [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51] and linear optical responses [52; 53; 54; 55; 56; 57] of multi-Weyl semimetals. These unusual properties of multi-Weyl semimetals are also believed to significantly influence the second-order optical responses, such as the BPVE and second-harmonic generation. A deeper understanding of how different properties of these MWSMs affect the shift current and injection current can possibly lead to a mechanism to probe the topological charge of Weyl semimetals.
Most theoretical works on the BPVE employ effective two-band low-energy Hamiltonians. These models have proven quite useful for general predictions like the quantization of the injection current conductivity, but the experimental signatures are often complicated by the discrepancy between effective low-energy models and real electronic band structure where the band curvatures and higher energy bands start to play an important role. As a result, the predictions of the continuum model usually agree only in a small energy window. This necessitates the need to analyze the role of different model parameters and understand the frequency behavior of Weyl semimetals in different regimes away from this small energy window.
In our work, we first provide a complete analytical solution to the two-band charge-\(n\) low energy Hamiltonian along with an analysis of its important features, including CPGE quantization. These analytical expressions elucidate the role of tilt and non-linear dispersion on different components of the shift and injection current conductivities in multi-Weyl semimetals. We also numerically evaluate the response in tight-binding models and observe a significant deviation in some components of second-order conductivity which highlight the importance of band curvature.
For multi-Weyl semimetals, the validity of two-band models becomes further restricted. Double Weyl nodes are obtained when two charge-1 Weyl nodes are pinned to a high-symmetry point and two of the four bands are gapped out by some symmetry allowed perturbations. As a result, even if the effective two-band picture is valid for each charge-1 Weyl node in a given energy range, it might not be valid for the double-Weyl node if the perturbation is not strong enough to push the other two bands out of that energy window. This type of scenario occurs in the charge-2 WSM SrSi\({}_{2}\)[29] where the two charge-1 Weyl nodes are gapped out by a spin-orbit coupling resulting in a very small gap between the bands hosting a double-Weyl node and the higher energy bands.
Inspired by the band structure of SrSi\({}_{2}\), we also consider a four-band continuum model and find a significant deviation from the two-band continuum model. We find that the CPGE quantization is destroyed and instead a very different behavior is observed at small frequencies. In the particular case of the four-band model, we find two opposite limits in the parameter space with good and poor agreement. We notice that the agreement is better when the perturbation-induced gap is large. Our analysis provides a simple explanation for the results from first-principle calculations in Ref. [58], where quantization is observed only above a certain cutoff frequency. We attribute this discrepancy to the contribution from higher bands.
Finally, we also investigate the charge-4 case by using a two-band effective low energy model and a tight-binding model. We derive semi-analytical expressions for different components of the shift and injection current conductivities. Most importantly, we obtain the analytical limits for the frequency window where CPGE quantization can be observed.
Our paper is organized as follows. In Sec. II, we provide a brief introduction to the shift and injection current conductivities along with the symmetry requirements to observe their effects. In Sec. III, we derive expressions for different components of these second-order conductivity tensors by considering an effective two-band low-energy Hamiltonian for a Weyl node with arbitrary charge \(n\). We also include a finite tilt in the \(z\)-direction in our analysis and systematically study how tilt affects these different components at different chemical potentials and frequencies. In Sec. IV, we focus on double-Weyl semimetals and consider two different models. First, we compare different conductivities for a two-band tight-binding model and an effective low-energy Hamiltonian. Next, we consider a four-band model inspired by the SrSi\({}_{2}\) band structure around its double Weyl node and study the second-order conductivities in different limits. In Sec. V, we derive the joint density of states (JDOS), and the shift and injection current conductivity expressions for a charge-4 model. In Sec. VI, we discuss the implications of our results.
## II Photogalvanic response
In materials lacking inversion symmetry, the photogalvanic effect (PGE) refers to the generation
of directed photocurrent as a second-order response to an external time-varying electromagnetic field. For light of frequency \(\omega\) (and wavelength much larger than the sample size so the electric field has uniform amplitude), the second-order dc response is given by,
\[j_{dc}^{a}=\sigma^{abc}(\omega)E_{b}(\omega)E_{c}(-\omega), \tag{1}\]
where the second-order conductivity \(\sigma^{abc}(\omega)\) can be divided into a shift current conductivity, \(\sigma^{abc}_{\rm shift}\) and an injection current conductivity, \(\sigma^{abc}_{\rm inj}\). These two quantities are given by,
\[\begin{split}\sigma^{abc}_{\rm shift}&=\frac{-i\pi e ^{3}}{\hbar^{2}}\int_{\mathbf{k}}\sum_{n>m}f_{nm}\Big{(}r^{b}_{nm}r^{c}_{mn;a}-r ^{c}_{mn}r^{b}_{nm;a}\Big{)}\\ &\times\delta(\omega_{nm}-\omega),\end{split} \tag{2}\] \[\begin{split}\sigma^{abc}_{\rm inj}&=\tau\frac{2 \pi e^{3}}{\hbar^{2}}\int_{\mathbf{k}}\sum_{n>m}f_{nm}\Delta^{a}_{nm}r^{b}_{nm}r^ {c}_{mn}\delta(\omega_{nm}-\omega),\end{split} \tag{3}\]
where \(n,m\) label the energy bands, \(\int_{\mathbf{k}}=\int\mathrm{d}^{3}k\,/(2\pi)^{3}\), \(\omega_{nm}=\omega_{n}-\omega_{m}\) is the energy difference between bands \(n\) and \(m\), \(f_{nm}=f_{n}-f_{m}\) where \(f\) is the Fermi-Dirac distribution function, \(\Delta^{a}_{nm}=v^{a}_{nn}-v^{a}_{mm}\) with \(v^{a}_{nm}\) being the velocity matrix elements, and \(\tau\) is the relaxation time. The interband Berry connection is given by \(r^{b}_{nm}=\bra{n}i\frac{\partial}{\partial k_{b}}\ket{m}\) for \(n\neq m\) and zero otherwise, with its generalized derivative defined as \(r^{b}_{nm;a}=\frac{\partial r^{b}_{nm}}{\partial k_{a}}-i(\xi^{a}_{nn}-\xi^{a}_{mm})r^{b}_{nm}\), where \(\xi^{a}_{nn}=\bra{n}i\frac{\partial}{\partial k_{a}}\ket{n}\) is the intraband Berry connection.
Numerical calculation of these quantities by direct evaluation of wavefunction derivatives can be difficult as it would require fixing a smooth gauge for the wavefunctions at each point. However, it is possible to circumvent this problem completely by making use of \(r^{b}_{nm}=-iv^{b}_{nm}/\omega_{nm}=-i\bra{n}\frac{\partial}{\partial k_{b}} \mathcal{H}\ket{m}/\omega_{nm}\), and the sum rule [13; 59; 8],
\[\begin{split} r^{b}_{nm;a}&=\frac{i}{\omega_{nm}} \Bigg{[}\frac{\Delta^{b}_{nm}v^{a}_{nm}+\Delta^{a}_{nm}v^{b}_{nm}}{\omega_{nm} }-w^{ba}_{nm}\\ &\qquad\quad+\sum_{l\neq n,m}\bigg{(}\frac{v^{b}_{nl}v^{a}_{lm}} {\omega_{lm}}-\frac{v^{a}_{nl}v^{b}_{lm}}{\omega_{nl}}\bigg{)}\Bigg{]},\quad n \neq m\end{split} \tag{4}\]
where, \(w^{ba}_{nm}=\bra{n}\frac{\partial}{\partial k_{b}}\frac{\partial}{\partial k _{a}}\mathcal{H}\ket{m}\). The condition on the summation \(\sum_{l\neq n,m}\) is understood as \(\omega_{l}\neq\omega_{n},\omega_{m}\)[13].
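The following sketch illustrates this gauge-safe route numerically for an arbitrary Hermitian Bloch Hamiltonian, assuming its analytic \(k\)-gradient is available; only a standard eigendecomposition is used, so no smooth gauge has to be fixed.

```python
import numpy as np

def matrix_elements(H, dH, k):
    """Bands, velocity matrices v^a_{nm}, and Berry connections r^a_{nm}.

    H(k) returns the Bloch Hamiltonian; dH(k) returns [dH/dkx, dH/dky, dH/dkz].
    """
    w, U = np.linalg.eigh(H(k))                  # w: band energies, U[:, n]: states
    v = [U.conj().T @ dHa @ U for dHa in dH(k)]  # v^a_{nm} = <n| dH/dk_a |m>
    wnm = w[:, None] - w[None, :]                # omega_{nm} = omega_n - omega_m
    with np.errstate(divide="ignore", invalid="ignore"):
        # r^a_{nm} = -i v^a_{nm} / omega_{nm} for n != m, zero on the diagonal
        r = [np.where(np.abs(wnm) > 1e-9, -1j * va / wnm, 0.0) for va in v]
    return w, v, r
```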
The consequences of time-reversal symmetry can be seen directly by analyzing the integrand in Eq. (2) and Eq. (3) under a time-reversal operation. Time-reversal symmetry enforces the real part of the integrand to be odd in \(\mathbf{k}\)-space, and hence makes \(\sigma^{abc}_{\rm shift}\) real and \(\sigma^{abc}_{\rm inj}\) imaginary [60]. In other words, when time-reversal symmetry is preserved, the shift current conductivity is non-zero only for linearly-polarized light and the injection current requires circularly polarized light. However, no such restrictions are present once the time-reversal symmetry is broken.
## III Results for the charge-\(n\) low-energy Weyl Hamiltonian
We begin with a low-energy effective Hamiltonian for a two-band charge-\(n\) Weyl point
\[\mathcal{H}_{n}=\begin{pmatrix}u_{z}k_{z}+u_{t}k_{z}-\mu&\varepsilon_{0}( \tilde{k}_{x}-i\zeta\tilde{k}_{y})^{n}\\ \varepsilon_{0}(\tilde{k}_{x}+i\zeta\tilde{k}_{y})^{n}&-u_{z}k_{z}+u_{t}k_{z}- \mu\end{pmatrix}, \tag{5}\]
where \(\zeta=\pm 1\), \(u_{z}\) and \(u_{t}\) are, respectively, the effective velocity and tilt along \(\mathbf{\hat{z}}\). Here, \(\tilde{k}_{x,y}=k_{x,y}/k_{0}\), and \(\mu\) is the chemical potential. The values \(k_{0},\varepsilon_{0}\) are material-dependent parameters with units of momentum and energy, respectively. We will assume \(\varepsilon_{0}>0\) and set \(k_{0}=1\). The chirality of this Weyl point is \(\chi=\mathrm{sgn}(u_{z}\zeta)\). The energy eigenvalues are given by,
\[E_{n,\pm}=u_{t}k_{z}-\mu\pm\varepsilon_{0}\sqrt{(\tilde{k}_{x}^{2}+\tilde{k}_{y }^{2})^{n}+u_{z}^{2}k_{z}^{2}/\varepsilon_{0}^{2}}. \tag{6}\]
It should be noted that although all our derivations hold for \(n\) being any positive integer, it makes physical sense to only take \(n=1,2,3\) due to symmetry restrictions in actual lattice systems [61; 62]. Two-band charge-4 Weyl points are allowed but have a different low-energy Hamiltonian [61; 63] and are discussed in a later section.
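As a quick consistency check, the following sketch builds \(\mathcal{H}_{n}\) of Eq. (5) and verifies the dispersion in Eq. (6); the parameter values are illustrative (with \(\varepsilon_{0}=k_{0}=1\)).

```python
import numpy as np

def Hn(k, n=2, uz=0.287, ut=0.096, mu=-0.03, zeta=1):
    kx, ky, kz = k
    off = (kx - 1j * zeta * ky) ** n             # eps0 = 1, k0 = 1
    return np.array([[uz * kz + ut * kz - mu, off],
                     [np.conj(off), -uz * kz + ut * kz - mu]])

k, n, uz, ut, mu = (0.2, -0.1, 0.15), 2, 0.287, 0.096, -0.03
E = ut * k[2] - mu + np.array([-1.0, 1.0]) * np.sqrt(
    (k[0] ** 2 + k[1] ** 2) ** n + (uz * k[2]) ** 2)   # Eq. (6)
assert np.allclose(np.linalg.eigvalsh(Hn(k, n, uz, ut, mu)), E)
```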
In order to use Eq. (2) and Eq. (3) to find the shift and injection conductivity tensors, we note that the delta and Fermi-Dirac distribution functions restrict the domain of integration. In our calculations, we assume temperature \(T=0\) K, which simplifies the Fermi-Dirac distribution to \(f(E)=1-\Theta(E)\), where \(\Theta\) is the Heaviside function. The delta function restricts the integration to the surface \(2\varepsilon_{0}\sqrt{(\tilde{k}_{x}^{2}+\tilde{k}_{y}^{2})^{n}+u_{z}^{2}k_{z}^{2}/\varepsilon_{0}^{2}}-\omega=0\), while the theta function further selects out a portion of this surface. By making suitable substitutions, this surface can be transformed into a sphere, which makes it easier to perform the integral analytically (see Appendix A) for arbitrary charge \(n\).
After accounting for the finite tilt of the Weyl cone, the Pauli blocking condition restricts the integration region on this sphere to region \(S\) as shown in Fig. 1 with \(\theta_{1}\) and \(\theta_{2}\) given by:
\[\theta_{p}=\begin{cases}-\pi/2,\text{ if }\varphi_{p}<-1\\ \arcsin(\varphi_{p}),\text{ if }-1\leq\varphi_{p}\leq 1\;,\\ +\pi/2,\text{ if }1<\varphi_{p}\end{cases} \tag{7}\]
for \(p=1,2\) where \(\varphi_{p}=\frac{1}{W}\left(\mathrm{sgn}\left(\frac{u_{t}}{u_{z}}\right)\frac{2 \mu}{\omega}+(-1)^{p}\right)\), and \(W=|u_{t}/u_{z}|\) is an important quantity which determines if the WSM is type-I (\(W<1\)) or type-II (\(W>1\)). The behavior of \(\theta_{1},\theta_{2}\) is mainly determined by the amount of tilt (\(W\)) and doping (\(\mu\)), and is crucial to understanding the basic features of the response.
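In practice, these angles are easy to evaluate; a minimal sketch, assuming \(W\), \(\mathrm{sgn}(u_{t}/u_{z})\), \(\mu\), and the photon energy \(\omega\) are given:

```python
import numpy as np

def theta(p, w, mu, W, sign_ut_uz):
    # Eq. (7): the arcsin saturates at +-pi/2 outside [-1, 1]
    phi = (sign_ut_uz * 2 * mu / w + (-1) ** p) / W
    return float(np.arcsin(np.clip(phi, -1.0, 1.0)))
```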
For zero doping, \(\theta_{2}=-\theta_{1}=\pi/2\) for type-I and \(\theta_{2}=-\theta_{1}=\arcsin(1/W)\) for type-II WSMs. Note that the angles lose their dependence on chirality in this case, and that these results contain an
implicit \(\omega\) dependence. In the transformed coordinates, where the integration surface is a sphere, these angles are measured from the \(x\)-axis in the \(xz\)-plane and determine which part of that surface is not Pauli-blocked (region \(S\) in Fig. 1).
First, we evaluate the joint density of states using the expression
\[\text{JDOS}(\omega)=\int_{\mathbf{k}}\sum_{n>m}f_{mn}\delta(\omega_{nm}-\omega), \tag{8}\]
where the factor \(f_{mn}\) accounts for Pauli-blocking effects. In the absence of the tilt, we obtain the expected \(\omega^{2/n}\) dependence for a charge-\(n\) Weyl node. However, at finite tilt and finite chemical potential, this \(\omega^{2/n}\) dependence is modulated by the angular factor \(\int_{\theta_{1}}^{\theta_{2}}\cos^{2/n-1}\theta\,\mathrm{d}\theta\).
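A sketch of this modulation, assuming the JDOS scales as \(\omega^{2/n}\int_{\theta_{1}}^{\theta_{2}}\cos^{2/n-1}\theta\,\mathrm{d}\theta\) up to an \(\omega\)-independent prefactor that is omitted here:

```python
import numpy as np
from scipy.integrate import quad

def jdos_shape(w, n, mu, W, sign_ut_uz):
    phi = lambda p: (sign_ut_uz * 2 * mu / w + (-1) ** p) / W
    th1, th2 = (np.arcsin(np.clip(phi(p), -1.0, 1.0)) for p in (1, 2))
    ang, _ = quad(lambda t: np.cos(t) ** (2 / n - 1), th1, th2)
    return w ** (2 / n) * ang   # overall prefactor omitted
```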
Next, we calculate different components of shift and injection current tensors. The resulting expressions are given in Table 1. We notice that all conductivity tensors are directly proportional to the charge of the Weyl point, except for \(\sigma_{\text{inj}}^{zzz}\). It should be noted that analytical results for the \(n=1\) and untilted \(n=2\) cases have been given in Refs. [13, 59, 64, 65] and Ref. [66], respectively. Here, we have extended the analytical results to arbitrary chiral charge-\(n\) with finite tilt.
Let us first analyze the shift current conductivity results. As shown in Table 1, there are two kinds of non-zero components: (i) purely imaginary components, responsible for a second-order dc photocurrent from circularly polarized light, and (ii) purely real components, which lead to a photogalvanic effect from linearly polarized light. For the shift current, the circular polarization components always vanish at zero doping, since \(\theta_{1}=-\theta_{2}\) for \(\mu=0\) from Eq. (7). Similarly, when time-reversal symmetry (TRS) is preserved, the circular polarization current from a time-reversed pair of nodes also vanishes, as \(u_{z}\to-u_{z}\) under time-reversal.
On the other hand, the linear polarization component \(\sigma_{\text{shift}}^{xyz}\) shows a very interesting behavior and can even provide estimates of tilt and chemical potential. We note that among all the non-zero conductivity tensors, \(\sigma_{\text{shift}}^{xyz}\) alone changes sign with frequency and can be used to estimate \(\mu\). For type-I and type-II with \(W<2\), this sign change occurs at \(\omega=2|\mu|\) which can be understood from Eq. (7) which indicates that while one of the angles is zero the other becomes \(\pm\pi/2\) leading to \(\sigma_{\text{shift}}^{xyz}=0\). The latter stays at \(\pm\pi/2\) for small variation in \(\omega\), while the former changes sign going through \(\omega=2|\mu|\), causing \(\sigma_{\text{shift}}^{xyz}\) to do the same (as it has a \(\sin\theta\cos^{2}\theta\) dependence).
The \(W\geq 2\) case is not so straightforward but after some work we find that the sign change occurs at \(2|\mu|\sqrt{\frac{3}{W^{2}-1}}\) (see Appendix D for details). Note that \(\frac{2|\mu|}{1+W}<2|\mu|\sqrt{\frac{3}{W^{2}-1}}\leq 2|\mu|\) with the equality holding at \(W=2\), as one would expect. Interestingly, for \(\mu=0\), both components \(\sigma_{\text{shift}}^{xyz},\sigma_{\text{shift}}^{xxz}\) show a \(1/\omega\) divergence for a type-II WSM. Additionally, for type-I, all shift current conductivities are non-zero (shown in Fig. 2(b,c)) only in a finite frequency window determined by the tilt parameter \(W\) and doping.
Our results show that the tilt parameter plays an important role for all shift current components. When the tilt vanishes, all the shift current conductivity components also vanish. This can be easily understood from the behavior of \(\theta_{p}\) from Eq.(7) in the limit \(W\to 0\). For \(\omega<2|\mu|\), \(\theta_{1}=\theta_{2}=\pm\pi/2\) which simply means that the entire \(\omega_{21}=\omega\) surface is Pauli-blocked. However, when \(\omega>2|\mu|\), the entire surface becomes Pauli-unblocked (as captured by \(\theta_{2}=-\theta_{1}=\pi/2\)) which again leads to a vanishing shift conductivity.
Now, we turn our attention to the injection current conductivity components, some of which are known to exhibit quantization proportional to the Berry charge of the Weyl node. Here again, there are two kinds of components: (i) purely imaginary components, which lead to the CPGE, and (ii) purely real components, which lead to a photogalvanic effect from linearly polarized light. When time-reversal symmetry is preserved, the contribution from time-reversed Weyl node pairs is such that the real components vanish and only the CPGE survives, as expected. Also, all the real components of the injection current conductivity disappear at zero doping and at zero tilt. As a result, in order to obtain an injection current from linearly polarized light, not only must time-reversal symmetry be broken, but the doping and tilt must be finite as well.
For finite doping, the conductivities become non-zero after \(2|\mu|/(1+W)\) for both type-I and type-II WSMs (note that the \(1/\omega\) divergence gets cut off in the case of type-II). For type-I, \(\sigma_{\text{inj}}^{xyz},\sigma_{\text{inj}}^{yzx},\sigma_{\text{inj}}^{zxy}\) reach their quantized value of \(-n\,\text{sgn}(u_{z}\zeta)/12\pi\) after \(2|\mu|/(1-W)\), whereas the other components become zero beyond this
Figure 1: The surface defined by \(\delta(\omega_{21}-\omega)\) in the transformed coordinates (see Appendix A). The factor \(f_{21}\) restricts the integral in Eq.(2), Eq.(3), Eq.(8) to the Pauli-unblocked region S (shown in brown).
point. For the latter, the response window is proportional to \(|\mu|\). For the type-II case, \(\sigma^{xyz}_{\rm inj},\sigma^{yzx}_{\rm inj},\sigma^{zxy}_{\rm inj}\) approach their respective quantized values \(-n\,{\rm sgn}(u_{z}\zeta)\frac{3W^{2}-1}{12\pi W^{3}}\), \(-n\,{\rm sgn}(u_{z}\zeta)\frac{3W^{2}-1}{12\pi W^{3}}\), \(\frac{-n\,{\rm sgn}(u_{z}\zeta)}{6\pi W^{3}}\) asymptotically, while the remaining components asymptotically approach zero.
The quantization condition for CPGE for the injection current conductivity can be easily obtained as the trace of the CPGE tensor,
\[\frac{2\pi}{i\tau e^{3}k_{0}^{2}/\hbar^{2}}\epsilon_{abc}\sigma^{ abc}_{\rm inj}=-n\,{\rm sgn}(u_{z}\zeta)\left(\frac{\sin\theta_{2}-\sin\theta_{1}}{2} \right), \tag{9}\]
which gives the perfect quantized value equal \(-n\,{\rm sgn}(u_{z}\zeta)\) only when \(\theta_{2}=-\theta_{1}=\pi/2\). The contribution of the factor \(\frac{1}{2}(\sin\theta_{2}-\sin\theta_{1})\) is easy to understand when interpreted as the fraction of the solid angle available for integration,
\[\frac{1}{4\pi}\int_{S}{\rm d}\Omega=\frac{1}{4\pi}\Big{(}4\pi-2\pi(1-\sin\theta_{2})-2\pi(1+\sin\theta_{1})\Big{)}=\frac{\sin\theta_{2}-\sin\theta_{1}}{2}, \tag{10}\]
which leads to a reduced value of the quantized response when either \(|\theta_{1}|,|\theta_{2}|<\pi/2\). For \(\mu=0\), a type-I WSM gives perfect quantization, whereas in a type-II WSM the quantization value is reduced by a factor of \(1/W\). When \(\mu\neq 0\), type-I WSMs show perfect quantization above a certain frequency cutoff, i.e., for \(\omega\geq 2|\mu|/(1-W)\), whereas type-II WSMs show a reduced quantization for \(\omega\geq 2|\mu|/(W-1)\), as shown in Fig. 2(i). Note that in the type-II case, while the individual terms in the CPGE trace only approach their respective quantized values asymptotically, the trace itself is fully quantized for \(\omega>2|\mu|/(W-1)\). This feature is captured in Fig. 2(g,h,i).
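The frequency dependence of the normalized trace in Eq. (9) can thus be scanned directly; a minimal sketch with illustrative parameters:

```python
import numpy as np

def cpge_trace(w, n, mu, W, sign_ut_uz, chi):
    # Eq. (9): -n * sgn(u_z zeta) * (sin(theta2) - sin(theta1)) / 2
    phi = lambda p: (sign_ut_uz * 2 * mu / w + (-1) ** p) / W
    th1, th2 = (np.arcsin(np.clip(phi(p), -1.0, 1.0)) for p in (1, 2))
    return -n * chi * (np.sin(th2) - np.sin(th1)) / 2

ws = np.linspace(0.01, 0.5, 200)
trace = [cpge_trace(w, n=2, mu=-0.03, W=0.334, sign_ut_uz=1, chi=1) for w in ws]
```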
When TRS is broken, an injection current can also be generated by linearly polarized light, and the non-zero components in this case depend on the tilt direction. For tilt along the \(z\)-axis, the non-zero linear photogalvanic effect (LPGE) injection current conductivities include \(\sigma^{yzy},\sigma^{yyz},\sigma^{xzx},\sigma^{xxz},\sigma^{zxx},\sigma^{zyy}\), and \(\sigma^{zzz}\). The last one is the only component among all shift and injection current conductivities which allows for a current along the direction of linear polarization when it coincides with the direction of the tilt. Thus, a measurement of \(\sigma^{aaa}_{\rm inj}\) can provide a simple way to determine the direction of the tilt in charge-2 and charge-3 WSMs, which have linearly dispersing bands in only one direction.
## IV Charge-2 Weyl semimetals
For concreteness, we numerically calculate the conductivity tensors for the following two-band tight-binding model for a charge-2 WSM with broken inversion, time-reversal and mirror symmetries,
\[\mathcal{H}^{2b}=t\big{(} 2(\cos(k_{y})-\cos(k_{x}))\sigma_{x}+2\sin(k_{x})\sin(k_{y}) \sigma_{y} \tag{11}\] \[+(M-\cos(k_{x})-\cos(k_{y})-\cos(k_{z}))\sigma_{z}\] \[+g\sin k_{z}\sigma_{0})-\mu\sigma_{0},\]
which has nodes at \((k_{x},k_{y},k_{z})=(0,0,\pm\,{\rm acos}(M-2))\) for \(1\leq M\leq 3\). The low-energy Hamiltonian near the nodes is given by,
\[\mathcal{H}^{2b}_{\pm}=t\big{(} (k_{x}^{2}-k_{y}^{2})\sigma_{x}+2k_{x}k_{y}\sigma_{y}+u_{z}k_{z} \sigma_{z} \tag{12}\] \[+\left(u_{t}k_{z}-(\mu/t-gu_{z})\right)\sigma_{0}\big{)},\]
where \(u_{z}=\pm\sqrt{(3-M)(M-1)}\), and \(u_{t}=g(M-2)\). The chirality of this node is given by \(\chi={\rm sgn}(u_{z})\). The bands disperse as \(t(u_{t}\pm u_{z})k_{z}\) when \(k_{x},k_{y}=0\) and as \(\pm t(k_{x}^{2}+k_{y}^{2})\) when \(k_{z}=0\). Based on the possible range of band inversion strength and \(k\)-space node separation given in Ref. [30], we take \(t=1\,{\rm eV},\,g=0.1,\,M=2.958\) and \(\mu=-0.0287t\). The chosen parameters give \(W=0.33\) and put the lower energy node near zero energy.
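A sketch of this tight-binding Hamiltonian and its analytic \(k\)-gradient (needed for the velocity matrix elements), using the parameter values quoted above:

```python
import numpy as np

s0, sx = np.eye(2, dtype=complex), np.array([[0, 1], [1, 0]], dtype=complex)
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0 + 0j, -1.0])
t, g, M, mu = 1.0, 0.1, 2.958, -0.0287

def H2b(k):  # Eq. (11)
    kx, ky, kz = k
    return t * (2 * (np.cos(ky) - np.cos(kx)) * sx
                + 2 * np.sin(kx) * np.sin(ky) * sy
                + (M - np.cos(kx) - np.cos(ky) - np.cos(kz)) * sz
                + g * np.sin(kz) * s0) - mu * s0

def dH2b(k):  # [dH/dkx, dH/dky, dH/dkz]
    kx, ky, kz = k
    return [t * (2 * np.sin(kx) * sx + 2 * np.cos(kx) * np.sin(ky) * sy + np.sin(kx) * sz),
            t * (-2 * np.sin(ky) * sx + 2 * np.sin(kx) * np.cos(ky) * sy + np.sin(ky) * sz),
            t * (np.sin(kz) * sz + g * np.cos(kz) * s0)]
```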
The second-order dc conductivity results obtained for Eq.(11) are shown in orange in Fig. 3. To compare these results against those obtained by treating each node separately based on Eq.(12), we have included the blue curve, which represents the sum of the contributions from the individual nodes using the expressions from Table 1. This is reasonable if the contribution from at least one of the nodes is constant over the energy range under consideration, as is the case here.
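The numerical evaluation itself can be sketched as a Brillouin-zone average with a Gaussian-broadened delta function, reusing `H2b`, `dH2b`, and the `matrix_elements` helper from the earlier sketches; the \(\tau e^{3}/\hbar^{2}\) prefactor and unit conversions are omitted, and the sampling parameters are purely illustrative.

```python
import numpy as np

def sigma_inj(w_ph, nk=20000, eta=5e-3, seed=0):
    rng = np.random.default_rng(seed)
    delta = lambda x: np.exp(-((x / eta) ** 2)) / (eta * np.sqrt(np.pi))
    out = np.zeros((3, 3, 3), dtype=complex)
    for _ in range(nk):                           # Monte Carlo sampling of the BZ
        k = rng.uniform(-np.pi, np.pi, 3)
        w, v, r = matrix_elements(H2b, dH2b, k)
        f1, f0 = float(w[1] < 0), float(w[0] < 0) # T = 0; mu already inside H2b
        if f1 == f0:
            continue
        dv = np.array([v[a][1, 1] - v[a][0, 0] for a in range(3)]).real
        wt = (f1 - f0) * delta(w[1] - w[0] - w_ph)
        for a in range(3):
            out[a] += wt * dv[a] * np.outer([r[b][1, 0] for b in range(3)],
                                            [r[c][0, 1] for c in range(3)])
    return 2 * np.pi * out / nk                   # Eq. (3) without tau e^3/hbar^2
```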
Looking at Fig. 3, it is clear that higher order terms present in the tight-binding model lead to significant deviations from the low-energy predictions of Table 1. Surprisingly, we find that the injection conductivities \(\sigma^{xxz},\sigma^{zxx}\) and \(\sigma^{zzz}\) (d-f) develop a plateau up to an energy of about \(0.07t\). Additionally, a shift in the response energy window to the left by about \(0.02t\) is seen for all the conductivities (b-h) and the CPGE quantization (i). A shift in the quantization window has also been seen for \(\mathcal{T}\)-symmetric charge-2 Weyl systems [58] and is believed to arise from higher order terms.
We probe the origin of these deviations by explicitly adding higher order terms to Eq. (12) (see Appendix B for details). Specifically, we find that including the second order terms \((\frac{1}{2}(k_{x}^{2}+k_{y}^{2})+(\frac{1}{2}M-1)k_{z}^{2})\sigma_{z}-\frac{ 1}{2}gu_{z}k_{z}^{2}\sigma_{0}\) in Eq.(12), not only matches the energy shift, but also captures all other features of the tight-binding results. Most importantly, we find the plateaus to come from the node situated close to zero energy (in our case, it is the node with negative \(u_{z}\)) and their heights to be \(\frac{-\text{sgn}(u_{z})}{64}\), \(\frac{\text{sgn}(u_{z})}{64}\) and \(\frac{\text{sgn}(u_{z})(2-M)}{64}\), respectively.
We should point out that the \(\sigma^{xxz},\sigma^{zxx}\) plateaus can be obtained by just including the \(\frac{1}{2}(k_{x}^{2}+k_{y}^{2})\sigma_{z}\) term, whereas the one for \(\sigma^{zzz}\) can be explained with the \((\frac{1}{2}M-1)k_{z}^{2}\sigma_{z}\) term alone. We believe that the plateaus should be present in any charge-2 WSM where these higher order terms show up. Note that despite the energy shift, \(\sigma^{xyz}_{\text{shift}}\) crosses zero at \(2|\mu-gu_{z}|\), as seen from Fig. 3(b). This may not hold for arbitrarily chosen \(\mu\). In our case, we have carefully put one node close to zero energy, which makes the other node almost entirely dictate the behavior of the response.
Lastly, we also find perfect CPGE quantization up to
Figure 2: Plots showing (a) JDOS, (b)-(c) shift conductivity, (d)-(h) injection conductivity, (i) CPGE quantization for a single charge-2 Weyl point (obtained using the expressions in Table 1). The orange and green curves correspond to type-I (\(W_{1}=0.334\)) and type-II (\(W_{2}=2.334\)) case, respectively. We have chosen \(W_{2}-W_{1}=2\) just to keep the plots neat. We have taken \(u_{z}=0.287,\varepsilon_{0}=1,\mu=-0.03\). Note that except for the JDOS, remaining plots will show similar behavior for charges 1 and 3.
an energy of about \(0.07t\), as shown in Fig. 3(i). The behavior of the JDOS for each node with (solid) and without (dashed) higher order corrections is shown in Fig. 3(a). With the higher order terms, the JDOS for the node at energy \(-\mu+gu_{z}\) becomes non-zero after about \(0.071t\), which explains why the quantization ceases earlier than the predicted value of \(0.086t\).
Beyond two-band models, the CPGE quantization is no longer guaranteed to hold. This has been explored in Ref. [19] for a charge-1 WSM. In order to better understand the contributions coming from higher bands and the extent to which they destroy quantization in charge-2 WSM, we study the following four-band tight-binding model (taking inspiration from Ref. [29]) with broken time-reversal symmetry,
\[\begin{split}\mathcal{H}^{4b}=& t\big{(}\sin(k_{x}) \tau_{x}+\sin(k_{y})\tau_{y}\\ &+(M-\cos(k_{x})-\cos(k_{y})-\cos(k_{z}))\tau_{z}\\ &+\Delta\left(\tau_{x}\sigma_{x}+\tau_{y}\sigma_{y}\right)+g\sin (k_{z})\tau_{z}\sigma_{z}\big{)}-\mu,\end{split} \tag{13}\]
where \(\tau\) and \(\sigma\) are Pauli matrices acting on the orbital and spin space, respectively.
The \(k_{z}\) dependent \(\tau_{z}\sigma_{z}\) term produces tilt while \(\Delta(\tau_{x}\sigma_{x}+\tau_{y}\sigma_{y})\) gives rise to the quadratic band dispersion along \(k_{x},k_{y}\). The low-energy Hamiltonian near nodes at \((0,0,\pm\,\text{acos}(M-2))\) is,
\[\begin{split}\mathcal{H}_{\pm}^{4b}=& t\big{(}k_{x} \tau_{x}+k_{y}\tau_{y}+u_{z}k_{z}\tau_{z}+\Delta\left(\tau_{x}\sigma_{x}+ \tau_{y}\sigma_{y}\right)\\ &+(u_{t}k_{z}+gu_{z})\tau_{z}\sigma_{z}\big{)}-\mu,\end{split} \tag{14}\]
Figure 3: Injection and shift conductivities for type-I charge-2 WSM. The orange curve represents the response for the two-band tight binding model Eq.(11) obtained numerically. The blue curve is the sum of contributions from the two nodes based on analytical results for the low energy model Eq.(12), while the green curve is obtained by including higher order terms \((\frac{1}{2}(k_{x}^{2}+k_{y}^{2})+(\frac{1}{2}M-1)k_{z}^{2})\sigma_{z}-\frac{1}{2}gu_{z}k_{z}^{2}\sigma_{0}\) in Eq.(12) (see Appendix B for details). The energy separation between the nodes is \(|2gu_{z}|\) and \(\tilde{\mu}=\mu-gu_{z}\). We have taken \(t=1,M=2.958,g=0.1,\mu=-0.0287\). (a) JDOS for each node for the low energy model with (solid) and without (dashed) higher order correction terms. Pink and purple correspond to the nodes at \(-\mu-gu_{z}\) and \(-\mu+gu_{z}\), respectively.
where \(u_{z}=\pm\sqrt{(3-M)(M-1)}\), and \(u_{t}=g(M-2)\).
We begin with \(|\Delta|\) large compared to \(|gu_{z}|\) and gradually decrease it below \(|gu_{z}|\). The two bands which touch disperse as \(t(u_{t}+u_{z})k_{z}\), \(t(u_{t}-u_{z})k_{z}\) when \(k_{x},k_{y}=0\) and as \(\frac{t(k_{x}^{2}+k_{y}^{2})}{2|\Delta+gu_{z}|}\), \(\frac{t(k_{x}^{2}+k_{y}^{2})}{2|\Delta-gu_{z}|}\) when \(k_{z}=0\). We use the same \(g,M,t\) values from before. We note that unlike the two-band case Eq.(12), the quadratic dispersion now has a dependence on \(\Delta\). For \(\Delta=0.5\) (recall \(gu_{z}=0.0287\)), the dispersion becomes almost the same for the two cases and provides a good starting point for comparison. Also, since the gap between the highest occupied and the lowest unoccupied bands is \(\sim|\Delta|\), the effect of higher bands should be more prominent for smaller values of \(\Delta\).
Results obtained for Eq.(13) are shown in Fig. 4. We find large deviations from perfect quantization for small gaps, as seen in Fig. 4(f). However, for \(\Delta\gg|gu_{z}|\) we do see almost perfect quantization. Also, the plateaus seen earlier in \(\sigma_{\rm inj}^{xxz},\sigma_{\rm inj}^{zxx},\sigma_{\rm inj}^{zzz}\) continue to show up when \(\Delta\) is at least a few times larger than \(|gu_{z}|\), as shown in (a-c). Their heights become dependent on \(\Delta\) and are empirically found to be about \(\Delta/32\), \(-\Delta/32\), and \(\Delta(M-2)/32\), respectively.
## V Charge-4 Weyl semimetals
Having looked at the charge-2 case in some detail, we move on to investigate the behavior of the injection conductivity and JDOS for charge-4 WSMs. The existence of CPGE quantization in such systems has been discussed in earlier studies [63]. In our study, we want to develop a full understanding of how the model parameters and doping affect these responses. In order to do that, we take the following two-band Hamiltonian based on Ref. [63],
\[\begin{split}\mathcal{H}_{4}&=-2c_{1}(\cos(k_{x})+ \cos(k_{y})+\cos(k_{z}))\sigma_{0}\\ &\quad+2c_{2}\big{(}\sqrt{3}(\cos(k_{y})-\cos(k_{x}))\sigma_{x} \\ &\quad-(\cos(k_{x})+\cos(k_{y})-2\cos(k_{z}))\sigma_{z}\big{)}\\ &\quad+c_{3}\sin(k_{x})\sin(k_{y})\sin(k_{z})\sigma_{y}- \widetilde{\mu}\sigma_{0},\end{split} \tag{15}\]
which has nodes of opposite chirality at \((0,0,0)\) and \((\pi,\pi,\pi)\). The low-energy Hamiltonian near \(\Gamma\)-point is given by,
\[\begin{split}\mathcal{H}_{4}^{\Gamma}&=c_{1}\left(k _{x}^{2}+k_{y}^{2}+k_{z}^{2}\right)\sigma_{0}+c_{2}\Big{(}\sqrt{3}\left(k_{x }^{2}-k_{y}^{2}\right)\sigma_{x}\\ &\quad+\left(k_{x}^{2}+k_{y}^{2}-2k_{z}^{2}\right)\sigma_{z} \Big{)}+c_{3}k_{x}k_{y}k_{z}\sigma_{y}-\mu\sigma_{0},\end{split} \tag{16}\]
where \(\mu=\widetilde{\mu}+6c_{1}\). The chirality of this Weyl point is given by \(\chi=\text{sgn}(c_{3})\). We derive all our results using Eq. (16) with \(c_{1}>0\) (the opposite case is an easy generalization, which we discuss later). Its eigenvalues
Figure 4: (a)-(e) Injection conductivities and (f) trace of the CPGE tensor for charge-2 WSM obtained using the four-band model Eq.(13). The dashed brown curve shows the corresponding result for the two-band model (green curve from Fig. 3), which is close to the \(\Delta=0.5\) curve, as expected. We see significant deviations from perfect CPGE quantization for \(\Delta\lesssim|gu_{z}|\). The \(\sigma_{\rm inj}^{xxz},\sigma_{\rm inj}^{zxx},\sigma_{\rm inj}^{zzz}\) plateaus continue to show up for \(\Delta\gg|gu_{z}|\) with heights of about \(\Delta/32\), \(-\Delta/32\) and \(\Delta(M-2)/32\), respectively.
are given by
\[E_{4,\pm}^{\Gamma}=c_{1}\left(k_{x}^{2}+k_{y}^{2}+k_{z}^{2}\right)\pm 2|c_{2}|\bigg{(}k_{x}^{4}+k_{y}^{4}+k_{z}^{4}-k_{x}^{2}k_{y}^{2}-k_{y}^{2}k_{z}^{2}-k_{z}^{2}k_{x}^{2}+\left(\frac{c_{3}}{2c_{2}}\right)^{2}k_{x}^{2}k_{y}^{2}k_{z}^{2}\bigg{)}^{\frac{1}{2}}-\mu. \tag{17}\]
The presence of the sixth order term \(k_{x}^{2}k_{y}^{2}k_{z}^{2}\) above does not allow us to fully evaluate Eq.(2), Eq.(3), and Eq.(8) analytically. However, it is possible to integrate out \(k_{y}\) (one can pick any one out of the three \(k\) coordinates) and get rid of the delta function in exchange for a new constraint (see Appendix E for details).
The biggest advantage of going from a triple to a double integral is that the new constraint now defines a closed area compared to a closed surface before, which makes it much easier to analyze. The expressions for JDOS and injection conductivities (non-zero components) thus obtained are given in Table 2. Note that the shift conductivities are zero. Although these integrals appear complicated, they are easy to evaluate numerically. A key result of our analysis is the precise location of the energy window and condition under which trace of the CPGE tensor is quantized for different amounts of doping.
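Since the final integrals are evaluated numerically anyway, a useful sanity check is to verify the closed-form dispersion of Eq. (17) against Eq. (16) directly; the parameter values below are illustrative.

```python
import numpy as np

c1, c2, c3, mu = 0.0665, 0.4668, 4.0, 0.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0 + 0j, -1.0])

def H4(k):  # Eq. (16)
    kx, ky, kz = k
    return ((c1 * (kx**2 + ky**2 + kz**2) - mu) * np.eye(2)
            + c2 * (np.sqrt(3) * (kx**2 - ky**2) * sx
                    + (kx**2 + ky**2 - 2 * kz**2) * sz)
            + c3 * kx * ky * kz * sy)

def E4(k):  # Eq. (17)
    kx, ky, kz = k
    rad = (kx**4 + ky**4 + kz**4 - kx**2 * ky**2 - ky**2 * kz**2 - kz**2 * kx**2
           + (c3 / (2 * c2)) ** 2 * (kx * ky * kz) ** 2)
    base = c1 * (kx**2 + ky**2 + kz**2) - mu
    return np.array([base - 2 * abs(c2) * np.sqrt(rad),
                     base + 2 * abs(c2) * np.sqrt(rad)])

k = (0.3, -0.2, 0.5)
assert np.allclose(np.linalg.eigvalsh(H4(k)), E4(k))
```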
When \(\mu=0\), quantization is seen only for \(c_{1}/|c_{2}|<1\), starting at a frequency of \(\frac{54c_{1}^{3}}{c_{3}^{2}}\), as shown in Fig. 5 (a). The situation for \(1<c_{1}/|c_{2}|<2\) and \(2<c_{1}/|c_{2}|\) is also shown in Fig. 5 (b) and (c), respectively. While the trace is non-zero for any finite frequency in the former case, it turns non-zero only after \(\frac{54c_{1}^{3}}{c_{3}^{2}}\) for the latter.
For \(\mu<0\), a perfect quantization is again only possible for \(c_{1}/|c_{2}|<1\). When this is the case, the trace becomes non-zero after \(\min\left(\omega_{p},\frac{2|\mu|}{1-\frac{c_{1}}{2|c_{2}|}}\right)\) and reaches \(\pm 4\) at \(\max\left(\omega_{p},\frac{2|\mu|}{1-\frac{c_{1}}{2|c_{2}|}}\right)\), where \(\omega_{p}\) is the unique real positive root of \((\omega-2|\mu|)^{3}-54\frac{c_{1}^{3}}{c_{3}^{2}}\omega^{2}=0\). For \(1<c_{1}/|c_{2}|<2\), the trace becomes non-zero after \(\min\left(\omega_{p},\frac{2|\mu|}{1-\frac{c_{1}}{2|c_{2}|}}\right)\), while for \(2<c_{1}/|c_{2}|\), this happens after \(\omega_{p}\). The three cases are shown in Fig. 5 (d), (e) and (f), respectively.
When \(\mu>0\), we are presented with a wider range of possibilities for observing quantization. We find that, irrespective of the \(c_{1}/|c_{2}|\) value, the trace becomes non-zero after \(\min\left(\omega_{p},\frac{2\mu}{1+\frac{c_{1}}{2|c_{2}|}}\right)\), where \(\omega_{p}\) is now the unique real positive root of the cubic equation \((\omega-2\mu)^{3}+54\frac{c_{1}^{3}}{c_{3}^{2}}\omega^{2}=0\). For \(c_{1}/|c_{2}|<1\), it goes on to reach a saturation value of \(\pm 4\) at \(\max\left(\omega_{p},\frac{2\mu}{1+\frac{c_{1}}{2|c_{2}|}}\right)\), as shown in Fig. 5(g). Interestingly, for \(\mu>0\), quantization becomes possible even for \(1<c_{1}/|c_{2}|\) provided \(\max\left(\omega_{p},\frac{2\mu}{1+\frac{c_{1}}{2|c_{2}|}}\right)<\frac{2\mu}{\frac{c_{1}}{|c_{2}|}-1}\). When this condition is met, perfect quantization is seen, but only for a finite window of energies \(\omega\) satisfying \(\max\left(\omega_{p},\frac{2\mu}{1+\frac{c_{1}}{2|c_{2}|}}\right)<\omega<\frac{2\mu}{\frac{c_{1}}{|c_{2}|}-1}\), which is shown in Fig. 5(h). The situation when no quantization is possible for \(\mu>0\) is shown in Fig. 5(i). It is clear that while \(|c_{1}/c_{2}|\) plays a crucial role, \(c_{1},c_{2},c_{3}\) and \(\mu\) intricately determine the behavior of the CPGE trace and its quantization. Note that the plots in Fig. 5 have been obtained by numerically evaluating the integrals in Table 2. We have included the JDOS plot at zero doping in Fig. 6(a). As shown in the figure, the JDOS has a \(\sqrt{\omega}\) behavior going towards zero frequency. Note that the JDOS result from Table 1 also predicts a \(\sqrt{\omega}\) dependence if we take \(n=4\). This seems more like a coincidence, as that model still has a linearly dispersing band along \(k_{z}\), very different from the C-4 model in Eq.(16).
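The threshold \(\omega_{p}\) itself is just the positive real root of a cubic and can be obtained directly; a minimal sketch, assuming the cubics quoted above (the sign of the \(\omega^{2}\) term distinguishes the \(\mu<0\) and \(\mu>0\) cases):

```python
import numpy as np

def omega_p(mu, c1, c3):
    a = 54 * c1**3 / c3**2
    m, s = 2 * abs(mu), (1.0 if mu > 0 else -1.0)
    # (w - m)^3 + s*a*w^2 = 0, expanded as a cubic in w
    roots = np.roots([1.0, -3 * m + s * a, 3 * m**2, -m**3])
    real = roots[np.abs(roots.imag) < 1e-8].real
    return float(real[real > 1e-12].min())  # the text's unique positive root
```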
We would like to point out that so far the results presented assume \(c_{1}>0\). It turns out that we can continue to use the same results for a charge-4 node with
\(c_{1}<0\) (and a chemical potential \(\mu\)) by treating it as a \(|c_{1}|\) node with chemical potential \(-\mu\). With this small but important extension, our analysis covers all the possible cases.
For completeness, we also compute the CPGE trace using the full tight-binding model Eq.(15), and the results are shown in Fig. 6(b),(c). Note that in this model, when going from \(\Gamma\) to \(R\), we find \((c_{1},c_{2},c_{3})\rightarrow(-c_{1},-c_{2},-c_{3})\). Since \(c_{1}\) turns negative, when using results from Table 2, we treat the \((c_{1}<0,\mu_{R})\)\(R\)-node as a \((|c_{1}|,-\mu_{R})\) node. Also, since \(c_{3}\) flips sign too, \(\chi_{\Gamma}=-\chi_{R}\), as expected.
In Fig. 6(b), we chose \(c_{1}/|c_{2}|<1\) and therefore expect both nodes to show perfect quantization. Since \(\mu\neq 0\), the two nodes will show quantization starting at different frequencies which results in an overall finite quantization window. The dashed curves represent the sum of contributions from the two nodes based on the low energy result from Table 2 (as remarked earlier, this makes sense here because at no point in the energy range under consideration do the contributions from both nodes become non-constant simultaneously).
In Fig. 6(c), we chose \(1<c_{1}/|c_{2}|\) and \(\mu=-0.3\). This gives \(\mu_{\Gamma}=0.1\) and \(\mu_{R}=-0.7\). Since the node closer to zero energy falls under the \(\mu>0\) category when using the low-energy results, we can choose \(c_{1},c_{2},c_{3}\) such that it shows quantization for a finite window (blue curve) or no quantization at all (red curve). For the former, we also ensure that the contribution from the other node starts only after the end of the quantization window. The dashed curves show the contribution from the \(\Gamma\)-node in both cases. In both Fig. 6(b) and (c), we find excellent
Figure 5: Trace of the CPGE tensor for a single charge-4 Weyl point Eq.(16) with \(c_{1}>0\), obtained from the numerical evaluation of integrals in Table 2 for different combinations of \(\mu\) and \(c_{1}/|c_{2}|\) (respective values shown in the inset). We have taken \(c_{1}=0.0665,c_{3}=0.4\). (a)-(c) \(\mu=0\), (d)-(f) \(\mu<0\). In both these cases, perfect quantization is seen only when \(c_{1}/|c_{2}|<1\). (g)-(i) \(\mu>0\), perfect quantization is guaranteed for \(c_{1}/|c_{2}|<1\) however, unlike previous two cases, it can also be seen for \(c_{1}/|c_{2}|>1\) as long as \(\frac{2\mu}{c_{1}/|c_{2}|-1}>\max\left(\omega_{p},\frac{2\mu}{1+c_{1}/2|c_{2}| }\right)\), as shown in (h).
agreement between the tight binding and low energy results, showing that the higher order terms are not at play in this parameter range and can be neglected.
## VI Conclusion and discussion
In summary, we have presented a comprehensive and unified study of the second-order dc response in tilted multi-Weyl systems with a focus on the roles played by tilt (\(W\)) and doping (\(\mu\)). For charges \(n\)=1, 2 and 3, we have derived analytical expressions for shift and injection conductivity using a low energy continuum model and then compared its predictions against more realistic two- and four-band tight binding models of time-reversal broken systems for the charge-2 case.
Beyond the extremely important CPGE quantization, we also report other features of the photogalvanic response arising mainly from the finite tilt and band curvatures. We systematically investigated the role of tilt, band curvatures, and higher bands in deciding the shift and injection current conductivities of multi-Weyl semimetals. We find that in TRS broken multi-Weyl semimetals, finite tilt can lead to non-zero injection current from linearly polarized light which not only provides a probe for the tilt direction but can possibly also provide a way to engineer the injection current by using strain or some other mechanism which controls the tilt of Weyl nodes.
We have also provided the first complete analysis of the photogalvanic response in charge-4 WSMs based on a low-energy two-band model, covering all possibilities arising from different combinations of the model parameters and the chemical potential. Although C-4 WSMs do not have a tilt in the usual sense (unlike the other three charges, which have a linearly dispersing band in at least one direction), the ratio \(|c_{1}/c_{2}|\) plays a similar role, and together with \(c_{1}^{3}/c_{3}^{2}\) and \(\mu\) determines the nature of the response. Within the confines of the low-energy model, our results point out exactly when CPGE quantization can be seen in C-4 WSMs.
We believe that the new approach we have taken here to study the C-4 case will find applications in studying many other optical responses as well. For example, it can easily be extended to the study of second harmonic generation (SHG) and the first-order conductivity for the low-energy two-band model. In principle, it should work for any quantity that requires evaluating a \(k\)-space integral containing an \(f_{21}\,\delta(\omega_{21}-\omega)\) term at \(T=0\) K.
## VII Acknowledgement
S.C. would like to acknowledge the funding from the National Science Foundation through the Center for Dynamics and Control of Materials: an NSF MRSEC under Cooperative Agreement No. DMR-1720595. G.A.F. acknowledges additional support from NSF DMR-2114825.
## Appendix A Analytical expression for shift and injection conductivity tensors
We work with the low-energy effective Hamiltonian,
\[\mathcal{H}_{n}=\begin{pmatrix}u_{z}k_{z}+u_{t}k_{z}-\mu&\varepsilon_{0}( \tilde{k}_{x}-i\zeta\tilde{k}_{y})^{n}\\ \varepsilon_{0}(\tilde{k}_{x}+i\zeta\tilde{k}_{y})^{n}&-u_{z}k_{z}+u_{t}k_{z}- \mu\end{pmatrix}\!, \tag{10}\]
with eigenvalues
\[E_{n,\pm}=u_{t}k_{z}-\mu\pm\varepsilon_{0}\sqrt{(\tilde{k}_{x}^{2}+\tilde{k}_ {y}^{2})^{n}+u_{z}^{2}k_{z}^{2}/\varepsilon_{0}^{2}}. \tag{11}\]
The domain for the integrals in Eq.(2), Eq.(3) is determined by \(f_{21}\) and \(\delta(\omega_{21}-\omega)=\delta(2\varepsilon_{0}\sqrt{(\tilde{k}_{x}^{2}+ \tilde{k}_{y}^{2})^{n}+u_{z}^{2}k_{z}^{2}/\varepsilon_{0}^{2}}-\omega)\). Let us focus on
Figure 6: (a) JDOS for a single charge-4 Weyl point with \(\mu=0,c_{1}=0.0665,c_{2}=0.4668,c_{3}=4\) (\(c_{1}/c_{2}<1\)). (b) The red and blue curves capture the CPGE quantization for the two-band tight binding model Eq.(15) with \(\mu=-0.3,0.5\), respectively. Note that \(\mu_{\Gamma}=\mu+6c_{1},\mu_{R}=\mu-6c_{1}\). The corresponding dashed green curve is obtained by evaluating expressions from Table 2 for each node separately and then adding the results. We have used the same \(c_{1},c_{2},c_{3}\) from before. (c) Results with \(c_{1}/c_{2}>1\), \(\mu=-0.3\) (same \(c_{1},c_{3}\) as before). The dashed curves show contribution from the node closer to \(E=0\) (obtained using results from Table 2) for each case and are discontinued after contribution from the other node becomes non-zero.
the delta function first. To simplify things, we split the integral in the \(k_{x}\)-\(k_{y}\) plane over the four quadrants: \(\int\mathrm{d}k_{x}\int\mathrm{d}k_{y}=\int_{+}\mathrm{d}k_{x}\int_{+}\mathrm{d}k_{y}+\int_{+}\mathrm{d}k_{x}\int_{-}\mathrm{d}k_{y}+\int_{-}\mathrm{d}k_{x}\int_{+}\mathrm{d}k_{y}+\int_{-}\mathrm{d}k_{x}\int_{-}\mathrm{d}k_{y}\), and combine them into a single integral over the first quadrant by making the substitutions \(k_{x}=\pm k_{0}\sqrt{x}\) and \(k_{y}=\pm k_{0}\sqrt{y}\) depending on the sign (both \(x,y>0\)). We also put \(k_{z}=\frac{\varepsilon_{0}}{u_{z}}z\). By making \(x\rightarrow\frac{x-y}{\sqrt{2}}\) and \(y\rightarrow\frac{x+y}{\sqrt{2}}\), we rotate the \(x\)-\(y\) axes counterclockwise by \(\pi/4\), and scale \(z\) by \(z\to 2^{n/4}z\). Finally, we let \(x\to x^{2/n}\) to get \(\delta(2^{1+n/4}\varepsilon_{0}\sqrt{x^{2}+z^{2}}-\omega)\). These transformations also change the integration measure, \(\int_{\mathbf{k}}\rightarrow\int\frac{2^{1/2+n/4}k_{0}^{2}\varepsilon_{0}x^{2/n-1}\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z}{16\pi^{3}|u_{z}|n\sqrt{x^{2}-y^{2}}}\), with \(x>0\) and \(-x<y<x\) (we do the \(y\) integral first with these limits). The delta function defines a circle in the \(xz\)-plane, which lets us use \(x=r\cos\theta,z=r\sin\theta\) to obtain \(\delta(r-\omega/(2^{1+n/4}\varepsilon_{0}))/(2^{1+n/4}\varepsilon_{0})\).
Since we are taking the temperature to be zero, \(f_{21}=\Theta(E_{1})-\Theta(E_{2})\) where, \(\Theta\) is the Heaviside step function. Because of the condition put by the delta function, we have
\[f_{21}=\Theta\left(u_{t}k_{z}-\mu-\frac{\omega}{2}\right)-\Theta\left(u_{t}k_ {z}-\mu+\frac{\omega}{2}\right). \tag{10}\]
Since we have assumed \(\omega>0\), the only non-zero value for \(f_{21}\) is \(-1\), which occurs when \(\mu-\omega/2<u_{t}k_{z}<\mu+\omega/2\). Using the coordinate transformations from before, this condition becomes
\[\frac{2\mu}{\omega}-1<\frac{u_{t}}{u_{z}}\sin\theta<\frac{2\mu}{ \omega}+1, \tag{11}\] \[\frac{\mathrm{sgn}\left(\frac{u_{t}}{u_{z}}\right)\frac{2\mu}{ \omega}-1}{W}<\sin\theta<\frac{\mathrm{sgn}\left(\frac{u_{t}}{u_{z}}\right) \frac{2\mu}{\omega}+1}{W}, \tag{12}\]
where \(W=|u_{t}/u_{z}|\). The definitions for \(\theta_{1},\theta_{2}\) given in the main text follow from this. With this, we can easily compute the other ingredients of the integral from the eigenvalues and normalized eigenfunctions of \(\mathcal{H}_{n}\), and combine them to obtain analytical expressions for the JDOS and the shift and injection conductivities.
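For concreteness, these limits are easy to evaluate numerically. The sketch below is our own minimal illustration (the function name and the parameter values are assumptions, not part of the derivation); it clamps the bounds on \(\sin\theta\) to \([-1,1]\) and inverts them using the monotonicity of \(\sin\theta\) on \((-\pi/2,\pi/2)\).

```python
# Minimal sketch: polar-angle limits theta_1, theta_2 implied by the bounds
# on sin(theta), given W = |u_t/u_z|, s = sgn(u_t/u_z), and 2*mu/omega.
import numpy as np

def theta_limits(W, s, two_mu_over_omega):
    lo = (s * two_mu_over_omega - 1.0) / W   # lower bound on sin(theta)
    hi = (s * two_mu_over_omega + 1.0) / W   # upper bound on sin(theta)
    lo, hi = np.clip(lo, -1.0, 1.0), np.clip(hi, -1.0, 1.0)
    # sin(theta) is monotone on (-pi/2, pi/2), so the bounds invert directly
    return np.arcsin(lo), np.arcsin(hi)

print(theta_limits(W=0.5, s=1, two_mu_over_omega=0.3))   # illustrative values
```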
## Appendix B Higher order terms for charge-2 WSM
Based on the higher-order terms appearing in the expansion of Eq.(11) near its nodes, we look at the effect of including \((\frac{1}{2}(k_{x}^{2}+k_{y}^{2})+u_{m}k_{z}^{2})\sigma_{z}-\frac{1}{2}gu_{z}k_{z}^{2}\sigma_{0}\) in Eq.(12), where \(u_{m}=\frac{1}{2}M-1\). As before, we use a series of transformations to simplify the Dirac delta constraint. The key steps are as follows (with \(\varepsilon_{0}=1\)):
1. \(k_{x}\rightarrow\pm\sqrt{x}\), \(k_{y}\rightarrow\pm\sqrt{y}\), \(k_{z}\rightarrow\frac{u_{z}}{u_{m}}z\).
2. \(x\rightarrow\frac{1}{\sqrt{2}}(x-y)\), \(y\rightarrow\frac{1}{\sqrt{2}}(x+y)\), \(z\rightarrow\frac{1}{2}(\sqrt{z}-1)\).
3. \(z\rightarrow\frac{\sqrt{40}u_{m}}{u_{z}^{2}}z+1\), integrate out \(y\) (from \(-x,x\)).
4. \(x\rightarrow\frac{x}{2\sqrt{5+\sqrt{5}}}-\frac{z}{2\sqrt{5-\sqrt{5}}}\), \(z\rightarrow\frac{x}{2\sqrt{5+\sqrt{5}}}+\frac{z}{2\sqrt{5-\sqrt{5}}}\).
5. \(x\rightarrow\omega\cos\theta\), \(z\rightarrow\omega\sin\theta\), integrate from \(\theta_{1},\theta_{2}\).
Analytical expressions for the JDOS and the shift and injection conductivity tensors can be obtained as before. We still have \(f_{21}=-1\); however, the condition that determines \(\theta_{1},\theta_{2}\) becomes
\[2\mu-\omega<\frac{u_{t}u_{z}}{u_{m}}\left(\sqrt{\frac{\omega\sqrt {5}u_{m}\sin(\theta+\beta)}{u_{z}^{2}}+1}-1\right) \tag{13}\] \[-\frac{gu_{z}^{3}}{4u_{m}^{2}}\left(\sqrt{\frac{\omega\sqrt{5}u_ {m}\sin(\theta+\beta)}{u_{z}^{2}}+1}-1\right)^{2}<2\mu+\omega,\]
where \(\beta=\arctan(\varphi-1)\), \(-\frac{\pi}{2}-\arctan(\varphi)\leq\theta\leq\frac{\pi}{2}-\arctan(\varphi)\), and \(\varphi\) is the golden ratio. Allowed values of \(\theta\) can be found by solving this inequality numerically. When the solutions turn out to be disjoint intervals, each interval defines its own \(\theta_{1},\theta_{2}\); the analytical expression is evaluated for each interval and the results are then summed.
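The numerical solution mentioned above can be organized as a simple scan; the sketch below is our own illustration (the parameter values are arbitrary assumptions, not from the text). It samples \(\theta\) over its domain, evaluates the chain inequality, and reads off the contiguous allowed intervals, each of which supplies its own \((\theta_{1},\theta_{2})\).

```python
# Sketch: find the disjoint theta-intervals allowed by the inequality above.
import numpy as np

def allowed_intervals(mask, thetas):
    """Contiguous True-runs of mask, returned as (theta_start, theta_end)."""
    runs, start = [], None
    for th, ok in zip(thetas, mask):
        if ok and start is None:
            start = th
        elif not ok and start is not None:
            runs.append((start, th))
            start = None
    if start is not None:
        runs.append((start, thetas[-1]))
    return runs

# Illustrative parameter values (assumptions):
ut, uz, um, g, mu, omega = 0.3, 1.0, 0.2, 0.1, -0.1, 1.5
phi = (1 + np.sqrt(5)) / 2                                # golden ratio
beta = np.arctan(phi - 1)
th = np.linspace(-np.pi/2 - np.arctan(phi), np.pi/2 - np.arctan(phi), 200001)
root = np.sqrt(np.clip(omega*np.sqrt(5)*um*np.sin(th + beta)/uz**2 + 1, 0, None))
expr = ut*uz/um*(root - 1) - g*uz**3/(4*um**2)*(root - 1)**2
print(allowed_intervals((2*mu - omega < expr) & (expr < 2*mu + omega), th))
```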
## Appendix C Tilt and Zeeman terms for charge-2 WSM
We can also include additional terms of the form \(A(\tilde{k}_{x}^{2}+\tilde{k}_{y}^{2})\sigma_{0}\) and \(B\sigma_{z}\) in Eq.(5). These correspond to second-order tilt and Zeeman terms, respectively. The \(B\) term only shifts the origin along \(k_{z}\), modifying the \(k_{z}\to z\) transformation to \(k_{z}=\frac{\varepsilon_{0}z-B}{u_{z}}\). It is not difficult to see that these terms only affect \(f_{21}\),
\[\begin{split} f_{21}=\Theta&\left(\frac{A}{ \varepsilon_{0}}\frac{\omega\cos\theta}{2}+\frac{u_{t}}{u_{z}}\frac{\omega\sin \theta}{2}-\widetilde{\mu}-\frac{\omega}{2}\right)\\ &-\Theta\left(\frac{A}{\varepsilon_{0}}\frac{\omega\cos\theta}{2}+ \frac{u_{t}}{u_{z}}\frac{\omega\sin\theta}{2}-\widetilde{\mu}+\frac{\omega}{2} \right),\end{split} \tag{14}\]
where \(\widetilde{\mu}=\mu+Bu_{t}/u_{z}\). Since \(\omega>0\), we have \(f_{21}=-1\) when,
\[\frac{2\widetilde{\mu}}{\omega}-1<\frac{A}{\varepsilon_{0}}\cos\theta+\frac{u_{t} }{u_{z}}\sin\theta<\frac{2\widetilde{\mu}}{\omega}+1. \tag{15}\]
By defining \(\widetilde{W}=\sqrt{A^{2}/\varepsilon_{0}^{2}+u_{t}^{2}/u_{z}^{2}}\), \(\sin\phi=\frac{A/\varepsilon_{0}}{\widetilde{W}}\), and \(\cos\phi=\frac{|u_{t}/u_{z}|}{\widetilde{W}}\), we obtain
\[\begin{split}\frac{\mathrm{sgn}\left(\frac{u_{t}}{u_{z}}\right) \frac{2\widetilde{\mu}}{\omega}-1}{\widetilde{W}}<\sin\!\left(\theta+\mathrm{ sgn}\left(\frac{u_{t}}{u_{z}}\right)\phi\right)<\\ \frac{\mathrm{sgn}\left(\frac{u_{t}}{u_{z}}\right)\frac{2 \widetilde{\mu}}{\omega}+1}{\widetilde{W}},\end{split} \tag{16}\]
with \(-\pi/2\leq\theta,\phi\leq\pi/2\). Also, we define \(\alpha=\mathrm{sgn}\left(\frac{u_{t}}{u_{z}}\right)\phi\) and \(\widetilde{\varphi}_{p}=\frac{1}{\widetilde{W}}\left(\mathrm{sgn}\left(\frac{u_{t}}{u_ {z}}\right)\frac{2\widetilde{\mu}}{\omega}+(-1)^{p}\right)\) with \(p=1,2\). The
inequality becomes \(\widetilde{\varphi}_{1}<\sin(\theta+\alpha)<\widetilde{\varphi}_{2}\) which can be solved for the minimum (\(\widetilde{\theta}_{1}\)) and maximum (\(\widetilde{\theta}_{2}\)) allowed values of \(\theta\). These can be obtained as the left and right end points of the intervals,
\[(\widetilde{\theta_{1}},\widetilde{\theta_{2}})=\begin{cases}(\pi/2,\pi/2),&1< \widetilde{\varphi}_{1},1<\widetilde{\varphi}_{2}\\ (-\pi/2,-\pi/2),&\widetilde{\varphi}_{1}<-1,\widetilde{\varphi}_{2}<-1\\ I_{0},&\widetilde{\varphi}_{1}<-1,1<\widetilde{\varphi}_{2}\\ I_{0}\cap I_{2},&0<\widetilde{\varphi}_{1}<1,1<\widetilde{\varphi}_{2}\\ I_{0}\cap I_{1},&-1<\widetilde{\varphi}_{1}<0,1<\widetilde{\varphi}_{2}\\ I_{0}\cap I_{3},&\widetilde{\varphi}_{1}<-1,-1<\widetilde{\varphi}_{2}<0\\ I_{0}\cap I_{4},&\widetilde{\varphi}_{1}<-1,0<\widetilde{\varphi}_{2}<1\\ I_{0}\cap I_{1}\cap I_{4},&-1<\widetilde{\varphi}_{1}<0<\widetilde{\varphi}_{2}< 1\\ I_{0}\cap I_{2}\cap I_{4},&0<\widetilde{\varphi}_{1}<\widetilde{\varphi}_{2}< 1\end{cases} \tag{123}\]
where,
\[\begin{split} I_{0}&=(-\pi/2,\pi/2),\\ I_{1}&=(-\pi-\alpha,-\pi-\alpha-\arcsin\widetilde{\varphi}_{1}) \cup(-\alpha+\arcsin\widetilde{\varphi}_{1},\pi-\alpha),\\ I_{2}&=(-\alpha+\arcsin\widetilde{\varphi}_{1},\pi-\alpha- \arcsin\widetilde{\varphi}_{1}),\\ I_{3}&=(-\pi-\alpha-\arcsin\widetilde{\varphi}_{2},-\alpha+ \arcsin\widetilde{\varphi}_{2}),\\ I_{4}&=(-\pi-\alpha,-\alpha+\arcsin\widetilde{\varphi}_{2}) \cup(\pi-\alpha-\arcsin\widetilde{\varphi}_{2},\pi-\alpha).\end{split}\]
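As a cross-check of the case analysis above, the same window can be obtained numerically; the sketch below is our own illustration (function name and values are assumptions) and simply scans \(\theta\) over \(I_{0}\) while testing \(\widetilde{\varphi}_{1}<\sin(\theta+\alpha)<\widetilde{\varphi}_{2}\) directly.

```python
# Sketch: minimum and maximum allowed theta on I0 = (-pi/2, pi/2).
import numpy as np

def theta_window(alpha, phi1, phi2, n=1_000_001):
    th = np.linspace(-np.pi / 2, np.pi / 2, n)
    ok = (np.sin(th + alpha) > phi1) & (np.sin(th + alpha) < phi2)
    if not ok.any():
        return None       # the degenerate (pi/2, pi/2) or (-pi/2, -pi/2) cases
    return th[ok][0], th[ok][-1]

print(theta_window(alpha=0.4, phi1=-0.2, phi2=0.9))      # illustrative values
```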
## Appendix D Sign changing of \(\sigma^{xyz}_{\rm shift}\) for \(W>2\)
For \(W<2\), we found that the sign change occurred at \(\omega=2|\mu|\), when one angle was \(\pm\pi/2\) and the other zero. However, when \(W>2\), \((\theta_{2},\theta_{1})\) cannot take either \((0,-\pi/2)\) or \((\pi/2,0)\). Finding the point of sign change now requires us to seek other solutions of \(\sin\theta_{2}\cos^{2}\theta_{2}-\sin\theta_{1}\cos^{2}\theta_{1}=0\). Converting cosine into sine, we get \((\sin\theta_{2}-\sin\theta_{1})(\sin^{2}\theta_{2}+\sin\theta_{2}\sin\theta_{1}+\sin^{2}\theta_{1}-1)=0\). Let us look for solutions other than \(\theta_{2}=\theta_{1}\), \((0,-\pi/2)\), and \((\pi/2,0)\). We can solve for \((\sin\theta_{2},\sin\theta_{1})\) to get,
\[(\sin\theta_{2},\sin\theta_{1})=\begin{cases}\left(\frac{-x-\sqrt{4-3x^{2}}}{2},x\right),\,-1<x<\frac{-1}{\sqrt{3}}\\ \left(x,\frac{-x-\sqrt{4-3x^{2}}}{2}\right),\,\frac{-1}{\sqrt{3}}<x<0\\ \left(\frac{-x+\sqrt{4-3x^{2}}}{2},x\right),\,0<x<\frac{1}{\sqrt{3}}\\ \left(x,\frac{-x+\sqrt{4-3x^{2}}}{2}\right),\,\frac{1}{\sqrt{3}}<x<1\end{cases} \tag{124}\]
Using definitions of \(\theta_{1},\theta_{2}\), we solve for \(\omega\) by eliminating \(x\) to obtain \(\omega=2|\mu|\sqrt{\frac{3}{W^{2}-1}}\). Note that for \(W=2\), this gives \(\omega=2|\mu|\) as expected.
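The result is straightforward to verify numerically; the sketch below is our own check, with arbitrary illustrative values of \(W>2\) and \(\mu\), confirming that \(\sin\theta_{2}\cos^{2}\theta_{2}-\sin\theta_{1}\cos^{2}\theta_{1}\) vanishes at \(\omega=2|\mu|\sqrt{3/(W^{2}-1)}\).

```python
# Sketch: verify the sign-change frequency for W > 2.
import numpy as np

W, mu = 3.0, 0.7                         # illustrative values, W > 2
omega = 2 * abs(mu) * np.sqrt(3 / (W**2 - 1))
s1 = (2 * mu / omega - 1) / W            # sin(theta_1)
s2 = (2 * mu / omega + 1) / W            # sin(theta_2)
f = lambda s: s * (1 - s**2)             # sin(theta) cos^2(theta)
print(f(s2) - f(s1))                     # ~ 0 at the sign-change frequency
```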
## Appendix E Analytical results for charge-4 WSM
The delta function constraint \(\delta(\omega-\omega_{21})\) translates to
\[4|c_{2}|\bigg{(}k_{x}^{4}+k_{y}^{4}+k_{z}^{4}-k_{x}^{2}k_{y}^{2}-k_{y}^{2}k_{z}^{2}-k_{z}^{2}k_{x}^{2}+\left(\frac{c_{3}}{2c_{2}}\right)^{2}k_{x}^{2}k_{y}^{2}k_{z}^{2}\bigg{)}^{\frac{1}{2}}=\omega. \tag{125}\]
To simplify this, we use the following transformations,
1. \(k_{x}\rightarrow\pm\sqrt{x}\), \(k_{y}\rightarrow\pm\sqrt{y}\), \(k_{z}\rightarrow\pm\sqrt{z}\) (reduce the integral to \(x,y,z>0\) octant).
2. \(x\rightarrow\frac{1}{\sqrt{2}}(x-y)\), \(y\rightarrow\frac{1}{\sqrt{2}}(x+y)\).
3. Integrate out \(y\) (from \(-x\) to \(x\)). To do this, we need to find the roots of the equation \(\omega=\left(2c_{3}^{2}z\left(x^{2}-y^{2}\right)+8c_{2}^{2}\left(\left(x-\sqrt{2}z\right)^{2}+3y^{2}\right)\right)^{\frac{1}{2}}\). The condition for the existence of real roots satisfying \(-x\leq y\leq x\) is given by (after the substitution of step 4) \[\begin{split}\left(4c_{2}^{2}\left(x^{2}-xz+z^{2}\right)-c_{1}^{2}\right)\big{(}-c_{3}^{2}x^{2}\omega z\\ -8c_{2}^{2}c_{1}(x-2z)^{2}+8c_{1}^{3}\big{)}>0.\end{split}\] (126)
4. \(x\rightarrow\frac{\omega}{2\sqrt{2}c_{1}}x\), \(z\rightarrow\frac{\omega}{2c_{1}}z\).
Using these transformations along with the eigenvalues and normalized eigenfunctions of Eq.(16), we simplify Eq.(2), Eq.(3), and Eq.(8) to obtain the expressions shown in Table 2 (the shift conductivities are zero).
Behavior of \(\frac{2\pi}{i\pi e^{3}/\hbar^{2}}\epsilon_{abc}\sigma^{abc}\) is determined by the interplay between the conditions set by \(\Theta\left(-x-z+1+\frac{2\mu}{\omega}\right)\), \(\Theta\left(x+z+1-\frac{2\mu}{\omega}\right)\) and Eq.(126). Since \(c_{1}>0\) by choice and \(c_{2},c_{3}\) appear only as their squares, the analysis of the region defined by Eq.(126) becomes quite general. To understand this, let us focus on the curves \(4c_{2}^{2}\left(x^{2}-xz+z^{2}\right)-c_{1}^{2}=0\) and \(-c_{3}^{2}x^{2}\omega z-8c_{2}^{2}c_{1}(x-2z)^{2}+8c_{1}^{3}=0\) for \(x,z>0\). They intersect the \(x\)-axis at \(x=c_{1}/2|c_{2}|\) and \(x=c_{1}/|c_{2}|\), respectively, but cross the \(z\)-axis together at \(z=c_{1}/2|c_{2}|\) (the intercepts are independent of \(\omega\)). Tangents to these curves with slope \(-1\) are important. For the ellipse this happens at \((z=c_{1}/2|c_{2}|,x=c_{1}/2|c_{2}|)\); the tangent has equation \(x+z=c_{1}/|c_{2}|\). For the second curve we have several cases. For \(\omega<48\frac{|c_{2}|^{3}}{c_{3}^{2}}\), there is only one such tangent, at \((z=\frac{2^{1/3}c_{1}}{|c_{3}|^{2/3}\omega^{1/3}},x=\frac{2\times 2^{1/3}c_{1}}{|c_{3}|^{2/3}\omega^{1/3}})\), described by \(x+z=\frac{54^{1/3}c_{1}}{|c_{3}|^{2/3}\omega^{1/3}}\). For larger \(\omega\), there is another tangent with slope \(-1\), but its presence is of no consequence to our analysis. The important thing to note is that \(x+z=\frac{54^{1/3}c_{1}}{|c_{3}|^{2/3}\omega^{1/3}}\) is completely sandwiched between \(x+z=c_{1}/|c_{2}|\) and \(x+z=c_{1}/2|c_{2}|\) for \(54\frac{|c_{2}|^{3}}{c_{3}^{2}}<\omega<8\times 54\frac{|c_{2}|^{3}}{c_{3}^{2}}\). These features are illustrated in Fig. 7. With these key observations in mind, we now analyze the \(\mu=0\), \(\mu<0\), and \(\mu>0\) cases separately.
For \(\mu=0\), the theta function constraints reduce to \(x+z<1\). When \(c_{1}/|c_{2}|<1\), the CPGE trace is non-zero for any finite \(\omega\) and becomes \(\pm 4\) after \(\frac{54c_{1}^{3}}{c_{3}^{2}}\). When \(c_{1}/|c_{2}|>1\), some portion of the region of Eq.(126) is always left out and we do not get perfect quantization. For \(1<\frac{c_{1}}{|c_{2}|}<2\), the trace is non-zero for any finite \(\omega\), whereas for \(\frac{c_{1}}{|c_{2}|}>2\), this happens only after \(\frac{54c_{1}^{3}}{c_{3}^{2}}\).
For \(\mu<0\), the condition set by \(\Theta\left(x+z+1-\frac{2\mu}{\omega}\right)\) is always satisfied, whereas \(\Theta\left(-x-z+1+\frac{2\mu}{\omega}\right)\) requires \(x+z<1-\frac{2|\mu|}{\omega}\). An important thing to note here is that the term \(1-\frac{2|\mu|}{\omega}\in(-\infty,1)\). We are only interested in the case when it lies in \((0,1)\), which happens for \(\omega>2|\mu|\). Since it only ever approaches \(1\), full overlap with the region of Eq.(126) is possible only if \(c_{1}/|c_{2}|<1\), the condition to get perfect quantization. When this condition is met, the amount of overlap between \(x+z<1-\frac{2|\mu|}{\omega}\) and Eq.(126) is determined by solutions to the equations \(1-\frac{2|\mu|}{\omega}=\frac{c_{1}}{|c_{2}|}\), \(1-\frac{2|\mu|}{\omega}=\frac{c_{1}}{2|c_{2}|}\), and \(1-\frac{2|\mu|}{\omega}=\frac{54^{1/3}c_{1}}{|c_{3}|^{2/3}\omega^{1/3}}\). The last equation can be rewritten as \((\omega-2|\mu|)^{3}-54\frac{c_{1}^{3}}{c_{3}^{2}}\omega^{2}=0\). This cubic equation never has three real roots. Since the product of its roots is \(8|\mu|^{3}>0\), the only real root, \(\omega_{p}\), is always positive. Note that \(\omega_{p}>2|\mu|\). The CPGE trace becomes non-zero after \(\min\left(\omega_{p},\frac{2|\mu|}{1-\frac{c_{1}}{2|c_{2}|}}\right)\), and saturates to \(\pm 4\) after \(\max\left(\omega_{p},\frac{2|\mu|}{1-\frac{c_{1}}{|c_{2}|}}\right)\). When \(1<c_{1}/|c_{2}|<2\), the trace is non-zero after \(\min\left(\omega_{p},\frac{2|\mu|}{1-\frac{c_{1}}{2|c_{2}|}}\right)\), whereas for \(c_{1}/|c_{2}|>2\), this happens after \(\omega_{p}\) (it never reaches \(\pm 4\) in either case).
For \(\mu>0\), the possibilities become even more interesting. \(\Theta\left(-x-z+1+\frac{2\mu}{\omega}\right)\Theta\left(x+z+1-\frac{2\mu}{\omega}\right)\) sets bounds on the integration region, requiring \(\frac{2\mu}{\omega}-1<x+z<\frac{2\mu}{\omega}+1\). The term \(1+\frac{2\mu}{\omega}\in(1,\infty)\), which means that if \(c_{1}/|c_{2}|>1\), a portion of Eq.(126) will necessarily be left out for \(\omega>\frac{2\mu}{\frac{c_{1}}{|c_{2}|}-1}\) (perfect quantization is still possible for smaller energies). Now, the solutions to the equations \(\frac{2\mu}{\omega}-1=\frac{c_{1}}{|c_{2}|}\), \(\frac{2\mu}{\omega}-1=\frac{c_{1}}{2|c_{2}|}\), and \(\frac{2\mu}{\omega}-1=\frac{54^{1/3}c_{1}}{|c_{3}|^{2/3}\omega^{1/3}}\) become crucial in determining the amount of the region of Eq.(126) available for integration. The last equation can be rewritten as the cubic equation \((\omega-2\mu)^{3}+54\frac{c_{1}^{3}}{c_{3}^{2}}\omega^{2}=0\). The product of its roots is \(8\mu^{3}>0\), which means that when two roots are complex (a conjugate pair), the real root must be positive. However, when all roots are real, there are two possibilities: one positive and two negative roots, or three positive roots. It turns out the latter case is not possible, because the condition for all roots being real is \(\mu<4\frac{c_{1}^{3}}{c_{3}^{2}}\), whereas for all roots to be positive, \(\mu>9\frac{c_{1}^{3}}{c_{3}^{2}}\). Thus, we always get exactly one positive root, \(\omega_{p}\). Note that \(\omega_{p}<2\mu\) in this case. The CPGE trace becomes non-zero after \(\min\left(\omega_{p},\frac{2\mu}{1+\frac{c_{1}}{|c_{2}|}}\right)\). For \(c_{1}/|c_{2}|<1\), it goes on to reach a saturation value of \(\pm 4\) after \(\max\left(\omega_{p},\frac{2\mu}{1+\frac{c_{1}}{2|c_{2}|}}\right)\). When \(c_{1}/|c_{2}|>1\), we see quantization for \(\max\left(\omega_{p},\frac{2\mu}{1+\frac{c_{1}}{2|c_{2}|}}\right)<\omega<\frac{2\mu}{\frac{c_{1}}{|c_{2}|}-1}\). Perfect quantization is not possible when \(\frac{2\mu}{\frac{c_{1}}{|c_{2}|}-1}<\max\left(\omega_{p},\frac{2\mu}{1+\frac{c_{1}}{2|c_{2}|}}\right)\).
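These threshold frequencies are simple to evaluate; the sketch below is our own numerical illustration for the \(\mu<0\), \(c_{1}/|c_{2}|<1\) case, using the parameter values quoted in Fig. 6 (the function name is our own).

```python
# Sketch: onset and saturation frequencies of the CPGE trace for mu < 0.
import numpy as np

def omega_p(mu, c1, c3):
    """Positive real root of (omega - 2|mu|)^3 = 54 (c1^3/c3^2) omega^2."""
    m, A = abs(mu), 54 * c1**3 / c3**2
    r = np.roots([1.0, -(6 * m + A), 12 * m**2, -8 * m**3])
    r = r[np.abs(r.imag) < 1e-8].real
    return r[r > 0].min()

mu, c1, c2, c3 = -0.3, 0.0665, 0.4668, 4.0   # parameter values of Fig. 6
wp = omega_p(mu, c1, c3)
onset = min(wp, 2 * abs(mu) / (1 - c1 / (2 * abs(c2))))
saturation = max(wp, 2 * abs(mu) / (1 - c1 / abs(c2)))
print(wp, onset, saturation)
```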
|
2304.08611 | Multispin Clifford codes for angular momentum errors in spin systems | The physical symmetries of a system play a central role in quantum error
correction. In this work we encode a qubit in a collection of systems with
angular-momentum symmetry (spins), extending the tools developed in Phys. Rev.
Lett. 127, 010504 for single large spins. By considering large spins present in
atomic systems and focusing on their collective symmetric subspace, we develop
new codes with octahedral symmetry capable of correcting errors up to second
order in angular-momentum operators. These errors include the most physically
relevant noise sources such as microwave control errors and optical pumping. We
additionally explore new qubit codes that exhibit distance scaling commensurate
with the surface code while permitting transversal single-qubit Clifford
operations. | Sivaprasad Omanakuttan, Jonathan A. Gross | 2023-04-17T20:55:49Z | http://arxiv.org/abs/2304.08611v2 | # Multispin Clifford codes for angular momentum errors in spin systems
###### Abstract
The physical symmetries of a system play a central role in quantum error correction. In this work we encode a qubit in a collection of systems with angular-momentum symmetry (spins), extending the tools developed in [1] for single large spins. By considering large spins present in atomic systems and focusing on their collective symmetric subspace, we develop new codes with octahedral symmetry capable of correcting errors up to second order in angular-momentum operators. These errors include the most physically relevant noise sources such as microwave control errors and optical pumping. We additionally explore new qubit codes that exhibit distance scaling commensurate with the surface code while permitting transversal single-qubit Clifford operations.
## I Introduction
Quantum error correction (QEC) is an essential ingredient for implementing quantum computation reliably. In simple words, QEC uses a large Hilbert space to encode a smaller-dimensional system, so as to overcome the detrimental effects of decoherence and recover the ideal state of the encoded system. One standard strategy for QEC, analogous to classical error correction where the major error is the bit flip, is to encode a qubit of information in multiple qubits. However, because QEC must account for both bit-flip and phase-flip errors, the number of physical qubits required to encode a logical qubit is very large. In spite of this difficulty, these techniques are widely pursued and have found considerable success, including recent experimental implementations using surface codes and color codes [2; 3; 4].
Another approach for QEC is to encode a qubit in a single system with a large Hilbert space; for example, the standard GKP code, where a qubit is encoded in a simple harmonic oscillator whose large Hilbert space provides natural protection from many errors native to this system [5; 6]. This approach generally reduces the overhead and thus makes scaling easier. There have been many recent proposals for quantum computation using GKP states [7; 8; 9; 10], and a recent experiment demonstrated real-time quantum error correction beyond break-even [11].
In [1], quantum error-correcting codes native to spin systems with spin larger than \(1/2\) were developed using the special symmetries associated with these systems. In particular, the binary octahedral symmetry was used; however, one needs a very large spin (\(j\geq 13/2\)) to build a fully error-correcting code for this symmetry. In this work, we avoid the need for such large spins by taking the tensor product of multiple spins with \(j>1/2\) and using the irreducible SU(2) representations in the symmetric subspace of these tensor products. These systems hold great potential as they are easier to scale; systems with on the order of 100 spins have been used for quantum-simulation experiments with neutral atoms [12; 13]. In spin systems, the main sources of decoherence are random rotations, which contribute first-order errors in the angular momentum, and optical pumping, a second-order effect in the angular momentum involving vector and tensor light shifts [14; 15]. Accordingly, designing codes in these composite spin systems that correct for first- and second-order angular-momentum errors could reduce the overhead required to achieve fault-tolerant regimes of quantum computation and thus accelerate the path to useful quantum computation.
Similarly, we also consider the case of the tensor product of qubit systems. We encode a qubit in the symmetric subspace of multiple qubits to find codes that have transversal Cliffords and correct arbitrarily large errors. Using the binary octahedral symmetry we demonstrate explicit codewords with distance 3 and distance 5, and generally find that the minimum number of qubits required for a given distance scales similarly to the surface code while allowing full single-qubit transversal Clifford operations.
The remainder of this article is organized as follows. In Section II we give a brief introduction to the binary octahedral code and the natural symmetry associated with these quantum error-correcting codes. In Section III we study the Knill-Laflamme conditions for a general spin system using the spherical tensor operators. In Section IV we find the relevant SU(2) irreps in the symmetric subspace for the tensor product of spin systems by mapping it to bosons. We use these approaches to find useful codes that correct first-order angular-momentum (small random SU(2)) errors in Section V and second-order (light-shift) errors in Section VI. In Section VII, we study how one can apply these approaches to the tensor product of multiple spin \(j=1/2\) (qubit) systems and create error-correcting codes in the symmetric subspace of this multipartite system, finding explicit codes with distance 3 and 5. We give the outlook and
possible future directions in Section VIII.
## II Introduction to binary octahedral code
We build upon work [1] done to encode information against random SU(2) rotations in large single spins (irreps of SU(2)). This task is simplified by restricting ourselves to codespaces that are preserved under the action of a finite subgroup of SU(2), such as the single-qubit Clifford group (binary octahedral group). If the finite subgroup is rich enough, the full set of Knill-Laflamme conditions for first-order rotation errors reduces to a single expectation value, which is simple to check. The single-qubit Clifford group is one such rich subgroup, in that it can map any of \(\{J_{x},J_{y},J_{z}\}\) to any other, with either sign. These symmetries allow one to consolidate the conditions to
\[\left\langle i\right|J_{z}\left|j\right\rangle = C_{0z}\delta_{ij} \tag{1}\] \[\left\langle i\right|J_{x}J_{y}\left|j\right\rangle = C_{xy}\delta_{ij}\] (2) \[\left\langle i\right|J_{z}^{2}\left|j\right\rangle = C_{zz}\delta_{ij}\,. \tag{3}\]
The fact that a \(\pi\) rotation about \(J_{z}\) must put a relative phase between logical \(0\) and \(1\) means that one codeword must have "odd" support on the \(J_{z}\) basis states and the other must have "even" support, which further reduces the conditions to
\[\left\langle 0\right|J_{z}\left|0\right\rangle=0\,. \tag{4}\]
It turns out the binary tetrahedral group (a subgroup of the binary octahedral group) has enough symmetries for the above argument to go through as well, so we will also consider codes with that symmetry in this work.
The binary octahedral group, having additionally the \(S\) gate, a \(\pi/2\) rotation about \(J_{z}\), further constrains the support of the codewords in the \(J_{z}\) basis, such that the \(J_{z}\) eigenvalues included in logical \(0\) are either \(4\mathbf{Z}+\frac{1}{2}\) or \(4\mathbf{Z}-\frac{3}{2}\), depending on the code, and the eigenvalues for logical \(1\) are the negatives.
## III Derivation of Knill-Laflamme conditions
In this section, we extend the Knill-Laflamme condition derived for small random SU(2) rotations in large single spins in [1] to general errors which are powers of angular momentum operators. Since products of angular-momentum operators up to a given order are not linearly independent (due to equivalence relations such as the commutation relations), it can be convenient to use spherical tensors [16; 17; 18] as an error basis:
\[T_{q}^{k}(j)=\\ \sqrt{\frac{2k+1}{2j+1}}\sum_{m}\left\langle j,m+q|k,q;j,m\right\rangle \left|j,m+q\right\rangle\!\!\left\langle j,m\right| \tag{5}\]
which are, in essence, polynomials in the angular-momentum operators and are related to the spherical harmonics. Using this as our basis of errors, the Knill-Laflamme conditions [19] require that
\[\left\langle i\right|E_{a}^{\dagger}E_{b}\left|j\right\rangle = \delta_{ij}C_{ab} \tag{6}\] \[E_{a},E_{b}\in\{T_{q}^{k}\}_{0\leq k\leq N} \tag{7}\]
if we want to be able to correct angular-momentum errors of orders up to \(N\). Because products of spherical tensors are sums of spherical tensors
\[T_{q}^{k}T_{q^{\prime}}^{k^{\prime}}=\sqrt{(2k+1)(2k^{\prime}+1)}\sum_{\tilde{k}}c_{\tilde{q}}^{\tilde{k}}T_{\tilde{q}}^{\tilde{k}} \tag{8}\]
where \(\tilde{q}=q+q^{\prime}\), the sum over \(\tilde{k}\) is restricted to \(|k-k^{\prime}|\leq\tilde{k}\leq k+k^{\prime}\), and \(c_{\tilde{q}}^{\tilde{k}}\) is defined in terms of \(6j\) symbols and Clebsch-Gordan coefficients [17],
\[c_{\tilde{q}}^{\tilde{k}}=(-1)^{2j+\tilde{k}}\left\{\begin{array}{ccc}k&k^{\prime}&\tilde{k}\\ j&j&j\end{array}\right\}C_{k,q,k^{\prime}q^{\prime}}^{\tilde{k}\tilde{q}}. \tag{9}\]
Figure 1: The codewords \(\left|0\right\rangle\) and \(\left|1\right\rangle\) for the \(\varrho_{4}\) irrep of the binary octahedral symmetry for \(j=7/2\) in the angular-momentum basis. The colored boxes indicate the occupied states, whereas the blank ones indicate states that are unoccupied in the codeword. The states within a codeword are spaced by four units of angular momentum (\(\Delta m_{z}=4\)), a standard property of the octahedral symmetry, and this contributes to the error-correction conditions. The codewords \(\left|0\right\rangle\) and \(\left|1\right\rangle\) are separated by a single unit of angular momentum, and hence \(\left\langle 0\right|T_{1}^{k}\left|1\right\rangle=(-1)^{k}\left\langle 1\right|T_{-1}^{k}\left|0\right\rangle\neq 0\) is possible for odd values of \(k\), whereas \(\left\langle 0\right|T_{-1}^{k}\left|1\right\rangle=(-1)^{k}\left\langle 1\right|T_{1}^{k}\left|0\right\rangle=0\). This contributes the off-diagonal terms to consider for error correction in Eq. (24).
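The spherical-tensor basis is straightforward to construct explicitly. The sketch below is our own illustration (the function name `spherical_tensor` and the use of sympy for Clebsch-Gordan coefficients are our choices, not part of the paper): it builds the matrix of Eq. (5) in the \(|j,m\rangle\) basis and checks the orthonormality \(\operatorname{Tr}[(T_{q}^{k})^{\dagger}T_{q^{\prime}}^{k^{\prime}}]=\delta_{kk^{\prime}}\delta_{qq^{\prime}}\) that makes these operators a convenient error basis.

```python
# Sketch: matrices of the spherical tensors T^k_q of Eq. (5).
import numpy as np
from sympy import Rational
from sympy.physics.quantum.cg import CG

def spherical_tensor(two_j, k, q):
    """Dense matrix of T^k_q in the |j, m> basis, m = j, j-1, ..., -j."""
    j = Rational(two_j, 2)
    d = two_j + 1
    T = np.zeros((d, d))
    ms = [j - i for i in range(d)]
    norm = np.sqrt((2 * k + 1) / (two_j + 1))
    for a, m_out in enumerate(ms):
        for b, m_in in enumerate(ms):
            if m_out == m_in + q:
                # <j, m+q | k, q; j, m> as a Clebsch-Gordan coefficient
                T[a, b] = norm * float(CG(k, q, j, m_in, j, m_out).doit())
    return T

# Orthonormality under the trace inner product (the matrices are real):
T10 = spherical_tensor(7, 1, 0)                      # j = 7/2
T30 = spherical_tensor(7, 3, 0)
print(np.trace(T10.T @ T10), np.trace(T10.T @ T30))  # ~1.0 and ~0.0
```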
We can equivalently consider the conditions
\[\left\langle i\right|T_{\tilde{q}}^{\tilde{k}}\left|j\right\rangle =\delta_{ij}C_{\tilde{q}}^{\tilde{k}} \tag{10}\] \[0 \leq\tilde{k}\leq 2N\,. \tag{11}\]
Consider the unitary \(U_{X}=\exp(-i\pi J_{x})\). The octahedral symmetry of the states gives us, up to an irrelevant overall global phase,
\[\begin{split} U_{X}\left|0\right\rangle=&\left|1 \right\rangle\\ U_{X}\left|1\right\rangle=&\left|0\right\rangle \end{split} \tag{12}\]
and we can find that
\[U_{X}T_{q}^{k}U_{X}^{\dagger}=(-1)^{k}T_{-q}^{k} \tag{13}\]
where the details of this calculation are given in Appendix A. Using this we see that for the codewords
\[\left\langle 0\right|T_{q}^{k}\left|0\right\rangle=(-1)^{k}\left\langle 1 \right|T_{-q}^{k}\left|1\right\rangle \tag{14}\]
For codewords with octahedral symmetry, the codewords are real in the angular-momentum basis (see Appendix B), and so is \(T_{q}^{k}\); thus, when we have two states \(\left|\psi\right\rangle\) and \(\left|\phi\right\rangle\) which are real linear combinations of the codewords that respect the binary octahedral symmetry,
\[\left\langle\psi\right|T_{-q}^{k}\left|\phi\right\rangle=(-1)^{q}\left\langle \phi\right|T_{q}^{k}\left|\psi\right\rangle \tag{15}\]
which we prove in Appendix B. Thus one gets,
\[\left\langle 0\right|T_{q}^{k}\left|0\right\rangle=(-1)^{k}\left\langle 1 \right|T_{-q}^{k}\left|1\right\rangle=(-1)^{k-q}\left\langle 1\right|T_{q}^{k} \left|1\right\rangle \tag{16}\]
so from the above equation, the error condition is trivially satisfied unless
\[(k-q)\mod 2=1 \tag{17}\]
However, the codewords have support on \(J_{z}\) eigenstates separated by multiples of four units, as described in Section II and shown in Fig. 1, and hence the expression vanishes identically unless \(q\equiv 0\bmod 4\). Thus the only diagonal conditions we need to check are those where \(k\) is odd and \(q\equiv 0\bmod 4\):
\[\{T_{0}^{1},T_{0}^{3},T_{0}^{5},T_{4}^{5},\ldots\} \tag{18}\]
Now thinking about the next error-correction condition we get
\[\begin{split}\left\langle 0\right|T_{q}^{k}\left|1\right\rangle=& (-1)^{k}\left\langle 1\right|T_{-q}^{k}\left|0\right\rangle\\ =&(-1)^{k-q}\left\langle 0\right|T_{q}^{k}\left|1 \right\rangle\end{split} \tag{19}\]
The above equation states that when
\[k-q\bmod 2=1 \tag{20}\]
we automatically get
\[\left\langle 0\right|T_{q}^{k}\left|1\right\rangle=0 \tag{21}\]
Now again the support of the different code words is separated by odd shifts in angular momentum and hence we also automatically get that
\[\left\langle 0\right|T_{q}^{k}\left|1\right\rangle=0 \tag{22}\]
unless \(q\bmod 4=1\), as can be seen from Fig. 1. Thus the only off-diagonal conditions we need to check are those where \(k\) is odd and \(q\equiv 1\bmod 4\):
\[\{T_{1}^{1},T_{1}^{3},T_{-3}^{3},T_{5}^{5},T_{1}^{5},T_{-3}^{5},\ldots\} \tag{23}\]
Hence the error correction conditions can be written as,
\[\begin{split}\left\langle 0\right|T_{q}^{(k)}\left|0\right\rangle=& (-1)^{(k-q)}\left\langle 1\right|T_{q}^{(k)}\left|1\right\rangle \implies\text{only consider ($k\in\text{ odd and }q\equiv 0\bmod 4$)},\\ \left\langle 0\right|T_{q}^{(k)}\left|1\right\rangle=& (-1)^{(k-q)}\left\langle 0\right|T_{q}^{(k)}\left|1\right\rangle \implies\text{only consider ($k\in\text{ odd and }q\equiv 1\bmod 4$)}.\end{split} \tag{24}\]
This gives the general error correction conditions one needs to check for the binary octahedral codes. One can easily see that a large number of conditions are trivially satisfied accounting for the symmetry of the codewords. In the following sections, we will see how these correction conditions will help us in obtaining useful quantum-error-correction codes.
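Because products of errors of order up to \(N\) reduce, via Eq. (8), to single spherical tensors of rank at most \(2N\), the consolidated conditions can be verified by a brute-force scan. The sketch below is our own helper (reusing `spherical_tensor` from the snippet above); candidate codewords are supplied as real vectors in the \(|j,m\rangle\) basis.

```python
# Sketch: brute-force scan of the Knill-Laflamme conditions of Eq. (24).
import numpy as np

def kl_violations(zero, one, two_j, k_max, tol=1e-9):
    """(k, q) pairs violating <0|T|0> = <1|T|1> or <0|T|1> = 0."""
    bad = []
    for k in range(1, k_max + 1):
        for q in range(-k, k + 1):
            T = spherical_tensor(two_j, k, q)
            if (abs(zero @ T @ zero - one @ T @ one) > tol
                    or abs(zero @ T @ one) > tol):
                bad.append((k, q))
    return bad
```

For errors up to order \(N\) one takes `k_max = 2N`; by the symmetry arguments above, only the conditions listed in Eq. (24) can actually fail for a codespace respecting the binary octahedral symmetry.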
IV The \(\mathrm{SU(2)}\) irreps in the symmetric subspace of the tensor product of \(n\) spin \(j\) systems
Now, consider the tensor product of \(n\) spin \(j\) systems. This forms a Hilbert space \(\mathcal{H}\) of dimension \(d^{n}\) where \(d=2j+1\). We focus on the symmetric subspace [20], where expectation values are unchanged by permuting the subsystems, so for any arbitrary operators \(A_{1},A_{2},\ldots,A_{n}\) we have
\[\langle A_{1}\otimes A_{2}\otimes\cdots\otimes A_{n}\rangle=\\ \langle A_{\pi(1)}\otimes A_{\pi(2)}\otimes\cdots\otimes A_{\pi(n)}\rangle. \tag{25}\]
for any permutation \(\pi\). Restricting our attention to the symmetric subspace simplifies the Knill-Laflamme conditions, as many of the error terms \(E_{a}^{\dagger}E_{b}\) that arise are permutations of each other and need only be verified once within the symmetric subspace.
The dimension of the symmetric subspace for the tensor product of \(n\) spin-\(j\) systems is,
\[\dim\left(S_{n}(d)\right)=\frac{d(d+1)...(d+n-1)}{n!}. \tag{26}\]
Since we are interested in encoding qubits in the symmetric subspace, we need to identify how the symmetric subspace decomposes into SU(2) irreps. For \(j=1/2\) the decomposition is simple, as the symmetric subspace is itself a spin-\(n/2\) irrep. For larger spins, we must work harder, as the symmetric subspace decomposes into multiple SU(2) irreps.
One way to see that we must get multiple SU(2) irreps in the symmetric subspace is to notice that the operator \(J_{z}\) gains some degeneracies for \(j>1/2\). For example, \(|+1,-1\rangle+|-1,+1\rangle\) and \(|0,0\rangle\) are both symmetric states that are also eigenstates of \(J_{z}\) with eigenvalue \(m_{z}=0\). Since \(J_{z}\) is nondegenerate within any SU(2) irrep, this means the symmetric subspace of 2 spin-1 systems must decompose into multiple SU(2) irreps.
A useful perspective on the decomposition is to consider the symmetric subspace as \(n\) bosonic modes with at most \(2j\) bosons in each mode [21]. Each mode is associated with one of the spins, and the number of bosons in a mode corresponds to the \(J_{z}\) eigenvalue of the associated spin (adding \(j\) to the eigenvalue so the number of bosons ranges from 0 to \(2j\)). The total \(J_{z}\) eigenvalue is then given by the total number of bosons, and the degeneracy of that eigenvalue in the symmetric subspace is given by the number of partitions of those bosons into \(n\) distinct modes, restricted to putting no more than \(2j\) bosons in a single mode. These can be counted using restricted Young diagrams, where the number of columns must not exceed \(2j\) and the number of rows must not exceed \(n\). An example of such restricted Young diagrams and their associated states is given in Fig. 2.
For example, consider the symmetric subspace of 2 spin-1/2 particles, which is spanned by the triplet states and has total spin \(J=1\) (the largest possible angular momentum under the tensor product). Mapping this to 2 bosonic modes with at most \(2j=1\) boson each, we enumerate all partitions of \(N\) bosons among these modes for \(N\in\{0,1,2\}\). The possible partitions are given in Table 1. Each total boson number \(N\) corresponds to only a single restricted partition, consistent with our previous statement that the symmetric subspace is a single SU(2) irrep.

\begin{table}
\begin{tabular}{|c|c|c|} \hline \(N\) & \(n_{1}\) & \(n_{2}\) \\ \hline
0 & 0 & 0 \\ \hline
1 & 1 & 0 \\ \hline
2 & 1 & 1 \\ \hline \end{tabular}
\end{table}
Table 1: The symmetric subspace of \(n=2\) spin \(j=1/2\) systems. Every total boson number \(N\) admits a single restricted partition, so the symmetric subspace is a single SU(2) irrep (spin 1).
As a first non-trivial example consider the case of spin \(j=1\) and \(n=2\). The restricted partitions of bosons into two modes are given in Table 2. As we can see from the table there are two partitions of \(N=2\) bosons into two modes, revealing a degeneracy of the \(J_{z}\) operator for eigenvalue \(m_{z}=0\). Since a one-dimensional subspace of this degenerate subspace must belong to the spin-2 irrep, and there are no degeneracies for larger \(m_{z}\), we see that the symmetric subspace decomposes into one copy of spin 2 and one copy of spin 0.
Using this same approach,
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(N\) & \(n_{1}\) & \(n_{2}\) & \(n_{1}\) & \(n_{2}\) \\ \hline
0 & 0 & 0 & \\ \hline
1 & 1 & 0 & \\ \hline
2 & 1 & 1 & 2 & 0 \\ \hline
3 & 2 & 1 & \\ \hline
4 & 2 & 2 & \\ \hline \end{tabular}
\end{table}
Table 2: The symmetric subspace of \(n=2\) spin \(j=1\) systems. We find we need two columns to account for the distinct partitions of \(N=2\) bosons. Filling in the columns from left to right for each \(N\), we can identify the SU(2) irreps present by the number of occupied rows in each column. Here the first column has 5 occupied rows, corresponding to the 5-dimensional spin-2 irrep, and the second column has 1 occupied entry, corresponding to the spin-0 irrep. The particular partition of \(N\) appearing in each column here has no special meaning, as the actual basis states of the irreps are generally superpositions of these partitions.
Figure 2: Restricted Young diagram showing a basis for the three-dimensional subspace of the totally symmetric subspace of 3 spin-2 systems for which \(J_{z}=2\). The associated states are obtained by converting the number of boxes in each row to a \(J_{z}\) eigenvalue by subtracting \(j=2\). Once symmetrized over the three subsystems, these states form a basis for the \(J_{z}=2\) symmetric subspace.
we find that for the tensor product of any two spin-\(j\) systems,
\[j\otimes j\stackrel{\mathrm{s.s.}}{=}2j\oplus(2j-2)\oplus(2j-4)\oplus\cdots. \tag{27}\]
Simple counting of the total dimensions verifies this and is given in detail in Appendix C.
Similarly, we can use the same approach for more complex cases. For example, consider the case of \(n=3\) and \(j=1\); the possible restricted partitions are given in Table 3. As we can see from the table, we have two occupied columns with \(d=7\) and \(d=3\), which yield the two SU(2) irreps spin 3 and spin 1.
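The counting procedure described above is easily automated. The sketch below is our own implementation of the restricted-partition count (the function names are ours); it reproduces the decompositions discussed in this section, with spins returned as \(2s\) to keep the bookkeeping integer-valued.

```python
# Sketch: SU(2) content of the symmetric subspace of n spin-j systems via
# restricted partitions (at most n parts, each part <= 2j).
from functools import lru_cache

@lru_cache(maxsize=None)
def parts(N, k, cap):
    """Number of partitions of N into at most k parts, each of size <= cap."""
    if N == 0:
        return 1
    if k == 0 or N < 0:
        return 0
    return sum(parts(N - p, k - 1, p) for p in range(1, min(cap, N) + 1))

def sym_decomposition(n, two_j):
    """{2s: multiplicity} of spin-s irreps in the symmetric subspace."""
    n_max = n * two_j                              # maximal total boson number
    def deg(two_m):                                # degeneracy of J_z = m
        return parts((two_m + n_max) // 2, n, two_j)
    return {two_s: deg(two_s) - deg(two_s + 2)
            for two_s in range(n_max, -1, -2)
            if deg(two_s) - deg(two_s + 2) > 0}

print(sym_decomposition(3, 2))   # three spin-1: {6: 1, 2: 1}, i.e. 3 + 1
print(sym_decomposition(3, 7))   # three spin-7/2: spin 9/2 appears twice
```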
Since the specific symmetries we are interested in are only present for half-integer spins [1], the tensor product of two spins will not give us valid codespaces, as it only produces integer spins. Hence the first non-trivial cases of interest are three copies of a half-integer spin. The decompositions into SU(2) irreps for the cases of \(j=3/2,5/2,7/2\), and \(9/2\) are given in Eq. (28), where the bracket on top of a spin represents its multiplicity:

\[\begin{split}\left(\tfrac{3}{2}\right)^{\otimes 3}&\stackrel{\mathrm{s.s.}}{=}\tfrac{9}{2}\oplus\tfrac{5}{2}\oplus\tfrac{3}{2},\\ \left(\tfrac{5}{2}\right)^{\otimes 3}&\stackrel{\mathrm{s.s.}}{=}\tfrac{15}{2}\oplus\tfrac{11}{2}\oplus\tfrac{9}{2}\oplus\tfrac{7}{2}\oplus\tfrac{5}{2}\oplus\tfrac{3}{2},\\ \left(\tfrac{7}{2}\right)^{\otimes 3}&\stackrel{\mathrm{s.s.}}{=}\tfrac{21}{2}\oplus\tfrac{17}{2}\oplus\tfrac{15}{2}\oplus\tfrac{13}{2}\oplus\tfrac{11}{2}\oplus\overbrace{\tfrac{9}{2}}^{2}\oplus\tfrac{7}{2}\oplus\tfrac{5}{2}\oplus\tfrac{3}{2},\\ \left(\tfrac{9}{2}\right)^{\otimes 3}&\stackrel{\mathrm{s.s.}}{=}\tfrac{27}{2}\oplus\tfrac{23}{2}\oplus\tfrac{21}{2}\oplus\tfrac{19}{2}\oplus\tfrac{17}{2}\oplus\overbrace{\tfrac{15}{2}}^{2}\oplus\tfrac{13}{2}\oplus\overbrace{\tfrac{11}{2}}^{2}\oplus\overbrace{\tfrac{9}{2}}^{2}\oplus\tfrac{7}{2}\oplus\tfrac{5}{2}\oplus\tfrac{3}{2}.\end{split} \tag{28}\]
## V Correcting small random SU(2) errors

For codewords respecting the binary octahedral symmetry in the symmetric subspace of several spins, the error-correction conditions of Eq. (24) generalize to

\[\begin{split}\langle 0|\otimes_{i}T_{q_{i}}^{k_{i}}\,|0\rangle=&(-1)^{\sum_{i}k_{i}-\sum_{i}q_{i}}\,\langle 1|\otimes_{i}T_{q_{i}}^{k_{i}}\,|1\rangle\implies\text{only consider }\left(\sum_{i}k_{i}\in\text{ odd and }\sum_{i}q_{i}\equiv 0\text{ mod }4\right),\\ \langle 0|\otimes_{i}T_{q_{i}}^{k_{i}}\,|1\rangle=&(-1)^{\sum_{i}k_{i}-\sum_{i}q_{i}}\,\langle 0|\otimes_{i}T_{q_{i}}^{k_{i}}\,|1\rangle\implies\text{only consider }\left(\sum_{i}k_{i}\in\text{ odd and }\sum_{i}q_{i}\equiv 1\text{ mod }4\right).\end{split} \tag{33}\]
where we used the fact that the tensor product of spherical tensors shifts the total angular momentum by the sum of the individual shifts,
\[\begin{split}&\otimes_{i}T_{q_{i}}^{k_{i}}\,|j_{z}=m_{1},j_{z}=m_{2}, \ldots,j_{z}=m_{N}\rangle\\ &\propto|j_{z}=m_{1}+q_{1},j_{z}=m_{2}+q_{2},\ldots,j_{z}=m_{N} +q_{N}\rangle\,,\end{split} \tag{34}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \(N\) & \(n_{1}\) & \(n_{2}\) & \(n_{3}\) & \(n_{1}\) & \(n_{2}\) & \(n_{3}\) \\ \hline
0 & 0 & 0 & 0 & \\ \hline
1 & 1 & 0 & 0 & & \\ \hline
2 & 1 & 1 & 0 & 2 & 0 & 0 \\ \hline
3 & 1 & 1 & 1 & 2 & 1 & 0 \\ \hline
4 & 2 & 1 & 1 & 2 & 2 & 0 \\ \hline
5 & 2 & 2 & 1 & & \\ \hline
6 & 2 & 2 & 2 & & \\ \hline \end{tabular}
\end{table}
Table 3: The symmetric subspace of \(n=3\) spin \(j=1\). Three values of \(N\) have multiple partitions, resulting in the second column having 3 occupied rows, and giving us a decomposition of the symmetric subspace into one copy of spin 3 and one copy of spin 1.
and hence the spacing arguments we used to get the mod 4 are still valid for a code respecting the binary octahedral group.
Turning our attention back to the case of the Knill-Laflamme conditions for the first-order errors in the angular momentum operators in Eq. (30), the condition is trivially satisfied when \(\sum_{i}k_{i}\) is even. Now using the fact that when one multiplies two spherical tensors of rank \(k_{1},k_{2}\) the decomposition consists of all the spherical tensors with rank \(k\), where \(\left|k_{1}-k_{2}\right|\leq k\leq k_{1}+k_{2}\), the condition
\[\left\langle i\right|T_{q}^{1}T_{q^{\prime}}^{1}\otimes\mathds{1}\otimes \mathds{1}\left|j\right\rangle \tag{35}\]
leaves us with spherical tensors of rank \(0,1,2\). However, from Eq. (30) the rank 0 and 2 cases are trivially satisfied, and hence the only term to check is \(\left\langle i\right|T_{q}^{1}\otimes\mathds{1}\otimes\mathds{1}\left|j\right\rangle\). We recall that, when correcting for total angular momentum errors on binary octahedral codes, it was sufficient to check
\[\left\langle 0\right|J_{z,\text{total}}\left|0\right\rangle=0\,. \tag{36}\]
Since we're considering codes in the symmetric subspace, we have
\[\tfrac{1}{3}\left\langle 0\right|J_{z,\text{total}}\left|0\right\rangle =\left\langle 0\right|J_{z}\otimes\mathds{1}\otimes\mathds{1} \left|0\right\rangle \tag{37}\] \[=\left\langle 0\right|\mathds{1}\otimes J_{z}\otimes\mathds{1} \left|0\right\rangle\] (38) \[=\left\langle 0\right|\mathds{1}\otimes\mathds{1}\otimes J_{z} \left|0\right\rangle \tag{39}\]
so correcting first-order single-system angular momentum errors in a binary octahedral code is equivalent to correcting first-order global angular-momentum errors.
### Case of three \(j=3/2\)
According to Eq. (28) the symmetric subspace of three spin-3/2 systems decomposes into three SU(2) irreps. Faithful two-dimensional binary-octahedral irreps are present both in the \(j=9/2\) and the \(j=5/2\) SU(2) irreps. However, these irreps are incompatible with each other. In the notation of [1], \(j=9/2\) has a single copy of \(\varrho_{4}\) while \(j=5/2\) has a single copy of \(\varrho_{5}\). While this prevents us from engineering a code with binary-octahedral symmetry, one obtains more freedom by relaxing to binary-tetrahedral symmetry [1].
For the binary tetrahedral symmetry, the error condition becomes,
\[\begin{split}\left\langle 0\right|\otimes_{i}T_{q_{i}}^{k_{i}}\left|0\right\rangle=&(-1)^{\sum_{i}k_{i}-\sum_{i}q_{i}}\left\langle 1\right|\otimes_{i}T_{q_{i}}^{k_{i}}\left|1\right\rangle\implies\text{only consider }\left(\sum_{i}k_{i}\in\text{ odd and }\sum_{i}q_{i}\equiv 0\text{ mod }2\right),\\ \left\langle 0\right|\otimes_{i}T_{q_{i}}^{k_{i}}\left|1\right\rangle=&(-1)^{\sum_{i}k_{i}-\sum_{i}q_{i}}\left\langle 0\right|\otimes_{i}T_{q_{i}}^{k_{i}}\left|1\right\rangle\implies\text{only consider }\left(\sum_{i}k_{i}\in\text{ odd and }\sum_{i}q_{i}\equiv 1\text{ mod }2\right).\end{split} \tag{40}\]
The factor of mod 2 appears because the spacing of the binary tetrahedral codewords is 2 instead of the 4 for binary octahedral codewords. However, for the case of first-order errors in the angular momentum, the only non-trivial condition we need to satisfy is \(\left\langle i\right|T_{q}^{1}\otimes\mathds{1}\otimes\mathds{1}\left|j\right\rangle\).
Making this relaxation, we find that \(j=9/2\) and \(j=5/2\) each have a copy of the faithful two-dimensional binary-tetrahedral irrep \(\varrho_{4}\) (again in the notation of the appendix of [1]). The expectation values of \(J_{z}\) for the logical 0s of these two irreps have opposite signs, so we engineer a combined codeword with vanishing \(J_{z}\) expectation value to satisfy the error-correction conditions:
\[\left|0\right\rangle=\frac{1}{\sqrt{16}}\left(\sqrt{5}\left|0\right\rangle_{\frac{9}{2}}+\sqrt{11}\left|0\right\rangle_{\frac{5}{2}}\right), \tag{41}\]
where
\[\begin{split}\left|0\right\rangle_{\frac{9}{2}}&=\frac{\sqrt{6}}{4}\left|\frac{9}{2},\frac{9}{2}\right\rangle+\frac{\sqrt{21}}{6}\left|\frac{9}{2},\frac{1}{2}\right\rangle+\frac{\sqrt{6}}{12}\left|\frac{9}{2},\frac{-7}{2}\right\rangle,\\ \left|0\right\rangle_{\frac{5}{2}}&=-\frac{\sqrt{6}}{6}\left|\frac{5}{2},\frac{5}{2}\right\rangle+\frac{\sqrt{30}}{6}\left|\frac{5}{2},\frac{-3}{2}\right\rangle.\end{split} \tag{42}\]
The projectors onto the irreps in \(j=9/2\) and \(j=5/2\) can be constructed from the character for \(\varrho_{4}\) along with the representatives for the binary-tetrahedral group elements provided by the SU(2) irreps as discussed in [1].
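The cancellation engineered in Eq. (41) is easy to check numerically: in the symmetric subspace a single-spin expectation value equals one third of the collective one (cf. Eqs. (37)-(39)), and \(J_{z,\text{total}}\) is diagonal in the \(|j,m\rangle\) basis of Eq. (42). The sketch below is our own verification.

```python
# Sketch: <J_z x 1 x 1> = <J_z,total>/3 for the codeword of Eq. (41).
import numpy as np

def jz_total(amps):
    """<J_z,total> of a state given as {m: amplitude} in the |j, m> basis."""
    return sum(abs(a)**2 * m for m, a in amps.items())

zero_9_2 = {9/2: np.sqrt(6)/4, 1/2: np.sqrt(21)/6, -7/2: np.sqrt(6)/12}
zero_5_2 = {5/2: -np.sqrt(6)/6, -3/2: np.sqrt(30)/6}

print(jz_total(zero_9_2) / 3)                     # +11/18
print(jz_total(zero_5_2) / 3)                     # -5/18
print((5/16) * jz_total(zero_9_2) / 3             # weights from Eq. (41)
      + (11/16) * jz_total(zero_5_2) / 3)         # 0.0
```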
### Case of three \(j=5/2\)
Next, consider the case of three spin \(5/2\) whose symmetric-subspace decomposition is also given in Eq. (28). Again we are looking for multiple copies of
one of the faithful two-dimensional irreps of the binary-octahedral group. For this case, we have multiple options, and for simplicity we choose the irrep \(\varrho_{4}\) appearing in \(j=9/2\) and \(j=11/2\). The corresponding logical zero states are
\[\begin{split}\left|0\right\rangle_{\frac{11}{2}}&=\frac{\sqrt{21}}{12}\left|\frac{11}{2};\frac{9}{2}\right\rangle-\frac{\sqrt{2}}{4}\left|\frac{11}{2};\frac{1}{2}\right\rangle+\frac{\sqrt{105}}{12}\left|\frac{11}{2};\frac{-7}{2}\right\rangle,\\ \left|0\right\rangle_{\frac{9}{2}}&=\frac{\sqrt{6}}{4}\left|\frac{9}{2};\frac{9}{2}\right\rangle+\frac{\sqrt{21}}{6}\left|\frac{9}{2};\frac{1}{2}\right\rangle+\frac{\sqrt{6}}{12}\left|\frac{9}{2};\frac{-7}{2}\right\rangle.\end{split} \tag{43}\]
These codewords have equal and opposite expectation values
\[\left\langle 0\right|J_{z}\otimes\mathds{1}\otimes\mathds{1} \left|0\right\rangle_{\frac{11}{2}}= -\frac{11}{18} \tag{44}\] \[\left\langle 0\right|J_{z}\otimes\mathds{1}\otimes\mathds{1} \left|0\right\rangle_{\frac{9}{2}}= \frac{11}{18}\]
meaning we get a codeword that corrects for first-order errors by simply taking a uniform superposition:
\[\left|0\right\rangle_{L}=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle_{\frac{ 11}{2}}+\left|0\right\rangle_{\frac{9}{2}}\right). \tag{45}\]
## VI Correcting optical pumping
In the case of errors similar to optical pumping [14], the error operators are of the form \(J_{i}^{l}J_{j}^{m}\), where \(i,j\in\{x,y,z\}\) and \(l+m\leq 2\). We again find it convenient to express these errors in terms of the spherical tensors \(\{T_{q}^{k};-k\leq q\leq k\}\), as they form an orthogonal basis for the errors and can be written in terms of angular-momentum operators as given in Appendix A. Errors of this type acting on a single spin are permutations of
\[\mathcal{E}=A\otimes\mathds{1}\otimes\mathds{1} \tag{46}\]
where \(A\in\{T_{q}^{k};1\leq k\leq 2,-k\leq q\leq k\}\). We see the Knill-Laflamme conditions in Eq. (33) are trivially satisfied except the ones given in Table 4. The errors with total \(\sum k\) mod \(2=0\) are trivially satisfied by Eq. (33).
In our numerical simulations, we observed that we only ever need to satisfy either the diagonal or the off-diagonal conditions for codes respecting the binary octahedral symmetry: if one finds a code satisfying the diagonal conditions, the off-diagonal conditions are trivially satisfied, and vice versa. The same holds for error operators that are linear in the angular-momentum operators. Unlike the case of linear angular-momentum errors, finding the codeword analytically is hard, and one needs to rely on numerical methods; the method is described in detail in Appendix D. Also, since one is interested in local rather than global errors, we need to transform the basis from \(\left|j_{\rm tot},j_{z}^{\rm tot}\right\rangle\rightarrow\left|j_{1},m_{1};j_{2},m_{2};j_{3},m_{3}\right\rangle\) using the Clebsch-Gordan coefficients, where \(\{j_{i},m_{i}\}\) refers to the angular-momentum basis of the individual spins.
From Eq. (28), there are multiple SU(2) irreps within the symmetric subspace of the threefold tensor product of spin-\(j\) systems. Decomposing these further into binary octahedral irreps gives us high multiplicities for the two faithful two-dimensional irreps and therefore many degrees of freedom with which to satisfy the error-correction conditions. For example, consider the case of spin \(j=7/2\). A possible codeword obtained numerically for the \(\varrho_{4}\) irrep [1] is
\[\begin{split}\left|0\right\rangle\propto&\sqrt{\frac{70}{849}}\left|0\right\rangle_{\frac{21}{2}}+\sqrt{\frac{1}{4468}}\left|0\right\rangle_{\frac{17}{2}}^{1}+\sqrt{\frac{338}{1251}}\left|0\right\rangle_{\frac{17}{2}}^{2}\\ &+\sqrt{\frac{112}{479}}\left|0\right\rangle_{\frac{15}{2}}+\sqrt{\frac{515}{1246}}\left|0\right\rangle_{\frac{13}{2}}.\end{split} \tag{47}\]
where \(\left|0\right\rangle_{\frac{17}{2}}^{1}\) and \(\left|0\right\rangle_{\frac{17}{2}}^{2}\) are orthogonal choices for \(\left|0\right\rangle\) within the multiplicity-two \(\varrho_{4}\) irrep of the binary-octahedral representation derived from \(j=17/2\), where the degeneracy is broken by diagonalizing \(J_{z}\) in the subspace spanned by the logical \(\left|0\right\rangle\)s.
Similarly, for the case of \(j=9/2\) we can use the SU(2) irreps given in Eq. (28) and find a code numerically as
\[\begin{split}\left|0\right\rangle\propto&-\sqrt{\frac{2}{439}}\left|0\right\rangle_{\frac{27}{2}}^{1}+\sqrt{\frac{55}{739}}\left|0\right\rangle_{\frac{27}{2}}^{2}-\sqrt{\frac{216}{349}}\left|0\right\rangle_{\frac{23}{2}}^{1}\\ &+\sqrt{\frac{133}{1090}}\left|0\right\rangle_{\frac{23}{2}}^{2}-\sqrt{\frac{237}{1316}}\left|0\right\rangle_{\frac{21}{2}},\end{split} \tag{48}\]
where again we have used the \(\varrho_{4}\) irrep and where superscripts in the codeword represent the multiplicities for \(j=27/2\) and \(j=23/2\) and degeneracy is broken by diagonalizing \(J_{z}\) in the subspace spanned by the logical \(\left|0\right\rangle\)s.
Thus, using the tensor-product structure of a minimum of 3 spins with individual spins \(j>1/2\), one can encode a qubit while correcting the most significant errors in these physical platforms, namely rotation errors and optical pumping. This, in turn, provides an alternate approach
\begin{table}
\begin{tabular}{c|c} \hline diagonal errors & off-diagonal errors \\ \hline \(\left\langle 0\right|T_{0}^{1}\otimes\mathds{1}\otimes\mathds{1}\left|0\right\rangle_{L}\) & \(\left\langle 0\right|T_{1}^{1}\otimes\mathds{1}\otimes\mathds{1}\left|1\right\rangle_{L}\) \\ \(\left\langle 0\right|T_{0}^{2}T_{0}^{1}\otimes\mathds{1}\otimes\mathds{1}\left|0\right\rangle_{L}\) & \(\left\langle 0\right|T_{-1}^{1}\otimes T_{2}^{2}\otimes\mathds{1}\left|1\right\rangle_{L}\) \\ \(\left\langle 0\right|T_{-1}^{1}\otimes T_{2}^{2}\otimes\mathds{1}\left|0\right\rangle_{L}\) & \(\left\langle 0\right|T_{1}^{1}\otimes T_{0}^{2}\otimes\mathds{1}\left|1\right\rangle_{L}\) \\ \(\left\langle 0\right|T_{1}^{1}\otimes T_{2}^{2}\otimes\mathds{1}\left|0\right\rangle_{L}\) & \(\left\langle 0\right|T_{1}^{1}\otimes T_{1}^{2}\otimes\mathds{1}\left|1\right\rangle_{L}\) \\ \(\left\langle 0\right|T_{0}^{1}\otimes T_{0}^{2}\otimes\mathds{1}\left|0\right\rangle_{L}\) & \(\left\langle 0\right|T_{-1}^{1}\otimes T_{-2}^{2}\otimes\mathds{1}\left|1\right\rangle_{L}\) \\ & \(\left\langle 0\right|T_{-1}^{1}T_{-2}^{2}\otimes\mathds{1}\otimes\mathds{1}\left|1\right\rangle_{L}\) \\ \hline \end{tabular}
\end{table}
Table 4: The relevant error-correction conditions we need to satisfy, up to second order, for the tensor product of three spins. The table is constructed using Eq. (24) and the tensor-product structure.
for error correction with very low overhead (in terms of the number of physical systems needed to encode a logical qubit), achieved by targeting the most significant error mechanisms.
## VII Correcting multibody errors with spin \(j=\frac{1}{2}\)
Now we turn our attention to the case of the \(N\)-fold tensor product of \(j=1/2\) systems. Here the only irrep in the symmetric subspace is spin \(N/2\). Hence we shift away from the paradigm of local (one-body) first- and second-order angular-momentum errors and consider non-local (multi-body) errors in this section. For this case, we can work with the collective spin operators,
\[J_{k}=\frac{1}{2}\sum_{i=1}^{N}\sigma_{k,i}, \tag{49}\]
where \(\sigma_{k,i}\) is the Pauli matrix acting on the \(i\)-th location and \(k\in\{x,y,z\}\).
Using the property of the symmetric subspace in Eq. (25) we get
\[\left\langle J_{k}\right\rangle=\frac{N}{2}\langle\sigma_{k,1}\rangle=\frac{N }{2}\langle\sigma_{k,2}\rangle=\ldots=\frac{N}{2}\langle\sigma_{k,N}\rangle. \tag{50}\]
Thus making the expectation value of the collective spin operator vanish makes all the local expectation values vanish which is the condition we studied for small random SU(2) errors in Section V.
Now, looking for qubit codes with the capacity to correct individual qubit errors, one can phrase the requirements in terms of the collective spin operators. For example, consider a code that corrects all single-body Pauli errors, i.e., a code with distance \(3\); the Knill-Laflamme conditions one needs to consider are
\[\begin{split}\left\langle i\right|\sigma_{k,p}\left|j\right\rangle \\ \left\langle i\right|\sigma_{k,p}\sigma_{l,p^{\prime}}\left|j \right\rangle,\end{split} \tag{51}\]
where we used the fact that \(\left(\sigma_{k,i}\right)^{2}=\mathds{1}\), and \(p,p^{\prime}\in\{1,2,\ldots,N\}\), \(k,l\in\{x,y,z\}\). However, if we restrict ourselves to codes respecting the binary octahedral symmetry and use the error-correction conditions derived in Eq. (33), where all the operators have rank \(k_{i}=1\), the only conditions remaining to check are
\[\left\langle i\right|\sigma_{k,p}\left|j\right\rangle=\frac{2}{N}\left\langle i\right|J_{k}\left|j\right\rangle. \tag{52}\]
However, for the binary octahedral symmetry, the only condition on the collective spin operators we need to satisfy is [1]
\[\left\langle 0\right|J_{z}\left|0\right\rangle=0. \tag{53}\]
For example, one can construct a code with parameters \([[n,k,d]]=[[13,1,3]]\) in the \(\varrho_{5}\) irrep of the octahedral symmetry, with codeword
\[\left|0\right\rangle=\frac{\sqrt{105}}{14}\left|0\right\rangle_{0}+\frac{\sqrt{91}}{14}\left|0\right\rangle_{1}, \tag{54}\]
where the states in the \(\left|J,J_{z}\right\rangle\) basis are
\[\begin{split}\left|0\right\rangle_{0}&=\frac{\sqrt{ 910}}{56}\left|\frac{13}{2},\frac{13}{2}\right\rangle-\frac{3\sqrt{154}}{56} \left|\frac{13}{2},\frac{5}{2}\right\rangle-\frac{\sqrt{770}}{56}\left|\frac{ 13}{2},-\frac{3}{2}\right\rangle+\frac{\sqrt{70}}{56}\left|\frac{13}{2},- \frac{11}{2}\right\rangle\\ \left|0\right\rangle_{1}&=\frac{\sqrt{231}}{84} \left|\frac{13}{2},\frac{13}{2}\right\rangle-\frac{3\sqrt{1365}}{84}\left| \frac{13}{2},\frac{5}{2}\right\rangle-\frac{\sqrt{273}}{84}\left|\frac{13}{2},-\frac{3}{2}\right\rangle+\frac{\sqrt{3003}}{84}\left|\frac{13}{2},-\frac{ 11}{2}\right\rangle.\end{split} \tag{55}\]
Next, we consider an error-correcting code that corrects any two Pauli errors, otherwise known as a distance-5 code. We start by considering the correction of global angular-momentum errors up to second order. The octahedral symmetry of the codes reduces the Knill-Laflamme conditions Eq. (24) we need to satisfy to
\[\left\langle i\right|J_{z}\left|j\right\rangle =C_{z}\delta_{ij}, \tag{56}\] \[\left\langle i\right|J_{z}^{3}\left|j\right\rangle =C_{zz}\delta_{ij},\] (57) \[\left\langle i\right|J_{z}J_{x}^{2}\left|j\right\rangle =C_{xz}\delta_{ij},\] (58) \[\left\langle i\right|J_{x}J_{y}J_{z}\left|j\right\rangle =C_{xyz}\delta_{ij}, \tag{59}\]
where \(i,j=\{0,1\}\). Now as we have seen in Section II the condition \(\left\langle i\right|J_{z}\left|j\right\rangle\) is equivalent to just satisfying \(\left\langle 0\right|J_{z}\left|0\right\rangle=0\). Again invoking the support structure of octahedral codes in Section II and the operator \(U_{X}\) defined in Eq. (12) yields
\[\begin{split}\left\langle 0\right|J_{z}^{3}\left|1\right\rangle& =\left\langle 1\right|J_{z}^{3}\left|0\right\rangle=0\\ \left\langle 0\right|J_{z}^{3}\left|0\right\rangle&=- \left\langle 1\right|J_{z}^{3}\left|1\right\rangle.\end{split} \tag{60}\]
Thus the condition needed to satisfy Eq. (57) reduces to \(\left\langle 0\right|J_{z}^{3}\left|0\right\rangle=0\).
Now using the fact that \(J_{\pm}=J_{x}\pm iJ_{y}\), we get
\[J_{x}^{2}=\frac{1}{4}\left(J_{+}^{2}+J_{-}^{2}+2j(j+1)\mathds{1}-2J_{z}^{2}\right), \tag{61}\]
and therefore \(J_{z}J_{x}^{2}=\frac{1}{4}\left(J_{z}J_{+}^{2}+J_{z}J_{-}^{2}+2j(j+1)J_{z}-2J_{z}^{3}\right)\). Again invoking the support property of the binary octahedral symmetry yields
\[\begin{split}\langle 0|\,J_{z}J_{\pm}^{2}\,|1\rangle& =\langle 1|\,J_{z}J_{\pm}^{2}\,|0\rangle=0\\ \langle 0|\,J_{z}J_{\pm}^{2}\,|0\rangle&=\langle 1|\,J_{z} J_{\pm}^{2}\,|1\rangle=0.\end{split} \tag{62}\]
Thus to satisfy Eq. (58) it is sufficient to satisfy Eq. (57). Now for Eq. (59) one can use
\[J_{x}J_{y}=\frac{-i}{4}\left(J_{+}^{2}-J_{-}^{2}-2J_{z}\right) \tag{63}\]
to show \(J_{x}J_{y}J_{z}=\frac{-i}{4}\left(J_{+}^{2}J_{z}-J_{-}^{2}J_{z}-2J_{z}^{2}\right)\). However, from Eq. (62), and using
\[\begin{split}\langle 0|\,J_{z}^{2}\,|1\rangle& =\langle 1|\,J_{z}^{2}\,|0\rangle=0\\ \langle 0|\,J_{z}^{2}\,|0\rangle&=\langle 1|\,J_{z}^{2} \,|1\rangle\end{split} \tag{64}\]
from [1], we see that Eq. (59) is trivially satisfied. Thus, to correct all errors up to second order in the angular momentum, one only needs to satisfy
\[\begin{split}\langle 0|\,J_{z}\,|0\rangle&=0\\ \langle 0|\,J_{z}^{3}\,|0\rangle&=0.\end{split} \tag{65}\]
Armed with this result, we turn our attention to the local errors that actually concern us. For a collection of spin \(1/2\) systems,
\[\begin{split} J_{z}^{2}=&\frac{1}{4}\sum_{i,j} \sigma_{z,i}\sigma_{z,j}\\ =&\frac{1}{4}\sum_{i=j}\mathds{1}+\frac{1}{4}\sum_{i \neq j}\sigma_{z,i}\sigma_{z,j}.\end{split} \tag{66}\]
Again using the fact that \(\left(\sigma_{z,i}\right)^{2}=\mathds{1}\) we get
\[\begin{split}J_{z}^{3}&=\frac{1}{8}\sum_{i,j,k}\sigma_{z,i}\sigma_{z,j}\sigma_{z,k}\\ &=\frac{1}{8}\left((3N-2)\sum_{k}\sigma_{z,k}+\sum_{i\neq j\neq k}\sigma_{z,i}\sigma_{z,j}\sigma_{z,k}\right).\end{split} \tag{67}\]
For a state in the symmetric subspace for \(N\) spins,
\[\begin{split}\langle J_{z}^{3}\rangle&=\frac{1}{8}\left((3N-2)\sum_{k}\langle\sigma_{z,k}\rangle+\sum_{i\neq j\neq k}\langle\sigma_{z,i}\sigma_{z,j}\sigma_{z,k}\rangle\right)\\ &=\frac{3N-2}{4}\langle J_{z}\rangle+\frac{N(N-1)(N-2)}{8}\langle\sigma_{z,1}\sigma_{z,2}\sigma_{z,3}\rangle\end{split} \tag{68}\]
Thus, if we have a code that satisfies Eq. (65), the code also satisfies the Knill-Laflamme conditions for errors of the form \(\sigma_{z,i}\sigma_{z,j}\sigma_{z,k}\). Now consider a general Knill-Laflamme condition,
\[\langle i|\,\sigma_{p,k}\sigma_{q,l}\sigma_{r,m}\,|j\rangle\;, \tag{69}\]
where \(p,q,r\in\{x,y,z\}\) and \(k,l,m\in\{1,2,\ldots,N\}\) for \(N\) spin-\(1/2\) systems. One can again look at the collective spin operators and the expansions of \(J_{x}J_{y}J_{z}\) and \(J_{z}J_{x}^{2}\) in terms of Pauli operators. We have
\[J_{x}J_{y}J_{z}=\frac{1}{8}\sum_{i,j,k}\sigma_{x,i}\sigma_{y,j}\sigma_{z,k}. \tag{70}\]
Now, using the fact that \(\sigma_{x}=\sigma_{+}+\sigma_{-}\) and \(\sigma_{y}=-i\left(\sigma_{+}-\sigma_{-}\right)\), we obtain
\[\begin{split} J_{x}J_{y}J_{z}&=\frac{-i}{8}\left( \sum_{i,j,k}\sigma_{+,i}\sigma_{+,j}\sigma_{z,k}-\sigma_{-,i}\sigma_{-,j} \sigma_{z,k}\right)\\ &+\frac{i}{8}\left(\sum_{i,j,k}\sigma_{+,i}\sigma_{-,j}\sigma_{z, k}-\sigma_{-,i}\sigma_{+,j}\sigma_{z,k}\right).\end{split} \tag{71}\]
The Knill-Laflamme condition for the first two terms is trivially satisfied using Eq. (33), and we need not consider the cases where any of \(i,j,k\) coincide, since the total rank \(\sum_{i}k_{i}\) is even in those cases and they are again trivially satisfied by Eq. (33). Thus the only non-trivial terms to consider are
\[\begin{split}&\frac{i}{8}\left(\sum_{i,j,k}\sigma_{+,i}\sigma_{-,j} \sigma_{z,k}-\sigma_{-,i}\sigma_{+,j}\sigma_{z,k}\right)\\ &=i[J_{+},J_{-}]J_{z}\\ &=-J_{z}^{2}.\end{split} \tag{72}\]
Thus the condition for \(\sigma_{x,i}\sigma_{y,j}\sigma_{z,k}\) is satisfied if the global condition for \(J_{z}^{2}\) is satisfied, and for the binary octahedral symmetry the condition for \(J_{z}^{2}\) is trivially satisfied. Now we can look at the expansion of \(J_{z}J_{x}^{2}\), and we get
\[J_{z}J_{x}^{2}=\frac{1}{8}\sum_{i,j,k}\sigma_{z,i}\sigma_{x,j}\sigma_{x,k}, \tag{73}\]
Again expanding \(\sigma_{x}\) and ignoring the trivially satisfied cases, we are left with the terms
\[\begin{split}&\frac{1}{8}\left(\sum_{i,j,k}\sigma_{+,i}\sigma_{-,j} \sigma_{z,k}+\sigma_{-,i}\sigma_{+,j}\sigma_{z,k}\right)\\ &=2J_{z}(J_{x}^{2}+J_{y}^{2})\\ &=2j(j+1)J_{z}-2J_{z}^{3},\end{split} \tag{74}\]
where \(j=N/2\) is the spin of the totally symmetric subspace. Thus, if we satisfy the global conditions for \(J_{z}^{3}\) and \(J_{z}\), the condition for \(\sigma_{z,i}\sigma_{x,j}\sigma_{x,k}\) is satisfied; hence the only conditions we need to check to correct all errors up to distance \(5\) are the global conditions given in Eq. (65).
The minimum spin for which the conditions for \(J_{z}\) and \(J_{z}^{3}\) can be satisfied is \(j=25/2\) in the \(\varrho_{4}\) irrep, i.e., we need \(25\) qubits, forming a \([[25,1,5]]\) code. The codeword is approximately
\[\begin{split}|0\rangle\propto-\sqrt{\frac{267}{1213}}\,|0\rangle _{1}+\sqrt{\frac{701}{1457}}\,|0\rangle_{2}+\sqrt{\frac{337}{1128}}\,|0 \rangle_{3}\,,\end{split} \tag{75}\]
where
\[\begin{split}\ket{0}_{1}&=-\sqrt{\frac{1377}{4132}}\ket{\frac{25}{2},\frac{25}{2}}-\sqrt{\frac{1}{674}}\ket{\frac{25}{2},\frac{17}{2}}-\sqrt{\frac{109}{1169}}\ket{\frac{25}{2},\frac{9}{2}}-\sqrt{\frac{803}{1918}}\ket{\frac{25}{2},\frac{1}{2}}\\ &\quad-\sqrt{\frac{103}{690}}\ket{\frac{25}{2},-\frac{7}{2}}-\sqrt{\frac{1}{263}}\ket{\frac{25}{2},-\frac{13}{2}}-\sqrt{\frac{1}{3608}}\ket{\frac{25}{2},-\frac{21}{2}},\\ \ket{0}_{2}&=\sqrt{\frac{1}{4402}}\ket{\frac{25}{2},\frac{25}{2}}-\sqrt{\frac{2}{839}}\ket{\frac{25}{2},\frac{17}{2}}-\sqrt{\frac{293}{983}}\ket{\frac{25}{2},\frac{9}{2}}-\sqrt{\frac{11}{1264}}\ket{\frac{25}{2},\frac{1}{2}}\\ &\quad+\sqrt{\frac{913}{2925}}\ket{\frac{25}{2},-\frac{7}{2}}+\sqrt{\frac{21}{412}}\ket{\frac{25}{2},-\frac{13}{2}}-\sqrt{\frac{1069}{3264}}\ket{\frac{25}{2},-\frac{21}{2}},\\ \ket{0}_{3}&=-\sqrt{\frac{1}{61408}}\ket{\frac{25}{2},\frac{25}{2}}+\sqrt{\frac{1750}{2781}}\ket{\frac{25}{2},\frac{17}{2}}-\sqrt{\frac{325}{3548}}\ket{\frac{25}{2},\frac{9}{2}}+\sqrt{\frac{43}{763}}\ket{\frac{25}{2},\frac{1}{2}}\\ &\quad-\sqrt{\frac{47}{551}}\ket{\frac{25}{2},-\frac{7}{2}}+\sqrt{\frac{183}{1349}}\ket{\frac{25}{2},-\frac{13}{2}}+\sqrt{\frac{2}{1011}}\ket{\frac{25}{2},-\frac{21}{2}}.\end{split} \tag{76}\]
The distance-5 binary octahedral code has the same code parameters as the distance-5 surface code [22; 2]. These codes have another interesting correspondence, in that they both belong to efficiently representable subsets of the full Hilbert space. The codes we study in this article all belong to the symmetric subspace, which is spanned by the Dicke basis and has dimension \(N+1\), linear instead of exponential in the number of qubits \(N\). The codewords for the surface code are stabilizer states, which we can efficiently represent by specifying a generating set of stabilizers of size \(N-1\)[23]. One notable difference is that, unlike the surface code, the binary octahedral codes have full transversal single-qubit Cliffords.
Using the same approach as for the distance-3 and distance-5 codes, one can build codes with higher distances. In Fig. 3 the number of physical qubits as a function of distance is given for both the binary octahedral (Clifford) codes and the surface codes. Both scale quadratically in the distance, though the Clifford codes have an improved constant factor.
One can use the binary tetrahedral symmetry to find codewords with even fewer qubits. For example, one can construct a \([[7,1,3]]\) code with codeword

\[\ket{0}=\sqrt{\frac{7}{16}}\ket{0}_{0}+\sqrt{\frac{9}{16}}\ket{0}_{1}, \tag{77}\]
where
\[\begin{split}\ket{0}_{0}&=-\frac{\sqrt{3}}{2}\ket{ \frac{7}{2},\frac{5}{2}}+\frac{1}{2}\ket{\frac{7}{2},-\frac{3}{2}},\\ \ket{0}_{1}&=\sqrt{\frac{7}{12}}\ket{\frac{7}{2}, \frac{1}{2}}+\sqrt{\frac{5}{12}}\ket{\frac{7}{2},-\frac{7}{2}}.\end{split} \tag{78}\]
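As a quick numerical sanity check (ours, not from the original work), the codeword of Eqs. (77)-(78) is normalized and satisfies the first-order condition \(\left\langle 0\right|J_{z}\left|0\right\rangle=0\) required of a distance-3 code; note the check passes only with the \(\sqrt{9/16}\) weight in Eq. (77):

```python
import numpy as np

j = 3.5
m = j - np.arange(8)                                  # m = 7/2, 5/2, ..., -7/2
amps = np.zeros(8)
amps[m == 2.5] = -np.sqrt(7 / 16) * np.sqrt(3) / 2    # |7/2,  5/2> component
amps[m == -1.5] = np.sqrt(7 / 16) / 2                 # |7/2, -3/2>
amps[m == 0.5] = np.sqrt(9 / 16) * np.sqrt(7 / 12)    # |7/2,  1/2>
amps[m == -3.5] = np.sqrt(9 / 16) * np.sqrt(5 / 12)   # |7/2, -7/2>
print(np.sum(amps ** 2))      # -> 1.0  (normalization)
print(amps ** 2 @ m)          # -> 0.0  (<0|Jz|0> = 0, the distance-3 condition)
```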
The smallest distance-3 stabilizer code that has transversal Cliffords is the Steane code [24], with code parameters \([[7,1,3]]\); it also carries binary octahedral symmetry, since it has transversal Clifford operators. The Steane code lies outside our classification as it does not live entirely within the symmetric subspace (being a superposition of spin \(1/2\) and spin \(7/2\)), suggesting that more interesting codes might be found by looking beyond the symmetric subspace.
## VIII Conclusion and outlook
In this work, we focused on using binary octahedral symmetry to construct useful quantum error-correcting codes extending the ideas in [1]. In [1], the codes were designed to protect against SU(2) errors in a single large spin. In this article, we developed a technique for designing codes for multiple copies of spins. We leveraged the multiple SU(2) irreps within the symmetric subspace of the tensor product of several large spins to correct for the additional physically relevant error channel of tensor light shifts. This resulted in numerically derived codes correcting tensor light shifts in three copies of spin \(j=7/2\) and in three copies of spin \(j=9/2\).
We derived general simplified error-correction conditions, Eqs. (24) and (33), for correcting errors at arbitrary order using the structure of spherical tensors, which are polynomials of the angular-momentum operators and are well studied in spin systems.

Figure 3: **Scaling of distance for binary octahedral codes.** The figure gives the number of physical qubits required for correcting errors up to a distance \(d\) for the rotated surface code and the binary octahedral code.
We additionally studied the case of qubits (\(j=1/2\)) and extended the framework to multi-body errors. Again we used the symmetric subspace for a large number of spin \(1/2\) systems and used the symmetries to find codes with distance \(3\) for \(n=7\) and distance \(5\) for \(n=25\). The distance-\(5\) code contrasts interestingly with the distance-\(5\) surface code, which has the same code parameters but gives up the transversal Cliffords of the binary octahedral code in favor of its stabilizer structure.
The techniques outlined in this work can easily be extended to develop codes with higher distances and octahedral symmetry. An important open question is whether one can develop fault-tolerant schemes for these kinds of codes, as their highly non-Abelian nature makes applying existing fault-tolerant strategies difficult. Finally, it would be interesting to explore whether binary octahedral codes might have use as non-stabilizer versions of the metrological codes discussed in [25].
###### Acknowledgements.
The authors would like to acknowledge fruitful discussions with Ivan Deutsch and Milad Marvian about quantum error correction for spin systems. S.O. would like to acknowledge useful discussions with Tyler Thurtled during various stages of the work. S.O. also thanks Pablo Poggi and Karthik Chinni for useful discussions about spherical tensors and their interesting properties, in particular Karthik for helping numerically create the spherical tensors as a polynomial of angular momentum operators. The derivation of the counting of parameters for finding the symmetric subspace of two spins in the appendix was proven as a follow-up to a discussion with Austin Daniel. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator (QSA).
## Appendix A Spherical Tensors
The spherical tensor operators for a spin \(j\) are defined in terms of the commutation relations [16; 17],
\[\begin{split}\left[J_{z},T_{q}^{k}\right]&=qT_{q}^{ k}\\ \left[J_{\pm},T_{q}^{k}\right]&=\sqrt{k(k+1)-q(q\pm 1)}T_{q \pm 1}^{k}\end{split} \tag{34}\]
Using the above relations the irreducible spherical tensors can be explicitly written in terms of the angular momentum basis as [18; 16],
\[T_{q}^{k}(j)=\sqrt{\frac{2k+1}{2j+1}}\sum_{m}\left\langle j,m+q|k,q;j,m \right\rangle\left|j,m+q\right\rangle\!\!\left\langle j,m\right|, \tag{35}\]
where \(0\leq k\leq 2j\) and \(-k\leq q\leq k\). The spherical tensor operators of rank \(k\) can be expressed as order-\(k\) polynomials in the angular-momentum operators [18; 26]. The spherical tensor operators also form an orthonormal basis for the operators on an SU(2) irrep with respect to the Hilbert-Schmidt inner product:
\[\mathrm{Tr}\!\left((T_{q_{1}}^{k_{1}})^{\dagger}T_{q_{2}}^{k_{2}}\right)=\delta_{k_{1},k_ {2}}\delta_{q_{1},q_{2}}. \tag{36}\]
Now consider the unitary transformation \(U_{X}=\exp(-i\pi J_{x})\), which can also be written in the angular-momentum basis as
\[U_{X}=-i\sum_{m=-j}^{j}|j,m\rangle\!\langle j,-m| \tag{37}\]
Thus the action of the unitary operator on the irreducible spherical tensor gives,
\[U_{X}T_{q}^{k}U_{X}^{\dagger}=\sqrt{\frac{2k+1}{2j+1}}\sum_{m=-j}^{j}\left\langle j,m+q|k,q;j,m\right\rangle \left|j,-m-q\right\rangle\!\!\left\langle j,-m\right| \tag{38}\]
Now using the transformation \(m\rightarrow-m\) and using the fact that
\[\left\langle j,m+q|k,q;j,m\right\rangle=(-1)^{k}\left\langle j,-m-q|k,-q;j,-m \right\rangle, \tag{39}\]
we get
\[U_{X}T_{q}^{k}U_{X}^{\dagger}= (-1)^{k}\sqrt{\frac{2k+1}{2j+1}}\sum_{m}\left\langle j,m-q|k,-q;j,m\right\rangle\left|j,m -q\right\rangle\!\!\left\langle j,m\right| \tag{40}\] \[= (-1)^{k}T_{-q}^{k}.\]
Thus the action of \(U_{X}\) on the spherical tensor operators is to flip the sign of \(q\) and to add a rank-dependent phase of \(\pm 1\) to the operator.
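The constructions above are easy to check numerically. The sketch below is ours (not from the paper): it builds \(T_{q}^{k}(j)\) from Eq. (35) with SymPy's Clebsch-Gordan routine (`CG(j1, m1, j2, m2, j3, m3)` represents \(\langle j_{1}m_{1};j_{2}m_{2}|j_{3}m_{3}\rangle\)), then verifies the orthonormality of Eq. (36) and the reflection property of Eq. (40), with \(U_{X}\) obtained by exponentiating the matrix of \(J_{x}\):

```python
import numpy as np
from scipy.linalg import expm
from sympy import Rational, sqrt as sym_sqrt
from sympy.physics.quantum.cg import CG

def spherical_tensor(j, k, q):
    """Matrix of T^k_q(j) in the |j, m> basis, m = j, j-1, ..., -j (Eq. 35)."""
    ms = [j - i for i in range(int(2 * j + 1))]
    T = np.zeros((len(ms), len(ms)))
    pref = float(sym_sqrt(Rational(2 * k + 1) / (2 * j + 1)))
    for col, m in enumerate(ms):
        if abs(m + q) > j:
            continue
        cg = CG(k, q, j, m, j, m + q).doit()   # <j, m+q | k, q; j, m>
        T[ms.index(m + q), col] = pref * float(cg)
    return T

def Jx(j):
    """J_x in the same basis, built from the lowering operator J_-."""
    m = float(j) - np.arange(int(2 * j + 1))
    Jm = np.diag(np.sqrt(float(j) * (float(j) + 1) - m[:-1] * (m[:-1] - 1)), -1)
    return 0.5 * (Jm + Jm.T)

j = Rational(7, 2)
T10, T20, T21 = (spherical_tensor(j, *kq) for kq in [(1, 0), (2, 0), (2, 1)])
print(np.trace(T10.T @ T10), np.trace(T10.T @ T20))   # -> 1.0, 0.0  (Eq. 36)
UX = expm(-1j * np.pi * Jx(j))                        # U_X = exp(-i pi J_x)
lhs = UX @ T21 @ UX.conj().T
print(np.allclose(lhs, (-1) ** 2 * spherical_tensor(j, 2, -1)))  # Eq. (40)
```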
## Appendix B Error correction condition
The logical Pauli Z operator on an irrep \(\varrho\) of the binary octahedral group is given by [1],
\[\sigma_{z}=P_{\varrho}(i\exp(-i\pi J_{z}))P_{\varrho} \tag{41}\]
Logical \(\left|0\right\rangle\) is taken to be a \(+1\) eigenstate of the logical Pauli Z operator. The projector for the binary octahedral group is given as
\[P_{\varrho}=\frac{\mathrm{dim}\varrho}{\left|2\mathrm{O}\right|}\sum_{g\in 2 \mathrm{O}}\chi_{\varrho}(g)^{*}D(g). \tag{42}\]
where 2O is the single-qubit Clifford group [27], also called the binary octahedral group. Now from [1], \(\chi_{\varrho}(g)\) for the SU(2) irreps of interest are real. For the binary octahedral group, we also have that every representative \(D(g)\) is in the same conjugacy class as \(D(g)^{\dagger}\), \(D(g)^{T}\) and \(D(g)^{*}\). Restricting the sum to a fixed conjugacy class \([g]\) gives
\[\tfrac{1}{4}\chi_{\varrho}(g)^{*}\sum_{h\in[g]}\left(D(h)+D(h)^{\dagger}+D(h)^ {T}+D(h)^{*}\right). \tag{101}\]
The term for each conjugacy class is real and symmetric, since \(\chi_{\varrho}\) is real and \(D(g)+D(g)^{\dagger}+D(g)^{T}+D(g)^{*}\) is manifestly real and symmetric. Thus \(P_{\varrho}\) is a real symmetric matrix. The term sandwiched by the projectors in Eq. (41) is also real and symmetric for half-integer spins,
\[i\exp(-i\pi J_{z})=(i\exp(-i\pi J_{z}))^{\dagger}=(i\exp(-i\pi J_{z}))^{T}\,, \tag{102}\]
hence \(\sigma_{z}\) is a real-symmetric operator.
Now the eigenvector of a real symmetric matrix (\(A\)) can be found by solving the eigenvalue equation,
\[\left(A-\lambda\mathds{1}\right)\ket{\psi}=0\,. \tag{103}\]
Since the eigenvalue \(\lambda\) is real (as \(A\) is Hermitian), solving this system by Gaussian elimination over the reals yields a real vector; hence the eigenvectors of a real symmetric matrix can be chosen real (up to an overall constant, which is unimportant).
Consider the following expectation value for two states \(\ket{\psi}=\sum_{i}\alpha_{i}\ket{i}\) and \(\ket{\phi}=\sum_{i}\beta_{i}\ket{i}\), where \(\ket{i}\) is in the angular momentum basis,
\[\bra{\psi}T_{-q}^{k}(j)\ket{\phi}=d_{j}^{k}\sum_{i,i^{\prime},m}\alpha_{i}^{*}\beta_{i^{\prime}}C_{j,m,j,m-q}^{k,-q}\left\langle i^{\prime}|j,m-q\right\rangle\left\langle j,m|i\right\rangle \tag{104}\]
where \(d_{j}^{k}=\sqrt{\frac{2k+1}{2j+1}}\) and
\[C_{j_{1},m_{1},j_{2},m_{2}}^{j_{3},m_{3}}=\bra{j_{3},m_{3}}\!{j_{1},m_{1};j_{2 },m_{2}} \tag{105}\]
is the Clebsch-Gordan coefficient. Now, using the property that \(\left\langle i|j,m+q\right\rangle=\left\langle j,m+q|i\right\rangle\), as both are elements of the angular-momentum basis, we obtain
\[\bra{\psi}T_{-q}^{k}\ket{\phi}=d_{j}^{k}\sum_{i,i^{\prime},m}\alpha_{i}^{*}\beta_{i^{\prime}}C_{j,m,j,m-q}^{k,-q}\left\langle j,m-q|i^{\prime}\right\rangle\left\langle i|j,m\right\rangle. \tag{106}\]
Also, by transforming the above equation by \(m\to m+q\) we get,
\[\bra{\psi}T_{-q}^{k}\ket{\phi}=d_{j}^{k}\sum_{i,i^{\prime},m}\alpha_{i}^{*}\beta_{i^{\prime}}C_{j,m+q,j,m}^{k,-q}\left\langle j,m|i^{\prime}\right\rangle\left\langle i|j,m+q\right\rangle. \tag{107}\]
Now using the property of the Clebsch-Gordan coefficients
\[C_{j_{1},m_{1},j_{2},m_{2}}^{j_{3},m_{3}}=(-1)^{j_{1}+j_{2}+j_{3}}C_{j_{2},m_{ 2},j_{1},m_{1}}^{j_{3},m_{3}}, \tag{108}\]
we get
\[\bra{\psi}T_{-q}^{k}\ket{\phi}=(-1)^{k}d_{j}^{k}\sum_{i,i^{\prime},m}\alpha_{i}^{*}\beta_{i^{\prime}}C_{j,m,j,m+q}^{k,-q}\left\langle j,m|i^{\prime}\right\rangle\left\langle i|j,m+q\right\rangle. \tag{109}\]
Again using another property of Clebsch-Gordan coefficients,
\[C_{j_{1},m_{1},j_{2},m_{2}}^{j_{3},m_{3}}=\sqrt{\frac{2j_{1}+1}{2j_{2}+1}}(-1)^ {j_{2}+m_{2}}C_{j_{1},m_{1},j_{2},m_{2}}^{j_{3},-m_{3}} \tag{110}\]
we get,
\[\bra{\psi}T_{-q}^{k}\ket{\phi}=(-1)^{q}d_{j}^{k}\sum_{i,i^{\prime},m}\alpha_{i}^{*}\beta_{i^{\prime}}C_{j,m,j,m+q}^{k,q}\left\langle j,m|i^{\prime}\right\rangle\left\langle i|j,m+q\right\rangle. \tag{111}\]
Since the computational-basis codewords in the binary octahedral case are real, the amplitudes \(\alpha_{i}\) and \(\beta_{i}\) are real whenever \(\ket{\psi}\) and \(\ket{\phi}\) are computational-basis codewords, as is the case when checking error-correction conditions, and thus
\[\bra{\psi}T_{-q}^{k}\ket{\phi}=(-1)^{q}\bra{\phi}T_{q}^{k}\ket{\psi}. \tag{112}\]
## Appendix C Symmetric subspace under the tensor product of two spins
It is known that the SU(2) irreps under the addition of two spin-\(j\) systems are given by

\[j\otimes j=2j\oplus(2j-1)\oplus(2j-2)\oplus\cdots\oplus 0. \tag{113}\]
Focusing our attention on the symmetric subspace, as in Eq. (27), we numerically found that the symmetric subspace of two spin-\(j\) systems is composed of every other SU(2) subspace, starting from the highest possible angular momentum. To verify this, one can count the dimensions of these subspaces. First, consider the case where \(2j\) is even (integer \(j\)); the total dimension of the alternating SU(2) subspaces is
\[\dim =\sum_{k=0}^{j}4j+1-4k \tag{114}\] \[=4j(j+1)+j+1-2j(j+1)\] \[=2j^{2}+3j+1=\frac{(2j+1)(2j+2)}{2}\] \[=\dim\left(S_{2}(2j+1)\right).\]
Now for the case of odd multiples of \(1/2\) we have,
\[\begin{split}\dim&=\sum_{k=0}^{j-\frac{1}{2}}\left(4j+1-4k\right)\\ &=4j\left(j+\frac{1}{2}\right)+j+\frac{1}{2}-2\left(j-\frac{1}{2}\right)\left(j+\frac{1}{2}\right)\\ &=4j^{2}+2j+j+\frac{1}{2}-2j^{2}+\frac{1}{2}\\ &=2j^{2}+3j+1=\frac{(2j+1)(2j+2)}{2}\\ &=\dim\left(S_{2}(2j+1)\right),\end{split} \tag{115}\]
thus, for both even and odd multiples of spin \(1/2\), the dimension of the symmetric subspace matches the total dimension of the alternating SU(2) subspaces, taken every other one starting from the highest possible angular momentum.
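This counting can also be verified numerically for both integer and half-integer \(j\); a quick check of ours (the loop range is arbitrary):

```python
import numpy as np

# Sum the dimensions 2s+1 of the interleaved irreps s = 2j, 2j-2, ..., and
# compare with dim S_2(2j+1) = (2j+1)(2j+2)/2.
for two_j in range(1, 21):                  # j = 1/2, 1, ..., 10
    j = two_j / 2
    interleaved = sum(2 * s + 1 for s in np.arange(2 * j, -0.5, -2.0))
    assert interleaved == (2 * j + 1) * (2 * j + 2) / 2, j
print("dimension counting holds for j = 1/2, ..., 10")
```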
## Appendix D Algorithm for finding the codeword for the case of second order errors
The simple algorithm for finding the codeword follows these three steps,
**Step I:**
Write the codewords as
\[\left|0\right\rangle_{L}=\sum_{i=1}^{n}c_{i}\left|0\right\rangle_{i},\left|1 \right\rangle_{L}=\sum_{i=1}^{n}c_{i}\left|1\right\rangle_{i}, \tag{11}\]
where \(i\) corresponds to the two-dimensional qubit spaces one has access to and \(c_{i}\in\mathbf{R}\).
**Step II:**
Define the cost function,
\[\mathcal{F}[\mathbf{c}]=\sum_{\text{constraints}}\left|f(\mathbf{c})\right|, \tag{12}\]
where \(f(\mathbf{c})\) is the value we get for each constraint we need to satisfy according to the Knill-Laflamme conditions in Eq. (24).
**Step III:**
Minimize the cost function over \(\mathbf{c}\in\mathbf{R}^{n}\) to obtain the optimal coefficients,

\[\mathbf{c}_{\text{opt}}=\underset{\mathbf{c}\in\mathbf{R}^{n}}{\text{arg min}}\ \mathcal{F}[\mathbf{c}], \tag{13}\]
which in turn gives the codewords as,
\[\left|0\right\rangle_{L}=\sum_{i}c_{i}^{\text{opt}}\left|0\right\rangle_{i}, \left|1\right\rangle_{L}=\sum_{i}c_{i}^{\text{opt}}\left|1\right\rangle_{i}. \tag{14}\]
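The three steps translate directly into a short numerical routine. Below is a minimal sketch of ours; the constraint functions encoding the Knill-Laflamme conditions of Eq. (24) must be supplied by the user, and the optimizer choice (Nelder-Mead with random restarts) is our assumption, not necessarily what was used originally:

```python
import numpy as np
from scipy.optimize import minimize

def find_codeword(constraints, n, restarts=20, seed=0):
    """Step I: parametrize |0>_L = sum_i c_i |0>_i by c in R^n.
    Step II: cost F(c) = sum over constraints of |f(c)|.
    Step III: minimize F to obtain c_opt."""
    rng = np.random.default_rng(seed)

    def cost(c):
        return sum(abs(f(c)) for f in constraints)

    best = min(
        (minimize(cost, rng.standard_normal(n), method="Nelder-Mead")
         for _ in range(restarts)),
        key=lambda r: r.fun,
    )
    c_opt = best.x / np.linalg.norm(best.x)   # normalize the amplitudes
    return c_opt, best.fun                    # residual ~ 0 for a valid code
```

For instance, for the distance-3 code of Eq. (77), \(n=2\) and the first-order condition \(\left\langle 0\right|J_{z}\left|0\right\rangle=0\) is the relevant constraint.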
|
2307.01403 | Learning Multi-Agent Communication with Contrastive Learning | Communication is a powerful tool for coordination in multi-agent RL. But
inducing an effective, common language is a difficult challenge, particularly
in the decentralized setting. In this work, we introduce an alternative
perspective where communicative messages sent between agents are considered as
different incomplete views of the environment state. By examining the
relationship between messages sent and received, we propose to learn to
communicate using contrastive learning to maximize the mutual information
between messages of a given trajectory. In communication-essential
environments, our method outperforms previous work in both performance and
learning speed. Using qualitative metrics and representation probing, we show
that our method induces more symmetric communication and captures global state
information from the environment. Overall, we show the power of contrastive
learning and the importance of leveraging messages as encodings for effective
communication. | Yat Long Lo, Biswa Sengupta, Jakob Foerster, Michael Noukhovitch | 2023-07-03T23:51:05Z | http://arxiv.org/abs/2307.01403v3 | # Learning to Communicate using Contrastive Learning
###### Abstract
Communication is a powerful tool for coordination in multi-agent RL. But inducing an effective, common language is a difficult challenge, particularly in the decentralized setting. In this work, we introduce an alternative perspective where communicative messages sent between agents are considered as different incomplete views of the environment state. By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning to maximize the mutual information between messages of a given trajectory. In communication-essential environments, our method outperforms previous work in both performance and learning speed. Using qualitative metrics and representation probing, we show that our method induces more symmetric communication and captures global state information from the environment. Overall, we show the power of contrastive learning and the importance of leveraging messages as encodings for effective communication.
## 1 Introduction
Communication is a key capability necessary for effective coordination among agents in partially observable environments. In multi-agent reinforcement learning (MARL) (Sutton and Barto, 2018),
Figure 1: Multi-view contrastive learning and CACL, contrastive learning for multi-agent communication. In multi-view learning, augmentations of the original image or “views” are positive samples to contrastively learn features. In CACL, different agents’ views of the same environment states are considered positive samples and messages are contrastively learned as encodings of the state.
agents can use their actions to transmit information (Grupen et al., 2020) but continuous or discrete messages on a communication channel (Foerster et al., 2016), i.e., linguistic communication (Lazaridou and Baroni, 2020), are more flexible and powerful because they can convey more complex concepts. To successfully communicate, a speaker and a listener must share a common language with a shared understanding of the symbols being used (Skyrms, 2010; Dafoe et al., 2020). Emergent communication or learning a common protocol (Wagner et al., 2003; Lazaridou and Baroni, 2020), is a thriving research direction but most works focus on simple, single-turn, sender-receiver games (Lazaridou et al., 2018; Chaabouni et al., 2019). In more visually and structurally complex MARL environments (Samvelyan et al., 2019), existing approaches often rely on centralized learning mechanisms by sharing models (Lowe et al., 2017) or gradients (Sukhbaatar et al., 2016).
However, a centralized controller is impractical in many real-world environments (Mai et al., 2021; Jung et al., 2021) where agents cannot easily synchronize and must act independently i.e. decentralized. Centralized training with decentralized execution (CTDE) (Lowe et al., 2017) is a middle-ground between purely centralized and decentralized methods but may not perform better than purely decentralized training (Lyu et al., 2021). A centralized controller suffers from the _curse of dimensionality_: as the number of agents it must control increases, the amount of communication between agents to process increases exponentially (Jin et al., 2021). Furthermore, the fully decentralized setting is more flexible and requires fewer assumptions about other agents, making it more realistic in many real-world scenarios (Li et al., 2020). Hence, this work explores learning to communicate to coordinate agents in the decentralized setting. In MARL, this means each agent will have its own model to decide how to act and communicate, and no agents share parameters or gradients.
Typical RL approaches to decentralized communication are known to perform poorly even in simple tasks (Foerster et al., 2016) due to the large space of communication to explore, the high variance of RL, and a lack of common grounding on which to base communication (Lin et al., 2021). Earlier work leveraged how communication influences other agents (Jaques et al., 2018; Eccles et al., 2019) to learn the protocol. Most recently, Lin et al. (2021) proposed agents that autoencoder their observations and use the encodings as communication, using the shared environment as the common grounding. We build on this work in using both the shared environment and the relationship between sent and received messages to ground a protocol. We extend the Lin et al. (2021) perspective that agents' messages are encodings and propose that agents in similar states should produce similar messages. This perspective leads to a simple method based on contrastive learning to ground communication.
Inspired by the literature in representation learning that uses different "views" of a data sample (Bachman et al., 2019), for a given trajectory, we propose that an agent's observation is a "view" of the environment state. Thus, different agents' messages are encodings of different incomplete "views" of the same underlying state. From this perspective, messages from the same state should be more similar to each other than to those from distant states or other trajectories. We visually show our perspective in Figure 1. We propose Communication Alignment Contrastive Learning (CACL), in which each agent uses contrastive learning between sent and received messages to learn to communicate.
We experimentally validate CACL in three communication-essential environments and show how CACL leads to improved performance and speed, outperforming state-of-the-art decentralized MARL communication algorithms. To understand CACL's success, we propose a suite of qualitative and quantitative metrics. We demonstrate that CACL leads to more symmetric communication (i.e., different agents communicate similarly when faced with the same observations), allowing agents to be more mutually intelligible. By treating our messages as representations, we show that CACL's messages capture global semantic information about the environment better than baselines. Overall, we argue that contrastive learning is a powerful direction for multi-agent communication and has fundamental benefits over previous approaches.
## 2 Related Work
Learning to coordinate multiple RL agents is a challenging and unsolved task where naively applying single-agent RL algorithms often fails (Foerster et al., 2016). Recent approaches focus on neural network-based agents (Goodfellow et al., 2016) with a message channel to develop a common communication protocol (Lazaridou and Baroni, 2020). To handle issues of non-stationarity, some work focuses on centralized learning approaches that globally share models (Foerster et al., 2016), training procedures (Lowe et al., 2017), or gradients (Sukhbaatar et al., 2016) among agents. This
improves coordination and can reduce optimization issues, but results are often still sub-optimal in practice (Foerster et al., 2016; Lin et al., 2021) and may violate independence assumptions, effectively modelling the multi-agent scenario as a single agent (Eccles et al., 2019).
This work focuses on independent, decentralized agents and non-differentiable communication. In previous work, Jaques et al. (2018) propose a loss to influence other agents but require explicit and complex models of other agents and their experiments focus on mixed cooperative-competitive scenarios. Eccles et al. (2019) add biases to each agent's loss function that separately encourage positive listening (i.e., the listener to act differently for different messages) and positive signaling (i.e., the speaker to produce diverse messages in different situations). Their method is simpler but requires task-specific hyperparameter tuning to achieve reasonable performance and underperforms in sensory-rich environments (Lin et al., 2021). Our work is closest to Lin et al. (2021), who leverage autoencoding as their method to learn a message protocol in cooperative 2D MARL games. Agents learn to reconstruct their observations and communicate their autoencoding. It outperforms previous works while being algorithmically and conceptually simpler. Our method builds on this encoding perspective by considering other agents' messages to ground communication. Whereas agents in Lin et al. (2021) can only learn to encode the observation, our approach leverages the relationship between different agents' messages to encode global state information. Empirically, our method is also more efficient as it requires no extra learning parameters whereas Lin et al. (2021) learn and discard their decoder network. Note that our setup uses continuous messages instead of discrete (Eccles et al., 2019; Lin et al., 2021), a standard choice in contrastive learning (Chopra et al., 2005; He et al., 2020; Chen et al., 2020) and embodied multi-agent communication (Sukhbaatar et al., 2016; Jiang and Lu, 2018; Das et al., 2019).
Autoencoding is a form of generative self-supervised learning (SSL) (Doersch et al., 2015). We propose to use another form of SSL, contrastive learning (Chen et al., 2020), as the basis for learning communication. We are motivated by recent work that achieves state-of-the-art representation learning on images using contrastive learning methods (Chen et al., 2020) and leverages multiple "views" of the data. Whereas negative samples are simply different images, positive samples are image data augmentations or "views" of the original image (Bachman et al., 2019). We treat agents' messages of the same state in a trajectory as positives of each other, so we base our method on SupCon (Supervised Contrastive Learning) (Khosla et al., 2020), which modifies the classic contrastive objective to account for multiple positive samples. Relatedly, Dessi et al. (2021) use a two-agent discrete communication setup to do contrastive learning on images; we do the opposite and leverage contrastive learning to learn multi-agent communication in an RL environment.
## 3 Preliminaries
We base our investigations on decentralized partially observable Markov decision processes (Dec-POMDPs) with \(N\) agents to describe a _fully cooperative multi-agent task_(Oliehoek and Amato, 2016). A Dec-POMDP consists of a tuple \(G=\langle S,A,P,R,Z,\Omega,N,\gamma\rangle\). \(s\in S\) is the true state of the environment. At each time step, each agent \(i\) chooses an action \(a^{i}\in A^{i}\) to form a joint action \(a\in A\equiv A^{1}\times A^{2}\times\cdots\times A^{N}\). This leads to an environment transition according to the transition function \(P(s^{\prime}|s,a^{1},\ldots,a^{N}):S\times A\times S\rightarrow[0,1]\). All agents share the same reward function \(R(s,a):S\times A\rightarrow\mathbb{R}\), and \(\gamma\in[0,1)\) is a discount factor. As the environment is partially observable, each agent \(i\) receives individual observations \(z\in Z\) based on the observation function \(\Omega^{i}(s):S\to Z\).
We denote the environment trajectory and the action-observation history (AOH) of an agent \(i\) as \(\tau_{t}=s_{0},a_{0},...s_{t},a_{t}\) and \(\tau_{t}^{i}=\Omega^{i}(s_{0}),a_{0}^{i},...\Omega^{i}(s_{t}),a_{t}^{i}\in T \equiv(Z\times A)^{*}\) respectively. A stochastic policy \(\pi(a^{i}|\tau^{i}):T\times A\rightarrow[0,1]\) conditions on AOH. The joint policy \(\pi\) has a corresponding action-value function \(Q^{\pi}(s_{t},a_{t})=\mathbb{E}_{s_{t+1:\infty},a_{t+1:\infty}}[R_{t}|s_{t},a_ {t}]\), where \(R_{t}=\sum_{i=0}^{\infty}\gamma^{i}r_{t+i}\) is the discounted return. \(r_{t+i}\) is the reward obtained at time \(t+i\) from the reward function \(R\).
To account for communication, similar to Lin et al. (2021), at each time step \(t\) an agent \(i\) takes an action \(a_{t}^{i}\) and produces a message \(m_{t}^{i}=\Psi^{i}(\Omega^{i}(s_{t}))\) after receiving its observation \(\Omega^{i}(s_{t})\) and the messages \(m_{t-1}^{-i}\) sent at the previous time step, where \(\Psi^{i}\) is agent \(i\)'s function to produce a message given its observation and \(m_{t-1}^{-i}\) refers to the messages sent by agents other than agent \(i\). The messages are continuous vectors of dimensionality \(D\).
## 4 Methodology
We propose a different perspective on the message space used for communication. At each time step \(t\) for a given trajectory \(\tau\), a message \(m^{i}_{t}\) of an agent \(i\) can be viewed as an incomplete view of the environment state \(s_{t}\) because \(m^{i}_{t}\) is a function of \(s_{t}\) as formulated in section 3. Naturally, messages of all the \(N\) agents are different incomplete perspectives of \(s_{t}\). To ground decentralized communication, we hypothesize that we could leverage this relationship between messages from similar states to encourage consistency and similarity of the messages space across agents. Specifically, we propose maximizing the mutual information using contrastive learning which aligns the message space by pushing messages from similar states closer together and messages of different states further apart. Note that agents see a partial view of the state from their observation, so they will inherently communicate different messages to reflect their partial knowledge. However, aligning their message space enables them to communicate the specific parts of the state they observe in a more mutually-intelligible way.
As a heuristic for state similarity, we consider a window of timesteps within a trajectory to be all similar states i.e. positive samples of each other. To guarantee dissimilar negative samples (Schroff et al., 2015), we use states from other trajectories as negatives. Since each underlying state has multiple positive views (\(w\) steps, \(N\) agent messages each), we leverage the recent contrastive learning method SupCon (Khosla et al., 2020). We refer to the contrastive SupCon objective across multiple MARL trajectories as _Communication Alignment Contrastive Learning (CACL)_.
Let \(M\) be all the messages in a batch of trajectories and \(M_{\tau}\) be the messages in trajectory \(\tau\). Let \(m^{i}_{t}\in M_{\tau}\) be the message of agent \(i\) at time \(t\). Thus, the positives \(H\) for a message \(m^{i}_{t}\), given a timestep window \(w\), are all other messages from the same trajectory \(\tau\) sent within that window, \(H(m^{i}_{t})\equiv\{m^{j}_{t^{\prime}}\in M_{\tau}\setminus\{m^{i}_{t}\}:t^{\prime}\in[t-w,t+w]\}\). Let all other messages \(K\) from all trajectories in the batch be \(K(m^{i}_{t})\equiv M\setminus\{m^{i}_{t}\}\). Formally, the contrastive loss \(L_{CACL}\) reads:
\[\sum_{m^{i}_{t}\in M_{\tau}}\frac{-1}{|H(m^{i}_{t})|}\sum_{m_{h}\in H(m^{i}_{t})}\log\frac{\exp(m^{i}_{t}\cdot m_{h}/\eta)}{\sum_{m_{k}\in K(m^{i}_{t})}\exp(m^{i}_{t}\cdot m_{k}/\eta)} \tag{1}\]
where \(\eta\in\mathbb{R}^{+}\) is a scalar temperature and \(|H(m^{i}_{t})|\) is the cardinality of the positive set. Practically, each agent has a replay buffer that maintains a batch of trajectory data collected from multiple environment instances; it contains the messages received during training, which are used to compute the _CACL_ loss. We use a timestep window of size 5 for all the environments, based on hyperparameter tuning over different window sizes. Following Khosla et al. (2020), messages are normalized before the loss computation and a low temperature (i.e., \(\eta=0.1\)) is used, as it empirically benefits performance and training stability. The total loss for each agent is a reinforcement learning loss \(L_{RL}\), which uses the reward to learn the policy (but not the message head), and a separate contrastive loss \(L_{CACL}\) to learn just the message head, formulated as follows:
\[L=L_{RL}+\kappa L_{CACL} \tag{2}\]
where \(\kappa\in\mathbb{R}^{+}\) is a hyperparameter to scale the _CACL_ loss.
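For concreteness, the loss of Eqs. (1)-(2) can be sketched in a few lines of PyTorch. This is our rendering, not the authors' released code; the tensor layout and the reading of the window parameter as \(|t-t^{\prime}|\leq w\) are assumptions:

```python
import torch
import torch.nn.functional as F

def cacl_loss(messages, traj_ids, steps, window=5, eta=0.1):
    """Eq. (1) over a batch of messages pooled across agents and trajectories.
    messages: (B, D) message vectors; traj_ids: (B,) trajectory index of each
    message; steps: (B,) timestep at which each message was sent."""
    z = F.normalize(messages, dim=1)                 # normalize, as in SupCon
    sim = z @ z.t() / eta                            # scaled pairwise similarities
    B = z.size(0)
    self_mask = torch.eye(B, dtype=torch.bool, device=z.device)
    same_traj = traj_ids.unsqueeze(0) == traj_ids.unsqueeze(1)
    in_window = (steps.unsqueeze(0) - steps.unsqueeze(1)).abs() <= window
    pos = same_traj & in_window & ~self_mask         # the positive sets H(m)
    # denominator runs over K(m): all messages except the anchor itself
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    n_pos = pos.sum(dim=1)
    per_anchor = -(log_prob.masked_fill(~pos, 0.0)).sum(dim=1)
    valid = n_pos > 0                                # skip anchors without positives
    return (per_anchor[valid] / n_pos[valid]).mean()
```

Each anchor averages its log-probabilities over its positive set \(H\) and the result is averaged over anchors; Eq. (1) is written as a per-trajectory sum, so summing versus averaging over anchors is a scaling choice absorbed by \(\kappa\).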
## 5 Experiments and Results
### Experimental Setup
We evaluate our method on three multi-agent environments with communication channels. Given the limited information each agent observes, agents must meaningfully communicate in order to improve task performance.
**Traffic-Junction**: Proposed by Sukhbaatar et al. (2016), it consists of a 4-way traffic junction with cars entering and leaving the grid. The goal is to avoid collisions when crossing the junction. We use 5 agents with a vision of 1. Although not strictly necessary, communication could help solve the task given the agents' limited vision. We evaluate each algorithm by its success rate during evaluation episodes. All results are averaged over 12 evaluation episodes and over 6 random seeds. More details of the environments and parameters can be found in Appendix A.1.
**Predator-Prey**: A variant of the classic game (Benda et al., 1986; Barrett et al., 2011) based on Koul (2019) where 4 agents (i.e. predators) have the cooperative goal to capture 2 randomly-moving prey by surrounding each prey with more than one predator. We devise a more difficult variation where agents have to entirely surround a prey on all 4 sides to successfully capture it and they cannot see each other in their observations. Thus, agents must communicate their positions and actions in order to coordinate their attacks. We evaluate each algorithm with episodic rewards in evaluation episodes.
**Find-Goal**: Proposed by Lin et al. (2021), agents' goal is to reach the green goal location as fast as possible in a grid environment with obstacles. We use 3 agents, each observes a partial view of the environment centered at its current position. Unlike in Lin et al. (2021), we use a field of view of \(3\times 3\) instead of \(5\times 5\) to make the problem harder. Each agent receives an individual reward of 1 for reaching the goal and an additional reward of 5 when all of them reach the goal. Hence, it is beneficial for an agent to communicate the goal location once it observes the goal. As in Lin et al. (2021), we measure performance using episode length. An episode ends quicker if agents can communicate goal locations to each other more efficiently. Hence, a method performs better if it has shorter episode lengths.
### Training Details
We compare CACL to the state-of-the-art independent, decentralized method, autoencoded communication (AEComm; Lin et al., 2021), which grounds communication by reconstructing encoded observations. We also compare to baselines from previous work: independent actor critic without communication (IAC) and positive listening loss (PLE; Eccles et al., 2019) (See Appendix A.4). We exclude the positive signalling loss (Eccles et al., 2019) as extending it to continuous messages is non-trivial but note that AEComm outperforms it in the discrete case (Lin et al., 2021). We also include DIAL (Foerster et al., 2016) which learns to communicate through differentiable messages to share gradients so is decentralized but not independent.
All methods use the same architecture based on the IAC algorithm with n-step returns and asynchronous environments (Mnih et al., 2016). Each agent has an encoder for observations and received messages. For methods with communication, each agent has a communication head to produce messages based on encoded observations. For policy learning, a GRU (Cho et al., 2014) is used to generate a hidden representation from a history of observations and messages. Agents use the hidden state for their policy and value heads, which are 3-layer fully-connected neural networks. We perform spectral normalization (Gogianu et al., 2021) in the penultimate layer of each head to improve training stability. The architecture is shown in Figure 8, and the hyperparameters are further described, both in Appendix A.2.
Figure 2: CACL (red) outperforms all other methods on Traffic-Junction (left), Predator-Prey (middle) and Find-Goal (right). Predator-Prey shows evaluation reward, higher is better. Traffic-Junction plots the percent of successful episodes, higher is better. Find-Goal plots the episode length until the goal is reached, lower is better. The performance curves are smoothed by a factor of 0.5 with standard errors plotted as shaded areas.
### Task Performance
We run all methods on the three selected environments and plot results in Figure 2. Our proposed method CACL outperforms all baseline methods in both final performance and learning speed and, consistent with previous results (Lin et al., 2021), AEComm is the strongest baseline. The largest performance increase from CACL is in FindGoal, where partial observability is most prominent due to the agents' small field-of-view, which makes communication more necessary (hence IAC performs worst). These results show the effectiveness of self-supervised methods for learning communication in the fully-decentralized setting, as both outperform DIAL, which, notably, backpropagates gradients through other agents. Furthermore, they demonstrate that CACL's contrastive learning is a more powerful alternative to AEComm's autoencoding for coordinating agents with communication.
Improvement on Traffic-Junction is not as significant as on the other environments because communication is less essential for task completion, as shown by the strong performance of IAC. For Predator-Prey, results are clearly better than the baselines but have high variance due to the difficulty of the task. The goal of Predator-Prey is to capture two moving prey, which requires coordinating precisely to surround and attack a prey at the same time; any slight miscoordination leads to a sharp drop in rewards. As another metric of success, we compute the percentages of evaluation episodes that capture no, one, or both prey. Averaging over 6 random seeds, we show results in Figure 3. CACL does significantly better on the task, outperforming all baselines and solving the complete task more robustly while failing less frequently.
We confirm the effectiveness of CACL with an ablation study of the key design decisions: sliding window and SupCon. CACL leverages the temporal nature of RL to treat a sliding window of timesteps as positive views of each other. We plot results for a range of window sizes run on Predator-Prey in Figure 4. No sliding window (size 1) performs poorly, demonstrating its necessity and that the choice of sliding window size is an important hyperparameter. Through the use of SupCon (Khosla et al., 2020) we treat all sent and received messages in the sliding window as all positive views of
each other, with many positives per batch. Creating a batch with just one positive view per message corresponds to SimCLR (Chen et al., 2020) and results in much worse performance (\(1.36\pm 9.46\)). We also run Predator-Prey and search across values of the CACL loss coefficient \(\kappa\) in Figure 4. We used the best values (5-step window, \(\kappa\)=0.5) across all the environments, demonstrating that the choice of CACL hyperparameters is robust. Overall, we show the issues in naively implementing contrastive learning for communication, and the clear, important design decisions behind CACL.

Figure 3: Success rate in Predator-Prey: the percentage of final evaluation runs that captured no prey, one prey, or both prey. Average over 6 random seeds, each with 10 evaluation episodes. See Appendix 3 for the same results with standard deviation.

Figure 4: Predator-Prey ablation experiment on \(L_{CACL}\) varying the sliding window size and \(\kappa\).
### Augmenting CACL with RL
The contrastive loss in the communication head of CACL is very performant without optimizing for reward, so a natural question is whether we can achieve even better results if we learn the message using reward as well. To answer this, we add DIAL to both CACL and the next best method, AEComm, and evaluate in the three environments. This is equivalent to backpropagating \(L_{RL}\) from Equation 2 through agents to learn the message head. In this way, both RL and SSL (contrastive or autoencoding) signals are used to learn the message head.
Figure 5 compares the performance of CACL and AEComm with their DIAL-augmented variants. Our findings are consistent with Lin et al. (2021), who find that mixing AEComm and RL objectives is detrimental to performance. We observe that augmenting either AEComm or CACL with DIAL performs generally worse, except in Find-Goal, where performance is similar but not better. We hypothesize that decentralized DIAL is a complex and high-variance optimization that is difficult to stabilize. DIAL's gradient updates may clash with CACL's and result in neither a useful contrastive representation nor a strong reward-oriented one. It is also possible that CACL's messages would not be improved with reward-oriented gradients. As we show in Section 5.6, CACL already captures useful semantic information that other agents can effectively extract.
### Protocol Symmetry
We hypothesize that CACL's improved performance over the baselines is because it induces a more consistent, mutually-intelligible language that is shared among agents. More specifically, we consider a language's consistency to be how similarly agents communicate (i.e., sending similar messages) when faced with the same observations. A consistent protocol can reduce the optimization complexity since agents only need to learn one protocol for the whole group and it also makes agents more mutually intelligible.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline & DIAL & PL & AEComm & CACL (Ours) \\ \hline Predator-Prey & \(0.66\pm 0.07\) & \(0.66\pm 0.06\) & \(0.89\pm 0.01\) & \(\mathbf{0.95\pm 0.01}\) \\ \hline FindGoal & \(0.50\pm 0.05\) & \(0.49\pm 0.04\) & \(0.85\pm 0.02\) & \(\mathbf{0.92\pm 0.01}\) \\ \hline Traffic Junction & \(0.69\pm 0.01\) & \(0.61\pm 0.04\) & \(0.80\pm 0.01\) & \(\mathbf{0.98\pm 0.002}\) \\ \hline \end{tabular}
\end{table}
Table 1: Protocol symmetry across environments, average and standard deviation over 10 episodes and 6 random seeds. CACL consistently learns the most symmetric protocol.
Figure 5: Comparing CACL and AEComm with their respective variants when combined with DIAL. Variants with DIAL have generally worse performance.
To evaluate consistency, we measure protocol symmetry (Graesser et al., 2019): if an agent swaps observations and trajectory with another agent, it should produce a message similar to the one the other agent produced. We extend this metric from previous work to the continuous, embodied case. We feed the same trajectory to all agents and measure the pairwise cosine similarities of the messages that they produce. Given a trajectory \(\tau\) and \(\{t\in T\}\) as a set of time steps of \(\tau\), protocol symmetry (\(protocol\_sym\)) is written as:
\[\frac{1}{|T|}\sum_{t\in T}\frac{1}{|N|}\sum_{i\in N}\frac{1}{|N|-1}\sum_{j\in N \setminus i}\frac{\Psi^{i}(\Omega^{j}(s_{t}))\cdot\Psi^{j}(\Omega^{j}(s_{t})) }{\|\Psi^{i}(\Omega^{j}(s_{t}))\|\|\Psi^{j}(\Omega^{j}(s_{t}))\|} \tag{3}\]
Therefore, a more consistent protocol has higher symmetry. We swap agent trajectory and observations and compute this metric over 10 evaluation episodes for 6 random seeds, and show results in Table 1. The self-supervised methods (CACL and AEComm) clearly outperform the others (DIAL and PL) implying that SSL is better for learning consistent representations in decentralized MARL. Furthermore, CACL's protocol is very highly symmetric, clearly outperforming all others. Each AEComm agent autoencodes their own observation without considering the other agents' messages, leading to the formation of multiple protocols between agents. In contrast, CACL induces a common protocol by casting the problem in the multi-view perspective and implicitly aligning agents' messages. The possible correlation between protocol symmetry and overall performance and speed further indicates the benefits of learning a common language in the decentralized setting.
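Eq. (3) amounts to averaging pairwise cosine similarities; below is a small sketch of ours, assuming the messages of all \(N\) agents have been collected on one common (swapped) trajectory of \(T\) timesteps:

```python
import numpy as np

def protocol_symmetry(messages):
    """messages: (T, N, D) array -- the message each of the N agents produces
    at each of the T timesteps of one shared trajectory. Returns Eq. (3)."""
    m = messages / np.linalg.norm(messages, axis=-1, keepdims=True)
    cos = np.einsum("tid,tjd->tij", m, m)        # pairwise cosine similarities
    off_diag = ~np.eye(m.shape[1], dtype=bool)   # exclude the i == j terms
    return float(cos[:, off_diag].mean())
```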
### Protocol Representation Probing
To further investigate how informative our protocols are, we propose a suite of qualitative and quantitative representation probing tests based on message clustering and classification, respectively. We perform these tests on the protocols learned in the Find-Goal environment.
Similar to Lin et al. (2021), we cluster messages generated from 10 evaluation episodes to qualitatively assess how informative CACL's protocol is. The messages are first compressed to a dimension of 2 using t-SNE (Van der Maaten and Hinton, 2008) and then clustered using DBSCAN (Ester et al., 1996). We look at each cluster's messages and their corresponding observations to extract any patterns and semantics captured. As shown in Figure 6, we observe a cluster of messages for observations when the goal is visible and another one when another agent is visible. Two clusters correspond to agents seeing neither the goal nor another agent. Notably, the messages in these clusters can come from different agents in different episodes, demonstrating that agents can indeed communicate symmetrically. The clusters indicate that CACL learns to compress meaningful, global state information in messages, allowing other agents to reasonably learn this semantic information.
Figure 6: DBSCAN (Ester et al., 1996) clustering results of messages produced by CACL after dimensionality reduction using t-SNE (Van der Maaten and Hinton, 2008). Exemplary clusters are shown with their corresponding observational patterns. Specifically, two clusters correspond to messages sent when the goal is visible and when another agent is visible, respectively. The other two clusters correspond to messages sent when only individual agents are visible.
To quantitatively evaluate the informativeness of learned protocols, we propose to treat messages as representations and learn a classifier on top of the messages, following work in RL representation learning (Lazaridou et al., 2018; Anand et al., 2019). Since FindGoal is focused on reaching a goal, intuitively, agents should communicate whether they have found the goal and, if so, where other agents should go to reach the goal. Thus, we propose to probe the goal visibility and goal location. The former uses the messages to classify whether the goal is visible in observations or not (i.e. a binary classification). The latter uses messages where the goal is visible in the observations to classify the general location of the goal (i.e., a 5-class classification: Top-Left, Top-Right, Bottom-Left, Bottom-Right and Middle). Whereas goal visibility is easy for egocentric communication, goal location requires detailed spatial information and communicating the absolute location from their relative position. This tests whether the communication protocol can consider other agents' perspectives and give global information from an egocentric observation. We use 30 evaluation episodes per method to generate messages for our experiments but different methods may have different numbers of acceptable messages for our probing task (e.g. a limited number of messages where the goal is visible for predicting goal location). To ensure fair comparison, we choose an equal number of samples per class (i.e., positive/negative, 5-class location) for all methods and use a 70%/30% random split for training and testing. We use a 2-layer fully-connected neural network to test each method, as this corresponds to the same network that agents use to encode each others' messages as part of their observations.
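The probing classifier itself can be sketched as follows (ours, for illustration): a small MLP trained on the frozen message vectors, mirroring the 2-layer fully-connected network agents use to encode received messages. Using scikit-learn's `MLPClassifier` here, and the hidden width of 64, are assumptions made for brevity:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def probe(messages, labels, seed=0):
    """messages: (num_samples, D) frozen message vectors; labels: one class per
    message (binary for goal visibility, 5-way for goal location)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        messages, labels, test_size=0.3, random_state=seed, stratify=labels)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000,
                        random_state=seed).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)   # test accuracy, as reported in Table 2
```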
Table 2 shows the classification results for the two probing tests. Goal visibility is an easier task and all methods' messages can be effectively used to determine it. In the more difficult goal location task, all methods perform above chance (20%) but CACL's protocol significantly outperforms baselines. Contrastive learning across different agents' messages can enable CACL to learn a more global understanding of location from their egocentric viewpoint. By encoding the goal's spatial information, CACL agents are more likely able to move directly towards it, and reduce episode length. If other methods simply communicate that a goal is found, agents know to alter their search but are not as precise in direction. This explains why AEComm, PL, and DIAL perform better than IAC but worse than CACL, which also learns much quicker as shown in Figure 2. For completeness, we also provide similar classification results with a one-layer (linear) probe in Appendix A.5.
## 6 Limitations
Our work investigates fully-cooperative environments but learning to communicate in less cooperative settings, such as those with adversaries (Noukhovitch et al., 2021), is a harder optimization problem. CACL would likely need stronger regularization to be effective. Furthermore, our empirical testing has revealed that SSL objectives are ineffective with reward-oriented gradients, as demonstrated in section 5.4. Although this phenomenon is well known (Lin et al., 2021), it is still not fully understood and future work should aim to combine the two objectives. Finally, this work evaluates agents that were trained together. A more challenging frontier is zero-shot communication, an extension of zero-shot cooperation (Hu et al., 2020), in which agents must communicate effectively with novel partners, unseen during training. In Appendix A.6, we show how existing methods perform poorly in this settings and leave this challenging setup to future work.
## 7 Conclusion and Future Work
This work introduces an alternative perspective for learning to communicate in decentralized MARL based on the relationship between sent and received messages within a trajectory. Drawing inspiration from multi-view learning, we ground communication using contrastive learning by considering
agents' messages to be encoded views of the same state. We empirically show that our method leads to better performance through a more consistent, common language and learns to communicate more global state information. We believe this work solidifies contrastive learning as an effective perspective for learning to communicate and hope it invigorates research into contrastive methods for communication with a focus on consistency. Furthermore, by establishing the connection between multi-view SSL, which has traditionally focused on images, and communication in MARL, we hope to encourage more cross-domain research. Finally, we see contrastive learning as a potential method for simulating human language evolution, and hope to inspire research in this direction.

\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline & DIAL & PL & AEComm & CACL (Ours) \\ \hline Goal Visibility & \(99.45\%\pm 2.68\) & \(98.87\%\pm 0.67\) & \(99.75\%\pm 0.04\) & \(97.75\%\pm 0.69\) \\ \hline Goal Location & \(68.15\%\pm 1.76\) & \(78.31\%\pm 2.39\) & \(76.14\%\pm 3.36\) & \(\mathbf{91.28}\%\pm\mathbf{1.71}\) \\ \hline \end{tabular}
\end{table}
Table 2: Classification results of the two probing tests in Find-Goal. All methods perform similarly in the easier Goal Visibility Test, while CACL outperforms the baselines significantly in the more difficult Goal Location Test. |
2308.11095 | Ramsey interferometry in three-level and five-level systems of $^{87}Rb$
Bose-Einstein condensates | Our work here presents the analytical expressions for a typical Ramsey
interferometric sequence for a three- and a five-level system. The analytical
expressions are derived starting from the first principals of unitary time
evolution operators. We focus on the three- and five-level systems because we
propose a novel Ramsey interferometer created by a trapped two-state
Bose-Einstein Condensate driven by dipole oscillations and gravitational sag.
It involves the $^{87}Rb$ atoms in states $\vert F=2, m_F=+2 \rangle$ $(\vert
+2 \rangle)$ and $\vert F=2, m_F=+1 \rangle$ $(\vert +1 \rangle)$ of the $5
^2S_{\frac{1}{2}}$ ground state. Though the interferometer focusses on the
two-levels, the experimental readouts involve all the five states in $F = 2$
hyperfine manifold. Therefore, the analytical derivation was first tested for
three-levels and then expanded to five-levels. We developed the expressions for
five-levels for greater analytical accuracy of the experimental scenario. This
work provides a step-by-step outline for the derivation and methodology for the
analytical expressions. These analytical formulae denote the population
variation during Rabi and Ramsey oscillations for each state as well as the
overall average for both the three- and five-level cases. The expressions are
derived within the rotating wave approximation (RWA) under the equal Rabi
condition. Further, by following the derivation methodology, these analytical
expressions can be easily expanded for Ramsey sequences with unequal pulses,
and Ramsey sequences with spin echo techniques. | Anushka Thenuwara, Andrei Sidorov | 2023-08-22T00:32:25Z | http://arxiv.org/abs/2308.11095v1 | Ramsey interferometry in three-level and five-level systems of \({}^{87}Rb\) Bose-Einstein condensates
###### Abstract
Our work here presents the analytical expressions for a typical Ramsey interferometric sequence for a three- and a five-level system. The analytical expressions are derived starting from the first principals of unitary time evolution operators. We focus on the three- and five-level systems because we propose a novel Ramsey interferometer created by a trapped two-state Bose-Einstein Condensate driven by dipole oscillations and gravitational sag. It involves the \({}^{87}Rb\) atoms in states \(\left|F=2,m_{F}=+2\right\rangle\) (\(\left|+2\right\rangle\)) and \(\left|F=2,m_{F}=+1\right\rangle\) (\(\left|+1\right\rangle\)) of the \(5^{2}S_{\frac{1}{2}}\) ground state. Though the interferometer focusses on the two-levels, the experimental readouts involve all the five states in \(F=2\) hyperfine manifold. Therefore, the analytical derivation was first tested for three-levels and then expanded to five-levels. We developed the expressions for five-levels for greater analytical accuracy of the experimental scenario. This work provides a step-by-step outline for the derivation and methodology for the analytical expressions. These analytical formulae denote the population variation during Rabi and Ramsey oscillations for each state as well as the overall average for both the three- and five-level cases. The expressions are derived within the rotating wave approximation (RWA) under the equal Rabi condition. Further, by following the derivation methodology, these analytical expressions can be easily expanded for Ramsey sequences with unequal pulses, and Ramsey sequences with spin echo techniques.
## 1 Introduction
Following the work of I.I. Rabi, N.F. Ramsey [1] significantly improved the Rabi method by using two oscillatory fields of short pulse duration \(\tau\) separated by a long free evolution time \(T\) to study molecular resonances, demonstrating 0.6 times narrower linewidths. This method, now known as Ramsey interferometry, won N.F. Ramsey the Nobel Prize in Physics in 1989. It provides the basis of the exquisite time standards of \(Cs\) fountain clocks such as the NIM5 clock in China and the NIST-F1 in the USA, which have uncertainties of \(1.6\times 10^{-15}\)[2] and
[3] respectively. Further, Ramsey interferometry allows sensitive measurements of local gravity [4], and a Ramsey-type method with a spin-echo pulse (i.e. a \(\pi\)-pulse during the free evolution time \(T\)) allows precise measurements of the Newtonian gravitational constant \(G=6.671\,91(99)\times 10^{-11}\,\mathrm{m^{3}kg^{-1}s^{-2}}\)[5]. Applied to multilevel systems, this allows measurements below the standard quantum limit [6], where the phase differences between states are mapped onto the populations of the states [7].
The primary goal of this work is to develop analytical expressions to explore and analyse experimental data of a novel Ramsey interferometer created by a trapped two-state Bose-Einstein condensate (BEC) driven by dipole oscillations and gravitational sag. A BEC is formed in a pure compressed magnetic trap (CMT) via a cloud of \({}^{87}Rb\) atoms in state \(\left|F=2,m_{F}=+2\right\rangle\) (\(\left|+2\right\rangle\)) of the \(5^{2}S_{\frac{1}{2}}\) ground state, from which Ramsey interferometry is performed between states \(\left|+2\right\rangle\) and \(\left|F=2,m_{F}=+1\right\rangle\) (\(\left|+1\right\rangle\)). The state \(\left|+1\right\rangle\) experiences a shallower radial trap with a larger gravitational sag, whereas state \(\left|+2\right\rangle\) experiences a tighter radial trap with a gravitational sag that is half that of state \(\left|+1\right\rangle\). Due to this, a superposition between the states \(\left|+1\right\rangle\) and \(\left|+2\right\rangle\) experiences multipath propagation resulting in an interference pattern. This may be utilised to measure local gravitational fields and inter-state scattering lengths.
In previous works [8, 9, 10, 11], analytical expressions for Rabi oscillations in multi-level systems were obtained based on the two-level atom (Equation 12). The work in [8] considers an N-level atom interacting with a near-resonant laser field on two fronts: where the energy separation between all N states is equal (equal-Rabi), and where the energy separation increases as in a harmonic potential (harmonic-Rabi); it also shows the association with Chebyshev and Hermite polynomials. The work in [9] incorporates the treatment of losses, and [12, 13, 14] show the importance of understanding Rabi oscillations of a multi-level system for quantum computation. These formulations are performed within the rotating wave approximation (RWA), which is valid when the coupling constant (Rabi frequency) is much smaller than the energy separation between the two levels [12]. In this work we derive analytical expressions for three- and five-level systems for the full Ramsey sequence via the unitary time evolution operator formalism under the equal-Rabi and RWA conditions.
The equal-Rabi assumption is valid as the trap bottom of the harmonic oscillator potential of the experiment is about \(1\,\mathrm{G}\), corresponding to about \(700\,\mathrm{kHz}\) energy separation between the five states in the \(F=2\) hyperfine manifold [15]. The Breit-Rabi formula [16, 17] indicates there is only a \(0.02\,\%\) variation in energy between adjacent states spanning \(\left|+2\right\rangle\) to \(\left|-2\right\rangle\). Due to this, the equal-Rabi model suffices and the harmonic-Rabi model only complicates the analysis. Further, the maximum experimental Rabi frequency is less than \(15\,\%\) of the energy separation between states, which justifies the RWA.
Within these considerations, Section 2 derives the analytical expressions for the unitary time evolution \(\hat{U}\) for the three-level system via \(\hat{U}=\sum_{i=1}^{n}e^{\frac{-i\lambda_{i}t}{\hbar}}\left|V_{i}\right\rangle \left\langle V_{i}\right|\) (Equation 3), where \(\lambda_{i}\) are the eigenvalues and \(\left|V_{i}(t)\right\rangle\) are the eigenvectors of the interaction Hamiltonian \(\hat{H}_{I}\). Once the methodology is validated, analytical expressions
for the five-level system are derived and presented in Section 3. Mathematica was used to solve these analytically dense problems.
## 2 Rabi and Ramsey analytical models for three-level system
Consider a three-level system with equal energy separation between adjacent states leading to \(\omega_{\left|+1\right\rangle}-\omega_{\left|0\right\rangle}=\omega_{\left|0 \right\rangle}-\omega_{\left|-1\right\rangle}=\omega_{Sep}\) being coupled to an external EM field with a frequency \(\omega_{EM}=\omega_{Sep}-\Delta\), where \(\Delta\) is the detuning. The Rabi frequency for this system follows the resonant Rabi frequency \(\Omega_{R}\) in Equation 3, where \(\Omega_{R}=\frac{\left\langle 1\right|\hat{\mu}\left|2\right\rangle B_{0}}{\hbar}\). Here, the magnetic dipole couplings between adjacent states are equal, \(\left\langle+1\right|\hat{\mu}\left|0\right\rangle B_{0}=\left\langle 0 \right|\hat{\mu}\left|-1\right\rangle B_{0}\), due to the equal energy separation between states.
The equal-Rabi interaction Hamiltonian \(\hat{H}_{I}\) for the three-level system in the rotating wave approximation (RWA) is shown below as adapted from [10, 11].
\[\hat{H}_{I}=\hbar\left[\begin{array}{ccc}\Delta&\frac{1}{\sqrt{2}}\Omega_{R }e^{-i\phi}&0\\ \frac{1}{\sqrt{2}}\Omega_{R}e^{i\phi}&0&\frac{1}{\sqrt{2}}\Omega_{R}e^{-i\phi }\\ 0&\frac{1}{\sqrt{2}}\Omega_{R}e^{i\phi}&-\Delta\end{array}\right]. \tag{1}\]
Using standard means of solving the eigenvector/eigenvalue problem, we find the eigenvalues \(\lambda_{i}\) via \(\det\left(\hat{H}_{I}-\lambda\mathbb{1}\right)=0\) and the eigenvectors \(V_{i}\) via \((\hat{H}_{I}-\lambda\mathbb{1})\vec{V}=0\), where \(\mathbb{1}\) is the identity matrix, leading to
\[\left[V_{1}\right]_{\lambda_{1}}=\left[\begin{array}{c}\frac{-\Omega_{R}}{ \sqrt{2}\Omega_{G}}\\ \frac{\Delta}{\Omega_{G}}\\ \frac{\Omega_{R}}{\sqrt{2}\Omega_{G}}\end{array}\right]_{0},\left[V_{2} \right]_{\lambda_{2}}=\left[\begin{array}{c}\frac{1}{2}\left(-1+\frac{ \Delta}{\Omega_{G}}\right)\\ \frac{\Omega_{R}}{\sqrt{2}\Omega_{G}}\\ \frac{1}{2}\left(-1-\frac{\Delta}{\Omega_{G}}\right)\end{array}\right]_{- \hbar\Omega_{G}},\left[V_{3}\right]_{\lambda_{3}}=\left[\begin{array}{c} \frac{1}{2}\left(1+\frac{\Delta}{\Omega_{G}}\right)\\ \frac{\Omega_{R}}{\sqrt{2}\Omega_{G}}\\ \frac{1}{2}\left(1-\frac{\Delta}{\Omega_{G}}\right)\end{array}\right]_{\hbar \Omega_{G}}, \tag{2}\]
where \(V_{i}\) is the eigenvector normalized to \(1\), the eigenvalues are \(\begin{bmatrix}\lambda_{1}&\lambda_{2}&\lambda_{3}\end{bmatrix}^{T}=\begin{bmatrix} 0&-\hbar\Omega_{G}&\hbar\Omega_{G}\end{bmatrix}^{T}\) and \(\Omega_{G}=\sqrt{\Delta^{2}+\Omega_{R}^{2}}\) is the general Rabi frequency.
Once the eigenvalues and eigenvectors are obtained, the unitary time evolution operator \(\hat{U}\) for the general case is \(\hat{U}=\sum_{i=1}^{n}e^{\frac{-i\lambda_{i}t}{\hbar}}\left|V_{i}\right\rangle \left\langle V_{i}\right|\). This expression holds because \(\hat{H}\) is independent of time in the RWA. Via the unitary time evolution operator \(\hat{U}\) we can then use \(\left|\Psi(t)\right\rangle=\hat{U}\left|\Psi(0)\right\rangle\) where \(\hat{U}=e^{\frac{-i\hat{H}t}{\hbar}}\), which facilitates an easier method to obtain analytical solutions. It is therefore crucial to obtain an expression for \(e^{\frac{-i\hat{H}t}{\hbar}}\). In
order to find an expression for \(e^{\frac{-i\hat{H}t}{\hbar}}\) we use \(e^{x}=\sum_{n=0}^{\infty}\frac{x^{n}}{n!}\) as follows
\[\begin{split} e^{\frac{-i\hat{H}t}{\hbar}}&=\sum_{n=0}^{\infty}\frac{\left(\frac{-i\hat{H}t}{\hbar}\right)^{n}}{n!}\\ \sum_{k=1}^{N}e^{\frac{-i\hat{H}t}{\hbar}}\left|V_{k}\right\rangle\left\langle V_{k}\right|&=\sum_{k=1}^{N}\sum_{n=0}^{\infty}\frac{\left(\frac{-i\hat{H}t}{\hbar}\right)^{n}}{n!}\left|V_{k}\right\rangle\left\langle V_{k}\right|\\ &=\sum_{k=1}^{N}\sum_{n=0}^{\infty}\frac{\left(\frac{-i\lambda_{k}t}{\hbar}\right)^{n}}{n!}\left|V_{k}\right\rangle\left\langle V_{k}\right|=\sum_{k=1}^{N}e^{\frac{-i\lambda_{k}t}{\hbar}}\left|V_{k}\right\rangle\left\langle V_{k}\right|\text{ as }\hat{H}^{n}\left|V_{k}\right\rangle=\lambda_{k}^{n}\left|V_{k}\right\rangle\end{split} \tag{3}\]
where \(\lambda_{k}\) are eigenvalues and \(V_{k}\) are eigenvectors of \(\hat{H}\).
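As a numerical aside (not part of the original derivation), the spectral form of \(\hat{U}\) is easy to verify directly. The following Python sketch, assuming NumPy and SciPy are available, working in units where \(\hbar=1\) with arbitrary test parameters, builds the Hamiltonian of Equation 1, forms \(\hat{U}\) from its eigenpairs, and compares it with the matrix exponential \(e^{-i\hat{H}t/\hbar}\); it also confirms the eigenvalues \(0,\pm\hbar\Omega_{G}\) quoted in Equation 2.

```python
# A minimal sketch (hbar = 1, arbitrary test values) verifying the
# spectral decomposition U = sum_k exp(-i*lam_k*t) |V_k><V_k| against
# the direct matrix exponential for the three-level Hamiltonian (Eq. 1).
import numpy as np
from scipy.linalg import expm

Delta, Omega_R, phi, t = 0.7, 1.0, 0.3, 2.5
c = Omega_R / np.sqrt(2)
H = np.array([[Delta, c*np.exp(-1j*phi), 0],
              [c*np.exp(1j*phi), 0, c*np.exp(-1j*phi)],
              [0, c*np.exp(1j*phi), -Delta]])

lam, V = np.linalg.eigh(H)          # eigenvalues (ascending), eigenvectors
U_spec = sum(np.exp(-1j*lam[k]*t) * np.outer(V[:, k], V[:, k].conj())
             for k in range(3))
Omega_G = np.sqrt(Delta**2 + Omega_R**2)
print(np.allclose(lam, [-Omega_G, 0.0, Omega_G]))   # eigenvalues of Eq. 2
print(np.allclose(U_spec, expm(-1j*H*t)))           # spectral form = expm
```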
The unitary time evolution operator \(\hat{U}\) for the three-level system takes the form
\[\hat{U}=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\ a_{12}&a_{22}&-a_{12}^{*}\\ a_{13}&-a_{12}^{*}&a_{11}^{*}\end{bmatrix}, \tag{4}\]
where the matrix elements \(a_{ij}\) are functions of the parameters \(\Delta,\Omega_{R},\Omega_{G}\) and time, and take the form
\[\begin{split} a_{11}&=\frac{\left(\Delta^{2}+\Omega_{G}^{2} \right)\cos\left(\Omega_{G}t\right)-2i\Delta\Omega_{G}\sin\left(\Omega_{G}t \right)+\Omega_{R}^{2}}{2\Omega_{G}^{2}}\\ a_{12}&=\frac{\Omega_{R}\left(\Delta\left(\cos\left( \Omega_{G}t\right)-1\right)-i\Omega_{G}\sin\left(\Omega_{G}t\right)\right)}{ \sqrt{2}\Omega_{G}^{2}}\\ a_{13}&=\frac{\Omega_{R}^{2}\left(\cos\left( \Omega_{G}t\right)-1\right)}{2\Omega_{G}^{2}}\\ a_{22}&=\frac{\Delta^{2}+\Omega_{R}^{2}\cos\left( \Omega_{G}t\right)}{\Omega_{G}^{2}}\end{split} \tag{5}\]
In the case of an initially populated top state \(\begin{bmatrix}1&0&0\end{bmatrix}^{T}\), the population at the end of the Rabi pulse is shown in Equation 6. In a similar way to the two-level case, we find the three-level state vector at any time of the atom-EM interaction via \(\left|\Psi(t)\right\rangle=\hat{U}\left|\Psi(0)\right\rangle\), and the population of each level is
\[P=\begin{bmatrix}\left|\Psi_{\left|+1\right\rangle}\right|^{2}\\ \left|\Psi_{\left|0\right\rangle}\right|^{2}\\ \left|\Psi_{\left|-1\right\rangle}\right|^{2}\end{bmatrix}=\begin{bmatrix}\frac{ \Delta^{4}+6\Delta^{2}\Omega_{G}^{2}+4\Omega_{R}^{2}\left(\Delta^{2}+\Omega_{G }^{2}\right)\cos(\Omega_{G}t)+\left(\Delta^{2}-\Omega_{G}^{2}\right)^{2}\cos( 2\Omega_{G}t)+\Omega_{G}^{4}+2\Omega_{R}^{4}}{8\Omega_{G}^{4}}\\ \frac{\Omega_{R}^{2}\left(\Delta^{2}(\cos(\Omega_{G}t)-1)^{2}+\Omega_{G}^{2} \sin^{2}(\Omega_{G}t)\right)}{2\Omega_{G}^{4}}\\ \frac{\Omega_{R}^{4}(1-\cos(\Omega_{G}t))^{2}}{4\Omega_{G}^{4}}\end{bmatrix} \tag{6}\]
Figure 1 shows the population variation in a three-level system for three combinations of \(\Omega_{R}\) and \(\Delta\) during the Rabi pulse. The figure shows three interesting
scenarios when the detuning \(\Delta=0\), \(\Delta=\Omega_{R}\) and \(\Delta=\sqrt{2}\Omega_{R}\). The key feature when \(\Delta=\Omega_{R}\) is that \(|\Psi_{|+1\rangle}|^{2}=|\Psi_{|-1\rangle}|^{2}\) at \(t=\frac{2.221}{\Omega_{R}}\). Further, when the detuning increases to \(\Delta=\sqrt{2}\Omega_{R}\), \(|\Psi_{|+1\rangle}|^{2}=|\Psi_{|0\rangle}|^{2}\) at \(t=\frac{1.814}{\Omega_{R}}\). Knowledge of these conditions is of great importance as it will assist in improving the stability of the splitting in a three-level Ramsey interferometer such as the \({}^{87}Rb\)\(5^{2}S_{\frac{1}{2}}F=1\) amidst experimental uncertainties in the applied EM field.
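These crossing conditions are straightforward to check numerically. The sketch below (an illustration, not from the paper; it assumes NumPy/SciPy, sets \(\hbar=1\) and \(\phi=0\)) evolves the initial state \(\begin{bmatrix}1&0&0\end{bmatrix}^{T}\) with the matrix exponential of Equation 1 and confirms the quoted crossing times.

```python
# Sketch (hbar = 1, phi = 0) checking the population crossings quoted
# above: |+1>=|-1> at t = 2.221/Omega_R for Delta = Omega_R, and
# |+1>=|0> at t = 1.814/Omega_R for Delta = sqrt(2)*Omega_R.
import numpy as np
from scipy.linalg import expm

def populations(Delta, Omega_R, t):
    c = Omega_R / np.sqrt(2)
    H = np.array([[Delta, c, 0], [c, 0, c], [0, c, -Delta]], dtype=complex)
    psi = expm(-1j * H * t) @ np.array([1, 0, 0], dtype=complex)
    return np.abs(psi)**2            # ordered [P(+1), P(0), P(-1)]

Om = 1.0
P = populations(Om, Om, 2.221 / Om)
print(np.isclose(P[0], P[2], atol=1e-3))    # |+1> meets |-1>
P = populations(np.sqrt(2) * Om, Om, 1.814 / Om)
print(np.isclose(P[0], P[1], atol=1e-3))    # |+1> meets |0>
```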
For preliminary analysis, the equal splitting \(|\Psi_{|+1\rangle}|^{2}=|\Psi_{|0\rangle}|^{2}\) at the \(\Delta=0\) condition is explored via the expressions in Equation 6, as the expected Ramsey signal for \(\Delta=0\) is non-oscillatory. Simplifying the equation \(|\Psi_{|+1\rangle}|^{2}-|\Psi_{|0\rangle}|^{2}=0\) for \(\Delta=0\), where \(\Omega_{G}=\Omega_{R}\), yields \(\frac{1}{8}\left(4\cos\left(\Omega_{R}t\right)+3\cos\left(2\Omega_{R}t\right) +1\right)=0\), i.e. \(\left(3\cos\left(\Omega_{R}t\right)-1\right)\left(\cos\left(\Omega_{R}t\right)+1\right)=0\), whose non-trivial root is \(\cos\left(\Omega_{R}t\right)=\frac{1}{3}\). Solving for \(t\) gives \(t=\frac{\arccos\left(\frac{1}{3}\right)+2\pi c_{1}}{\Omega_{R}}\), where \(c_{1}\) is an integer and is set at \(c_{1}=0\) for the first equal splitting time, \(t=\frac{\arccos\left(\frac{1}{3}\right)}{\Omega_{R}}\). Substituting the conditions \(\Delta=0\) and \(t=\frac{\arccos\left(\frac{1}{3}\right)}{\Omega_{R}}\) into the general unitary time evolution operator \(\hat{U}\) in Equation 4 leads to the much simpler form \(\hat{U}_{\Delta=0}^{Split}\) shown below. Also, the free evolution operator \(\hat{U}_{\Delta=0}^{Free}\) is obtained by substituting \(\Omega_{R}=\Omega_{G}=\Delta=0\) into Equation 4.
\[\hat{U}_{\Delta=0}^{Split}=\begin{bmatrix}\frac{2}{3}&-\frac{2i}{3}&-\frac{1} {3}\\ -\frac{2i}{3}&\frac{1}{3}&-\frac{2i}{3}\\ -\frac{1}{3}&-\frac{2i}{3}&\frac{2}{3}\end{bmatrix},\hskip 28.452756pt\hat{U}_{ \Delta=0}^{Free}=\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix}. \tag{7}\]
By applying these unitary time evolution operations for each step of the Ramsey sequence, the wavefunction for the three-level system at the end of the sequence takes the form
\[\left|\Psi_{sys}(t)\right\rangle=\hat{U}^{Split}.\hat{U}^{Free}.\hat{U}^{Split }.\left|\Psi(0)\right\rangle. \tag{8}\]
Figure 1: Population during the Rabi pulse in a three-level system for three combinations of \(\Omega_{R}\) and \(\Delta\), where the red solid, blue dashed and black dotted lines denote the populations of states \(\left|+1\right\rangle,\left|0\right\rangle\) and \(\left|-1\right\rangle\), respectively. The detunings are **a)**\(\Delta=0\), **b)**\(\Delta=\Omega_{R}\) and **c)**\(\Delta=\sqrt{2}\Omega_{R}\).
The resulting system wavefunction \(|\Psi_{sys}(t)\rangle\) is shown below for the starting condition \(|\Psi(0)\rangle=\begin{bmatrix}1&0&0\end{bmatrix}^{T}\). This is easily converted to the populations \(P_{Rsy}\):
\[|\Psi_{sys}(t)\rangle=\begin{bmatrix}\frac{1}{9}\\ \frac{-4}{9}i\\ \frac{-8}{9}\end{bmatrix},\qquad P_{Rsy}=\begin{bmatrix}\frac{1}{81}\\ \frac{16}{81}\\ \frac{64}{81}\end{bmatrix}, \tag{9}\]
where \(|\Psi_{sys}(t)\rangle\) and \(P_{Rsy}\) are respectively the wavefunction and the population for the three-level system at the end of the Ramsey sequence. Here, the populations of states are at constant values of \(P_{Rsy}=\begin{bmatrix}\frac{1}{81}&\frac{16}{81}&\frac{64}{81}\end{bmatrix}^{T}\). The Ramsey signal can be obtained via the average spin projection for a multilevel system \(\langle\hat{F}_{Z}\rangle=\hbar\sum_{m_{F}}m_{F}P_{m_{F}}\) where \(P_{m_{F}}\) is the fractional population of the relevant \(m_{F}\) state [10]. This leads to the constant value of \(\frac{\langle\hat{F}_{Z}\rangle}{\hbar}=\frac{-7}{9}\) which is the expected behaviour of the system at \(\Delta=0\).
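This constant-signal result is easily reproduced numerically; the short sketch below (illustrative only, assuming NumPy) applies \(\hat{U}_{\Delta=0}^{Split}\) twice to the initial state and recovers the populations of Equation 9 and the signal \(-\frac{7}{9}\).

```python
# Sketch reproducing the Delta = 0 three-level Ramsey result (Eq. 9):
# free evolution is the identity, so the sequence is two splitting pulses.
import numpy as np

U_split = np.array([[ 2/3, -2j/3, -1/3],
                    [-2j/3,  1/3, -2j/3],
                    [-1/3, -2j/3,  2/3]])
psi = U_split @ U_split @ np.array([1, 0, 0], dtype=complex)
P = np.abs(psi)**2                    # ordered [+1, 0, -1]
print(np.allclose(P, [1/81, 16/81, 64/81]))         # Eq. 9 populations
print(np.isclose(P @ np.array([1, 0, -1]), -7/9))   # <F_z>/hbar
```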
The scenario in Figure 1**c)** is explored next, as the Ramsey signal for \(\Delta\neq 0\) is oscillatory; here the equal splitting condition \(|\Psi_{|+1\rangle}|^{2}=|\Psi_{|0\rangle}|^{2}\) at \(\Delta=\sqrt{2}\Omega_{R}\) is considered. When the condition \(|\Psi_{|+1\rangle}|^{2}-|\Psi_{|0\rangle}|^{2}=0\) for \(\Delta=\sqrt{2}\Omega_{R}\), where \(\Omega_{G}=\sqrt{3}\Omega_{R}\), is applied to Equation 6, the expression \(\frac{1}{6}\cos^{2}\left(\frac{1}{2}\sqrt{3}\Omega_{R}t\right)\left(\cos\left( \sqrt{3}\Omega_{R}t\right)+5\right)=0\) is obtained. Solving for \(t\) gives \(t=\frac{4\pi c_{1}\pm\pi}{\sqrt{3}\Omega_{R}}\), where \(c_{1}\) is an integer and is set at \(c_{1}=0\) for the first equal splitting time, \(t=\frac{\pi}{\sqrt{3}\Omega_{R}}\). Substituting the conditions \(\Delta=\sqrt{2}\Omega_{R},\Omega_{G}=\sqrt{3}\Omega_{R}\) and \(t=\frac{\pi}{\sqrt{3}\Omega_{R}}\) into the general unitary time evolution operator \(\hat{U}\) in Equation 4 yields the much simpler form \(\hat{U}_{\Delta=\sqrt{2}\Omega_{R}}^{Split}\) shown in Equation 10. Further, the operator during free evolution is derived from Equation 4 via Equation 5 for the condition \(\Omega_{R}=0\), when the general Rabi frequency is \(\Omega_{G}=\Delta\), for an evolution time \(t=T\). This leads to the free evolution operator \(\hat{U}_{\Omega_{R}=0}^{Free}\) in Equation 10,
\[\hat{U}_{\Delta=\sqrt{2}\Omega_{R}}^{Split}=\begin{bmatrix}-\frac{2}{3}&- \frac{2}{3}&-\frac{1}{3}\\ -\frac{2}{3}&\frac{1}{3}&\frac{2}{3}\\ -\frac{1}{3}&\frac{2}{3}&-\frac{2}{3}\end{bmatrix}\qquad\hat{U}_{\Omega_{R}= 0}^{Free}=\begin{bmatrix}e^{-iT\Delta}&0&0\\ 0&1&0\\ 0&0&e^{iT\Delta}\end{bmatrix}. \tag{10}\]
Following Equation 8 for the full sequence, the wavefunction for the three-level system at the end of the Ramsey sequence takes the form for the condition of \(\Delta=\sqrt{2}\Omega_{R}\)
\[|\Psi_{sys}(T)\rangle=\begin{bmatrix}\frac{1}{9}(5\cos(\Delta T)-3i\sin( \Delta T)+4)\\ \frac{2}{9}(\cos(\Delta T)-3i\sin(\Delta T)-1)\\ \frac{4}{9}(\cos(\Delta T)-1)\end{bmatrix}, \tag{11}\]
Further, an overall expression for the Ramsey signal can be obtained when the average spin projection for a multilevel system \(\langle\hat{F}_{Z}\rangle=\hbar\sum_{m_{F}}m_{F}P_{m_{F}}\) is applied. Based on \(\langle\hat{F}_{Z}\rangle\) the Ramsey signal takes the form \(\frac{\langle\hat{F}_{Z}\rangle}{\hbar}=\frac{1}{9}(1+8\cos(\Delta T))\) where the Ramsey signal and populations of each state are shown in Figure 2.
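The oscillatory signal can likewise be checked by composing the operators of Equation 10 over a range of free evolution times; a minimal sketch (assuming NumPy, with \(\Delta\) set to an arbitrary test value) is given below.

```python
# Sketch of the full Delta = sqrt(2)*Omega_R Ramsey sequence (Eq. 10):
# the signal should follow (1/9)(1 + 8*cos(Delta*T)).
import numpy as np

U_split = np.array([[-2/3, -2/3, -1/3],
                    [-2/3,  1/3,  2/3],
                    [-1/3,  2/3, -2/3]])
Delta = 1.0
for T in np.linspace(0.0, 5.0, 11):
    U_free = np.diag([np.exp(-1j*T*Delta), 1.0, np.exp(1j*T*Delta)])
    psi = U_split @ U_free @ U_split @ np.array([1, 0, 0], dtype=complex)
    signal = np.abs(psi)**2 @ np.array([1, 0, -1])
    assert np.isclose(signal, (1 + 8*np.cos(Delta*T)) / 9)
print("signal matches (1/9)(1 + 8 cos(Delta T))")
```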
## 3 Rabi and Ramsey analytical models for five-level system
A schematic of the five-level system is shown in Figure 3 with equal energy separation between adjacent states leading to \(\omega_{|i\rangle}-\omega_{|i\pm 1\rangle}=\omega_{Sep}\). Consider this system being coupled to an external EM field with a frequency \(\omega_{EM}=\omega_{Sep}-\Delta\) where \(\Delta\) is the detuning. The Rabi frequency for this system is \(\Omega_{R}=\frac{\mu_{0}g_{F}B_{\perp}}{\hbar}\)[10] which is the magnetic dipole coupling between adjacent states.
### Rabi oscillations in the five-level system
The methodology presented for the three-level system is followed here to derive analytical expressions for the five-level Rabi and Ramsey signals. However, as a
Figure 3: Five-level atom interacting with an oscillating EM field where the energy separation between adjacent states is equal to \(\hbar\omega_{Sep}\).
precursor for the five-level Rabi signal, we note that [10], via [18, 11], shows that the analytical solutions for the transition probabilities of a spin-\(F\) system with \(2F+1\) sub-levels can be obtained via the expressions for \(C_{1}(t)\) and \(C_{2}(t)\) in Equation 1 for the two-level system, as below;
\[\Psi_{m_{F}} =\sqrt{\frac{(2F)!}{(F+m_{F})!(F-m_{F})!}}C_{1}(t)^{F-m_{F}}C_{2}(t )^{F+m_{F}}\qquad m_{F}\in\{-F\to F\}\] \[|\Psi_{|-2\rangle}|^{2} =(|C_{1}|^{2})^{4},\] \[|\Psi_{|-1\rangle}|^{2} =4(|C_{1}|^{2})^{3}(1-|C_{1}|^{2}), \tag{12}\] \[|\Psi_{|0\rangle}|^{2} =6(|C_{1}|^{2})^{2}(1-|C_{1}|^{2})^{2},\] \[|\Psi_{|+1\rangle}|^{2} =4(|C_{1}|^{2})(1-|C_{1}|^{2})^{3},\] \[|\Psi_{|+2\rangle}|^{2} =(1-|C_{1}|^{2})^{4},\]
where \(C_{1}(t),C_{2}(t)\) are defined in Equation 1 and \(|C_{1}|^{2}=|C_{1}(t)|^{2}=\frac{\Omega_{R}^{2}}{\Omega_{G}^{2}}\sin^{2}(\frac {\Omega_{G}t}{2})\).
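In other words, the level populations form a binomial distribution in \(p=|C_{1}(t)|^{2}\). A small Python sketch of this structure (illustrative; the helper `level_populations` is our own naming, not from the paper) is:

```python
# Sketch of the spin-F population formula of Eq. 12: with
# p = |C1|^2 = (Omega_R^2/Omega_G^2) sin^2(Omega_G t / 2), the m_F
# populations of a system starting in |+F> are binomial in p.
import numpy as np
from math import comb

def level_populations(F, p):
    """P(m_F) for m_F = -F..+F, starting from the stretched state |+F>."""
    return {m: comb(2*F, F + m) * p**(F - m) * (1 - p)**(F + m)
            for m in range(-F, F + 1)}

Delta, Omega_R, t = 0.4, 1.0, 0.9
Omega_G = np.sqrt(Delta**2 + Omega_R**2)
p = (Omega_R**2 / Omega_G**2) * np.sin(Omega_G * t / 2)**2
P = level_populations(2, p)
print(np.isclose(sum(P.values()), 1.0))        # populations sum to one
print(np.isclose(P[2], (1 - p)**4))            # |+2> population of Eq. 12
```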
Based on this approach, the evolution of the fractional population of each level when the BEC starts in \(|+2\rangle\) can be derived as shown in Equation 12. It should be noted that this treatment can be applied to arbitrary values of \(F\) and is only valid for linearly shifted Zeeman levels coupled via magnetic dipole transitions. Figure 4**a)** presents the on-resonance (\(\Delta=0\)) analytical solutions for Equation 12 [10] showing the fractional population of each level with respect to the Rabi pulse area. The red line denotes the population variation of the state \(|+2\rangle\) and the blue line of the state \(|+1\rangle\). The analytical solution shows that the two levels create an equal population fraction of \(0.41\) at \(\sin\left(\frac{\Omega_{R}t}{2}\right)=\frac{1}{\sqrt{5}}\), where \(\frac{\Omega_{R}t}{2}=0.46\,\mathrm{rad}\) or \(2.68\,\mathrm{rad}\). At this time \(18\,\%\) of the atoms from the total cloud are in the \(|0\rangle\) and \(|-1\rangle\) states. This is reflected in Figure 4**b)** which represents the fraction of atom numbers in state \(|+1\rangle\) and \(|+2\rangle\) with respect to the combined \(|+1\rangle-|+2\rangle\) system. The red solid line with circles is the fractional population of \(|+2\rangle\) and the blue solid line with squares is the fractional population of \(|+1\rangle\). The black dashed line shows the dynamics of the combined \(|+1\rangle-|+2\rangle\) system where the atoms move to other states of the five-level system. Figure 4**b)** indicates that any population mix where \(|+2\rangle\in\{100\,\%\to 50\,\%\}\) and \(|+1\rangle\in\{0\,\%\to 50\,\%\}\) is possible with minimal loss of atoms in the range \(\frac{\Omega_{R}t}{2}\in\{0\,\mathrm{rad}\to 0.46\,\mathrm{rad}\}\). However, going beyond this point the superposition leads to a fast decay of the signal where the atoms quickly move out of the desired \(|+1\rangle-|+2\rangle\) states which recovers back when \(\frac{\Omega_{R}t}{2}=2.68\,\mathrm{rad}\). Between \(\frac{\Omega_{R}t}{2}\in\{2.68\,\mathrm{rad}\to 3.14\,\mathrm{rad}\}\) the other half of the combination of superpositions from \(|+2\rangle\in\{50\,\%\to 100\,\%\}\) and \(|+1\rangle\in\{50\,\%\to 0\,\%\}\) are available.
We will use the interaction Hamiltonian \(\hat{H}_{I}\) for a five-level system in the form of Equation 13, which takes into account the Clebsch-Gordan coefficients for the corresponding \(F=2\) hyperfine manifold [10].
\[\hat{H}_{I}=\hbar\begin{bmatrix}2\Delta&\Omega_{R}&0&0&0\\ \Omega_{R}&\Delta&\sqrt{\frac{3}{2}}\Omega_{R}&0&0\\ 0&\sqrt{\frac{3}{2}}\Omega_{R}&0&\sqrt{\frac{3}{2}}\Omega_{R}&0\\ 0&0&\sqrt{\frac{3}{2}}\Omega_{R}&-\Delta&\Omega_{R}\\ 0&0&0&\Omega_{R}&-2\Delta\end{bmatrix} \tag{13}\]
where \(\hbar\) is the reduced Planck's constant, \(\Delta\) is the detuning of the external laser field from the energy separation between adjacent states and \(\Omega_{R}=\frac{\mu_{0}g_{F}B_{\perp}}{\hbar}\)[10] is the resonant Rabi frequency.
Again, using the standard method of solving the eigenvalue-eigenvector problem, we solve for the eigenvalues \(\lambda_{i}\) and the normalised eigenvectors \(V_{i}\). Once the eigenvalues and eigenvectors are obtained, the unitary time evolution operator \(\hat{U}\) for the general case is found via \(\hat{U}=\sum_{i=1}^{n}e^{-\frac{i\lambda_{i}t}{\hbar}}\left|V_{i}\right\rangle \left\langle V_{i}\right|\). The resulting bare matrix extends beyond several pages. To simplify, the relation \(\Delta=\alpha\Omega_{R}\) (i.e. \(\alpha=\frac{\Delta}{\Omega_{R}}\)) is introduced, for which \(\Omega_{G}=\sqrt{1+\alpha^{2}}\Omega_{R}\). With this, the evolution operator \(\hat{U}\) takes the reduced form of
\[\hat{U}=\begin{bmatrix}a_{11}&a_{12}&a_{13}&a_{14}&a_{15}\\ a_{12}&a_{22}&a_{23}&a_{24}&-a_{14}^{*}\\ a_{13}&a_{23}&a_{33}&-a_{23}^{*}&a_{13}^{*}\\ a_{14}&a_{24}&-a_{23}^{*}&a_{22}^{*}&-a_{12}^{*}\\ a_{15}&-a_{14}^{*}&a_{13}^{*}&-a_{12}^{*}&a_{11}^{*}\end{bmatrix}, \tag{14}\]
where \(a_{ij}^{*}\) denotes the complex conjugate and the elements \(a_{ij}=f(\alpha,\Omega_{R},t)\) take the following form;
Figure 4: Resonant Rabi oscillations in the \(F=2\) system according to Equation 12. **a)** Evolution of populations of the five states with time, **b)** population fraction in state \(\left|+1\right\rangle\) (blue solid line with squares) and in state \(\left|+2\right\rangle\) (red solid line with circles) relative to the combined population of the two states \(\left|+1\right\rangle\) and \(\left|+2\right\rangle\). The black dashed line shows the combined population of states \(\left|+1\right\rangle-\left|+2\right\rangle\).
\[\begin{array}{l}a_{11}=\frac{\left(8\alpha^{2}+4\right)\cos(\Omega_{G}t)-8i \alpha\sqrt{\alpha^{2}+1}\sin(\Omega_{G}t)\big{(}\big{(}2\alpha^{2}+1\big{)} \cos(\Omega_{G}t)+1\big{)}+\big{(}8\big{(}\alpha^{4}+\alpha^{2}\big{)}+1\big{)} \cos(2\Omega_{G}t)+3}{8(\alpha^{2}+1)^{2}}\\ a_{12}=\frac{\left(4\alpha^{2}+3\right)\alpha\cos(2\Omega_{G}t)+i\big{(}-2 \sqrt{\alpha^{2}+1}\sin(\Omega_{G}t)\big{(}-2\alpha^{2}+\big{(}4\alpha^{2}+1 \big{)}\cos(\Omega_{G}t)+1\big{)}+3i\alpha\big{)}-4\alpha^{3}\cos(\Omega_{G}t) }{4(\alpha^{2}+1)^{2}}\\ a_{13}=-\frac{\sqrt{\frac{3}{2}}\sin^{2}\left(\frac{1}{2}\Omega_{G}t\right) \big{(}-2i\alpha\sqrt{\alpha^{2}+1}\sin(\Omega_{G}t)+\big{(}2\alpha^{2}+1 \big{)}\cos(\Omega_{G}t)+1\big{)}}{\left(\alpha^{2}+1\right)^{2}}\\ a_{14}=\frac{2\sin^{3}\left(\frac{1}{2}\Omega_{G}t\right)\left(\alpha\sin\left( \frac{1}{2}\Omega_{G}t\right)+i\sqrt{\alpha^{2}+1}\cos\left(\frac{1}{2} \Omega_{G}t\right)\right)}{\left(\alpha^{2}+1\right)^{2}}\\ a_{15}=\frac{\sin^{4}\left(\frac{1}{2}\Omega_{G}t\right)}{\left(\alpha^{2}+1 \right)^{2}}\\ a_{22}=\frac{\left(\alpha^{2}+2\cos(\Omega_{G}t)-1\right)\left(-2i\alpha\sqrt{ \alpha^{2}+1}\sin(\Omega_{G}t)+\left(2\alpha^{2}+1\right)\cos(\Omega_{G}t)+1 \right)}{2(\alpha^{2}+1)^{2}}\\ a_{23}=-\frac{\sqrt{6}\sin\left(\frac{1}{2}\Omega_{G}t\right)\left(\alpha^{2}+ \cos(\Omega_{G}t)\right)\left(\alpha\sin\left(\frac{1}{2}\Omega_{G}t\right)+i \sqrt{\alpha^{2}+1}\cos\left(\frac{1}{2}\Omega_{G}t\right)\right)}{\left( \alpha^{2}+1\right)^{2}}\\ a_{24}=-\frac{\sin^{2}\left(\frac{1}{2}\Omega_{G}t\right)\left(3\alpha^{2}+2 \cos(\Omega_{G}t)+1\right)}{\left(\alpha^{2}+1\right)^{2}}\\ a_{33}=\frac{\left(1-2\alpha^{2}\right)^{2}+12\alpha^{2}\cos(\Omega_{G}t)+3 \cos(2\Omega_{G}t)}{4(\alpha^{2}+1)^{2}}\end{array} \tag{15}\]
The population at the end of the Rabi pulse can be obtained when this \(\hat{U}\) is applied to the starting state \(\left|+2\right\rangle\), \(F=\begin{bmatrix}0&0&0&0&1\end{bmatrix}^{T}\), giving
\[P=\begin{bmatrix}\left|\Psi_{\left|-2\right\rangle}\right|^{2}\\ \left|\Psi_{\left|-1\right\rangle}\right|^{2}\\ \left|\Psi_{\left|0\right\rangle}\right|^{2}\\ \left|\Psi_{\left|+1\right\rangle}\right|^{2}\\ \left|\Psi_{\left|+2\right\rangle}\right|^{2}\end{bmatrix}=\begin{bmatrix} \frac{\sin^{8}\left(\frac{1}{2}\Omega_{G}t\right)}{\left(\alpha^{2}+1\right) ^{4}}\\ \frac{2\left(2\alpha^{2}+\cos(\Omega_{G}t)+1\right)\sin^{6}\left(\frac{1}{2} \Omega_{G}t\right)}{\left(\alpha^{2}+1\right)^{4}}\\ \frac{3\left(2\alpha^{2}+\cos(\Omega_{G}t)+1\right)^{2}\sin^{4}\left(\frac{1}{ 2}\Omega_{G}t\right)}{2(\alpha^{2}+1)^{4}}\\ \frac{\left(2\alpha^{2}+\cos(\Omega_{G}t)+1\right)^{3}\sin^{2}\left(\frac{1}{ 2}\Omega_{G}t\right)}{2(\alpha^{2}+1)^{4}}\\ \frac{\left(2\alpha^{2}+\cos(\Omega_{G}t)+1\right)^{4}}{16(\alpha^{2}+1)^{4}} \end{bmatrix} \tag{16}\]
where \(\Delta=\alpha\Omega_{R}\) and \(\Omega_{G}=\sqrt{1+\alpha^{2}}\Omega_{R}\).
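Since Equation 16 is central to what follows, a direct numerical cross-check is worthwhile. The sketch below (an illustration, assuming NumPy/SciPy and \(\hbar=1\)) evolves \(\left|+2\right\rangle\) under the Hamiltonian of Equation 13 and compares the resulting populations with the closed forms above.

```python
# Sketch cross-checking Eq. 16: evolve |+2> = [0,0,0,0,1]^T under the
# five-level Hamiltonian of Eq. 13 and compare with the closed forms.
import numpy as np
from scipy.linalg import expm

Delta, Omega_R, t = 0.8, 1.0, 1.7
alpha = Delta / Omega_R
Omega_G = np.sqrt(1 + alpha**2) * Omega_R
s = np.sqrt(3/2) * Omega_R
H = np.array([[2*Delta, Omega_R, 0, 0, 0],
              [Omega_R, Delta, s, 0, 0],
              [0, s, 0, s, 0],
              [0, 0, s, -Delta, Omega_R],
              [0, 0, 0, Omega_R, -2*Delta]], dtype=complex)
psi = expm(-1j * H * t) @ np.array([0, 0, 0, 0, 1], dtype=complex)

g = 2*alpha**2 + np.cos(Omega_G*t) + 1        # recurring factor in Eq. 16
sn = np.sin(Omega_G*t/2)
P_exact = np.array([sn**8, 2*g*sn**6, 1.5*g**2*sn**4,
                    0.5*g**3*sn**2, g**4/16]) / (1 + alpha**2)**4
print(np.allclose(np.abs(psi)**2, P_exact))   # True: Eq. 16 reproduced
```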
When the equal splitting condition is applied where \(|\Psi_{\left|+2\right\rangle}|^{2}=|\Psi_{\left|+1\right\rangle}|^{2}\) and solved for \(t\), the equal splitting occurs at \(t=\frac{4\left(\tan^{-1}\left(\sqrt{\frac{-\alpha^{2}+2\sqrt{5}\sqrt{4-\alpha^{2 }}+9}{\alpha^{2}+1}}\right)+\pi c_{1}\right)}{\Omega_{G}}\) where \(c_{1}\) is an integer. The first equal splitting occurs at \(c_{1}=0\) where \(t_{Split}=\frac{4\tan^{-1}\left(\sqrt{\frac{-\alpha^{2}+2\sqrt{5}\sqrt{4- \alpha^{2}}+9}{\alpha^{2}+1}}\right)}{\Omega_{G}}\). When this condition is applied to \(\hat{U}\) in Equation 14, the splitting matrix takes the form
\[\hat{U}_{G}^{Split}=\begin{bmatrix}A_{11}&A_{12}&A_{13}&A_{14}&A_{15}\\ A_{12}&A_{22}&A_{23}&A_{24}&-A_{14}^{*}\\ A_{13}&A_{23}&A_{33}&-A_{23}^{*}&A_{13}^{*}\\ A_{14}&A_{24}&-A_{23}^{*}&A_{22}^{*}&-A_{12}^{*}\\ A_{15}&-A_{14}^{*}&A_{13}^{*}&-A_{12}^{*}&A_{11}^{*}\end{bmatrix},\hat{U}_{ \Omega_{R}=0}^{Free}=\begin{bmatrix}e^{-2it\Delta}&0&0&0&0\\ 0&e^{-it\Delta}&0&0&0\\ 0&0&1&0&0\\ 0&0&0&e^{it\Delta}&0\\ 0&0&0&0&e^{2it\Delta}\end{bmatrix} \tag{17}\]
where \(\hat{U}_{G}^{Split}\) is obtained by substituting \(t=t_{Split}\) into Equation 14; the elements \(A_{ij}=f(\alpha)\) and the full expressions are as follows;
\[\begin{array}{l}A_{11}=\frac{4\left(2\left(\alpha^{8}-2\alpha^{6}-5\alpha^{4}+2 \right)+5i\alpha\left(\alpha^{2}-2\right)\left(\alpha^{2}+1\right)^{3/2}\sin \left(\Omega_{G}t_{Split}\right)\right)}{25\left(\alpha^{2}+1\right)^{2}}\\ A_{12}=\frac{8}{25}\frac{\alpha\left(\alpha^{2}+1\right)\left(\alpha^{2}-3 \right)+25i\left(-\frac{\left(\alpha^{2}-1\right)\left(\alpha^{2}+1\right)^{5 /2}\left(\alpha^{2}-2\sqrt{5}\sqrt{4-\alpha^{2}}-9\right)\left(\alpha^{2}- \sqrt{5}\sqrt{4-\alpha^{2}}-4\right)\sqrt{\frac{-\alpha^{2}+2\sqrt{5}\sqrt{4- \alpha^{2}}+9}{\alpha^{2}+1}}}{\left(\sqrt{5}\sqrt{4-\alpha^{2}}+5\right)^{4}} \right)}{\left(\alpha^{2}+1\right)^{2}}\\ A_{13}=\frac{1}{25}\sqrt{6}\left(2\alpha^{2}+\frac{5i\alpha\sin\left(\Omega_{G} t_{Split}\right)}{\sqrt{\alpha^{2}+1}}-4\right)\\ A_{14}=\frac{2\sin^{3}\left(\frac{\Omega_{G}^{4}t_{Split}}{2}\right)\left( \alpha\sin\left(\frac{\Omega_{G}t_{Split}}{2}\right)+i\sqrt{\alpha^{2}+1} \cos\left(\frac{\Omega_{G}t_{Split}}{2}\right)\right)}{\left(\alpha^{2}+1 \right)^{2}}\\ A_{15}=\frac{1}{25}\\ A_{22}=\frac{1}{25}\left(-2\alpha^{2}-\frac{5i\alpha\sin\left(\Omega_{G}t_{Split }\right)}{\sqrt{\alpha^{2}+1}}+4\right)\\ A_{23}=\frac{3}{25}\sqrt{\frac{3}{2}}\left(-2\alpha-\frac{5i\sin\left(\Omega_{G} t_{Split}\right)}{\sqrt{\alpha^{2}+1}}\right)\\ A_{24}=-\frac{11}{25}\\ A_{33}=\frac{1}{25}\end{array} \tag{18}\]
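The quoted splitting time can also be sanity-checked numerically: equal \(\left|+2\right\rangle/\left|+1\right\rangle\) populations in the binomial structure of Equation 12 require \(|C_{1}|^{2}=\frac{1}{5}\), and the sketch below (illustrative, assuming NumPy) confirms that \(t_{Split}\) satisfies this for several values of \(\alpha\).

```python
# Sketch checking t_Split: equal |+2> and |+1> populations require
# (1-p)^4 = 4p(1-p)^3, i.e. |C1|^2 = p = 1/5, which needs |alpha| <= 2.
import numpy as np

Omega_R = 1.0
for alpha in [0.0, 0.5, 1.0, 1.9]:
    Omega_G = np.sqrt(1 + alpha**2) * Omega_R
    X = (-alpha**2 + 2*np.sqrt(5)*np.sqrt(4 - alpha**2) + 9) / (alpha**2 + 1)
    t_split = 4 * np.arctan(np.sqrt(X)) / Omega_G
    p = np.sin(Omega_G * t_split / 2)**2 / (1 + alpha**2)
    assert np.isclose(p, 1/5)
print("|C1|^2 = 1/5 at t_Split for all tested alpha")
```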
One can easily obtain the expression for the system wavefunction at the end of the Ramsey sequence by applying \(\hat{U}_{G}^{Split},\hat{U}_{\Omega_{R}=0}^{Free}\) to Equation 8, which can then be converted to populations and the Ramsey signal via \(\langle\hat{F}_{Z}\rangle=\hbar\sum_{m_{F}}m_{F}P_{m_{F}}\). However, these expressions are omitted here due to their immense length. They reduce to a much more convenient form when a specific relation between \(\Delta\) and \(\Omega_{R}\) via \(\Delta=\alpha\Omega_{R}\) is applied. Preliminarily, the two conditions \(\Delta=0\) and \(\Delta=2\Omega_{R}\) will be considered. In the first case, \(\Delta=0\rightarrow\Omega_{G}=\Omega_{R}\), which is substituted into \(\hat{U}\) in Equation 14 to obtain
\[\hat{U}_{\Delta=0}=\begin{bmatrix}b_{11}&b_{12}&b_{13}&b_{14}&b_{15}\\ b_{12}&b_{22}&b_{23}&b_{24}&-b_{14}^{*}\\ b_{13}&b_{23}&b_{33}&-b_{23}^{*}&b_{13}^{*}\\ b_{14}&b_{24}&-b_{23}^{*}&b_{22}^{*}&-b_{12}^{*}\\ b_{15}&-b_{14}^{*}&b_{13}^{*}&-b_{12}^{*}&b_{11}^{*}\end{bmatrix} \tag{19}\]
where \(b_{ij}=f(\Omega_{R},t)\) and the full expressions are
\[\begin{array}{ll}b_{11}=\cos^{4}\left(\frac{\Omega_{R}t}{2}\right)&b_{22}= \frac{1}{2}\left(\cos\left(\Omega_{R}t\right)+\cos\left(2\Omega_{R}t\right)\right) \\ b_{12}=-\frac{1}{4}i\left(2\sin\left(\Omega_{R}t\right)+\sin\left(2\Omega_{R }t\right)\right)&b_{23}=-\frac{1}{2}i\sqrt{\frac{3}{2}}\sin\left(2\Omega_{R}t \right)\\ b_{13}=-\frac{1}{2}\sqrt{\frac{3}{2}}\sin^{2}\left(\Omega_{R}t\right)&b_{24}= \frac{1}{2}\left(\cos\left(2\Omega_{R}t\right)-\cos\left(\Omega_{R}t\right)\right) \\ b_{14}=i\sin^{2}\left(\frac{\Omega_{R}t}{2}\right)\sin\left(\Omega_{R}t\right)&b_{ 33}=\frac{1}{4}\left(3\cos\left(2\Omega_{R}t\right)+1\right)\\ b_{15}=\sin^{4}\left(\frac{\Omega_{R}t}{2}\right)\end{array} \tag{20}\]
The population \(P_{\Delta=0}\) at the end of the Rabi pulse for a specific starting state can be obtained by applying \(\hat{U}_{\Delta=0}\) to the starting state. For the case of starting from the
top state \(|{+2}\rangle\)\(F=\begin{bmatrix}0&0&0&0&1\end{bmatrix}^{T}\), the population at the end of the Rabi pulse is
\[P_{\Delta=0}=\begin{bmatrix}\left|\Psi_{|-2\rangle}\right|^{2}\\ \left|\Psi_{|-1\rangle}\right|^{2}\\ \left|\Psi_{|0\rangle}\right|^{2}\\ \left|\Psi_{|+1\rangle}\right|^{2}\\ \left|\Psi_{|+2\rangle}\right|^{2}\end{bmatrix}=\begin{bmatrix}\sin^{8}\left( \frac{\Omega_{R}t}{2}\right)\\ \sin^{4}\left(\frac{\Omega_{R}t}{2}\right)\sin^{2}\left(\Omega_{R}t\right)\\ \frac{3}{8}\sin^{4}\left(\Omega_{R}t\right)\\ \frac{1}{16}\left(2\sin\left(\Omega_{R}t\right)+\sin\left(2\Omega_{R}t\right) \right){}^{2}\\ \cos^{8}\left(\frac{\Omega_{R}t}{2}\right)\end{bmatrix} \tag{21}\]
These analytical formulae for the population after the Rabi pulse are compared with the expressions in Equation 12 based on the two-level formalism, as shown in Figure 5. The figure shows an exact overlap, validating the expressions derived in Equation 21.
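The overlap in Figure 5 can be reproduced in a few lines; the sketch below (illustrative, assuming NumPy) evaluates both sets of expressions over a full Rabi cycle with \(\Omega_{R}=1\).

```python
# Sketch comparing the resonant closed forms of Eq. 21 with the binomial
# expressions of Eq. 12, where |C1|^2 = sin^2(Omega_R*t/2) at Delta = 0.
import numpy as np
from math import comb

t = np.linspace(0, 2*np.pi, 400)              # Omega_R = 1
h = t / 2
P21 = [np.sin(h)**8,                          # ordered [-2,-1,0,+1,+2]
       np.sin(h)**4 * np.sin(t)**2,
       (3/8) * np.sin(t)**4,
       (2*np.sin(t) + np.sin(2*t))**2 / 16,
       np.cos(h)**8]
p = np.sin(h)**2
P12 = [comb(4, k) * p**(4 - k) * (1 - p)**k for k in range(5)]
print(all(np.allclose(a, b) for a, b in zip(P21, P12)))   # True
```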
As shown above, during resonant Rabi interactions the atoms starting in state \(|{+2}\rangle\) leave the state and occupy state \(|{+1}\rangle\). As the EM pulse continues, this effect pushes atoms towards state \(|{-2}\rangle\) via states \(|{0}\rangle\) and \(|{-1}\rangle\). Due to resonance, all atoms occupy \(|{-2}\rangle\) and the system undergoes full inversion after which the atoms recover back to the initial state \(|{+2}\rangle\) and continue to produce Rabi oscillations.
Moving on, the speciality of \(\Delta=2\Omega_{R}\) is that the two states of interest, \(|+1\rangle\) and \(|+2\rangle\), do not cross when \(\Delta\) increases beyond this point, eliminating the possibility of creating a superposition with equal splitting. When \(\Delta=2\Omega_{R}\rightarrow\Omega_{G}=\sqrt{5}\Omega_{R}\) is substituted into \(\hat{U}\) in Equation 14, we obtain \(\hat{U}_{\Delta=2\Omega_{R}}\),
\[\hat{U}=\begin{bmatrix}c_{11}&c_{12}&c_{13}&c_{14}&c_{15}\\ c_{12}&c_{22}&c_{23}&c_{24}&-c_{14}^{*}\\ c_{13}&c_{23}&c_{33}&-c_{23}^{*}&c_{13}^{*}\\ c_{14}&c_{24}&-c_{23}^{*}&c_{22}^{*}&-c_{12}^{*}\\ c_{15}&-c_{14}^{*}&c_{13}^{*}&-c_{12}^{*}&c_{11}^{*}\end{bmatrix}, \tag{22}\]
Figure 5: Resonant (\(\Delta=0\)) Rabi oscillations of populations in the five-level system using the analytical form of Equation 21 (solid lines) and of the \(C_{1},C_{2}\) representation of Equation 12 (open circles). Colours for states follow red - \(|{+2}\rangle\), blue - \(|{+1}\rangle\), green - \(|{0}\rangle\), magenta - \(|{-1}\rangle\) and black - \(|{-2}\rangle\).
where \(c_{ij}=f(\Omega_{R},t)\) and the full expressions are
\[\begin{array}{l}c_{11}=\frac{1}{200}\left(-16i\sqrt{5}\sin\left(\sqrt{5}\Omega_ {R}t\right)-72i\sqrt{5}\sin\left(2\sqrt{5}\Omega_{R}t\right)+36\cos\left(\sqrt{5 }\Omega_{R}t\right)+161\cos\left(2\sqrt{5}\Omega_{R}t\right)+3\right)\\ c_{12}=\frac{1}{100}\left(-17i\sqrt{5}\sin\left(2\sqrt{5}\Omega_{R}t\right)+14i \sqrt{5}\sin\left(\sqrt{5}\Omega_{R}t\right)-32\cos\left(\sqrt{5}\Omega_{R}t \right)+38\cos\left(2\sqrt{5}\Omega_{R}t\right)-6\right)\\ c_{13}=\frac{1}{25}\sqrt{\frac{3}{2}}\sin^{2}\left(\frac{1}{2}\sqrt{5}\Omega_ {R}t\right)\left(4i\sqrt{5}\sin\left(\sqrt{5}\Omega_{R}t\right)-9\cos\left( \sqrt{5}\Omega_{R}t\right)-1\right)\\ c_{14}=\frac{2}{25}\sin^{3}\left(\frac{1}{2}\sqrt{5}\Omega_{R}t\right)\left(2 \sin\left(\frac{1}{2}\sqrt{5}\Omega_{R}t\right)+i\sqrt{5}\cos\left(\frac{1}{2} \sqrt{5}\Omega_{R}t\right)\right)\\ c_{15}=\frac{1}{25}\sin^{4}\left(\frac{1}{2}\sqrt{5}\Omega_{R}t\right)\\ c_{22}=\frac{1}{50}\left(2\cos\left(\sqrt{5}\Omega_{R}t\right)+3\right)\left(- 4i\sqrt{5}\sin\left(\sqrt{5}\Omega_{R}t\right)+9\cos\left(\sqrt{5}\Omega_{R}t \right)+1\right)\\ c_{23}=\frac{1}{50}\sqrt{\frac{3}{2}}\left(-8i\sqrt{5}\sin\left(\sqrt{5}\Omega_ {R}t\right)-i\sqrt{5}\sin\left(2\sqrt{5}\Omega_{R}t\right)+12\cos\left(\sqrt{5 }\Omega_{R}t\right)+2\cos\left(2\sqrt{5}\Omega_{R}t\right)-14\right)\\ c_{24}=\frac{1}{50}\left(11\cos\left(\sqrt{5}\Omega_{R}t\right)+\cos\left(2 \sqrt{5}\Omega_{R}t\right)-12\right)\\ c_{33}=\frac{1}{100}\left(48\cos\left(\sqrt{5}\Omega_{R}t\right)+3\cos\left(2 \sqrt{5}\Omega_{R}t\right)+49\right)\end{array} \tag{23}\]
The population \(P_{\Delta=2\Omega_{R}}\) at the end of the Rabi pulse for a specific starting state can be obtained by applying \(\hat{U}_{\Delta=2\Omega_{R}}\) to the starting state. For the case of starting from the top state when \(F=\left[\begin{matrix}0&0&0&0&1\end{matrix}\right]^{T}\), the population at the end of the Rabi pulse is
\[P_{\Delta=2\Omega_{R}}=\begin{bmatrix}\left|\Psi_{|-2}\right|^{2}\\ \left|\Psi_{|-1}\right|^{2}\\ \left|\Psi_{|0}\right|^{2}\\ \left|\Psi_{|+1}\right|^{2}\\ \left|\Psi_{|+2}\right|^{2}\end{bmatrix}=\begin{bmatrix}\frac{1}{625}\sin^{8} \left(\frac{1}{2}\sqrt{5}\Omega_{R}t\right)\\ \frac{2}{625}\sin^{6}\left(\frac{1}{2}\sqrt{5}\Omega_{R}t\right)\left(\cos \left(\sqrt{5}\Omega_{R}t\right)+9\right)\\ \frac{3\sin^{4}\left(\frac{1}{2}\sqrt{5}\Omega_{R}t\right)\left(\cos\left( \sqrt{5}\Omega_{R}t\right)+9\right)^{2}}{\frac{1250}{\frac{\sin^{2}\left(\frac{1 }{2}\sqrt{5}\Omega_{R}t\right)\left(\cos\left(\sqrt{5}\Omega_{R}t\right)+9 \right)^{3}}{\frac{1250}{\frac{\left(\cos\left(\sqrt{5}\Omega_{R}t\right)+9 \right)^{3}}{\frac{1250}{\frac{\left(\cos\left(\sqrt{5}\Omega_{R}t\right)+9 \right)^{4}}{10000}}}}}}}}}\end{bmatrix} \tag{24}\]
Rabi oscillations of the populations show non-resonant behaviour: the initially populated state \(|+2\rangle\) exhibits variations in the range 1 to 0.41 and state \(|+1\rangle\) in the range 0 to 0.41, as shown in Figure 6. All other states show significantly smaller variations.
Figure 6: Non-resonant (\(\Delta=2\Omega_{R}\)) Rabi oscillations of populations in the five-level system using the analytical form of Equation 24 (solid lines) and of the \(C_{1},C_{2}\) representation of Equation 12 (open circles). Colours for states follow red - \(|+2\rangle\), blue - \(|+1\rangle\), green - \(|0\rangle\), magenta - \(|-1\rangle\) and black - \(|-2\rangle\).
### Ramsey signal for the two interesting cases
Firstly, for the resonant case where \(\Delta=0\), we are interested in the equal splitting of states \(|+2\rangle\) and \(|+1\rangle\); applying the equal-population condition between these states yields an expression for the pulse duration. When \(|\Psi_{|+2\rangle}|^{2}=|\Psi_{|+1\rangle}|^{2}\) is applied to Equation 21, \(-\frac{1}{4}\sin^{2}\left(\Omega_{R}t\right)-\frac{1}{16}\sin^{2}\left(2 \Omega_{R}t\right)-\frac{1}{4}\sin\left(\Omega_{R}t\right)\sin\left(2\Omega_{ R}t\right)+\cos^{8}\left(\frac{\Omega_{R}t}{2}\right)=0\) is obtained, where the acceptable terms for \(t\) take the form \(t=\frac{4\left(\pi c_{1}+\tan^{-1}\left(2+\sqrt{5}\right)\right)}{\Omega_{R}}\) or \(t=-\frac{4\left(\pi c_{1}+\tan^{-1}\left(2-\sqrt{5}\right)\right)}{\Omega_{R}}\), where \(c_{1}\) is an integer. The first equal splitting occurs at \(c_{1}=0\), where \(t=-\frac{4\tan^{-1}\left(2-\sqrt{5}\right)}{\Omega_{R}}\). When this condition is applied to \(\hat{U}_{\Delta=0}\) in Equation 19, the desired time evolution operator \(\hat{U}_{\Delta=0}^{Split}\) for equal splitting is obtained.
\[\hat{U}_{\Delta=0}^{Split} = \left[\begin{array}{cccc}\frac{16}{25}&\frac{16i}{25}&-\frac{4 \sqrt{6}}{25}&-\frac{4i}{25}&\frac{1}{25}\\ \frac{16i}{25}&\frac{4}{25}&\frac{6i\sqrt{6}}{25}&-\frac{11}{25}&-\frac{4i}{25} \\ -\frac{4\sqrt{6}}{25}&\frac{6i\sqrt{6}}{25}&\frac{1}{25}&\frac{6i\sqrt{6}}{25} &-\frac{4\sqrt{6}}{25}\\ -\frac{4i}{25}&-\frac{11}{25}&\frac{6i\sqrt{6}}{25}&\frac{4}{25}&\frac{16i}{25} \\ \frac{1}{25}&-\frac{4i}{25}&-\frac{4\sqrt{6}}{25}&\frac{16i}{25}&\frac{16}{25} \end{array}\right],\hat{U}_{\Delta=0}^{Free}=\begin{bmatrix}1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&1&0\\ 0&0&0&0&1\end{bmatrix} \tag{25}\]
where \(\hat{U}_{\Delta=0}^{Split}\) is obtained by substituting \(\Delta=0,\Omega_{G}=\Omega_{R}\) and \(t=-\frac{4\tan^{-1}\left(2-\sqrt{5}\right)}{\Omega_{R}}\) into Equation 14. Following on from Equation 7, the time evolution operator during free evolution \(\hat{U}_{\Delta=0}^{Free}\) is the identity matrix, as shown in Equation 25. The system wavefunction at the end of the Ramsey sequence can be obtained by applying Equation 8
\[|\Psi_{sys}(t)\rangle = \left[\begin{array}{c}\frac{256}{625}\\ -\frac{384i}{625}\\ -\frac{144\sqrt{6}}{625}\\ \frac{216i}{625}\\ \frac{81}{625}\end{array}\right],\qquad P_{Rsy}=\begin{bmatrix}\frac{65536}{390625}\\ \frac{147456}{390625}\\ \frac{124416}{390625}\\ \frac{46656}{390625}\\ \frac{6561}{390625}\end{bmatrix}, \tag{26}\]
where \(|\Psi_{sys}(t)\rangle\) and \(P_{Rsy}\) are respectively the state vector and the populations for the five-level system at the end of the Ramsey sequence.
This is easily converted to populations, taking the form \(P_{Rsy}\) in Equation 26, which shows constant values for the populations of each state at \(P_{Rsy}=\left[\frac{65536}{390625}\ \frac{147456}{390625}\ \frac{124416}{390625}\ \frac{46656}{390625}\ \frac{6561}{390625}\right]^{T}\). Further, an overall expression for the Ramsey signal can be obtained when the average spin projection for a multilevel system, \(\langle\hat{F}_{Z}\rangle=\hbar\sum_{m_{F}}m_{F}P_{m_{F}}\), is applied, where \(P_{m_{F}}\) is the fractional population of the relevant state [10]. This leads to a constant value for the Ramsey signal of \(\frac{\langle\hat{F}_{Z}\rangle}{\hbar}=\frac{-14}{25}\), which is the expected behaviour of the system at \(\Delta=0\).
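This resonant five-level result can be reproduced directly from Equation 25; the sketch below (illustrative, assuming NumPy) applies the splitting matrix twice to \(\left|+2\right\rangle\) and recovers both \(P_{Rsy}\) and the constant signal \(-\frac{14}{25}\).

```python
# Sketch of the resonant five-level Ramsey sequence (Eqs. 25-26): two
# splitting pulses on |+2>; populations are ordered [-2,-1,0,+1,+2].
import numpy as np

r6 = np.sqrt(6)
U = np.array([[16,    16j, -4*r6,   -4j,     1],
              [16j,     4, 6j*r6,   -11,   -4j],
              [-4*r6, 6j*r6,   1, 6j*r6, -4*r6],
              [-4j,   -11, 6j*r6,     4,   16j],
              [1,     -4j, -4*r6,   16j,    16]]) / 25
psi = U @ U @ np.array([0, 0, 0, 0, 1], dtype=complex)
P = np.abs(psi)**2
print(np.allclose(P * 390625, [65536, 147456, 124416, 46656, 6561]))
print(np.isclose(P @ np.array([-2, -1, 0, 1, 2]), -14/25))   # signal
```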
For the second scenario, the detuning \(\Delta=2\Omega_{R}\) is chosen; applying the equal population condition \(|\Psi_{|+2\rangle}|^{2}=|\Psi_{|+1\rangle}|^{2}\) to the expressions in Equation 24 leads to \(\frac{\left(\cos\left(\sqrt{5}\Omega_{R}t\right)+9\right)^{4}}{10000}-\frac{ \sin^{2}\left(\frac{1}{2}\sqrt{5}\Omega_{R}t\right)\left(\cos\left(\sqrt{5} \Omega_{R}t\right)+9\right)^{3}}{1250}=0\). Here, the acceptable term for
\(t\) takes the form \(t=\frac{4\pi c_{1}+\pi}{\sqrt{5}\Omega_{R}}\), where \(c_{1}\) is an integer. The first equal splitting occurs at \(c_{1}=0\) where \(t=\frac{\pi}{\sqrt{5}\Omega_{R}}\). When this condition is applied to \(\hat{U}_{\Delta=2\Omega_{R}}\) in Equation 22 the desired time evolution operator \(\hat{U}_{\Delta=2\Omega_{R}}^{Split}\) for equal splitting is obtained.
\[\hat{U}_{\Delta=2\Omega_{R}}^{Split}=\left[\begin{array}{ccccc}\frac{16}{25} &\frac{16}{25}&\frac{4\sqrt{6}}{25}&\frac{4}{25}&\frac{1}{25}\\ \frac{16}{25}&-\frac{4}{25}&-\frac{6\sqrt{6}}{25}&-\frac{11}{25}&-\frac{4}{25} \\ \frac{4\sqrt{6}}{25}&-\frac{6\sqrt{6}}{25}&\frac{1}{25}&\frac{6\sqrt{6}}{25}& \frac{4\sqrt{6}}{25}\\ \frac{4}{25}&-\frac{11}{25}&\frac{6\sqrt{6}}{25}&-\frac{4}{25}&-\frac{16}{25} \\ \frac{1}{25}&-\frac{4}{25}&\frac{4\sqrt{6}}{25}&-\frac{16}{25}&\frac{16}{25} \end{array}\right],\hat{U}_{R=0}^{Free}=\left[\begin{array}{ccccc}e^{-2it \Delta}&0&0&0&0\\ 0&e^{-it\Delta}&0&0&0\\ 0&0&1&0&0\\ 0&0&0&e^{it\Delta}&0\\ 0&0&0&0&e^{2it\Delta}\end{array}\right], \tag{27}\]
where \(\hat{U}_{\Delta=2\Omega_{R}}^{Split}\) is obtained by substituting \(\Delta=2\Omega_{R},\Omega_{G}=\sqrt{5}\Omega_{R}\) and \(t=\frac{\pi}{\sqrt{5}\Omega_{R}}\) into Equation 14. Following on from Equation 7, the time evolution operator during free evolution \(\hat{U}_{\Omega_{R}=0}^{Free}\) takes the diagonal form shown in Equation 27.
The equation for the system wavefunction at the end of the Ramsey sequence can be obtained by applying Equation 8
\[|\Psi_{sys}(t)\rangle=\begin{bmatrix}\frac{256}{625}\sin^{4}\left(\frac{\Delta t}{2}\right)\\ \frac{128}{625}\sin^{3}\left(\frac{\Delta t}{2}\right)\left(3\sin\left(\frac{\Delta t}{2}\right)+5i\cos\left(\frac{\Delta t}{2}\right)\right)\\ \frac{4\sqrt{6}}{625}\left(5i\sin(\Delta t)-3\cos(\Delta t)+3\right)^{2}\\ \frac{4e^{-2i\Delta t}}{625}\left(-1+e^{i\Delta t}\right)\left(4+e^{i\Delta t}\right)^{3}\\ \frac{e^{-2i\Delta t}}{625}\left(4+e^{i\Delta t}\right)^{4}\end{bmatrix},\qquad P_{Rsy}=\begin{bmatrix}\frac{65536}{390625}\sin^{8}\left(\frac{\Delta t}{2}\right)\\ \frac{16384}{390625}\sin^{6}\left(\frac{\Delta t}{2}\right)\left(8\cos(\Delta t)+17\right)\\ \frac{1536}{390625}\sin^{4}\left(\frac{\Delta t}{2}\right)\left(8\cos(\Delta t)+17\right)^{2}\\ \frac{64}{390625}\sin^{2}\left(\frac{\Delta t}{2}\right)\left(8\cos(\Delta t)+17\right)^{3}\\ \frac{1}{390625}\left(8\cos(\Delta t)+17\right)^{4}\end{bmatrix} \tag{28}\]
where \(|\Psi_{sys}(t)\rangle\) and \(P_{Rsy}\) are respectively the state vector and the populations for the five-level system at the end of the Ramsey sequence.
This can be converted to the populations in Equation 28. Further, the Ramsey signal can be obtained when the average spin projection for a multilevel system, \(\langle\hat{F}_{Z}\rangle=\hbar\sum_{m_{F}}m_{F}P_{m_{F}}\), is applied [10]. Based on \(\langle\hat{F}_{Z}\rangle\), the Ramsey signal takes the form \(\frac{\langle\hat{F}_{Z}\rangle}{\hbar}=\frac{2}{25}(9+16\cos(\Delta t))\). The importance of the \(\Delta=2\Omega_{R}\) scenario is that it provides greater stability of the Rabi splitting after the pulse duration \(t=\frac{\pi}{\sqrt{5}\Omega_{R}}\) under experimental phase and pulse uncertainties. This is visible in Figure 6, which shows minimal population variations of states \(|+1\rangle\) and \(|+2\rangle\) in the vicinity of equal splitting due to the flatness of the curves. Figure 7 shows the population variation and the Ramsey signal for \(\Delta=2\Omega_{R}\). A decreased interference fringe contrast is noticeable compared to a near-resonant case, but we anticipate that the measured interference signal will be more stable in the presence of magnetic and frequency noise.
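Again, the closed-form signal is easy to confirm numerically from Equation 27; a minimal sketch (assuming NumPy, with \(\Delta\) an arbitrary test value) is:

```python
# Sketch of the Delta = 2*Omega_R five-level Ramsey sequence (Eq. 27):
# the signal should reduce to (2/25)(9 + 16*cos(Delta*t)).
import numpy as np

r6 = np.sqrt(6)
U_split = np.array([[16,    16,  4*r6,    4,     1],
                    [16,    -4, -6*r6,  -11,    -4],
                    [4*r6, -6*r6,   1,  6*r6, 4*r6],
                    [4,    -11,  6*r6,   -4,   -16],
                    [1,     -4,  4*r6,  -16,    16]]) / 25
m = np.array([-2, -1, 0, 1, 2])
Delta = 1.0
for t in np.linspace(0.0, 6.0, 13):
    U_free = np.diag(np.exp(-1j * np.array([2, 1, 0, -1, -2]) * Delta * t))
    psi = U_split @ U_free @ U_split @ np.array([0, 0, 0, 0, 1], dtype=complex)
    assert np.isclose(np.abs(psi)**2 @ m, (2/25) * (9 + 16*np.cos(Delta*t)))
print("signal matches (2/25)(9 + 16 cos(Delta t))")
```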
### Generalised Ramsey signal for the five-level system
As shown under Equation 14, the general form of the unitary time evolution operator \(\hat{U}\) can be derived using the important relation \(\Delta=\alpha\Omega_{R}\); the population at the end of the first splitting pulse is shown in Equation 16. We now expand the expression for \(\hat{U}\) (Equation 14) by generalising the variations in the pulse duration. This is important as it allows us to account for the experimental uncertainty in the pulse duration, which leads to variations in the population splitting at both EM pulses of the Ramsey sequence.
To do so, the pulse duration can be defined as \(t=\frac{\beta}{\Omega_{R}}\), where \(\beta\) is a phase relating to the pulse area defining the splitting of the system. This results in the following unitary time evolution operator
\[\hat{U}_{\alpha\beta}^{GSplit}=\begin{bmatrix}B_{11}&B_{12}&B_{13}&B_{14}&B_{15} \\ B_{12}&B_{22}&B_{23}&B_{24}&-B_{14}^{*}\\ B_{13}&B_{23}&B_{33}&-B_{23}^{*}&B_{13}^{*}\\ B_{14}&B_{24}&-B_{23}^{*}&B_{22}^{*}&-B_{12}^{*}\\ B_{15}&-B_{14}^{*}&B_{13}^{*}&-B_{12}^{*}&B_{11}^{*}\end{bmatrix}, \tag{29}\]
where \(\hat{U}_{\alpha\beta}^{GSplit}\) is achieved by substituting \(t=\frac{\beta}{\Omega_{R}}\) into Equation 14, where the full expressions are
\[\begin{array}{l}B_{11}=\frac{\left(8\alpha^{2}+4\right)\cos\left(\sqrt{ \alpha^{2}+1}\beta\right)-8i\alpha\sqrt{\alpha^{2}+1}\sin\left(\sqrt{\alpha^{2 }+1}\beta\right)\left(\left(2\alpha^{2}+1\right)\cos\left(\sqrt{\alpha^{2}+1} \beta\right)+1\right)+\left(8\left(\alpha^{4}+\alpha^{2}\right)+1\right)\cos \left(2\sqrt{\alpha^{2}+1}\beta\right)+3}{8\left(\alpha^{2}+1\right)^{2}}\\ B_{12}=\frac{\left(4\alpha^{2}+3\right)\alpha\cos\left(2\sqrt{\alpha^{2}+1} \beta\right)+i\left(-2\sqrt{\alpha^{2}+1}\sin\left(\sqrt{\alpha^{2}+1}\beta \right)\left(\left(4\alpha^{2}+1\right)\cos\left(\sqrt{\alpha^{2}+1}\beta \right)-2\alpha^{2}+1\right)+3i\alpha\right)-4\alpha^{3}\cos\left(\sqrt{ \alpha^{2}+1}\beta\right)}{4\left(\alpha^{2}+1\right)^{2}}\\ B_{13}=-\frac{\sqrt{\frac{3}{2}}\sin^{2}\left(\frac{1}{2}\sqrt{\alpha^{2}+1} \beta\right)\left(-2i\alpha\sqrt{\alpha^{2}+1}\sin\left(\sqrt{\alpha^{2}+1} \beta\right)+\left(2\alpha^{2}+1\right)\cos\left(\sqrt{\alpha^{2}+1}\beta \right)+1\right)}{\left(\alpha^{2}+1\right)^{2}}\\ B_{14}=\frac{2\sin^{3}\left(\frac{1}{2}\sqrt{\alpha^{2}+1}\beta\right)\left( \alpha\sin\left(\frac{1}{2}\sqrt{\alpha^{2}+1}\beta\right)+i\sqrt{\alpha^{2}+1 }\cos\left(\frac{1}{2}\sqrt{\alpha^{2}+1}\beta\right)\right)}{\left(\alpha^{2 }+1\right)^{2}}\\ B_{15}=\frac{\sin^{4}\left(\frac{1}{2}\sqrt{\alpha^{2}+1}\beta\right)}{\left( \alpha^{2}+1\right)^{2}}\\ B_{22}=\frac{\left(2\cos\left(\sqrt{\alpha^{2}+1}\beta\right)+\alpha^{2}-1 \right)\left(-2i\alpha\sqrt{\alpha^{2}+1}\sin\left(\sqrt{\alpha^{2}+1}\beta \right)+\left(2\alpha^{2}+1\right)\cos\left(\sqrt{\alpha^{2}+1}\beta\right)+1 \right)}{2\left(\alpha^{2}+1\right)^{2}}\\ B_{23}=-\frac{\sqrt{6}\sin\left(\frac{1}{2}\sqrt{\alpha^{2}+1}\beta\right)\left( \cos\left(\sqrt{\alpha^{2}+1}\beta\right)+\alpha^{2}\right)\left(\alpha\sin \left(\frac{1}{2}\sqrt{\alpha^{2}+1}\beta\right)+i\sqrt{\alpha^{2}+1}\cos \left(\frac{1}{2}\sqrt{\alpha^{2}+1}\beta\right)\right)}{\left(\alpha^{2}+1 \right)^{2}}\\ B_{24}=-\frac{\sin^{2}\left(\frac{1}{2}\sqrt{\alpha^{2}+1}\beta\right)\left(2 \cos\left(\sqrt{\alpha^{2}+1}\beta\right)+3\alpha^{2}+1\right)}{\left(\alpha^ {2}+1\right)^{2}}\\ B_{33}=\frac{12\alpha^{2}\cos\left(\sqrt{\alpha^{2}+1}\beta\right)+3\cos\left(2 \sqrt{\alpha^{2}+1}\beta\right)+\left(1-2\alpha^{2}\right)^{2}}{4\left(\alpha^ {2}+1\right)^{2}}\end{array} \tag{30}\]
Figure 7: Population variations and the interference signal \(\frac{\langle\hat{F}_{Z}\rangle}{\hbar}\) after the Ramsey sequence for the case of \(\Delta=2\Omega_{R}\) in the five-level system. **a)** The variation of the population of each state, where the red solid line denotes \(\left|+2\right\rangle\) and the blue solid line denotes \(\left|+1\right\rangle\). **b)** The variation of the Ramsey signal based on \(\frac{\langle\hat{F}_{Z}\rangle}{\hbar}=\sum_{m_{F}}m_{F}P_{m_{F}}\), where \(P_{m_{F}}\) is the population fraction of state \(\left|m_{F}\right\rangle\).
The wavefunction at the end of the Ramsey sequence can be obtained when \(\hat{U}^{GSplit}_{\alpha\beta}\) is applied to Equation 8; this can be converted to the populations, which are omitted here due to the immense length of the expressions. However, an expression for the Ramsey signal can be obtained when these populations are subjected to \(\frac{\langle\hat{F}_{Z}\rangle}{\hbar}=\sum_{m_{F}}m_{F}P_{m_{F}}\), which results in a complete analytical expression for the average spin at the end of Ramsey interference in the five-level model:
\[\frac{\langle\hat{F}_{Z}\rangle}{\hbar} = \frac{4\sqrt{\alpha^{2}+1}\alpha^{2}\cos\left(\sqrt{\alpha^{2}+1} \beta\right)+\sqrt{\alpha^{2}+1}\cos\left(2\sqrt{\alpha^{2}+1}\beta\right)+ \left(2\alpha^{4}+1\right)\sqrt{\alpha^{2}+1}}{\left(\alpha^{2}+1\right)^{5/2}} \tag{31}\] \[- \frac{4\sin^{2}\left(\frac{1}{2}\sqrt{\alpha^{2}+1}\beta\right) \left(\sqrt{\alpha^{2}+1}\left(\left(2\alpha^{2}+1\right)\cos\left(\sqrt{ \alpha^{2}+1}\beta\right)+1\right)\right)}{\left(\alpha^{2}+1\right)^{5/2}} \cos(\Delta\mathrm{t})\] \[+ \frac{4\sin^{2}\left(\frac{1}{2}\sqrt{\alpha^{2}+1}\beta\right) \left(2\left(\alpha^{3}+\alpha\right)\sin\left(\sqrt{\alpha^{2}+1}\beta\right) \right)}{\left(\alpha^{2}+1\right)^{5/2}}\sin(\Delta\mathrm{t}),\]
where \(\alpha=\frac{\Delta}{\Omega_{R}}\) and \(\beta=\Omega_{R}t\).
By using the trigonometric conversion of \(a\cos(\theta)+b\sin(\theta)=A\cos(\theta-\phi)\), Equation 31 can be further simplified to the rather elegant form of:
\[\frac{\langle\hat{F}_{Z}\rangle}{\hbar} = \frac{4\sqrt{\alpha^{2}+1}\alpha^{2}\cos\left(\sqrt{\alpha^{2}+1} \beta\right)+\sqrt{\alpha^{2}+1}\cos\left(2\sqrt{\alpha^{2}+1}\beta\right)+ \left(2\alpha^{4}+1\right)\sqrt{\alpha^{2}+1}}{\left(\alpha^{2}+1\right)^{5/2}} \tag{32}\] \[- \frac{4\sin^{2}\left(\frac{1}{2}\sqrt{\alpha^{2}+1}\beta\right) \left(\sqrt{\left(\alpha^{2}+1\right)}\left(\cos\left(\sqrt{\alpha^{2}+1} \beta\right)+2\alpha^{2}+1\right)\right)}{\left(\alpha^{2}+1\right)^{5/2}}\cos (\Delta\mathrm{t}-\phi).\]
where \(\tan(\phi)=\frac{-2\left(\alpha^{3}+\alpha\right)\sin\left(\sqrt{\alpha^{2}+1 }\beta\right)}{\sqrt{\alpha^{2}+1}\left(\left(2\alpha^{2}+1\right)\cos\left( \sqrt{\alpha^{2}+1}\beta\right)+1\right)}\).
The prominent feature is that Equation 32 reduces to \(\frac{\langle\hat{F}_{Z}\rangle}{\hbar}=A-B\cos(\Delta t-\phi)\) when values for \(\alpha\) and \(\beta\) are substituted. Further, the unique relation of \(A+B=2\) is also reported. As a special note, \(\beta\) scans only within the first half of one Rabi cycle in the above analytical description. However, we scan the Rabi signal beyond one cycle in experiments; therefore, \(t=\frac{\left(2\pi+\beta\right)}{\Omega_{R}}\) or \(t=\frac{\left(4\pi-\beta\right)}{\Omega_{R}}\) should be used when converting a fitted \(\beta\) to experimental results.
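These properties can be verified end-to-end without the closed forms: the sketch below (illustrative, assuming NumPy/SciPy and \(\hbar=1\)) builds \(\hat{U}_{\alpha\beta}^{GSplit}\) as a matrix exponential, runs the Ramsey sequence, fits the signal to \(A-B\cos(\Delta t-\phi)\) by linear least squares, and checks \(A+B=2\).

```python
# Sketch checking the generalised Ramsey signal: run split-free-split
# numerically, fit A - B*cos(theta - phi) over theta = Delta*t, and
# verify the relation A + B = 2 quoted above.
import numpy as np
from scipy.linalg import expm

def signal(alpha, beta, theta, Omega_R=1.0):
    Delta = alpha * Omega_R
    s = np.sqrt(3/2) * Omega_R
    H = np.array([[2*Delta, Omega_R, 0, 0, 0], [Omega_R, Delta, s, 0, 0],
                  [0, s, 0, s, 0], [0, 0, s, -Delta, Omega_R],
                  [0, 0, 0, Omega_R, -2*Delta]], dtype=complex)
    U_split = expm(-1j * H * beta / Omega_R)          # pulse area beta
    U_free = np.diag(np.exp(-1j * np.array([2, 1, 0, -1, -2]) * theta))
    psi = U_split @ U_free @ U_split @ np.array([0, 0, 0, 0, 1], dtype=complex)
    return np.abs(psi)**2 @ np.array([-2, -1, 0, 1, 2])

alpha, beta = 0.6, 1.1                        # arbitrary test values
thetas = np.linspace(0, 2*np.pi, 60)
y = np.array([signal(alpha, beta, th) for th in thetas])
M = np.column_stack([np.ones_like(thetas), np.cos(thetas), np.sin(thetas)])
A, a, b = np.linalg.lstsq(M, y, rcond=None)[0]
print(np.isclose(A + np.hypot(a, b), 2.0))    # A + B = 2
```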
## 4 Discussion and conclusions
Here we have explored the analytical description of three- and five-level systems via unitary time evolution operators and obtained analytical expressions for describing the Ramsey interferometric signal for a typical Ramsey sequence. Several interesting Rabi oscillations for the three-level system are shown in Figure 1. Further, the behaviour of Rabi oscillations of the five-level system is verified via the expansion of expressions from the two-level model as shown in Figures 5 and 6. Several special cases and examples of how these analytical expressions can be used to obtain population variations along with the averaged Ramsey signal at the end of the Ramsey sequence are also presented as
shown in Figures 2 and 7. Finally, a generalised equation for the average Ramsey signal at the standard Ramsey sequence for the five-level system is presented in Equation 32 where the splitting condition is also generalised expanding the applicability.
A limitation of this analysis is that both splitting pulses of the Ramsey sequence are considered to be equal. However, the analysis can be expanded to Ramsey sequences with unequal splitting pulses by following the derivation methodology presented here. This means that a separate unitary time evolution operator \(\hat{U}\) is to be derived for the second pulse, from which the analytical expression for the system wavefunction at the end of the Ramsey sequence is obtained via Equation 8. The populations of each state then follow, and the analytical expression for Ramsey interference is obtained via \(\langle\hat{F}_{Z}\rangle=\hbar\sum_{m_{F}}m_{F}P_{m_{F}}\), where \(P_{m_{F}}\) is the fractional population of the relevant \(m_{F}\) state [10]. Similarly, the same methodology can be used to obtain analytical expressions for Ramsey sequences with spin-echo techniques, such as the work reported in [19, 20, 21]. For this, a new unitary time evolution operator \(\hat{U}^{\pi}\) is to be derived for the \(\pi\)-pulse. Once Equation 8 is expanded to \(|\Psi_{sys}(t)\rangle=\hat{U}^{Split}.\hat{U}^{Free}.\hat{U}^{\pi}.\hat{U}^{ Free}.\hat{U}^{Split}.\,|\Psi(0)\rangle\), the analytical expression for the system wavefunction is obtained. From here, the analytical expression for the Ramsey interference is obtained by converting the system wavefunction to the multilevel populations.
All in all, we have presented a comprehensive analysis of the three- and five-level systems via the unitary time evolution operator under the equal-Rabi condition for a Ramsey sequence with equal splitting pulses.
|
2303.05392 | Automatically Summarizing Evidence from Clinical Trials: A Prototype
Highlighting Current Challenges | We present TrialsSummarizer, a system that aims to automatically summarize
evidence presented in the set of randomized controlled trials most relevant to
a given query. Building on prior work, the system retrieves trial publications
matching a query specifying a combination of condition, intervention(s), and
outcome(s), and ranks these according to sample size and estimated study
quality. The top-k such studies are passed through a neural multi-document
summarization system, yielding a synopsis of these trials. We consider two
architectures: A standard sequence-to-sequence model based on BART and a
multi-headed architecture intended to provide greater transparency to
end-users. Both models produce fluent and relevant summaries of evidence
retrieved for queries, but their tendency to introduce unsupported statements
renders them inappropriate for use in this domain at present. The proposed
architecture may help users verify outputs by allowing them to trace generated
tokens back to inputs. | Sanjana Ramprasad, Denis Jered McInerney, Iain J. Marshal, Byron C. Wallace | 2023-03-07T17:30:48Z | http://arxiv.org/abs/2303.05392v1 | # Automatically Summarizing Evidence from Clinical Trials: A Prototype Highlighting Current Challenges
###### Abstract
We present _TrialsSummarizer_, a system that aims to automatically summarize evidence presented in the set of randomized controlled trials most relevant to a given query. Building on prior work Marshall et al. (2020), the system retrieves trial publications matching a query specifying a combination of condition, intervention(s), and outcome(s), and ranks these according to sample size and estimated study quality. The top-\(k\) such studies are passed through a neural multi-document summarization system, yielding a synopsis of these trials. We consider two architectures: a standard sequence-to-sequence model based on BART Lewis et al. (2019), and a multi-headed architecture intended to provide greater transparency to end-users. Both models produce fluent and relevant summaries of evidence retrieved for queries, but their tendency to introduce unsupported statements renders them inappropriate for use in this domain at present. The proposed architecture may help users verify outputs by allowing users to trace generated tokens back to inputs. The demonstration video is available at: [https://vimeo.com/735605060](https://vimeo.com/735605060) The prototype, source code, and model weights are available at: [https://sanjanaramprasad.github.io/trials-summarizer/](https://sanjanaramprasad.github.io/trials-summarizer/).
## 1 Introduction
Patient treatment decisions would ideally be informed by all available relevant evidence. However, realizing this aim of evidence-based care has become increasingly difficult as the medical literature (already vast) has continued to rapidly expand Bastian et al. (2010). Well over 100 new RCT reports are now published every day Marshall et al. (2021). Language technologies -- specifically automatic summarization methods -- have the potential to provide concise overviews of all evidence relevant to a given clinical question, providing a kind of _systematic review_ on demand Wang et al. (2022); DeYoung et al. (2021); Wallace et al. (2021).
We describe a demonstration system, _TrialsSummarizer_, which combines retrieval over clinical trials literature with a summarization model to provide narrative overviews of current published evidence relevant to clinical questions. Figure 1 shows an illustrative query run in our system and the resultant output. A system capable of producing _accurate_ summaries of the medical evidence on any given topic could dramatically improve the ability of caregivers to consult the whole of the evidence base to inform care.
However, current neural summarization systems are prone to inserting inaccuracies into outputs Kryscinski et al. (2020); Maynez et al. (2020); Pagnoni et al. (2021); Ladhak et al. (2021); Choubey et al. (2021). This has been shown specifically to be a problem in the context of medical literature summarization Wallace et al. (2021); Otmakhova et al. (2022), where there is a heightened need for factual accuracy. A system that produces plausible but often misleading summaries of comparative treatment efficacy is useless without an efficient means for users to assess the validity of outputs.
Motivated by this need for transparency when summarizing clinical trials, we implement a summarization architecture and interface designed to permit interactions that might instill trust in outputs. Specifically, the model associates each token in a generated summary with a particular source "aspect" extracted from inputs. This in turn allows one to trace output text back to (snippets of) inputs, permitting a form of verification. The architecture also provides functionality to "in-fill" pre-defined _template summaries_, providing a compromise between the control afforded by templates and the flexibility of abstractive summarization. We realize this functionality in our system demonstration.
## 2 Related Work
The (lack of) factuality of neural summarization systems is an active area of research (Chen et al., 2021; Cao et al., 2020; Dong et al., 2020; Liu et al., 2020; Goyal and Durrett, 2021; Zhang et al., 2021; Kryscinski et al., 2020; Xie et al., 2021). This demo paper considers this issue in the context of a specific domain and application. We also explore controllability to permit interaction, in part via templates. This follows prior work on hybrid template/neural summarization (Hua and Wang, 2020; Mishra et al., 2020; Wiseman et al., 2018).
We also note that this work draws upon prior work on visualizing summarization system outputs (Vig et al., 2021; Strobelt et al., 2018; Tenney et al., 2020) and biomedical literature summarization (Plaza and Carrillo-de Albornoz, 2013; Demner-Fushman and Lin, 2006; Molla, 2010; Sarker et al., 2017; Wallace et al., 2021). However, to our knowledge this is the first working prototype to attempt to generate (draft) evidence reviews that are both interpretable and editable on demand.
## 3 System Overview
Our interface is built on top of Trialstreamer Marshall et al. (2020), an automated system that identifies new reports of randomized controlled trials (RCTs) in humans and then extracts and stores salient information from these in a database of all published trial information. Our system works by identifying RCT reports relevant to a given query using a straightforward retrieval technique (Section 3.1), and then passing the top-\(k\) of these through a multi-document summarization model (Section 3.2). For the latter component we consider both a standard sequence-to-sequence approach and an _aspect-structured_ architecture (Section 3.3) intended to provide greater transparency.
### Retrieving Articles
Trialstreamer Marshall et al. (2020); Nye et al. (2020) monitors research databases -- specifically, PubMed1 and the World Health Organization International Clinical Trials Registry Platform -- to automatically identify newly published reports of RCTs in humans using a previously validated classifier (Marshall et al., 2018).
Footnote 1: [https://pubmed.ncbi.nlm.nih.gov/](https://pubmed.ncbi.nlm.nih.gov/)
Articles describing RCTs are then passed through a suite of machine learning models which extract key elements from trial reports, including: sample sizes; descriptions of trial populations, interventions, and outcomes; key results; and the reliability of the evidence reported (via an approximate risk of bias score; Higgins et al. 2019). This extracted (semi-)structured information is stored in the Trialstreamer relational database.
Extracted free-text snippets describing study populations, interventions, and outcomes (PICO elements) are also mapped onto MeSH terms,2 using a re-implementation of MetaMap Lite (Demner-Fushman et al., 2017).
Footnote 2: MeSH — short for Medical Subject Headings — is a controlled vocabulary maintained by the National Library of Medicine (NLM).
To facilitate search, users can enter MeSH terms for a subset of populations, interventions, and outcomes, which are used to search for matches over the articles and their corresponding extracted key data in the database. Matched studies are then ranked by a score combining sample size \(s\) and risk-of-bias score \(\mathrm{rob}\): \(\mathrm{score}=s/\mathrm{rob}\); that is, we prioritize retrieval of large, high-quality trial reports.
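The following minimal sketch illustrates this ranking rule; the record fields and layout are hypothetical stand-ins for the Trialstreamer database schema, which is not specified here.

```python
from dataclasses import dataclass

@dataclass
class TrialRecord:
    """Hypothetical record mirroring fields extracted into the Trialstreamer database."""
    pmid: str
    sample_size: int
    risk_of_bias: float  # approximate risk-of-bias score; lower is better

def rank_trials(trials, k=5):
    """Rank matched trials by score = sample_size / risk_of_bias and keep the top-k,
    prioritizing large, high-quality trial reports as described above."""
    scored = sorted(trials, key=lambda t: t.sample_size / t.risk_of_bias, reverse=True)
    return scored[:k]

# Usage with toy records: the largest, lowest-bias trial ranks first.
trials = [TrialRecord("111", 120, 2.0),   # score 60
          TrialRecord("222", 900, 1.5),   # score 600
          TrialRecord("333", 300, 3.0)]   # score 100
print([t.pmid for t in rank_trials(trials, k=2)])  # ['222', '333']
```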
The novelty on offer in this system demonstration is the inclusion of a _summarization_ component, which consumes the top-\(k\) retrieved trials (we use \(k\)=5 here) and outputs a narrative summary of this evidence in the style of a systematic review abstract (Wallace et al., 2021). By combining this summarization module with the Trialstreamer database, we can provide real-time summarization of all trials that match a given query (Figure 1).
Figure 1: An example query (regarding use of _statins_ to reduce risk of _stroke_) and output summary provided by the system. In this example, the summary accurately reflects the evidence, but this is not always the case.
### Summarizing Trials
We consider two realizations of the summarization module. We train both models on a dataset introduced in prior work which comprises collections of RCT reports (PICO elements extracted from abstracts) as inputs and Authors' Conclusions sections of systematic review abstracts authored by members of the Cochrane Collaboration as targets Wallace et al. (2021) (see Section 4).
As a first model, we adopt BART Lewis et al. (2019) with a Longformer Beltagy et al. (2020) encoder to accommodate the somewhat lengthy multi-document inputs. As inputs to the model we concatenate spans extracted from individual trials containing salient information, including populations, interventions, outcomes, and "punchlines." The latter refers to extracted snippets which seem to provide the main results or findings, e.g., "There was a significant increase in mortality..."; see Lehman et al. (2019) for more details. We enclose these spans in special tags, e.g., <population>Participants were diabetics... </population>. As additional supervision we run the same extraction models over the targets and also demarcate these using the same set of tags.
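A minimal sketch of this input construction is shown below; the `<population>` tag follows the example above, while the remaining tag names and the study separator are assumptions for illustration.

```python
def format_trial(trial):
    """Wrap extracted PICO spans and the 'punchline' in special tags, mirroring the
    input format described above. The dict keys are illustrative field names."""
    return (f"<population>{trial['population']}</population>"
            f"<intervention>{trial['intervention']}</intervention>"
            f"<outcome>{trial['outcome']}</outcome>"
            f"<punchline>{trial['punchline']}</punchline>")

def build_input(trials, sep=" <study> "):
    # Concatenate per-study spans into one long multi-document sequence
    # for the Longformer encoder; the <study> delimiter is an assumption.
    return sep.join(format_trial(t) for t in trials)

example = [{"population": "Participants were diabetics...",
            "intervention": "metformin",
            "outcome": "HbA1c",
            "punchline": "There was a significant reduction in HbA1c..."}]
print(build_input(example))
```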
An issue with standard sequence-to-sequence models for this task is that they provide no natural means to assess the provenance of tokens in outputs, which makes it difficult to verify the trustworthiness of generated summaries. Next we discuss an alternative architecture which is intended to provide greater transparency and controllability.
### Proposed Aspect Structured Architecture to Increase Transparency
We adopt a multi-headed architecture similar to Goyal et al. (2021), which explicitly generates tokens corresponding to the respective aspects (Figure 2). We assume inputs are segmented into texts corresponding to a set of \(K\) fields or aspects. Here these are descriptions of trial populations, interventions, and outcomes, and "punchline" snippets reporting the main study findings. We will denote inputs for each of the \(K\) aspects by \(\{x^{a_{1}},...,x^{a_{K}}\}\), where \(x^{a_{k}}\) denotes the text for aspect \(k\) extracted from input \(x\). Given that this is a multi-document setting (each input consists of multiple articles), \(x^{a_{k}}\) is formed by _concatenating aspect texts across all documents_ using special tokens to delineate individual articles.
We encode aspect texts separately to obtain aspect-specific embeddings \(x^{a_{k}}_{\text{enc}}\). We pass these (respectively) to aspect-specific decoders and a shared language model head to obtain vocabulary distributions \(\hat{o}^{a_{k}}_{t}\). All model parameters are shared save for the last two decoder layers which comprise aspect-specific parameters. Importantly, the representation for a given aspect is _only based on the text associated with this aspect_ (\(x^{a_{k}}\)).
We model the final output as a _mixture_ over the respective aspect distributions: \(\hat{o}_{t}=\sum_{k=1}^{K}z_{t}^{a_{k}}\,\hat{o}^{a_{k}}_{t}\). Mixture weights \(z_{t}=z_{t}^{a_{1}},\ldots,z_{t}^{a_{K}}\) encode a soft selection over aspects for timestep \(t\) and are obtained as a dot product between each penultimate representation of the decoder \(y_{t}^{a_{k}}\) (prior to passing them through a language model head) and a learnable parameter \(W_{z}\in R^{D}\). The \(K\) logits \(\tilde{z}_{t}^{a_{k}}\) are then normalized via a Softmax before multiplying with the aspect-specific vocabulary distributions \(\hat{o}^{a_{k}}_{t}\).
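The sketch below illustrates the mixture step at a single decoding timestep; the dimensions and the two-aspect setup are toy assumptions, and the full model additionally shares all decoder parameters except the last two layers.

```python
import torch

def mix_aspect_distributions(decoder_states, aspect_dists, W_z):
    """Combine per-aspect vocabulary distributions at one decoding step.

    decoder_states: dict aspect -> penultimate decoder state y_t^{a_k}, shape (D,)
    aspect_dists:   dict aspect -> vocabulary distribution o_t^{a_k}, shape (V,)
    W_z:            learnable vector of shape (D,) producing the mixture logits
    Returns the mixed distribution o_t and the mixture weights z_t.
    """
    aspects = list(decoder_states.keys())
    z_logits = torch.stack([decoder_states[a] @ W_z for a in aspects])  # (K,)
    z = torch.softmax(z_logits, dim=0)                                  # soft aspect selection
    o_t = sum(z[k] * aspect_dists[aspects[k]] for k in range(len(aspects)))
    return o_t, dict(zip(aspects, z.tolist()))

# Toy usage with D=4, V=6 and two aspects.
D, V = 4, 6
torch.manual_seed(0)
states = {"population": torch.randn(D), "punchline": torch.randn(D)}
dists = {a: torch.softmax(torch.randn(V), dim=0) for a in states}
W_z = torch.randn(D)
o_t, z_t = mix_aspect_distributions(states, dists, W_z)
print(z_t, o_t.sum())  # mixture weights; the mixed distribution still sums to 1
```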
**Tracing outputs to inputs** This architecture permits one to inspect the mixture weights associated with individual tokens in a generated summary, which suggests which aspect (most) influenced the
Figure 2: Our proposed structured summarization approach entails synthesizing individual aspects (automatically extracted in a pre-processing step), and conditionally generating text about each of these.
output. Further inspection of the corresponding snippets from studies for this aspect may facilitate verification of outputs, and/or help to resolve errors and where they may have been introduced.
**Controlled generation** Neural summarization models often struggle to appropriately _synthesize_ conflicting evidence to arrive at the correct overall determination concerning a particular intervention's effectiveness. While imperfect, summarization models may nonetheless be useful by providing a means to rapidly draft synopses of the evidence to be edited. The multi-headed architecture naturally permits template in-filling, because one can explicitly draw tokens from heads corresponding to aspects of interest. In our demo, we allow users to toggle between different templates which correspond to different conclusions regarding the overall effectiveness of the intervention in question. (It would be simple to extend this to allow users to specify their own templates to be in-filled.)
To in-fill templates we use the template text preceding blanks as context and then generate text from the language head corresponding to the designated aspect. To determine the span length dynamically, we monitor the mixture distribution and stop when it shifts to another aspect (Figure 3); a minimal sketch of this procedure is given below.
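In the following sketch, `generate_token` and `aspect_weight` are hypothetical stand-ins for the decoder's forced-aspect generation and its mixture weights; only the stopping rule follows the text.

```python
def infill_template(template_parts, aspect_for_blank, generate_token, aspect_weight,
                    max_len=30, threshold=0.5):
    """Sketch of template in-filling: generate from the head of the designated aspect,
    stopping once the mixture weight shifts away from that aspect."""
    context = template_parts[0]
    for _ in range(max_len):
        if aspect_weight(context, aspect_for_blank) < threshold:
            break  # the mixture distribution moved to another aspect: end the span
        context += " " + generate_token(context)
    return context + " " + template_parts[1]

# Toy usage with dummy callables standing in for the model.
fake_tokens = iter(["reduces", "mortality"])
out = infill_template(("The intervention", "in adults."), "punchline",
                      lambda ctx: next(fake_tokens),
                      lambda ctx, a: 0.9 if "mortality" not in ctx else 0.2)
print(out)  # The intervention reduces mortality in adults.
```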
### User Interface
Figure 5 shows the interface we have built integrating the multi-headed architecture. Highlighted aspects in the summary provide a means of interpreting the source of output tokens by indicating the aspects that informed their production. One can in turn inspect the snippets associated with these aspects, which may help to identify unsupported content in the generated summary. To this end, when users click on a token, we display the subset of the input that most informed its production.
We provide additional context by displaying overviews (i.e., "punchlines") communicating the main findings of the trials. Because standard sequence-to-sequence models do not provide a mechanism to associate output tokens with input aspects, we display all aspects (and punchlines) for all trials alongside the summary for this model.
Capitalizing on the aforementioned in-filling abilities of our model, we also provide pre-defined templates for each possible "direction" of aggregate findings (significant vs. no effect). We discuss the interface along with examples in Section 5.
## 4 Dataset and Training Details
We aim to map collections of titles and abstracts describing RCTs that address the same clinical question to abstractive summaries synthesizing the evidence presented in these. We train all models on an RCT summarization dataset (Wallace et al., 2021) where we extract clinically salient elements -- i.e., our aspects -- from each of the (unstructured) inputs as a pre-processing step using existing models (Marshall et al., 2020).
**Training** We use the Huggingface Transformers library (Wolf et al., 2020) to implement both models. We initialize both models to _bart-base_(Lewis et al., 2019). We fine-tune the models with a batch size of 2 for 3 epochs, using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 3e-5.
**Inference** We use beam search with a beam size of 3. We set the min and max length of generated text to be 10 and 300, respectively.
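A minimal sketch of this setup, using the reported hyperparameters, is given below; it omits the Longformer encoder and uses a toy input-target pair, so it illustrates the training and decoding configuration rather than the full system.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Reported settings: bart-base initialization, Adam at 3e-5, batch size 2;
# beam size 3, generated length between 10 and 300.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)

src = ["<population>adults with influenza</population> "
       "<punchline>oseltamivir reduced symptom duration...</punchline>"] * 2
tgt = ["Oseltamivir appears effective for reducing symptom duration."] * 2
batch = tokenizer(src, return_tensors="pt", padding=True, truncation=True)
labels = tokenizer(tgt, return_tensors="pt", padding=True, truncation=True).input_ids
# Note: a real setup would replace pad token ids in `labels` with -100 to mask them.

model.train()
loss = model(**batch, labels=labels).loss  # standard seq2seq cross-entropy
loss.backward()
optimizer.step()
optimizer.zero_grad()

model.eval()
summary_ids = model.generate(**batch, num_beams=3, min_length=10, max_length=300)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```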
## 5 Case Study: Verification and Controllability
To demonstrate the potential usefulness of the interface (and the architecture which enables it), we walk through two case studies. We highlight the type of interpretability for verification that our proposed approach provides, and also demonstrate controllable summarization to show how this might be useful. The queries used in these case studies, along with the investigation, were performed by co-author IJM, a medical doctor with substantial experience in evidence-based medicine. We also compare the models, reporting automatic scores for ROUGE and factuality in Appendix Section A, and find that the two models perform comparably.
Figure 3: **Template generation. To in-fill, we force generation from a specific head and monitor the model’s mixture distribution to decide when to stop.**
**Model Interpretability** As an example to highlight the potential of the proposed architecture and interface to permit verification, we consider a query regarding the effect of Oseltamivir as an intervention for patients infected with influenza. The standard architecture produces a summary of the top most relevant RCTs to this query, shown in Figure 4. This comprises two claims: (1) The intervention has been shown to reduce the risk of adverse events among adults and children, and (2) There is no consensus as to the most effective dosage. One can inspect the inputs to attempt to verify these. Doing so, we find that reported results do tend to indicate a reduced risk of adverse events and that adolescents and adults were included in some of these studies, indicating that the first claim is accurate. The second claim is harder to verify on inspection; no such uncertainty regarding dosage is explicitly communicated in the inputs. Verifying these claims using the standard seq2seq architecture is onerous because the abstractive nature of such models makes it difficult to trace parts of the output back to inputs.
Figure 4: Example output and interface using a standard BART (Lewis et al., 2019) model.
Figure 5: Qualitative example where the structured summarization model (and associated interface) permits token-level verification of the summary generated regarding the use of oseltamivir on influenza-infected patients. This approach readily indicates support for the claim that it is “effective” (top; yellow) and for the description of the population as individuals at risk of “complications” (bottom; purple).
Therefore, verification requires reading through entire inputs to verify different aspects.
The multi-headed architecture allows us to provide an interactive interface intended to permit easier verification. In particular, associating each output token with a particular aspect provides a natural mechanism for one to inspect snippets of the inputs that might support the generated text. Figure 5 illustrates this for the aforementioned Oseltamivir and flu example. Here we show how the "effective" token in the output can be clicked on to reveal the aspect that influenced its production (Figure 2), in this case tracing back to the extracted "punchlines" conveying main study findings. This readily reveals that the claim is supported. Similarly, we can verify the claim about the population being individuals at risk of complications by tracing back to the population snippets upon which this output was conditioned.
**Controllability** As mentioned above, another potential benefit of the proposed architecture is the ability to "in-fill" templates to imbue neural generative models with controllability. In particular, given that the overall (aggregate) treatment efficacy is of primary importance in this context, we pre-define templates which convey an effect direction. The idea is that if upon verification one finds that the model came to the wrong aggregate effect direction, they can use a pre-defined template corresponding to the correct direction to generate a more accurate summary on-demand.
We show an example of a summary generated by the structured model in the top part of Figure 6. By using the interpretability features for verification discussed above, we find that the model inaccurately communicates that the intervention Chloroquine is effective for treating COVID-19. However, with the interactive interface we are able to immediately generate a new summary featuring the corrected synthesis result (direction), as depicted in the bottom of Figure 6, without need for manual drafting.
We provide additional case studies in Appendix Section B.
## 6 Conclusions
We have described TrialsSummarizer, a prototype system for automatically summarizing RCTs relevant to a given query. Neural summarization models produce summaries that are readable and (mostly) relevant, but their tendency to introduce unsupported or incorrect information into outputs means they are not yet ready for use in this domain.
We implement a multi-headed architecture intended to provide greater transparency. We provided qualitative examples intended to highlight
Figure 6: Inaccurate summaries generated by the structured model regarding the effect of Chloroquine on patients with COVID-19 (top). Template-controlled summary using the structured model (bottom).
its potential to permit faster verification and controllable generation. Future work is needed to test the utility of this functionality in a user trial, and to inform new architectures that would further increase the accuracy and transparency of models for summarizing biomedical evidence.
### Limitations and Ethical Issues
**Limitations** This work has several limitations. First, as stated above, while the prospect of automatic summarization of biomedical evidence is tantalizing, existing models are not yet fit for the task due to their tendency to introduce factual errors. Our working prototype serves in part to highlight this and motivate work toward resolving issues of reliability and trustworthiness.
In this demo paper we have also attempted to make some progress in mitigating such issues by way of the proposed structured summarization model and accompanying interface, and have provided qualitative examples highlighting its potential, but a formal user study should be conducted to assess its utility. This is complicated by the difficulty of the task: To evaluate the factuality of automatic summaries requires deep domain expertise and considerable time to read through constituent inputs and determine the veracity of a generated summary.
Another limitation of this work is that we have made some ad-hoc design decisions in our current prototype system. For example, at present we (arbitrarily) pass only the top-5 (based on trial sample size and estimated reliability) articles retrieved for a given query through the summarization system. Future work might address this by considering better motivated methods to select which and how many studies ought to be included.
**Ethics** Accurate summaries of the biomedical evidence have the potential to ultimately improve patient care by supporting the practice of evidence-based medicine. However, at present such models bring inherent risks. In particular, one may be tempted to blindly trust model outputs; given the limitations of current summarization technologies, this would be ill-advised.
Our prototype demonstration system is designed in part to highlight existing challenges that must be solved in this space before any model might actually be adopted (and beyond this, we emphasize the need for verification of outputs, which has been the focus of the present effort). In the interface we indicate with a hard-to-miss warning message that this system should only be used for research purposes and these summaries are unreliable and _not to be trusted_.
## Acknowledgements
This work was supported in part by the National Institutes of Health (NIH) under award R01LM012086, and by the National Science Foundation (NSF) awards 1901117 and 2211954. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or the NSF.
|
2303.15181 | DreamStone: Image as Stepping Stone for Text-Guided 3D Shape Generation | In this paper, we present a new text-guided 3D shape generation approach
DreamStone that uses images as a stepping stone to bridge the gap between text
and shape modalities for generating 3D shapes without requiring paired text and
3D data. The core of our approach is a two-stage feature-space alignment
strategy that leverages a pre-trained single-view reconstruction (SVR) model to
map CLIP features to shapes: to begin with, map the CLIP image feature to the
detail-rich 3D shape space of the SVR model, then map the CLIP text feature to
the 3D shape space through encouraging the CLIP-consistency between rendered
images and the input text. Besides, to extend beyond the generative capability
of the SVR model, we design a text-guided 3D shape stylization module that can
enhance the output shapes with novel structures and textures. Further, we
exploit pre-trained text-to-image diffusion models to enhance the generative
diversity, fidelity, and stylization capability. Our approach is generic,
flexible, and scalable, and it can be easily integrated with various SVR models
to expand the generative space and improve the generative fidelity. Extensive
experimental results demonstrate that our approach outperforms the
state-of-the-art methods in terms of generative quality and consistency with
the input text. Codes and models are released at
https://github.com/liuzhengzhe/DreamStone-ISS. | Zhengzhe Liu, Peng Dai, Ruihui Li, Xiaojuan Qi, Chi-Wing Fu | 2023-03-24T03:56:23Z | http://arxiv.org/abs/2303.15181v3 | # ISS++: Image as Stepping Stone for Text-Guided 3D Shape Generation
###### Abstract
In this paper, we present a new text-guided 3D shape generation approach (ISS++) that uses images as a stepping stone to bridge the gap between text and shape modalities for generating 3D shapes without requiring paired text and 3D data. The core of our approach is a _two-stage feature-space alignment strategy_ that leverages a pre-trained single-view reconstruction (SVR) model to map CLIP features to shapes: to begin with, map the CLIP image feature to the detail-rich 3D shape space of the SVR model, then map the CLIP text feature to the 3D shape space through encouraging the CLIP-consistency between rendered images and the input text.
Besides, to extend beyond the generative capability of the SVR model, we design a text-guided 3D shape stylization module that can enhance the output shapes with novel structures and textures. Further, we exploit pre-trained text-to-image diffusion models to enhance the generative diversity, fidelity, and stylization capability. Our approach is generic, flexible, and scalable, and it can be easily integrated with various SVR models to expand the generative space and improve the generative fidelity. Extensive experimental results demonstrate that our approach outperforms the state-of-the-art methods in terms of generative quality and consistency with the input text. Codes and models are released at [https://github.com/luzheng/ISS-Image-as-Stepping-Stone-for-Text-Guided-3D-Shape-Generation](https://github.com/luzheng/ISS-Image-as-Stepping-Stone-for-Text-Guided-3D-Shape-Generation).
Text to 3D shape generation, CLIP, 3D shape stylization, Score Distillation Sampling
## 1 Introduction
3D shape generation has many practical applications, such as in CAD, 3D games, animations, and more. Among the different ways to generate 3D shapes, a user-friendly method is to generate shapes from text descriptions. This enables users to easily generate 3D shapes using natural language, with many applications in AR/VR and 3D printing. However, text-guided shape generation presents significant challenges owing to the difficulty of collecting paired text-shape data, the substantial semantic gap between texts and shapes, and the topological complexity of 3D shapes.
Previous research [4, 10, 18] typically requires paired text-shape data for this challenging task. However, it is already non-trivial to collect 3D shapes, let alone manually annotate text-shape pairs, which brings further complexity. Currently, the largest paired text-shape dataset available [4] only includes two categories, tables and chairs, significantly limiting the applicability of current methods.
Recently, several annotation-free approaches have been proposed for text-to-shape generation without requiring paired text-shape data. These approaches, such as CLIP-Forge [35], Dream Fields [11], CLIP-Mesh [20], and DreamFusion [25], utilize large-scale language-vision models, e.g., CLIP [27], and text-to-image generation models, such as Imagen [34], for training. However, generating high-quality 3D shapes from unpaired text-shape data remains challenging for several reasons. First, due to the scarcity of 3D datasets, they can only generate a very limited range of shape categories. For instance, CLIP-Forge [35] struggles to generate shapes outside the ShapeNet dataset. Second, without injecting any text-related shape priors, it is also difficult to produce 3D structures that match the input texts. For example, CLIP-Mesh [20] and Dream Fields [11] often generate 3D shapes incompatible with the given texts (see Figure 2 (b)), even with minutes or hours of test-time optimization for each shape instance. Third, the visual quality of the generated shapes is not satisfactory. As shown in Figure 2 (b), CLIP-Forge [35] produces low-resolution outputs (\(64^{3}\)) without textures, the results generated by Dream Fields and CLIP-Mesh typically look surrealistic (rather than real), and the 3D topology and surface quality of DreamFusion still leave considerable room for improvement.
Going beyond existing approaches, we present a novel text-guided 3D shape generation method without requiring paired text-shape data. We propose to _leverage 2D Image as a Stepping Stone_ to implicitly bridge the shape and text modalities and exploit diffusion models for enhanced diversity, quality, and generative
Fig. 1: Generative results of our ISS++. The input text follows the prompt template "A [shape] imitates a [style]".
scope, namely ISS++. Specifically, we use the pre-trained vision-language model CLIP to train a mapper that maps CLIP image features to a pre-trained 3D shape space. In inference, this mapper maps the CLIP text features to the target shape space, as shown in Figure 2 (a) stage 1. However, there exists a gap between the CLIP image and text features. As a result, the CLIP text feature might not be mapped to a desired shape feature. To tackle this issue, we further fine-tune the mapper to improve the text-shape consistency. We do this by adopting a training objective encouraging CLIP consistency between the input text description and rendered images. This fine-tuning stage is depicted in Figure 2 (a) as stage 2. To enhance generative diversity, we leverage an off-the-shelf diffusion model for mapping the CLIP text feature to the CLIP image feature [28], and sample multiple generated CLIP image features that match the text feature during inference. The two-stage feature-space alignment can generate plausible shapes from texts.
Furthermore, to extend beyond the generative space of pre-trained SVR models, we design CLIP-guided shape stylization and Score Distillation Sampling (SDS)-guided refinement modules that allow for the generation of new and visually appealing textures and structures during testing. Specifically, the CLIP-guided shape stylization module updates the decoder of the SVR model by optimizing the CLIP consistency between the rendered images of the generated shape and the target style description. Although this strategy helps expand the model's generative capability toward open-world style descriptions, it struggles to generate local detailed structures due to the global guidance of CLIP features; see Figure 2 "ISS". To that end, in order to generate fine-grained structures and high-fidelity textures, we explore leveraging pre-trained diffusion models and marry Score Distillation Sampling (SDS) [25] with our two-stage feature-space alignment framework. This involves utilizing SDS to provide a loss function for updating our decoder. This allows us to generate high-fidelity novel structures and textures, and even create imaginary shapes by incorporating the semantic attributes of the target style into the shape; see Figure 1 and Figure 2 "ISS++". This also extends the generation capability of our ISS++ to unseen categories beyond the image dataset. Besides, by leveraging the 3D shape prior of the two-stage feature-space alignment, our model outperforms [25] in terms of surface quality and topology faithfulness, while typically requiring much fewer training iterations.
Finally, our approach is compatible with various SVR models [2, 7, 23]. For instance, we can adopt SS3D [2] to generate shapes using single-view in-the-wild images, which expands the generative capability of our approach beyond the 13 ShapeNet categories that can be generated by [35]. Besides, our approach can also work with the very recent approach GET3D [7] to generate high-quality 3D shapes from text, as shown in our results in Section 4.
In summary, our approach expands the boundary of 3D shape generation from texts in the following aspects. First, we cast the challenging text-guided shape generation task as a single-view reconstruction (SVR) task, which is more approachable. Second, our approach is efficient: it can create plausible 3D shapes in only 85 sec. with the two-stage feature-space alignment, and high-quality, stylized 3D shapes with Score Distillation Sampling in less than 30 min., compared to 72 min. for Dream Fields [11] and 90 min. for DreamFusion [25] (using the Stable DreamFusion version due to the lack of public code)1. Furthermore, the generation capabilities of our approach outperform those of the state-of-the-art approaches; see Figure 2 (b). Finally, our approach is generic, scalable, and compatible with a wide spectrum of SVR methods.
Footnote 1: We use the latest version of an available public implementation of DreamFusion, Stable-Dreamfusion [38], with the commit ’0994686e’ updated on Feb 7, 2023, as the official code has not been released.
**Different from Our Conference Paper.** This manuscript extends ISS [17], a spotlight paper to be presented soon at the International Conference on Learning Representations 2023, in the following aspects. First, we extend ISS with a diffusion prior [28] to generate more diversified 3D shapes while ensuring their consistency with the given text. Then, we propose an SDS-guided refinement module to further improve the fidelity of the generated shapes. Furthermore, our SDS-guided stylization enables the generation of imaginary 3D shapes, complementing our previous CLIP-guided stylization [17]. Last, we conduct more experiments on 3D shape generation and shape stylization, and compare ISS++ with the latest works CLIP-Mesh [20] and DreamFusion [25]. Our experimental results, both quantitative and qualitative, demonstrate that our approach is able to surpass the state-of-the-art methods in text-guided 3D shape generation.
## 2 Related Works
**Text-Guided Image Generation.** Text-guided image synthesis has been intensively studied in recent years [13, 14, 24, 26, 30, 31, 33, 36, 37, 40, 42, 43, 44, 45, 46, 47]. Leveraging auto-regressive and diffusion models, recent works achieve impressive performance on text-guided image generation [6, 22, 28, 29, 34] to produce images of any category. To eliminate the need for text data, Wang et al. [41] and Zhou et al. [48] explore the text-free text-to-image generation leveraging CLIP.
Beyond text-guided image generation, it is more challenging to create 3D shapes from the text. First, unlike paired text-image
Fig. 2: The proposed “Image as Stepping Stone” framework. Our two-stage feature-space alignment shown in (a) bridges the text space (the CLIP text feature) and the 3D shape space (the SVR feature) to generate 3D shapes from the text shown in (b), outperforming the existing works, without requiring paired text-shape data.
data that can be collected from the Internet, it is much more laborious and challenging to acquire large-scale paired text-shape data. Second, text-to-shape generation aims to predict complete 3D structures, beyond the single 2D view used in text-guided image generation. Third, 3D shapes contain more complex spatial structures and topologies than 2D images with regular pixel grids, making it even more challenging to generate 3D shapes from a piece of text.
**Text-Guided 3D Shape Generation.** In this research field, some approaches require paired text-shape data, such as [4, 10, 18]. However, to avoid the need for paired data, recent works such as CLIP-Forge [35], Dream Fields [11], CLIP-Mesh [20], and Dream-Fusion [25] leverage pre-trained vision-language models or text-to-image models. Despite their advancements, these approaches still have limitations, as discussed in Section 1. Moreover, some works use CLIP to manipulate 3D shapes/NeRF using text [19, 5, 12, 39] and generate 3D avatars [9]. In contrast, our approach presents a new framework for text-guided 3D shape generation without the need for paired text-shape data, using the newly proposed two-stage feature-space alignment. Our experimental results demonstrate superior fidelity and text-shape consistency beyond existing methods.
**Differentiable Rendering.** As a powerful technique, differentiable rendering enables 3D models to be optimized using 2D images. There are numerous applications, such as generating 3D shapes from 2D images or reconstructing 3D objects from multiple 2D views. By modeling the rendering process as a differentiable function, gradients can be computed with respect to the input parameters of the function, allowing for efficient optimization using gradient-based techniques. This has led to significant advances in fields such as computer vision and computer graphics. Recent works [21, 7, 23] leverage differentiable rendering for 3D shape generation using 2D images. In this work, we derive 2D images of the generated 3D shape using differentiable rendering and use a pre-trained large-scale image-language model CLIP to encourage consistency between 2D images and input texts. Thanks to differentiable rendering, we can update the generated 3D shapes indirectly using the rendered images.
**Single-View Reconstruction.** This work is also related to single-view reconstruction (SVR). SVR has recently been explored with voxels [49], meshes [1], and implicit fields [2, 23]. In this work, we leverage an SVR model to bridge the image and shape modalities, thus allowing us to use 2D images as a stepping stone to produce 3D shapes from texts. Moreover, our approach is flexible since we map the features in the latent space implicitly rather than explicitly.
## 3 Methodology
### _Overview_
To generate 3D shape \(S\) from text \(T\) without relying on paired text-shape data, we map the CLIP features to a latent shape feature space of a pre-trained SVR model, leveraging the joint text-image feature embeddings from CLIP and also the 3D shape prior learned by the SVR model. Here, we leverage multi-view RGB/RGBD images and the corresponding camera poses for training, without needing the paired text-shape data. The framework has four components: (1) image encoder \(E_{\text{S}}\) to map the input image \(I\) to shape space \(\Omega_{\text{S}}\) of the SVR model, (2) pre-trained CLIP image and text encoders \(E_{\text{I}}\) and \(E_{\text{T}}\) that map image \(I\) and text \(T\) to CLIP feature spaces \(\Omega_{\text{I}}\) and \(\Omega_{\text{T}}\), (3) mapper \(M\) consisting of 12 fully-connected and Leaky-ReLU layers to map CLIP image features to the latent shape space \(\Omega_{\text{S}}\) of SVR, and (4) decoder \(D\) that generates the 3D shape \(S\). The proposed approach uses DVR [23] as the SVR model in the experiments unless specified otherwise.
Generally speaking, we present a novel two-stage feature-space alignment approach to bridge the image, text, and shape modalities. First, we train the mapper \(M\) to bridge the CLIP image space \(\Omega_{\text{I}}\) and the shape space \(\Omega_{\text{S}}\), as shown in Figure 3(a). Afterward, at test time, \(M\) is fine-tuned to further bridge the CLIP text space \(\Omega_{\text{T}}\) and \(\Omega_{\text{S}}\), as shown in Figure 3(b). Finally, we can optionally improve the texture and structure generation capability of our model by fine-tuning the decoder \(D\) (as shown in Figure 6).
In Section 3.2, we begin by presenting two empirical studies that investigate the properties of the CLIP feature space. We then introduce our two-stage feature-space alignment approach in Section 3.3. Following that, in Section 3.4, we present our method for text-guided shape refinement and stylization. Finally, in Section 3.5, we discuss that our approach is compatible with different SVR models and how we can extend our method to generate a wide range of categories and high-quality shapes.
### _Empirical Studies and Motivations_
Prior works on text-guided 3D shape generation mainly use CLIP without analyzing its workings and limitations. To gain a better understanding of the CLIP feature space and its suitability for text-guided 3D shape generation, we conduct two empirical studies.
#### 3.2.1 Is the CLIP feature suitable for 3D shape generation?
In the first empirical study, we investigate whether the CLIP image feature space \(\Omega_{\text{I}}\) has enough representative capability for 3D shape generation by attempting to generate shapes from \(\Omega_{\text{I}}\). To do so, we train the SVR model by adopting the CLIP image encoder \(E_{\text{I}}\) to replace the original SVR image encoder \(E_{\text{S}}\). At the same time, we optimize the decoder \(D\) using the same loss function as DVR [23] with \(E_{\text{I}}\) frozen. This design is inspired by the motivation that we can generate 3D shapes from text by adopting the CLIP text encoder \(E_{\text{T}}\) to replace \(E_{\text{I}}\) in inference. To evaluate the 3D shape generative capability of \(E_{\text{S}}\) and \(E_{\text{I}}\), we measure the 3D mIoU between their generated shapes and the ground truths (Figure 4 (b)). The result indicates that the representative capability of the CLIP image encoder \(E_{\text{I}}\) is inferior to that of \(E_{\text{S}}\), owing to its weaker ability to capture the input image details necessary for 3D shape generation. This result is easy to understand, since the CLIP image encoder \(E_{\text{I}}\) was optimized to extract semantically-aligned features with the paired text data during the training of CLIP, rather than being encouraged to capture image details. As shown in Figure 4 (a), image details that are necessary for 3D reconstruction, such as textures, are overlooked by \(E_{\text{I}}\). In contrast, \(E_{\text{S}}\) in the SVR model is trained for 3D generation and is encouraged to capture the necessary image details. These results indicate that we can generate shapes from \(\Omega_{\text{S}}\) instead of \(\Omega_{\text{I}}\) to improve the generative quality. To do so, we design a mapper \(M\) that maps the CLIP image feature space \(\Omega_{\text{I}}\) to the shape space \(\Omega_{\text{S}}\) to enable generation from \(\Omega_{\text{S}}\).
#### 3.2.2 Does the CLIP image and text feature gap affect 3D shape generation?
The second investigation aims to analyze the gap between the normalized CLIP text feature \(f_{\text{T}}\in\Omega_{\text{T}}\) and image feature \(f_{\text{I}}\in\Omega_{\text{I}}\)
as shown in Figure 2 (a), and examine how this gap affects text-guided 3D shape generation. Specifically, we measure the cosine distance between \(f_{\text{I}}\) and \(f_{\text{T}}\) based on the texts and rendered images of 300 randomly selected text-shape pairs from the text-shape dataset [4] as follows:
\[d=1-\text{cosine\_similarity}(f_{\text{I}},f_{\text{T}}). \tag{1}\]
The result \(d(f_{\text{T}},f_{\text{I}})=0.783\pm 0.004\), obtained over three repetitions of the experiment, suggests that there is still a certain gap between the paired text and image features. Additionally, the angle between the two features is around \(\arccos(1-0.783)=1.35\) radians in this text-shape dataset [4]. The above result implies that the generated 3D shape may not be consistent with the input text if we simply replace \(f_{\text{I}}\) with \(f_{\text{T}}\) in inference. As demonstrated in Figure 4 (c), this simple strategy results in a cosine distance of 0.45 to \(f_{\text{S}}\in\Omega_{\text{S}}\), much larger than \(d(M(f_{\text{I}}),f_{\text{S}})=0.21\). This finding is consistent with the results reported in [15]. To address this issue, we propose to fine-tune \(M\) into \(M^{\prime}\) at test time, aiming to produce a feature \(M^{\prime}(f_{\text{T}})\) that has a smaller distance to \(f_{\text{S}}\) compared with \(M(f_{\text{T}})\).
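A minimal sketch of this measurement is given below, with random vectors standing in for the actual CLIP text and image features; random pairs in high dimension are near-orthogonal, so the toy distance is close to 1.

```python
import torch

def cosine_distance(f_a, f_b):
    """d = 1 - cosine_similarity, as in Equation (1); inputs are CLIP features."""
    f_a = f_a / f_a.norm(dim=-1, keepdim=True)
    f_b = f_b / f_b.norm(dim=-1, keepdim=True)
    return 1.0 - (f_a * f_b).sum(dim=-1)

# Toy check with 512-d stand-ins for 300 paired CLIP text/image features.
torch.manual_seed(0)
f_T, f_I = torch.randn(300, 512), torch.randn(300, 512)
d = cosine_distance(f_T, f_I)
print(d.mean().item())
```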
### _Two-Stage Feature-Space Alignment_
Based on these findings, we propose a two-stage feature-space alignment approach that connects the image space \(\Omega_{\text{I}}\) and shape space \(\Omega_{\text{S}}\) in the first stage and further connects the text space \(\Omega_{\text{T}}\) to shape space \(\Omega_{\text{S}}\) in the second stage, with the image space \(\Omega_{\text{I}}\) as a stepping stone.
#### 3.3.1 Stage-1: CLIP image-to-shape alignment
Figure 3 (a) illustrates the stage-1 alignment. On the one hand, the shape space \(\Omega_{\text{S}}\) is able to capture richer image details compared with the CLIP image space \(\Omega_{\text{I}}\). On the other hand, \(\Omega_{\text{I}}\) helps to enable the text input thanks to its joint text-image embedding with \(\Omega_{\text{T}}\). Inspired by these two motivations, we design a CLIP2Shape mapper \(M\) consisting of \(12\) fully-connected layers to map \(f_{\text{I}}\) to \(\Omega_{\text{S}}\). To optimize \(M\), we use an \(L_{2}\) regression loss between the mapped CLIP image feature \(M(f_{\text{I}})\) and the feature \(f_{\text{S}}=E_{\text{S}}(I)\) from the pre-trained SVR encoder, as shown in Equation (2):
\[\mathcal{L}_{M}=\sum_{i=1}^{N}||E_{\text{S}}(I_{i})-M(f_{\text{I},i})||_{2}^{2} \tag{2}\]
where \(N\) and \(f_{\text{I},i}\) indicate the total number of images for training and the normalized CLIP feature of \(I_{i}\), respectively.
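A minimal sketch of the mapper and the stage-1 objective is shown below; the hidden and shape-space dimensions are assumptions, as the exact layer widths are not specified here.

```python
import torch
import torch.nn as nn

class CLIP2ShapeMapper(nn.Module):
    """Sketch of the mapper M: 12 fully-connected + LeakyReLU layers mapping the
    CLIP image feature space to the SVR shape space. Dimensions are assumptions."""
    def __init__(self, clip_dim=512, shape_dim=256, hidden=512, n_layers=12):
        super().__init__()
        dims = [clip_dim] + [hidden] * (n_layers - 1) + [shape_dim]
        layers = []
        for i in range(n_layers):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < n_layers - 1:
                layers.append(nn.LeakyReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, f_I):
        return self.net(f_I)

# Stage-1 objective of Equation (2): L2 regression between M(f_I) and f_S = E_S(I).
mapper = CLIP2ShapeMapper()
f_I = torch.randn(8, 512)   # normalized CLIP image features (stand-in)
f_S = torch.randn(8, 256)   # SVR encoder features E_S(I) (stand-in)
loss_M = ((mapper(f_I) - f_S) ** 2).sum(dim=-1).mean()
loss_M.backward()
print(loss_M.item())
```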
In addition, we incorporate a fine-tuning module for decoder \(D\) to encourage it to generate 3D shapes with a white background. This module helps the model to focus on object-centric features while ignoring the background (see Figure 5). Specifically, we propose a novel background loss \(\mathcal{L}_{\text{bg}}\) in Equation (3) below, which enhances the model's ability to capture foreground objects and prepares it for the second-stage alignment.
\[\mathcal{L}_{\text{bg}}=\sum_{p}||D_{c}(p)-1||_{2}^{2}\mathbbm{1}(F\cap\text {ray}(o,p)=\emptyset) \tag{3}\]
where \(p\) denotes a query point coordinate, and \(D_{o}(p)\) and \(D_{c}(p)\) are the occupancy and color predictions of \(p\), respectively. \(F=\{p:D_{o}(p)>t\}\) indicates the object region, where \(D_{o}(p)\) is greater than a pre-defined threshold \(t\). \(F\cap\text{ray}(o,p)=\emptyset\) means the background region, where a ray connecting the camera center \(o\) and \(p\) does not intersect the object. Besides, \(\mathbbm{1}\) is the indicator function, and \(\mathbbm{1}(F\cap\text{ray}(o,p)=\emptyset)=1\) if \(p\) is in the background region. To summarize, \(\mathcal{L}_{\text{bg}}\) is designed to encourage the background region to be predicted as white (value 1) and assist the model in better capturing the generated shape. Besides, the same set of loss
Fig. 4: Results of empirical studies on CLIP feature spaces.
Fig. 3: Overview of our two-stage feature-space alignment. (a) In the first stage, we align the CLIP image feature space \(\Omega_{\text{I}}\) and the shape space \(\Omega_{\text{S}}\) of a pre-trained single-view reconstruction (SVR) model with a CLIP2Shape mapper \(M\), which maps images to shapes while keeping \(E_{\text{S}}\) and \(E_{\text{I}}\) frozen. Then we fine-tune the decoder \(D\) using \(L_{\text{bg}}\) to encourage the background color to be white. During training, we stop gradients of the SVR loss \(L_{D}\) and the background loss \(L_{\text{bg}}\) to eliminate their effects on \(M\). (b) In the second stage, we introduce a fast-time optimization by fixing the decoder \(D\) and fine-tuning the mapper \(M\) to \(M^{\prime}\), further encouraging the CLIP consistency between the rendered images of the generated shape and the input text \(T\).
functions \(\mathcal{L}_{\text{D}}\) from DVR [23] is still adopted for maintaining the capability to generate 3D shapes of the SVR model.
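A minimal sketch of the background loss of Equation (3) is given below; it assumes the ray-object intersection test has already been performed by the renderer and is provided as a boolean mask.

```python
import torch

def background_loss(color_pred, occ_pred, ray_hits_object, t=0.5):
    """Sketch of Equation (3): push the predicted color toward white (value 1) at
    query points whose camera ray never intersects the object region
    F = {p : D_o(p) > t}.

    color_pred:      (P, 3) color predictions D_c(p)
    occ_pred:        (P,)   occupancy predictions D_o(p); F is derived from these
                     upstream, so occ_pred is unused in this simplified sketch
    ray_hits_object: (P,)   bool, True if ray(o, p) intersects F (from the renderer)
    """
    background = ~ray_hits_object  # indicator 1(F ∩ ray(o, p) = ∅)
    if background.sum() == 0:
        return color_pred.new_zeros(())
    return ((color_pred[background] - 1.0) ** 2).sum()

# Toy usage with random predictions and a random background mask.
torch.manual_seed(0)
colors, occ = torch.rand(100, 3), torch.rand(100)
hits = torch.rand(100) > 0.5
print(background_loss(colors, occ, hits).item())
```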
Hence, the total loss in stage 1 is \(\lambda_{M}\mathcal{L}_{M}\) for mapper \(M\) and \(\lambda_{\text{bg}}\mathcal{L}_{\text{bg}}+\mathcal{L}_{D}\) for decoder \(D\), where \(\lambda_{\text{bg}}\) and \(\lambda_{M}\) indicate loss weights. The stage-1 alignment is trained with multi-view RGB/RGBD images and provides a good starting point for the stage-2 per-text optimization.
#### 3.3.2 Stage-2: text-to-shape alignment
After bridging image and shape modalities, we further try to bridge the text and shape modalities by proposing a fast test-time optimization in stage 2 that seeks to minimize the gap between the CLIP features of the input text \(T\) and image \(I\), as discussed in the second empirical study. By doing so, we can encourage the generated shape \(S\) to be more consistent with the input text. Since we cannot directly optimize the similarity between text and shape features, reducing the semantic gap between \(f_{\text{T}}\) and \(f_{\text{I}}\) provides an effective way to align the two modalities and improves the overall performance of the model.
As illustrated in Figure 3 (b), the stage-2 alignment starts by replacing \(E_{\text{I}}\) with \(E_{\text{T}}\) to extract CLIP text feature \(f_{\text{T}}\), given the input text \(T\). We then fine-tune the mapper \(M\) using a CLIP consistency loss to reduce the gap between the input text \(T\) and \(m\) rendered images \(\{R_{i}\}_{i=1}^{m}\) captured from random camera viewpoints of the output shape \(S\). The CLIP consistency loss is defined in Equation 4. By minimizing this loss, we encourage the output shape to be consistent with the input text.
\[\mathcal{L}_{\text{C}}=-\sum_{i=1}^{m}\left\langle f_{\text{T}},\ \frac{E_{\text{I}}(R_{i})}{\|E_{\text{I}}(R_{i})\|}\right\rangle \tag{4}\]
where \(\langle\cdot,\cdot\rangle\) denotes the inner product; the negative sign makes minimizing \(\mathcal{L}_{\text{C}}\) equivalent to maximizing the CLIP similarity between the text and the rendered views.
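A minimal sketch of this loss is given below; the linear stub stands in for the frozen CLIP image encoder \(E_{\text{I}}\), and the image size is a toy assumption.

```python
import torch

def clip_consistency_loss(f_T, rendered_images, clip_image_encoder):
    """Sketch of Equation (4): negative sum of inner products between the normalized
    CLIP text feature f_T and the normalized CLIP features of m rendered views."""
    feats = clip_image_encoder(rendered_images)   # (m, D)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return -(feats @ f_T).sum()                   # minimize => align views with text

# Toy usage: a linear stub stands in for the real CLIP image encoder E_I.
torch.manual_seed(0)
proj = torch.nn.Linear(3 * 32 * 32, 512)
encoder = lambda imgs: proj(imgs.flatten(1))
f_T = torch.randn(512); f_T = f_T / f_T.norm()
views = torch.rand(4, 3, 32, 32)                  # m = 4 rendered views
print(clip_consistency_loss(f_T, views, encoder).item())
```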
In stage-2 alignment, we continue to use \(\mathcal{L}_{\text{bg}}\) to improve the model's object awareness. Figures 5 (a) and (b) indicate that the model can find a rough shape that fits the input text in about five iterations when \(\mathcal{L}_{\text{bg}}\) is used. On the other hand, without \(\mathcal{L}_{\text{bg}}\), the model fails to produce a reasonable output because the same color predicted in both the object and background regions impedes the model's ability to perceive the object.
Our stage-1 alignment has already narrowed the semantic gap between text space \(\Omega_{\text{T}}\) and shape space \(\Omega_{\text{S}}\) with \(M\). Hence, the stage-2 alignment just requires fine-tuning \(M\) using a CLIP consistency loss with the input text for only 20 iterations. This fine-tuning takes around 85 seconds on one GeForce RTX 3090 Ti GPU, which is significantly faster than Dream Fields [11] and DreamFusion [25], taking 72 minutes and 90 minutes, respectively. After stage-2 alignment, a plausible result can be obtained readily, shown as "result" in Figure 5 (b). Our two-stage feature-space alignment is a novel approach that can generate 3D shapes from text efficiently, which significantly reduces the test time compared to previous methods.
#### 3.3.3 Diversified 3D shape generation
Generally speaking, 3D shape generation from text is a one-to-many task, meaning that multiple plausible shapes can correspond to the same piece of text. To account for this, instead of constructing \(\mathcal{L}_{\text{C}}\) from the single feature \(f_{\text{T}}\), we propose to sample features from a pre-trained text-to-image diffusion model [28], which can generate features \(f_{\text{T}\to\text{I}}\) in the CLIP image feature space from a single input text CLIP feature \(f_{\text{T}}\). Each time, we obtain one text-to-image feature \(f_{\text{T}\to\text{I}}\) by sampling a random noise; this feature is further combined with the original text feature \(f_{\text{T}}\) to construct \(\mathcal{L}_{\text{C}}\) as
\[\mathcal{L}_{\text{C}}=-\sum_{i=1}^{m}\left\langle\tau f_{\text{T}\to\text{I}}+(1-\tau)f_{\text{T}},\ \frac{E_{\text{I}}(R_{i})}{\|E_{\text{I}}(R_{i})\|}\right\rangle. \tag{5}\]
where \(f_{\text{T}\to\text{I}}\) is the \(f_{\text{I}}\) predicted from \(f_{\text{T}}\) by the diffusion prior [28] with sampled random noise, and \(\tau\) is a hyperparameter that balances diversity and text-shape consistency; a larger \(\tau\) leads to more diverse shapes, while a smaller \(\tau\) encourages more consistency between the text and shape. By sampling multiple random noises, which deliver multiple \(f_{\text{T}\to\text{I}}\), and constructing different consistency objectives \(\mathcal{L}_{C}\), our model can be optimized to generate diverse results at test time; see Figure 3 (b) "diffusion prior". This allows our model to create diverse 3D shapes for the same piece of input text. Besides, by exploiting the prior from diffusion models, our model can also better mitigate the effect of the semantic gap between \(f_{\text{T}}\) and \(f_{\text{I}}\) in the stage-2 alignment; see the discussion in Section 3.2.2. This is achieved by encouraging \(f_{\text{I}}\) of the rendered images to be consistent with the blended features of the sampled text-to-image feature \(f_{\text{T}\to\text{I}}\) and the input text feature \(f_{\text{T}}\), rather than just the input text feature \(f_{\text{T}}\) itself.
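A minimal sketch of this sampling-and-blending step is shown below; the noisy-identity function is a stand-in for the pre-trained diffusion prior of [28].

```python
import torch

def blended_targets(f_T, sample_prior, tau=0.5, n_samples=4):
    """Sketch of the diversified objective of Equation (5): draw several
    diffusion-prior samples f_{T->I} for one text feature f_T and blend each
    with f_T; each blended target drives one test-time optimization run."""
    targets = []
    for _ in range(n_samples):
        f_TI = sample_prior(f_T)             # one sample per random noise draw
        f = tau * f_TI + (1 - tau) * f_T     # blend: diversity vs. consistency
        targets.append(f / f.norm())
    return torch.stack(targets)

# Toy usage: a noisy identity stands in for the diffusion prior.
torch.manual_seed(0)
f_T = torch.randn(512); f_T = f_T / f_T.norm()
fake_prior = lambda f: f + 0.3 * torch.randn_like(f)
print(blended_targets(f_T, fake_prior).shape)  # torch.Size([4, 512])
```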
### _Text-Guided 3D Shape Stylization_
While the two-stage feature-space alignment can generate plausible 3D shapes, as shown in Figures 3 (b) and 5 (b), its generative space and quality are still limited by the pre-trained SVR model in use. For instance, DVR [23] cannot generate shapes beyond the synthetic patterns in the ShapeNet dataset. We therefore introduce text-guided stylization and refinement modules that enable our approach to create shapes outside the SVR generative space, with delicate structures and textures; see Figures 6 and 2 "ISS++".
#### 3.4.1 CLIP-guided stylization
First, we introduce CLIP-guided stylization to stylize 3D shapes beyond the generative space of the adopted SVR model.
The top branch of Figure 6 (a) shows how we apply this method for texture stylization. To begin with, we duplicate \(D\), except for the output layer, to create two networks: \(D_{o}\) for occupancy prediction and \(D_{c}\) for color prediction. Then we decompose the output layer into a \(1\)-channel head for occupancy and a \(3\)-channel head for color, and place them on top of \(D_{o}\) and \(D_{c}\), respectively.
Fig. 5: Generating shapes from text with and without our background loss \(\mathcal{L}_{\text{bg}}\). Input text: A red car.
To further create new structures for shape stylization, we incorporate a shape-and-texture stylization strategy in addition to texture stylization, as depicted in the bottom branch of Figure 6 (a). To do so, we further optimize \(D\) by adopting the CLIP consistency loss in Equation 4. Besides, to preserve the 3D prior learned in the two-stage feature-space alignment, we additionally propose a 3D prior loss \(\mathcal{L}_{P}\), as shown in Equation (6).
\[\mathcal{L}_{\text{P}}=\sum_{p}|D_{o}(p)-D^{\prime}_{o}(p)| \tag{6}\]
where \(D_{o}(p)\) and \(D^{\prime}_{o}(p)\) indicate the occupancy predictions for query point \(p\) from the initial \(D\) and from the \(D\) optimized during the stylization training process, respectively.
To enhance the network's object awareness in the stylization process, we introduce a background augmentation technique. As illustrated in Figure 7 (a), when the shape is white, it can blend into the white background, making it difficult for the model to capture the object boundaries and resulting in textures that are poorly aligned with the table. Similarly, in Figure 7 (c), the generated texture is adversely affected by the black background color, leading to inferior stylization results. In our background augmentation strategy, we substitute the background color with a random RGB value at each training iteration. In this way, the object region is easily distinguishable during training, as depicted in Figure 7 (b, d), leading to an improvement in texture-shape consistency and stylization quality.
#### 3.4.2 SDS-guided refinement and stylization
The CLIP-guided stylization helps generate 3D shapes outside the scope of the SVR model's generative space. Yet, the quality of the generated shapes is still bounded by the adopted SVR model, with detailed structures missing. To further enhance the quality of the generated shapes, we introduce a novel SDS-guided refinement and stylization technique to decorate the 3D shapes with intricate details and textures, as illustrated in Figures 1, 2 "ISS++", and Figure 6 (b).
The proposed SDS-guided refinement module is inspired by [25] and aims to improve the generative quality of the pre-trained SVR model. Given a pre-trained text-guided image generation diffusion model \(\phi\) and an input text \(T\), we adopt the Score Distillation Sampling (SDS) approach to fine-tune \(D\) by encouraging the rendered image \(R\) to be closer to the images generated by \(\phi\) given \(T\). As shown in Figure 6 (b), we use \(\theta\) to denote the parameters in the decoder \(D(p;\theta)\), \(p\) to represent the query points, and \(R(D(p;\theta))\) to indicate the rendered image from a randomly chosen viewpoint. Specifically, we randomly sample a time step \(t\) and add noise to \(R\) to produce \(z_{t}\): \(z_{t}=\sqrt{\bar{\alpha}_{t}}R+\sqrt{1-\bar{\alpha}_{t}}\epsilon\). The text \(T\) and \(z_{t}\) are fed into the pre-trained diffusion model, which predicts the noise \(\hat{\epsilon}_{\phi}(z_{t};T,t)\). The predicted noise is compared with the added noise \(\epsilon\) to construct \(\mathcal{L}_{\text{sds}}\). The procedure for calculating the gradient \(\nabla_{\theta}\mathcal{L}_{\text{sds}}\) is illustrated below.
\[\nabla_{\theta}\mathcal{L}_{\text{sds}}(\phi,R(D(p;\theta)))= \mathbb{E}_{t,\epsilon}[\frac{\partial(\hat{\epsilon}_{\phi}(z_{t};T,t)- \epsilon)}{\partial\theta}] \tag{7a}\] \[=\mathbb{E}_{t,\epsilon}[(\hat{\epsilon}_{\phi}(z_{t};T,t)- \epsilon)\frac{\partial\hat{\epsilon}_{\phi}(z_{t};T,t)}{\partial z_{t}}\frac{ \partial z_{t}}{\partial R}\frac{\partial R}{\partial\theta}]\] (7b) \[\triangleq\mathbb{E}_{t,\epsilon}[w(t)(\hat{\epsilon}_{\phi}(z_{ t};T,t)-\epsilon)\frac{\partial R}{\partial\theta}] \tag{7c}\]
where \(w(t)=\partial z_{t}/\partial R=\sqrt{\bar{\alpha}_{t}}I\) is a weighting function, and the Jacobian term \(\frac{\partial\hat{\epsilon}_{\phi}(z_{t};T,t)}{\partial z_{t}}\) can be omitted, as indicated in [25]. The gradient \(\nabla_{\theta}\mathcal{L}_{\text{sds}}\) updates the parameters \(\theta\) so that the model produces rendered images \(R\) moving toward the high-density region of the score function; that is, \(R\) is encouraged to be realistic and to match the text.
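The SDS gradient can be realized with a simple stop-gradient trick, sketched below in PyTorch under the assumption of a frozen noise predictor `eps_model` (standing in for \(\phi\)) and a precomputed `alphas_cumprod` schedule; both names are illustrative, not the original code.

```python
import torch

def sds_loss(rendered, text_emb, eps_model, alphas_cumprod):
    # Sample a time step and noise the rendered image (z_t in the text).
    t = torch.randint(0, alphas_cumprod.numel(), (1,), device=rendered.device)
    a_bar = alphas_cumprod[t].view(1, 1, 1, 1)
    eps = torch.randn_like(rendered)
    z_t = a_bar.sqrt() * rendered + (1.0 - a_bar).sqrt() * eps
    # Query the frozen diffusion model; its Jacobian is omitted (Eq. 7c).
    with torch.no_grad():
        eps_hat = eps_model(z_t, t, text_emb)
    grad = a_bar.sqrt() * (eps_hat - eps)  # w(t) * (eps_hat - eps)
    # Surrogate loss: d(loss)/d(rendered) == grad, so backprop through the
    # renderer carries the SDS gradient to the decoder parameters theta.
    return (grad.detach() * rendered).sum()
```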
Fig. 6: Our text-guided 3D shape refinement and stylization framework. (a) CLIP-guided stylization. (b) SDS-guided refinement and stylization.
Fig. 7: Text-guided 3D shape stylization with and without our background augmentation.
The SDS-guided refinement further enhances the surface details of the generated shapes while preserving the overall topology learned by the two-stage feature-space alignment. With much fewer training iterations than DreamFusion, ISS++ is able to generate 3D shapes with comparable or even higher fidelity, as shown in Section 4. Besides, our ISS++ helps mitigate the "multi-face Janus problem" of DreamFusion [25], in which the generated shapes have multiple, often disconnected, faces. This can occur due to the lack of constraints on the topology of the generated shapes without 3D priors. In contrast, our ISS++ leverages the 3D shape prior learned in the two-stage feature-space alignment to encourage consistency in the shape topology and achieve faithful and coherent shapes. Further, SDS enables ISS++ to generate a broader range of 3D shapes beyond the image dataset; we provide more results in Section 4.
Furthermore, this module also enables text-guided stylization to complement the CLIP-guided stylization presented in Section 3.4.1. Specifically, given a 3D shape \(S\) generated by our two-stage feature-space alignment and a text prompt \(T\) that describes the target style, the SDS-guided stylization procedure can incorporate the semantic attributes of \(T\) into \(S\), as illustrated in Figure 6 (b).
#### 3.4.3 Discussions on the different stylization approaches
We have presented three text-guided 3D shape stylization alternatives: texture stylization, shape-and-texture stylization, and SDS-guided stylization. Each method has its own pros and cons. First, texture stylization mainly changes the texture style of the generated shape and preserves its structure and functionality. Besides, it can handle abstract text descriptions ("sunset"), as shown in Figure 18 (a). However, texture stylization may result in shape-texture misalignment if the given shape and texture have misaligned structures (see Figure 18: "peach chair"). Second, beyond texture stylization, shape-and-texture stylization can also create novel and imaginary structures, giving rise to more plausible generative results. Third, SDS-guided stylization is capable of producing stylized 3D shapes that better capture the semantic concepts of the given style with better fidelity. However, it may sacrifice the functionality of the generated shapes; see Figure 19 ("a chair imitating shell").
In summary, there is a trade-off between preserving the functionality of 3D shapes and capturing the target style. To address this, we offer three options for users to choose from. Texture stylization is a good choice if the shape functionality is a top priority. Shape-and-texture stylization can encourage better consistency between texture and shape and is capable of generating novel structures. SDS-guided stylization can produce stylized 3D shapes with a higher fidelity according to the target style but at the expense of sacrificing their functionalities. We hope our exploration will inspire more research efforts in the future for simultaneously achieving functionality preservation and style creation.
### _Compatibility with Different SVR Models_
In addition to DVR, our two-stage feature-space alignment can work with a variety of SVR models. For instance, it can be easily integrated with two advanced methods, SS3D [2] and GET3D [7], recent generative models for 3D shape generation. SS3D is capable of generating 3D shapes for a wide range of categories, and GET3D can generate striking 3D shapes of superior quality. By replacing \(E_{\text{S}}\) and \(D\) in Figure 3 with the encoder and decoder of SS3D or GET3D, our model can be integrated with them to produce shapes of more categories or higher quality. During training, we adopt a pipeline similar to the one depicted in Figure 3 to enable text-to-shape generation. For SS3D, in stage-1 training, we use their training objectives to replace \(\mathcal{L}_{\text{D}}\) (see Section 3.3), which uses single-view in-the-wild images beyond the ShapeNet categories without their poses. For GET3D, we first generate paired image-shape data by rendering images from its generated 3D shapes for training our two-stage feature-space alignment pipeline. In a nutshell, our approach is scalable and compatible with various SVR models and can potentially benefit from other new approaches in the future.
## 4 Experiments
### _Dataset_
To train our ISS++ framework, we use both synthetic and real-world datasets, ShapeNet [3] (13 categories) and CO3D [32] (50 categories), respectively. We further extend the generative capability beyond the above categories by adopting SS3D [2] and fine-tuning our model using SDS. For quantitative and qualitative evaluations, we create a test set with four text prompts per category in the ShapeNet dataset.
### _Implementation Details_
To train the two-stage feature-space alignment model, we first train the stage-1 mapping with a learning rate of \(1e^{-4}\) for 400 epochs. Then, at test time, we further train the stage-2 alignment for 20 iterations. On average, this process takes around 85 seconds on one GeForce RTX 3090 Ti GPU. Optionally, we further refine \(S\) with the SDS loss for about \(40\) epochs or apply text-guided stylization for about \(30\)-\(50\) epochs. Our hyperparameters, including \(\lambda_{M}\), \(t\), \(\lambda_{bg}\), \(m\), and \(\tau\), are set empirically to 0.5, 0.5, 10, 10, and 0.5, respectively, based on a small validation set.
### _Metrics_
#### 4.3.1 Metric for shape generation quality
For quantitative evaluation, we compute the Frechet Inception Distance (FID) [8] between a set of five rendered images from different camera viewpoints for each shape and a set of ground-truth images from ShapeNet. We use the official model with InceptionNet pre-trained on ImageNet for FID evaluation, as it is a widely adopted metric for evaluating the realism and quality of generative models. We do not train an FID model on ShapeNet, as the size of the dataset is too small to train an effective FID model like that trained on ImageNet. Additionally, we randomly sample 2,600 images from the ShapeNet dataset as ground truth images for FID evaluation, rather than using images from ImageNet, to more accurately evaluate the similarity of the generated shapes and the ShapeNet ground truth.
Besides adopting FID, we also utilize the Frechet Point Distance (FPD) metric proposed in [18] to measure the shape generation quality without texture. To evaluate FPD, we first extract 3D point clouds from the generated shapes without color (see Figure 8) and then evaluate them. It is worth mentioning that Dream Fields [11] does not generate 3D shapes directly, so we could not evaluate it with FPD.
#### 4.3.2 Human perceptual evaluation setup
Further, we conduct a human perceptual evaluation to assess the consistency between the generated shapes and the input text. To begin with, we collect the generated results. For each input text, we create 14 results in total from the four existing works, eight baseline methods, our predecessor work ISS [17], and our ISS++; see Section 4.4 and Section 4.5 for details of each approach. Then, we invite 10 volunteers with normal vision to participate in the evaluation, including 3 females and 7 males aged 19 to 58. The generated results are shown to the participants in random order without any hint on how they were created. The volunteers are asked to give a score indicating whether the candidate shape matches the input text, where 1 means a perfect match, 0.5 a partial match, and 0 a poor match. At last, we sum up the total score \(s\) for each approach over all participants and calculate \(s/n\) as the metric "Consistency Score", where \(n=10\) is the number of participants.
### _Comparisons with Existing Works_
In this section, we conduct qualitative and quantitative comparisons of four state-of-the-art works [11, 20, 25, 35], our predecessor work ISS [17], and our ISS++. For DreamFusion [25], since there are no official codes available, we use the latest version of a third-party implementation called Stable-DreamFusion [38]. For other works, we use their official codes on GitHub to generate shapes on our text set.
#### 4.4.1 Quantitative comparisons
According to the quantitative comparisons presented in Table I, our "ISS++" outperforms all the existing works by a considerable margin in terms of all the evaluation metrics. Specifically, the superior performance on FID and FPD demonstrates that our generative results have better quality in terms of texture and 3D topology. In addition, the higher Consistency Score indicates that ISS++ can generate shapes with better consistency with the input text. The results of the "A/B/C Test" and "A/B Test" will be discussed in Section 4.5.3.
#### 4.4.2 Qualitative comparisons
**Comparison with state of the arts.** We then compare the generative results of our ISS++ with four existing works and our predecessor work ISS [17]. The qualitative comparisons are shown in Figure 9. We observe that CLIP-Forge [35] can only produce low-resolution shapes without color and texture, and some of its generated shapes are not well aligned with the input text, for instance, "a watercraft". Dream Fields [11] fails to generate the desired shapes in most evaluated cases. Moreover, CLIP-Mesh [20] is unable to generate fine-grained topology in some cases, such as "a black airplane with white wings". Besides, Stable-Dreamfusion [38] has inferior performance in terms of surface quality ("a black airplane with white wings"), topology faithfulness ("a cupboard"), and generative efficiency. Although our predecessor work ISS [17] produces 3D shapes with better topology faithfulness and takes less time, the details of the results are still far from satisfactory, e.g., the rearview mirror on "a red car". In contrast, our ISS++ outperforms all the existing works by a large margin in terms of generative quality, consistency with the input text, and details on the generated shape, as shown in Figure 9.
**Comparison with DreamFusion.** To provide a further comparison with the most recent work DreamFusion [25], we show additional generative results from Stable-DreamFusion [38] and our ISS++ in Figure 10. Unlike Stable-DreamFusion, which optimizes the shape directly using SDS without a 3D prior, our ISS++ utilizes the 3D prior learned by our two-stage feature-space alignment, improving the generative performance in terms of avoiding failure modes (e.g., "a race car in the color of yellow"), enhancing the surface quality (e.g., "an ambulance"), and improving the 3D topology faithfulness (e.g., "a swivel chair with wheels"). In addition, our ISS++ mitigates the "multi-face Janus problem" in Stable-DreamFusion, where the generated shapes, e.g., the monitors in Figure 11, can have multiple frontal views when viewed from different viewpoints. On the contrary, our ISS++ is able to generate faithful 3D shapes leveraging the 3D prior learned in our two-stage feature-space alignment, see Figure 11 "ISS++".
**Generalization ability to novel categories.** Another notable advantage of our ISS++ is its ability to generate 3D shapes in novel categories beyond the training data. As depicted in Figure 12, starting from a randomly chosen shape "a red car" from our two-stage feature-space alignment, ISS++ is capable of deforming it into various 3D shapes (Figure 12) in a broad range of categories. It is worth noting that the quality of generated shapes can benefit from 3D priors of unrelated categories. For instance, a "bird" can be generated using a "car" as the prior. This might be caused by the smoothness priors enforced by the initialization model, which are further used by the subsequent SDS process to produce high-quality surfaces. This demonstrates the generalization ability of our method in generating diverse and plausible novel 3D shapes, even for input texts beyond the training categories.
### _Ablation Studies_
#### 4.5.1 Baseline setups
In addition, we develop several baselines to evaluate the effectiveness of different components in our model.
* \(E_{\text{I}}+D\): This is the baseline where we get the CLIP image feature \(f_{\text{I}}\) using \(E_{\text{I}}\), and optimize \(D\) to generate 3D shapes from \(f_{\text{I}}\) without using the two-stage feature-space alignment.
* w/o stage 1: This baseline involves ablating stage-1 alignment and optimizing stage-2 alignment with a randomly initialized \(M\).
* w/o stage 2: In this baseline, we directly generate the shape with the mapper \(M\) after stage 1, without performing the stage-2 optimization.
* w/o \(\mathcal{L}_{\text{bg},1}\): This baseline involves removing \(\mathcal{L}_{\text{bg}}\) in stage-1 alignment.
* w/o \(\mathcal{L}_{\text{bg},2}\): This baseline involves removing \(\mathcal{L}_{\text{bg}}\) in stage-2 alignment.
* w/o \(\mathcal{L}_{\text{bg}}\): This baseline involves removing \(\mathcal{L}_{\text{bg}}\) in both stages.
* GLIDE+DVR: This baseline involves using a recent zero-shot text-to-image generation method GLIDE [22] to first generate an image \(I\) from \(T\), and then using DVR [23] to generate \(S\) from \(I\).
Fig. 8: Visualization of point clouds of different methods for FPD evaluation.
| Method Type | Method | FID (\(\downarrow\)) | Consistency Score (%) (\(\uparrow\)) | FPD (\(\downarrow\)) | A/B/C Test (for two-stage alignment) | A/B Test (for ISS++) |
| --- | --- | --- | --- | --- | --- | --- |
| Existing works | CLIP-Forge [35] | 162.87 | 41.83 \(\pm\) 17.62 | 37.43 | 8.90 \(\pm\) 4.12 | N.A. |
| Existing works | Dream Fields [11] | 181.25 | 25.38 \(\pm\) 12.33 | N.A. | N.A. | N.A. |
| Existing works | CLIP-Mesh [20] | 188.09 | 40.27 \(\pm\) 8.82 | 40.27 | N.A. | N.A. |
| Existing works | Dreamfusion [25]* | 159.04 | 38.36 \(\pm\) 9.12 | 36.44 | N.A. | 7.00 \(\pm\) 2.64 |
| Ablation studies | \(E_{\text{I}}\)+\(D\) | 181.88 | 20.97 \(\pm\) 13.59 | 38.61 | N.A. | N.A. |
| Ablation studies | w/o stage 1 | 222.96 | 1.92 \(\pm\) 2.22 | 79.41 | N.A. | N.A. |
| Ablation studies | w/o stage 2 | 202.33 | 29.52 \(\pm\) 14.86 | 41.71 | N.A. | N.A. |
| Ablation studies | w/o \(\mathcal{L}_{\text{bg},1}\) | 149.45 | 29.45 \(\pm\) 14.67 | 40.85 | N.A. | N.A. |
| Ablation studies | w/o \(\mathcal{L}_{\text{bg},2}\) | 156.52 | 31.55 \(\pm\) 8.87 | 38.31 | N.A. | N.A. |
| Ablation studies | w/o \(\mathcal{L}_{\text{bg}}\) | 178.34 | 30.96 \(\pm\) 15.49 | 40.98 | N.A. | N.A. |
| Text2Image+SVR | GLIDE [22]+DVR [23] | 212.41 | 8.85 \(\pm\) 7.94 | 41.33 | N.A. | N.A. |
| Text2Image+SVR | LAFITE [48]+DVR [23] | 135.01 | 52.12 \(\pm\) 11.05 | 37.55 | 11.70 \(\pm\) 4.11 | N.A. |
| Our earlier work | ISS [17] | 124.42 \(\pm\) 5.11 | 60.0 \(\pm\) 10.94 | 35.67 \(\pm\) 1.09 | **21.70 \(\pm\) 5.19** | N.A. |
| Ours | ISS++ | **114.34** | **70.77 \(\pm\) 8.38** | **30.92** | N.A. | **31.80 \(\pm\) 7.53** |

TABLE I: Comparisons with existing works and our baselines. *: We use Stable-Dreamfusion [38] for implementation.
Fig. 11: Stable-Dreamfusion suffers from the “multi-face Janus problem”, and our ISS++ mitigates this issue by leveraging the 3D prior.
Fig. 10: Results of Stable-Dreamfusion and our ISS++.
Fig. 9: Qualitative comparisons with existing works.
* LAFITE+DVR: In this baseline, we train a recent text-guided image generation approach LAFITE [48] on ShapeNet dataset, produce an image \(I\) from \(T\), and then generate \(S\) from \(I\) using DVR [23].
The first six baselines are designed to evaluate the effectiveness of modules in our framework and the last two baselines utilize advanced text-guided 2D image generation methods to first generate images and then use an SVR model to generate shapes. Note that we still adopt DVR as the SVR model for fair comparisons.
#### 4.5.2 Quantitative and qualitative comparisons
The qualitative results of baseline methods are shown in Figure 13. We summarize our key observations as below:
* \(E_{\text{I}}+D\): As seen in column (a) of Figure 13, the generated results from CLIP space \(\Omega_{\text{I}}\) have inferior texture and shape structure fidelity due to the limited ability of \(E_{\text{I}}\) to capture image details.
* w/o stage 1: Figure 13 (b) shows that the produced shapes are almost the same for any given text without adopting stage-1 alignment. This happens because \(M\) maps text feature \(f_{\text{T}}\) to nearly the same feature even with stage-2 alignment enabled. This demonstrates the necessity of stage-1 alignment to provide good initialization for stage-2 test-time optimization.
* w/o stage 2: Figure 13 (c) indicates that the model may fail to align \(f_{\text{S}}\) and \(f_{\text{T}}\) well without stage 2. This is further illustrated in Figure 14 (a): without stage 2, the model fails to generate a reasonable shape with text as input but succeeds in generating 3D shapes from a single image. After applying stage 2, a plausible phone can be produced using the text (see "stage 2 output").
* w/o \(\mathcal{L}_{\text{bg},1}\), w/o \(\mathcal{L}_{\text{bg},2}\), w/o \(\mathcal{L}_{\text{bg}}\): Columns (d, e, f) of Figure 13 show that stage-2 alignment cannot work properly without \(\mathcal{L}_{\text{bg}}\) in either stage-1 or stage-2 alignment or both due to the lack of foreground awareness. Even though stage-1 alignment has already encouraged the background to be white, we still need this loss in stage 2 to obtain satisfying results.
* GLIDE+DVR: The performance of GLIDE+DVR (see Figure 14 (b)) is poor because of the large domain gap between the training data of DVR and the images generated by GLIDE [22].
* LAFITE+DVR: In Figure 13 (h), some shapes produced by this baseline do not match the given texts because of the semantic gap between \(f_{\text{I}}\) and \(f_{\text{T}}\) (e.g., "a wooden boat"). Also, the appearance can be coarse (Figure 14 (b)) because of the error accumulation of the two isolated steps, i.e., LAFITE (Figure 14 (b) "image from LAFITE") and DVR (Figure 14 (b) "shape from LAFITE image"). Despite these shortcomings, generating images and then shapes sequentially remains a strong baseline and a valuable direction for future research.
* Two-stage alignment: Column (i) of Figure 13 shows that our two-stage feature space alignment can generate plausible shapes and textures consistent with text descriptions, beyond all the above baselines. However, the generative details are still not very satisfying.
* Ours (ISS++): Column (j) of Figure 13 demonstrates the superior capability of ISS++ to generate shapes and textures with a remarkable level of detail, outperforming all the baselines by a substantial margin.
#### 4.5.3 A/B/C test and A/B test
We conduct an A/B/C test and an A/B test with 10 volunteers. For fair comparisons, the A/B/C test is designed to evaluate the approaches without SDS refinement, _i.e._, our two-stage feature-space alignment and the two baselines with the highest performance: CLIP-Forge [35] and "LAFITE+DVR". In addition, the A/B test compares the approaches trained with SDS, i.e., our ISS++ and DreamFusion [25]. In each test, the results of the compared approaches (per input text, a total of 52 texts) were displayed in random order, and the participants were asked to choose their favorite one.
The results of the A/B/C test, shown in Table I "A/B/C Test", demonstrate that our two-stage feature-space alignment is the most preferred approach, outperforming CLIP-Forge by 143.8% (computed as \((21.70-8.90)/8.90\)) and "LAFITE+DVR" by 85.5% (computed as \((21.70-11.70)/11.70\)). In addition, the result of "A/B test" in Table I shows that our ISS++ outperforms Stable-Dreamfusion by 354.3% (computed as \((31.80-7.00)/7.00\)) in terms of user preference.
### _More Analysis of Generative Results_
Moreover, we evaluate the novelty and diversity of generated shapes, as well as the scalability of the proposed two-stage feature-space alignment. Additionally, we will showcase further text-guided stylization results of ISS++ and demonstrate how our method can generalize to a broad range of categories and produce high-fidelity shapes.
**Generation novelty of two-stage feature space alignment.** Our two-stage feature-space alignment has the ability to produce shapes that are novel and not present in the training data. Figure 15 shows that given an input text, our model first generates the 3D shape in (a), and then uses it to retrieve the top three closest shapes (b,c,d) in
Fig. 12: With a randomly selected shape as initialization (“a red car”), ISS++ can generate a wide range of 3D shapes beyond the training categories.
the entire training set based on the cosine similarity between CLIP features \(f_{\text{I}}\) of rendered images. The result shows that our generated shapes after two-stage feature-space alignment are different from the retrieved shapes, indicating that our two-stage feature-space alignment method is able to generate novel shapes even without any stylization process. This is unsurprising, since our two-stage feature-space alignment shares the generative space with the adopted SVR model and can potentially create any shape that the adopted SVR model can generate.
**Generation diversity.** In Figure 16 and Table II, we compare the diversified generation results of ISS++ and our previous work ISS [17] both qualitatively and quantitatively. Recall that ISS [17] is also able to generate diversified shapes by randomly perturbing the initialization feature while keeping \(f_{\text{T}}\) as the ground truth to derive diversified features. The model can then converge to different shapes for different noise perturbations. To evaluate the generative diversity, we generate another two shapes per input text for both ISS [17] and ISS++, then use FID [8] and FPD [18] for the fidelity and diversity evaluation. The results in Table II and Figure 16 demonstrate that our ISS++ can generate more diversified shapes with better text-shape consistency than ISS [17].
**Generation fidelity of two-stage feature space alignment.** To evaluate the ability of our two-stage feature space alignment to generate realistic 3D shapes, we train DVR [23] on the real-world CO3D dataset, and adopt the learned feature space for text-guided shape generation without using paired data. As depicted in Figure 17, our model can produce real-world shapes with a high degree of fidelity. To the best of our knowledge, this is the first work to investigate text-guided shape generation on real-world datasets and generate realistic 3D shapes.
**Generation beyond the capability of the SVR model.** The text-guided stylization module enables our model to create 3D shapes beyond the pre-trained SVR model. As shown in Figure 1, Figure 6, Figure 18, and Figure 19, novel structures and textures matching text descriptions can be created. As shown in Figure 18 (a), the CLIP-guided texture stylization can hallucinate both realistic ("mahogany chair") and fantasy ("glacier chair") vivid textures on the chair. Further, in Figure 18 (b), our shape-and-texture stylization successfully creates novel textures and imaginary shapes not present in the training dataset. In addition, as shown in Figure 1 and Figure 19, our ISS++ is capable of generating aesthetically pleasing stylized shapes with intricate details and textures, such as the "rabbit lamp" and "banana chair". These results showcase the ability of our model to generate visually appealing and complex shapes from text descriptions.
**Generality and scalability of two-stage feature space alignment on other SVR models.** The generality and scalability of two-stage feature space alignment are evaluated by replacing DVR [23] with other SVR models, such as SS3D [2] and GET3D [7]. It is worth mentioning that SS3D is good at producing 3D shapes in more
Fig. 19: SDS-guided stylization. Two different views are rendered. The text prompt is “A [shape] simulating a [style].”
| | \(d(M(f_{\text{I}}),M(f_{\text{T}}))\) | \(d(M(f_{\text{I}}),f_{\text{S}})\) | \(d(M(f_{\text{T}}),f_{\text{S}})\) | \(d(M^{\prime}(f_{\text{T}}),f_{\text{S}})\) | \(d(M(f_{\text{T}}),M^{\prime}(f_{\text{T}}))\) |
| --- | --- | --- | --- | --- | --- |
| mean \(\pm\) std | 0.58 \(\pm\) 0.23 | 0.21 \(\pm\) 0.10 | 0.45 \(\pm\) 0.20 | 0.17 \(\pm\) 0.08 | 0.32 \(\pm\) 0.17 |

TABLE III: Mean and standard deviation of distances in the feature-space mapping process evaluated on our test set. \(d\) denotes cosine distance. Almost all distances are consistently reduced after our stage-2 alignment.
Fig. 18: CLIP-Guided stylization. (a) Texture stylization. (b) Shape-and-texture stylization.
Fig. 20: Generative results of our approach. With ISS++, we can effectively generate 3D shapes of various categories from texts. Left: ISS [17], Right: ISS++.
categories, and GET3D is able to generate 3D shapes with higher fidelity. First, Figure 21 shows that our approach, built upon SS3D, can generate shapes of more real-world categories, such as birds. Notably, for 3D shape generation from texts, the shape generated by our model is of better quality than the initial result of SS3D, which takes an image as input. Second, our model is able to fully leverage the generative capabilities of GET3D to produce high-fidelity 3D shapes, as displayed in Figure 22. These results demonstrate that our approach is general and compatible with various advanced SVR models for producing shapes of more categories and higher quality, even without SDS-guided refinement.
**More generative results.** In addition, we showcase a diverse range of 3D shapes that have been effectively generated from texts using our approach in Figure 20.
### _Analysis of Feature Space Mapping_
To better understand how our two-stage feature-space alignment works, we further study the average feature distances at different stages for all samples in our test set as shown in Table III. Please also refer to Figure 4 (c) for the visualized results.
In the stage-1 alignment, we train the mapper \(M\) to map the CLIP image feature \(f_{\text{I}}\) to \(M(f_{\text{I}})\), which is close to the target shape feature \(f_{\text{S}}\), with latent-space regression. Given that the CLIP model maps \(f_{\text{I}}\) and \(f_{\text{T}}\) to a shared embedding space, it is natural to assume that the mapper \(M\) is also able to map \(f_{\text{T}}\) close to the target shape space. Yet, we found that there is a large gap between \(M(f_{\text{T}})\) and \(f_{\text{S}}\) even with the stage-1 alignment. Specifically, the average distance over all samples between \(M(f_{\text{T}})\) and \(M(f_{\text{I}})\) is \(0.58\pm 0.23\), indicating a substantial gap between the CLIP image and text features. Also, the measured average distance between \(M(f_{\text{I}})\) and \(f_{\text{S}}\) is \(0.21\pm 0.10\), while the distance between the mapped text and the shape is \(d(M(f_{\text{T}}),f_{\text{S}})=0.45\pm 0.20\), indicating a large room for further improvement. Importantly, the above motivates us to adopt an additional stage-2 alignment. It should be noted that since there is no ground-truth 3D shape in our task, we manually select a shape from the ShapeNet dataset that matches well with the input text as the ground-truth shape.
During the stage-2 alignment, the mapper \(M\) is fine-tuned to \(M^{\prime}\) for each input text to further narrow the gap between \(M^{\prime}(f_{\text{T}})\) and \(f_{\text{S}}\) to \(0.17\pm 0.08\), which is much smaller than \(0.45\pm 0.20\), i.e., \(d(M(f_{\text{T}}),f_{\text{S}})\) after the stage-1 alignment. This analysis shows that the stage-2 alignment effectively reduces the gap between the features of the mapped text and the reference shape.
## 5 Limitations
ISS++ has a trade-off between the generative fidelity of 3D shapes within the image dataset and the generation capability for categories outside the image dataset. Although ISS++ can generate shapes outside the dataset with better surface quality (as shown in Figure 12), its out-of-category generative capability
Fig. 21: After training on single images (without camera poses), our approach can generate shapes for a broad range of categories, by adopting [2].
Fig. 22: Results of two-stage feature space alignment built upon GET3D.
Fig. 23: From a randomly chosen shape (a), some of our out-of-category results (ISS++) are inferior to DreamFusion (Stable-Dreamfusion).
may not always outperform DreamFusion [25], as demonstrated in Figure 23. We empirically found that it can be helpful to choose an initialization shape with a similar topology to the desired shape for the SDS procedure. However, there is still a lack of guidance on how to choose a suitable initialization shape for an out-of-category generation.
## 6 Conclusion
In this work, we introduce a novel approach for text-guided 3D shape generation that leverages the image modality as a stepping stone. Our approach eliminates the need for paired text and shape data by using joint text-image features from CLIP and shape priors from a pre-trained single-view reconstruction model. Technically, we have the following contributions. First, our two-stage feature-space alignment reduces the gap between text, image, and shape modalities. Second, the text-guided refinement and stylization techniques effectively and efficiently equip the generated 3D shapes with rich details and diverse styles. Third, our proposed approach is compatible with different single-view reconstruction methods and can be developed to produce shapes in a wide variety of categories and with higher fidelity. Experimental results on ShapeNet, CO3D, and additional categories demonstrate that our approach outperforms SOTA approaches and various baselines.
## Acknowledgements
The work has been supported in part by the Research Grants Council of the Hong Kong Special Administrative Region (Project no. CUHK 14206320), General Research Fund of Hong Kong (Grant No. 17202422), Hong Kong Research Grant Council - Early Career Scheme (Grant No. 27209621), and National Natural Science Foundation of China (No. 62202151). We would also like to thank Mr. Jingyu Hu from The Chinese University of Hong Kong and Dr. Karsten Kreis from NVIDIA's Toronto AI Lab for insightful discussions and contributions to the ideas presented in this work.
|
2304.11340 | Semantic Specialization for Knowledge-based Word Sense Disambiguation | A promising approach for knowledge-based Word Sense Disambiguation (WSD) is
to select the sense whose contextualized embeddings computed for its definition
sentence are closest to those computed for a target word in a given sentence.
This approach relies on the similarity of the \textit{sense} and
\textit{context} embeddings computed by a pre-trained language model. We
propose a semantic specialization for WSD where contextualized embeddings are
adapted to the WSD task using solely lexical knowledge. The key idea is, for a
given sense, to bring semantically related senses and contexts closer and send
different/unrelated senses farther away. We realize this idea as the joint
optimization of the Attract-Repel objective for sense pairs and the
self-training objective for context-sense pairs while controlling deviations
from the original embeddings. The proposed method outperformed previous studies
that adapt contextualized embeddings. It achieved state-of-the-art performance
on knowledge-based WSD when combined with the reranking heuristic that uses the
sense inventory. We found that the similarity characteristics of specialized
embeddings conform to the key idea. We also found that the (dis)similarity of
embeddings between the related/different/unrelated senses correlates well with
the performance of WSD. | Sakae Mizuki, Naoaki Okazaki | 2023-04-22T07:40:23Z | http://arxiv.org/abs/2304.11340v1 | # Semantic Specialization for Knowledge-based Word Sense Disambiguation
###### Abstract
A promising approach for knowledge-based Word Sense Disambiguation (WSD) is to select the sense whose contextualized embeddings computed for its definition sentence are closest to those computed for a target word in a given sentence. This approach relies on the similarity of the _sense_ and _context_ embeddings computed by a pre-trained language model. We propose a semantic specialization for WSD where contextualized embeddings are adapted to the WSD task using solely lexical knowledge. The key idea is, for a given sense, to bring semantically related senses and contexts closer and send different/unrelated senses farther away. We realize this idea as the joint optimization of the Attract-Repel objective for sense pairs and the self-training objective for context-sense pairs while controlling deviations from the original embeddings. The proposed method outperformed previous studies that adapt contextualized embeddings. It achieved state-of-the-art performance on knowledge-based WSD when combined with the reranking heuristic that uses the sense inventory. We found that the similarity characteristics of specialized embeddings conform to the key idea. We also found that the (dis)similarity of embeddings between the related/different/unrelated senses correlates well with the performance of WSD.
## 1 Introduction
Word Sense Disambiguation (WSD) is the task of choosing the appropriate sense of a word from a given sense inventory using contextual information. WSD has proven its usefulness for Information Retrieval Zhong and Ng (2012) and Machine Translation Campolungo et al. (2022). A series of extensive studies has led supervised WSD task performance to surpass the milestone of 80% accuracy Bevilacqua and Navigli (2020), which is the estimated human performance Navigli (2009).
In contrast, the goal of this study is _knowledge-based WSD_: a variant of WSD that does not rely on supervision data but only on lexical knowledge (e.g., word ontology). This task setting is practically appealing because it does not use a corpus with sense annotations Bevilacqua et al. (2021), which is costly and labor-intensive to prepare.
A promising approach is based on similarity: to select the sense that is the nearest to a target word in the embedding space Wang and Wang (2020). Specifically, a pre-trained language model, typically BERT Devlin et al. (2019), is used to compute _sense embeddings_ for definition sentences. Similarly, a target word is encoded into a _context embedding_ for a given sentence. Then, the model predicts the sense of the target word by finding the most similar sense embedding to the context.
The inherent challenge of the similarity-based approach is how we associate two different representations of word meanings, either by definition sentences or by words in context. Although the BERT embeddings capture the coarse-grained word meanings Reif et al. (2019); Loureiro et al. (2021), there should be room for improvement. Notably, Wang and Wang (2020) proposed SREF, sense embedding adaptation by bringing semantically related senses closer. Extending their work, Wang et al. (2021) proposed COE, context embedding enhancement heuristics during inference using the document-level global contexts of the given sentence, and reported the best performance. Despite being effective, COE cannot be applied to stand-alone texts, e.g., short messages on social media or search queries, limiting its applicability.
Our study aims to improve both accuracy and applicability to stand-alone texts. Specifically, we propose an adaptation method of the sense and context embeddings for the WSD task solely using lexical knowledge. Then, what are good embeddings for WSD? Our key idea is to 1) bring semantically related sense and context embeddings that
convey the same meaning closer, and 2) send unrelated and/or different senses that share the same surface form farther away (Fig. 1-d). We formulate the idea as the Attract-Repel objective and self-training objective. The main novelty is the joint optimization to utilize their complementary nature: the former should improve the distinguishability between senses whereas the latter offers pseudo signals of context-sense associations, which has not been explored in previous methods.
The Attract-Repel objective, inspired by Vulic and Mrksic (2018), injects semantic relation knowledge into the similarity of sense pairs. Specifically, we make semantically related senses more similar while making different and unrelated senses more dissimilar (Fig. 1-a). While SREF performs Attract only, our method utilizes both Attract and Repel.
The self-training objective, inspired by the idea of retraining on the classifier's own predictions instead of annotated senses (Navigli, 2009), updates the similarity of context-sense pairs in a pseudo labeling manner (SS 6.1). Specifically, for each training step and given context, we bring the nearest neighbor sense among candidates closer (Fig. 1-b). We also impose distance constraints during adaptation to control the deviation from BERT embeddings (Fig. 1-c) because excessive deviation may cause an inaccurate nearest neighbor sense selection, which would cause a performance drop.
We call the overall proposed method SS-WSD, Semantic Specialization for WSD, following Vulic and Mrksic (2018). We evaluated SS-WSD using the standard evaluation protocol (Raganato et al., 2017) and confirmed that it outperforms the previous embeddings adaptation method. Furthermore, it achieved state-of-the-art (SoTA) performance when combined with the reranking heuristic that uses a sense inventory (Wang and Wang, 2021), and thus is applicable to stand-alone texts.
The contributions of our study are as follows:
* We proposed SS-WSD, an embedding adaptation method that achieves new SoTA in knowledge-based WSD, regardless of the availability of document-level global contexts.
* We found that the performance gain originates from the joint optimization of Attract-Repel and self-training objectives and the prevention of deviation from the original embeddings.
* Empirically, we found that the similarity of related/different/unrelated senses _relative to_ the similarity of ground-truth context-sense pairs correlates well with the WSD performance.
## 2 Related Work
### Knowledge-based WSD
Knowledge-based WSD is a variant of WSD that does not use sense-annotated corpora such as SemCor (Miller et al., 1993) but uses lexical resources instead, typically WordNet. The majority vote based on sense frequencies, also known as the WordNet first sense heuristic (Jurafsky and Martin, 2009), is a simple but strong baseline method of this category. Sense definitions and usage examples are also used to measure the similarity of the target word in a sentence. The simplest method is based on word overlap (Lesk, 1986).
One recent direction is the use of BERT as a contextualized encoder. BERT embeddings showed empirical success on the supervised WSD task when used as features. Some analyses reported that BERT embeddings capture the coarse-grained word meanings (Reif et al., 2019; Loureiro et al., 2021). Wang and Wang (2020) proposed a similarity-based method in the embedding space. It chooses the sense which has the most similar embedding, formed from the concatenation of its lemma, definition, and usage examples, to the embedding of a target word. They also proposed the Semantic Relation Enhancement Framework (SREF\({}_{\text{emb}}\)),
Figure 1: Schema of the proposed method. The BERT embeddings representing senses and contexts are adapted by transformation (top). Transformation functions are optimized using Attract-Repel and self-training objectives under distance constraints so that the adapted embeddings are effective for WSD (bottom).
which adapts sense embeddings via weighted averages over semantically related senses, e.g., hyponyms and derivations. \(\texttt{SREF}_{\texttt{emb}}\) is the highest-performing adaptation method to date. We report that our proposed method achieves better performance.
### Heuristics for Knowledge-based WSD
Another recent direction is the heuristics for choosing the most similar sense, which is further divided into those that use the sense inventory information and those that exploit the document-level global contexts of a given sentence. Wang and Wang (2020) proposed the former, the Try-again Mechanism (TaM). It reranks candidates by adding the similarity between the target word and the lexicographer class (supersense) that a candidate sense belongs to. Subsequent studies Wang et al. (2021); Wang and Wang (2021) refined TaM using Coarse Sense Inventory Lacerra et al. (2020). We examine the effectiveness of the proposed method combined with TaM because it can be applied to stand-alone texts.
Wang et al. (2021) proposed contextual information enhancement (CIE), which enhances context embeddings by exploiting the document-level global contexts of a given sentence on evaluation. This idea originally stems from the one-sense-per-discourse hypothesis Gale et al. (1992): that the sense of a word is highly consistent within a document.
### Attract-Repel Framework
The Attract-Repel Framework is used to inject lexical knowledge into embeddings by encouraging similar instances to have closer embeddings while encouraging dissimilar instances to be farther away. Vulic and Mrksic (2018) and Mrksic et al. (2017) reported that updating static word embeddings using lexical knowledge improves the performance of the word-level semantic relation classification task. Our study proposes its application to sense and context embeddings for the WSD task. We also reformulate the original loss function with the contrastive loss, inspired by its success in Computer Vision Chen et al. (2020) and NLP Gao et al. (2021); Wang et al. (2021); Giorgi et al. (2021).
### Supervised WSD
Supervised methods rely on corpora of sense-annotated contexts, such as SemCor, for training models. However, the coverage of words and senses is limited and biased towards more frequent senses Pasini (2020). Recent studies have addressed these limitations by incorporating lexical resources into the methods. Barba et al. (2021a) and its subsequent study Barba et al. (2021b) reframed WSD as a span extraction task by appending definition sentences of candidate senses to the target context. They reached the SoTA performance among supervised methods.
Similarity-based approaches are also used with supervised methods. Supervised k-nearest neighbors (Sup-kNN) Loureiro and Jorge (2019) defines sense embeddings as the averaged context embeddings of annotated senses. The Bi-Encoder model (BEM) Blevins and Zettlemoyer (2020) jointly fine-tunes two BERT encoders for definition sentences and contexts, ensuring that context embeddings will be closer to the correct sense embeddings. The proposed method is similar in architectural design to BEM, but differs in that we do not fine-tune the BERT encoders. We will compare our results with Sup-kNN and BEM to assess the effect of using no sense annotation and of freezing BERT encoders on performance.
## 3 Semantic Specialization for WSD
### Formalization of WSD
The proposed method adapts BERT embeddings by trainable transformation functions \(H_{s}\) and \(H_{w}\):
\[\mathbf{v}_{w} =H_{w}(\mathbf{\hat{v}}_{w}), \tag{1}\] \[\mathbf{e}_{s} =H_{s}(\mathbf{\hat{e}}_{s}), \tag{2}\]
where the inputs \(\mathbf{\hat{v}}_{w}\) and \(\mathbf{\hat{e}}_{s}\) are the context and sense embeddings computed by a BERT encoder and the outputs \(\mathbf{v}_{w}\) and \(\mathbf{e}_{s}\) are the specialized embeddings.
We train the transformation functions by minimizing the weighted sum of the Attract-Repel objective and the self-training objective on the specialized embeddings. Note that the BERT encoder is frozen (not fine-tuned). We integrate the constraints on the distance between the input and output into the architecture of transformation functions (SS 3.4).
To predict a sense for a given target word \(w\), we look up the candidate senses \(\mathcal{S}_{w}\) and compute their specialized sense embeddings using the learned function \(H_{s}\). Similarly, we compute specialized context embeddings using \(H_{w}\). Then, we select the
nearest neighbor sense \(s^{*}\) using cosine similarity:
\[s^{*} =\operatorname*{arg\,max}_{s^{\prime}\in\mathcal{S}_{w}}\rho_{w,s^{ \prime}}, \tag{3}\] \[\rho_{w,s} =\cos(\mathbf{v}_{w},\mathbf{e}_{s})=\frac{\mathbf{v}_{w}\cdot \mathbf{e}_{s}}{\|\mathbf{v}_{w}\|\|\mathbf{e}_{s}\|}. \tag{4}\]
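For illustration, a minimal PyTorch sketch of this prediction rule (Equations 3-4) is given below; stacking the candidate sense embeddings into one tensor (`cand_embs`, aligned with `cand_keys`) is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def predict_sense(v_w: torch.Tensor, cand_embs: torch.Tensor, cand_keys: list) -> str:
    # v_w: (d,) specialized context embedding; cand_embs: (|S_w|, d)
    # specialized embeddings of the candidate senses, in the same order
    # as cand_keys. Returns the nearest-neighbor sense key (Eq. 3).
    sims = F.cosine_similarity(v_w.unsqueeze(0), cand_embs, dim=-1)
    return cand_keys[int(sims.argmax())]
```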
### Lexical Knowledge in WordNet
We use WordNet (Fellbaum, 1998) as a lexical resource and sense inventory. WordNet mainly consists of synsets, lemmas, and senses. A synset is a group of synonymous words that convey a specific meaning. A lemma presents a canonicalized form of a word and belongs to one or more synsets. A sense is the lemma disambiguated by a sense key, and belongs to a single synset. We use the sense key as the identifier of a sense.
The proposed method makes use of relational knowledge between senses for training the transformation functions. Specifically, for each sense \(s\), we collect three sets of senses: _related_\(\mathcal{S}_{s}^{P}\), _different_\(\mathcal{S}_{s}^{N}\), and _unrelated_\(\mathcal{S}_{s}^{U}\). The _related_ set consists of sense keys of synonyms and semantically related senses (e.g., hyponyms) to the target sense. We followed the definition of related senses used in Wang and Wang (2020) (Appendix A). The _different_ set consists of sense keys sharing the same lemma to the target sense excluding itself. In other words, the different senses correspond to the polysemy of the lemma of the target sense. The _unrelated_ set presents sense keys that are randomly chosen from the sense inventory (see SS 3.5.1 for details). Table 1 shows the statistics of lemmas and senses. See Table 6 (in Appendix A) for examples of the concepts explained in this subsection.
### BERT Embeddings for Sense and Context
For obtaining BERT embeddings, we follow the standard practice of the previous studies (Wang et al., 2020; Bevilacqua and Navigli, 2020; Wang and Wang, 2020). Specifically, we use bert-large-cased1 with special tokens [CLS] and [SEP]. For each subword, we compute a sum over outputs at the last four layers of Transformer blocks.
Footnote 1: We use transformers package (Wolf et al., 2020).
A context embedding is the average of BERT embeddings over constituent subwords. For the computation of sense embeddings, we follow the method that Wang and Wang (2020) used. See Appendix B for details.
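The context-embedding computation can be sketched as follows with the transformers package; the subword `span` of the target word is assumed to be given, which is an assumption of this sketch.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-cased")
encoder = AutoModel.from_pretrained("bert-large-cased", output_hidden_states=True).eval()

def context_embedding(sentence: str, span: tuple) -> torch.Tensor:
    # span = (start, end): subword indices of the target word.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).hidden_states        # tuple of (1, L, 1024)
    per_token = torch.stack(hidden[-4:]).sum(dim=0)[0]  # sum of last 4 layers
    return per_token[span[0]:span[1]].mean(dim=0)       # average over subwords
```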
### Transformation Functions
The proposed method adapts embeddings by applying the trainable transformation, i.e., the specialization is learned by optimizing the transformation functions. This approach enables the adaptation of context embeddings on the fly during inference, which was not possible in the original approach that directly learns adapted embeddings (Vulic and Mrksic, 2018).
Let \(\mathbf{\hat{v}}_{w}\) and \(\mathbf{\hat{e}}_{s}\) be context and sense BERT embeddings. We transform them independently using residual mapping functions \(F_{w}\) and \(F_{s}\), which are both two-layer feedforward networks, \(\operatorname{FFNN}_{w}\) and \(\operatorname{FFNN}_{s}\). These networks are comprised of a linear layer with a ReLU activation, followed by a linear layer with a sigmoid activation.
\[\mathbf{v}_{w}=H_{w}(\mathbf{\hat{v}}_{w})=\mathbf{\hat{v}}_{w}+ \epsilon\|\mathbf{\hat{v}}_{w}\|F_{w}(\mathbf{\hat{v}}_{w}), \tag{5}\] \[\mathbf{e}_{s}=H_{s}(\mathbf{\hat{e}}_{s})=\mathbf{\hat{e}}_{s}+ \epsilon\|\mathbf{\hat{e}}_{s}\|F_{s}(\mathbf{\hat{e}}_{s}),\] (6) \[F_{w}(\mathbf{\hat{v}}_{w})=2\sigma(\operatorname{FFNN}_{w}( \mathbf{\hat{v}}_{w}))-1,\] (7) \[F_{s}(\mathbf{\hat{e}}_{s})=2\sigma(\operatorname{FFNN}_{s}( \mathbf{\hat{e}}_{s}))-1, \tag{8}\]
where \(\mathbf{v}_{w}\) and \(\mathbf{e}_{s}\) are the specialized embeddings. \(\epsilon\) is the hyperparameter that controls how far away the specialized embeddings can be. Specifically, the L2 distance relative to the original embedding \(\|\mathbf{v}_{w}-\mathbf{\hat{v}}_{w}\|/\|\mathbf{\hat{v}}_{w}\|\) is bounded by \(\epsilon\sqrt{N_{d}}\), where \(N_{d}\) is the dimension size of embeddings2. This is because the residual functions map the inputs to the space \([-1,+1]^{N_{d}}\).
Footnote 2: \(N_{d}=1,024\) for bert-large-cased.
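A compact PyTorch sketch of the transformation \(H\) (Equations 5-8) is given below; the hidden width of the FFNN is an assumption, as it is not fixed by the equations above.

```python
import torch
import torch.nn as nn

class ResidualSpecializer(nn.Module):
    """H(x) = x + eps * ||x|| * (2 * sigmoid(FFNN(x)) - 1)  (Eqs. 5-8)."""

    def __init__(self, dim: int = 1024, hidden: int = 1024, eps: float = 0.015):
        super().__init__()
        self.ffnn = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = 2.0 * torch.sigmoid(self.ffnn(x)) - 1.0  # in [-1, +1]^d
        # The relative L2 deviation from x is bounded by eps * sqrt(N_d).
        return x + self.eps * x.norm(dim=-1, keepdim=True) * residual
```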
### Objectives
We jointly optimize the Attract-Repel objective for sense pairs and the self-training objective for context-sense pairs by minimizing the weighted sum of the loss functions,
\[L=L^{\mathrm{AR}}+\alpha L^{\mathrm{ST}}, \tag{9}\]
where \(\alpha\) is the hyperparameter that determines the relative importance of the self-training objective.
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline Element & Noun & Verb & Adj. & Adv. & All \\ \hline \# Lemmas & 117,798 & 11,529 & 21,479 & 4,481 & 155,287 \\ \# Senses & 146,320 & 25,047 & 30,002 & 5,580 & 206,949 \\ Rel. senses & 7.8 & 13.0 & 6.2 & 3.9 & 8.1 \\ Diff. senses & 0.8 & 4.1 & 1.2 & 0.7 & 1.3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary statistics of lexical resources by part-of-speech tag. Values in the related and different senses rows indicate the average per sense.
The joint optimization is motivated by the complementary nature of these two objectives. The Attract-Repel objective should improve the separability of similar/different senses but does not contribute to determining which context and sense should be associated. In contrast, the self-training objective provides pseudo-supervision signals for context-sense associations, although the informativeness is, when used alone, limited because it essentially reinforces the similarity to the initial nearest neighbor sense of the target context (SS 3.5.2).
#### 3.5.1 Attract-Repel Objective
We formulate Attract-Repel objective loss \(L^{\mathrm{AR}}\) using contrastive loss: we bring _related_ senses closer while _different_ and _unrelated_ senses farther away3 (SS 3.2). Specifically, for a given minibatch of senses \(\mathcal{S}^{B}\) and a specific sense \(s\in\mathcal{S}^{B}\), we define the subset excluding itself \(\mathcal{S}^{B}\setminus\{s\}\) as the unrelated senses \(\mathcal{S}^{U}_{s}\). Then, we randomly choose a sense \(s_{p}\) from the related senses \(\mathcal{S}^{P}_{s}\). Similarly, we randomly choose up to five senses without replacement \(\tilde{\mathcal{S}}^{N}_{s}\) from different senses \(\mathcal{S}^{N}_{s}\). Finally, \(L^{\mathrm{AR}}\) for the minibatch \(\mathcal{S}^{B}\) is defined as follows:
Footnote 3: In the contrastive learning literature, related, unrelated, and different senses correspond to the positives, weak negatives, and hard negative examples, respectively.
\[L^{\mathrm{AR}}=-\sum_{s\in\mathcal{S}^{B}}\ln\frac{e^{\beta\rho_{s,s_{p}}}}{\sum\limits_{s^{\prime}\in\big{(}\{s_{p}\}\cup\mathcal{S}^{U}_{s}\cup\tilde{\mathcal{S}}^{N}_{s}\big{)}}e^{\beta\rho_{s,s^{\prime}}}}, \tag{10}\] \[\rho_{s,s^{\prime}}=\cos(\mathbf{e}_{s},\mathbf{e}_{s^{\prime}}). \tag{11}\]
We set the scaling parameter \(\beta\) to 64, following the suggestions in metric learning studies (Deng et al., 2019; Wang et al., 2018).
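A sketch of the per-sense term of Equation 10 is given below, assuming the related, different, and in-batch unrelated senses have already been embedded; the tensor layout is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def attract_repel_term(anchor, related, negatives, beta: float = 64.0):
    # anchor: (d,) embedding of sense s; related: (d,) embedding of s_p;
    # negatives: (k, d) embeddings of the in-batch unrelated senses plus
    # up to five different senses sharing the same lemma.
    pos = F.cosine_similarity(anchor, related, dim=-1).unsqueeze(0)    # (1,)
    neg = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=-1)  # (k,)
    logits = beta * torch.cat([pos, neg]).unsqueeze(0)                 # (1, k+1)
    target = torch.zeros(1, dtype=torch.long, device=logits.device)   # index 0
    return F.cross_entropy(logits, target)  # brings s_p closer, pushes rest away
```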
#### 3.5.2 Self-training Objective
We formulate the self-training objective loss \(L^{\mathrm{ST}}\) so that we bring the contexts and nearest neighbor senses closer. In the self-training process, we label a word in context with the sense whose embedding is the closest to that of the word. Specifically, let \(\mathcal{W}^{B}\) denote a minibatch of words. For a word \(w\in\mathcal{W}^{B}\), we obtain a set of candidate senses4\(\mathcal{S}_{w}\). Then, \(L^{\mathrm{ST}}\) for the minibatch \(\mathcal{W}^{B}\) is defined as,
Footnote 4: Querying WordNet for a tuple of lemma and part-of-speech tag returns the candidate senses.
\[L^{\mathrm{ST}}=\sum_{w\in\mathcal{W}^{B}}(1-\max_{s\in\mathcal{ S}_{w}}\rho_{w,s}), \tag{12}\] \[\rho_{w,s}=\cos(\mathbf{v}_{w},\mathbf{e}_{s}). \tag{13}\]
Note that the nearest neighbor sense for the same context changes during training as we update the parameters of the transformation functions. Our intention is to bootstrap the performance, which was impossible in the "static counterpart", e.g., pseudo-labeling with the WordNet first sense heuristic. That is also a motivation for introducing the distance constraints in Eqs. 5 and 6: we were concerned about a performance drop when a large deviation occurs in the semantic specialization. We report empirical evidence that the constraint improves the performance (§ 6.3).
In principle, the training data can be any corpus annotated with lemmas and part-of-speech tags. Nevertheless, we used the SemCor (Miller et al., 1993) corpus with the sense annotations removed, because using this de-facto standard corpus contributes to better reproducibility and fairer comparisons.
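The self-training term of Equation 12 for a single word reduces to a one-liner over the candidate senses, sketched below under the same tensor-layout assumptions as above.

```python
import torch
import torch.nn.functional as F

def self_training_term(v_w: torch.Tensor, cand_embs: torch.Tensor) -> torch.Tensor:
    # v_w: (d,) specialized context embedding of word w; cand_embs:
    # (|S_w|, d) specialized embeddings of the candidate senses. The loss
    # pulls the *current* nearest-neighbor sense closer to the context.
    sims = F.cosine_similarity(v_w.unsqueeze(0), cand_embs, dim=-1)
    return 1.0 - sims.max()
```

Summing this term over the minibatch and adding it to the Attract-Repel loss with weight \(\alpha\) recovers the joint objective of Eq. 9.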
### Try-again Mechanism (TaM) Heuristic
We examine the effectiveness of the proposed method when combined with TaM. Specifically, we employ the variant (Wang and Wang, 2021)5 that utilizes Coarse Sense Inventory (CSI) (Lacerra et al., 2020) because of its simplicity. In essence, TaM reranks candidate senses by updating similarities under the assumption that the context should be also similar to the coarse semantic category that the candidate sense belongs to. Let \(s_{1}\) and \(s_{2}\) be the top two nearest neighbors for the target word \(w\) and \(\mathcal{S}^{\mathrm{CSI}}_{s}\) be the set of senses6 belonging to the same CSI class as \(s\) belongs to. Then, we refine the similarity \(\rho^{+}_{w,s}\) for each \(s\in\{s_{1},s_{2}\}\),
Footnote 5: We followed author’s implementation: [https://github.com/lwmlyy/SACE](https://github.com/lwmlyy/SACE)
Footnote 6: \(\mathcal{S}^{\mathrm{CSI}}_{s}\) will be the empty set if \(s\) doesn’t exist in the CSI because it does not cover all synsets.
\[\rho^{+}_{w,s}=\rho_{w,s}+\max_{s^{\prime}\in\mathcal{S}^{\mathrm{CSI}}_{s}} \rho_{w,s^{\prime}}. \tag{14}\]
Finally, we choose the sense from \(s_{1}\) and \(s_{2}\) with highest similarity using \(\rho^{+}_{w,s}\), i.e., we use the refined similarity \(\rho^{+}_{w,s}\) instead of \(\rho_{w,s}\) (Eq. 3).
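A sketch of this reranking step (Equation 14) is given below, assuming lookup tables `csi_members` (sense key to the keys in its CSI class) and `sense_embs` (sense key to its specialized embedding); both mappings are illustrative assumptions, not the original code.

```python
import torch
import torch.nn.functional as F

def try_again(v_w, top2, csi_members, sense_embs):
    # top2: [(sense_key, cosine_similarity), ...] for the two nearest senses.
    best_key, best_score = None, float("-inf")
    for key, sim in top2:
        members = csi_members.get(key, [])
        if members:  # CSI does not cover every synset (footnote 6)
            m = torch.stack([sense_embs[k] for k in members])
            sim = sim + F.cosine_similarity(v_w.unsqueeze(0), m, dim=-1).max().item()
        if sim > best_score:
            best_key, best_score = key, sim
    return best_key
```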
## 4 Experiment Settings
### Training
We used WordNet senses for optimizing the Attract-Repel objective and the sense-annotated words in the SemCor corpus for the self-training objective. Note that we solely use lemmas and part-of-speech
tags and disregard the sense annotations. The number of senses in WordNet is 206,949, and the number of words in the corpus is 226,036. We sampled minibatches of size \(N_{B}\) independently for each objective. For the Attract-Repel objective, we iterate over all sense keys in WordNet for 15 epochs7. For hyperparameter optimization, we disabled the TaM heuristic and used the evaluation set of SemEval-2007 as the development set, following the standard practice Pasini et al. (2021). See Appendix C for details of the hyperparameter search. We set \(N_{B}=256\), \(\alpha=0.2\), and \(\epsilon=0.015\). We used the Adam optimizer with learning rate \(0.001\).
Footnote 7: In each epoch, we discarded the remaining examples in the self-training objective trainset once all sense keys have been traversed.
### Evaluation
For evaluation, we used the WSD unified evaluation framework Raganato et al. (2017)8. We used the nearest neighbor sense as the prediction (Eq. 3). For the evaluation metric, we adopt the micro-averaged F1 score9 that is commonly used in the literature. Unless otherwise specified, we run the training process five times with different random seeds, and report the mean and standard deviations.
Footnote 8: Available at: [http://lcl.uniroma1.it/wsdeval/](http://lcl.uniroma1.it/wsdeval/)
Footnote 9: Note that F1 score is equal to Precision and Recall Pasini et al. (2021) because proposed method predicts a single sense.
### Baselines
We compare the proposed method in two experimental configurations: _Intrinsic_ and _With Heuristics_. For the _Intrinsic_ configuration, we compare it with the methods that do not use any heuristic. Specifically, we choose PlainBERT and \(\texttt{SREF}_{\texttt{emb}}\)Wang and Wang (2020) as baselines. PlainBERT uses BERT embeddings \(\mathbf{\hat{v}}_{w}\) and \(\mathbf{\hat{e}}_{s}\) as is. \(\texttt{SREF}_{\texttt{emb}}\)10 adapts sense embeddings so that it brings semantically related senses closer. For the _With Heuristics_ configuration, we compare the proposed method with the methods that combine heuristics. Specifically, we choose \(\texttt{SREF}_{\texttt{kb}}\)Wang and Wang (2020) and \(\texttt{COE}\)Wang et al. (2021) as baselines. \(\texttt{SREF}_{\texttt{kb}}\) combines \(\texttt{SREF}_{\texttt{emb}}\) with TaM. \(\texttt{COE}\) also utilizes \(\texttt{SREF}_{\texttt{emb}}\), but it employs refined TaM and CIE. \(\texttt{COE}\) is the current SoTA method on knowledge-based WSD.
Footnote 10: We applied their method to PlainBERT, consistent with the proposed method, to ensure a fair comparison of the effect of adaptation.
We also compare with supervised methods which employ the similarity-based approach to assess the effect of not using sense annotations and of freezing BERT encoders. Specifically, we compare with \(\texttt{Sup-kNN}\)Loureiro and Jorge (2019) and \(\texttt{BEM}\)Blevins and Zettlemoyer (2020) (§ 2.4), which both use SemCor as the trainset. \(\texttt{Sup-kNN}\) computes sense embeddings as the context embeddings averaged over the annotated senses. \(\texttt{BEM}\) fine-tunes BERT encoders so that context embeddings and correct sense embeddings are brought closer. We consider \(\texttt{BEM}\) as the de-facto upper bound of the similarity-based approach, given its usage of a supervision signal to fine-tune the BERT encoders.
## 5 Experimental Results
Table 2 shows the WSD task performance. In both configurations, the proposed method outperformed all knowledge-based baselines.
In the _Intrinsic_ configuration, \(\texttt{SS-WSD}_{\texttt{emb}}\) outperformed \(\texttt{SREF}_{\texttt{emb}}\) by 3.9pt, which amounts to a 9.3pt improvement over \(\texttt{PlainBERT}\). Looking at the results for each part-of-speech, we observed the largest improvement over \(\texttt{SREF}_{\texttt{emb}}\) for verbs (9.0pt). This result reflects the fact that verbs provide the richest supervision signal for the Attract-Repel objective, as they have the largest number of related and different senses (Table 1). It suggests that the richer the semantic relation knowledge, the higher the performance the proposed method may achieve.
In the _With Heuristics_ configuration, \(\texttt{SS-WSD}_{\texttt{kb}}\) outperformed \(\texttt{COE}\) by 0.8pt without using the CIE heuristic, which shows an advantage over the baselines regardless of whether the evaluation sentence is a stand-alone text or in a document. The improvement brought by TaM was 2.2pt. Although \(\texttt{SS-WSD}_{\texttt{kb}}\) lagged behind \(\texttt{COE}\) on the SE07 (SemEval-2007) subset, we think this result is understandable because \(\texttt{COE}\) also used SE07 for hyperparameter optimization.
When compared to supervised methods, \(\texttt{SS-WSD}_{\texttt{emb}}\) outperformed \(\texttt{Sup-kNN}\) by 1.4pt, while falling behind \(\texttt{BEM}\) by 4.1pt. These results indicate that the proposed method associates contexts with senses more precisely than the example-based sense embedding computation using sense-annotated contexts. They also show the effectiveness of the supervised fine-tuning of BERT encoders in \(\texttt{BEM}\), as evidenced through their ablation study.
## 6 Analysis
### Vanilla BERT Embeddings
The proposed method adapts the BERT embeddings (PlainBERT) by transformation. Therefore, its performance is influenced by the ability of PlainBERT to disambiguate senses.
Table 3 shows the WSD task performance using PlainBERT. We also report the WordNet first sense heuristic (WN1\({}^{\texttt{st}}\)Sense) for reference. We observe that PlainBERT is comparable to WN1\({}^{\texttt{st}}\)Sense; since self-training bootstraps from, and improves upon, the nearest neighbor predictions of PlainBERT, it is a more effective strategy than WN1\({}^{\texttt{st}}\)Sense for obtaining pseudo sense labels.
Fig. 2 shows the distribution of the similarity margin (difference) between the nearest neighbor incorrect sense and ground-truth sense computed by PlainBERT. We used the evaluation set for this analysis. We found that the similarity margin is below 0.05 for approximately 90% of all instances. This indicates that a large deviation from PlainBERT is not necessary for replacing nearest neighbor senses with the ground-truth ones.
### Effect of Objectives
Table 4 shows the performance comparison when we eliminate a specific component from the semantic specialization objectives (§ 3.5). We keep all hyperparameters unchanged.
When we exclude either the Attract-Repel objective or the self-training objective, performance drops by 3.3pt and 4.4pt, respectively. This finding supports the claim that joint optimization is crucial owing to the complementary nature of the two objectives.
When we remove either the unrelated senses or the different senses from the Attract-Repel objective, performance drops by 5.0pt and 1.4pt, respectively. This result supports the idea that pushing semantically unrelated and different
\begin{table}
\begin{tabular}{l|c c|c c c c c|c c c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{TaM} & \multirow{2}{*}{CIE} & \multicolumn{6}{c|}{By subset} & \multicolumn{3}{c|}{By part-of-speech} & \multirow{2}{*}{All} \\ \cline{3-3} \cline{5-12} & & & SE2 & SE3 & SE07 & SE13 & SE15 & \multicolumn{1}{c|}{Noun} & Verb & Adj. & Adv. \\ \hline Supervised & & & & & & & & & & & \\ \hline Sup-kNN & \(\times\) & \(\times\) & 76.3 & 73.2 & 66.2 & 71.7 & 74.1 & — & — & — & — & 73.5 \\ (Loureiro and Jorge, 2019) & \(\times\) & \(\times\) & 79.4 & 77.4 & 74.5 & 79.7 & 81.7 & 81.4 & 68.5 & 83.0 & 87.9 & 79.0 \\ \hline Knowledge-based, _Intrinsic_ configuration & & & & & & & & & & & & \\ \hline PlainBERT & \(\times\) & \(\times\) & 67.8 & 62.7 & 54.5 & 64.5 & 72.3 & 67.8 & 52.3 & 74.0 & 77.7 & 65.6 \\ SREF\({}_{\texttt{sub}}\) & \(\times\) & \(\times\) & 70.3 & 68.0 & 60.4 & 74.2 & 77.4 & 76.3 & 53.5 & 75.2 & 76.3 & 71.0 \\ (Wang and Wang, 2020) & & & & & & & & & & & & \\ SS-WSD\({}_{\texttt{sub}}\) (Ours) & \(\times\) & \(\times\) & **74.6\({}^{\texttt{*}}\)** & **73.0\({}^{\texttt{*}}\)** & **65.0\({}^{\texttt{*}}\)** & **77.0\({}^{\texttt{*}}\)** & **79.9\({}^{\texttt{*}}\)** & **78.2\({}^{\texttt{*}}\)** & **62.5\({}^{\texttt{*}}\)** & **79.7\({}^{\texttt{*}}\)** & **80.5\({}^{\texttt{*}}\)** & **74.9\({}^{\texttt{*}}\)** \\ & & & & (0.5) & (0.6) & (1.3) & (0.5) & (1.0) & (0.4) & (0.7) & (0.3) & (1.5) & (0.3) \\ \hline Knowledge-based, _With Heuristics_ configuration & & & & & & & & & & & & \\ \hline SREF\({}_{\texttt{sub}}\) & & & & & & & & & & & & \\ (Wang and Wang, 2020) & & & & & & & & & & & & \\ COE & & & & & & & & & & & & \\ (Wang et al., 2021b) & & & & & & & & & & & & \\ SS-WSD\({}_{\texttt{sub}}\) (Ours) & \(\times\) & & 77.7\({}^{\texttt{*}}\) & **75.9\({}^{\texttt{*}}\)** & 66.5 & 78.0 & **81.6** & 79.3 & **65.7\({}^{\texttt{*}}\)** & **84.9\({}^{\texttt{*}}\)** & **84.2\({}^{\texttt{*}}\)** & **77.1\({}^{\texttt{*}}\)** \\ & & & & & (0.5) & (0.6) & (1.0) & (0.5) & (0.9) & (0.3) & (0.8) & (0.4) & (0.8) & (0.3) \\ \hline \hline \end{tabular}
\end{table}
Table 2: WSD performance by subset and part-of-speech tag. SS-WSD\({}_{\texttt{emb},\texttt{kb}}\) are the proposed methods. Numbers in parentheses represent the standard deviation. Asterisks (*) indicate that the difference to the best baseline is statistically significant at \(p<0.05\) by the Student’s \(t\)-test (two-tailed test). Checkmarks (\(\checkmark\)) in the TaM and CIE columns represent the usage of those heuristics. We bolded the best result among knowledge-based methods in each configuration and underlined the subset used for hyperparameter tuning. The scores of BEM, Sup-kNN, SREF\({}_{\texttt{kb}}\), and COE are taken from the original papers.
senses farther away contributes to performance. We also find that unrelated senses are more effective than different senses. A possible cause is the number of examples: while the number of unrelated senses is always11 255, the number of different senses is, on average, just 1.3 (see Table 1)12.
Footnote 11: Minibatch size (=256) minus one yields 255.
Footnote 12: In fact, only 38% of all senses have different senses.
Disabling the adaptation of context embeddings (by using the identity transformation) caused a performance drop of 3.2pt, indicating that adapting both sense and context embeddings is necessary.
### Effect of Distance Constraint
Fig. 3 shows the performance comparison when we change \(\epsilon\), the hyperparameter that bounds how far the specialized embeddings can move, in the interval [0.01, 0.02] with a step size of 0.001. We found that performance follows an inverted U-shaped curve along \(\epsilon\), indicating that a sweet spot exists. A tight constraint (small \(\epsilon\)) results in updates too small to replace nearest neighbors with ground-truth senses. In contrast, a looser constraint (large \(\epsilon\)) allows a substantial deviation, eventually making the self-training less effective as training proceeds. The latter fact supports the claim that controlling the deviation from the original embeddings is necessary.
### Effect of Self-training Dataset Size
Fig. 4 illustrates the impact of varying the number of examples used for the self-training objective on the WSD task performance; 100% in the figure corresponds to using all examples in the SemCor corpus. We found that performance improves as the number of examples increases and saturates at 60%, corresponding to 136k examples. While the coverage of the words and senses appearing in the contexts also matters, this indicates that the benefits of self-training do not necessarily require scaling to millions of examples.
### Similarity Characteristics
We quantitatively investigate how well the proposed method achieved the key idea (Fig. 1-d): bringing related senses and contexts closer while pushing unrelated and different senses farther away. Specifically, in Table 5, we report averages of similarity values between related senses \(\rho_{\mathcal{S}^{P}}\), unrelated senses \(\rho_{\mathcal{S}^{U}}\), and different senses \(\rho_{\mathcal{S}^{N}}\), along with averages of similarity values between ground-truth context-sense pairs13, \(\rho_{\mathcal{V}\texttt{gt}}\). See Appendix D for formal definitions. We found that the proposed method SS-WSD\({}_{\texttt{emb}}\) brought context-sense pairs closer than PlainBERT (\(\rho_{\mathcal{V}\texttt{gt}}\): \(0.64\to 0.77\)). In contrast, it pushed the unrelated and different senses away: \(\rho_{\mathcal{S}^{U}}\): \(0.77\to 0.64\) and \(\rho_{\mathcal{S}^{N}}\): \(0.87\to 0.75\).
\begin{table}
\begin{tabular}{l r r} \hline \hline Ablation & WSD (All) & \(\Delta\) [pt] \\ \hline SS-WSD\({}_{\texttt{emb}}\) & 74.9 & — \\ \hline \(-\)Attract-Repel _objective_ & 71.6 & \(-3.3\) \\ \(-\)Self-training _objective_ & 70.5 & \(-4.4\) \\ \(-\)Unrelated senses \(\mathcal{S}^{U}\) _repelling_ & 69.9 & \(-5.0\) \\ \(-\)Different senses \(\mathcal{S}^{N}\) _repelling_ & 73.5 & \(-1.4\) \\ \(-\)Context _adaptation_ & 71.7 & \(-3.2\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study of training objective. _Objective_ rows represent the corresponding objective is excluded. _Repelling_ rows represent the corresponding sense pairs are removed from the Attract-Repel objective (Eq. 10). _Adaptation_ rows represent the usage of identity transformation. All differences are statistically significant at \(p<0.05\) by Welch’s \(t\)-test (two-tailed test).
Figure 4: Impact of varying the self-training dataset size from 10% (23k examples) to 100% (224k). The dot and error bar indicates the mean and standard deviation, respectively. The horizontal line represents the performance when utilizing the 100% examples. Asterisks denote that the deviation from the 100% is statistically significant at \(p<0.05\) (*) and \(p<0.005\) (**) by Welch’s \(t\)-test (two-tailed test).
Figure 3: Ablation study of hyperparameter \(\epsilon\) (§ 3.4). Dot and error bar represent the mean and standard deviation, respectively. Horizontal line represents the default setting (\(\epsilon=0.015\)) performance. Asterisks indicate that the difference with respect to the default setting is statistically significant at \(p<0.05\) (*) and \(p<0.005\) (**) by Welch’s \(t\)-test (two-tailed test).
\(0.78\). These results demonstrate that joint optimization of the Attract-Repel and self-training objectives realized the key idea successfully.
Can we expect better performance if we realize the key idea more precisely? We investigated the relationship between these similarity metrics and WSD task performance. Specifically, we subtract \(\rho_{\mathcal{V}\texttt{gt}}\) from each metric in order to capture the closeness of senses _relative to_ the correct context-sense pairs, defining \(\Delta\rho_{*}\) as \(\rho_{*}-\rho_{\mathcal{V}\texttt{gt}}\). For example, \(\Delta\rho_{\mathcal{S}^{N}}=\rho_{\mathcal{S}^{N}}-\rho_{\mathcal{V}\texttt{gt}}\) should be a negative value because the average similarity among different senses \(\rho_{\mathcal{S}^{N}}\) should be smaller than that among correct context-sense pairs \(\rho_{\mathcal{V}\texttt{gt}}\). Therefore, we compute the value \(\overline{\Delta\rho}=\frac{1}{3}(\Delta\rho_{\mathcal{S}^{P}}-\Delta\rho_{\mathcal{S}^{U}}-\Delta\rho_{\mathcal{S}^{N}})\) to estimate the WSD performance.
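As a small numerical illustration (our own sketch; \(\rho_{\mathcal{S}^{P}}\) for the adapted embeddings is not quoted in this section, so its value below is purely illustrative):

```python
def mean_relative_closeness(rho_P, rho_U, rho_N, rho_gt):
    """Compute (1/3)(d_P - d_U - d_N) with d_* = rho_* - rho_gt."""
    return ((rho_P - rho_gt) - (rho_U - rho_gt) - (rho_N - rho_gt)) / 3.0

# rho_U, rho_N and rho_gt follow the adapted-embedding values in the text;
# rho_P = 0.80 is an illustrative placeholder only.
print(mean_relative_closeness(rho_P=0.80, rho_U=0.64, rho_N=0.75, rho_gt=0.77))
```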
Fig. 5 shows that \(\overline{\Delta\rho}\) correlates well with WSD task performance (\(R^{2}=0.85\)). It suggests that if we realize the key idea more precisely, we may improve the WSD performance further. For instance, using richer lexical relation knowledge, exploiting monosemous words, or self-training with confidence thresholding may be promising. We leave these for future work.
## 7 Conclusion
In this paper, we proposed SS-WSD: Semantic Specialization for WSD14. The proposed method learns how to adapt BERT embeddings by transformation and uses the semantic relation knowledge as a supervision signal. The key idea is the desired characteristics of similarities: bringing related senses and the contexts closer while unrelated senses and different senses farther away. We realized it as the joint optimization of the Attract-Repel and self-training objectives while preventing large deviations from original embeddings. Experiments showed that the proposed method outperformed the previous embedding adaptation method. When combined with the reranking heuristic that can be applied to stand-alone texts, it established a new SoTA performance on knowledge-based WSD. The proposed method performs well regardless of the availability of global contexts beyond the target sentence during inference, which the previous study did not achieve. Several analyses showed the effectiveness of the objectives and constraints introduced for specialization. We also found that the closeness of semantically related/different/unrelated senses relative to the closeness of correct context-sense pairs positively correlates with the WSD task performance.
Footnote 14: The source code is available at: [https://github.com/s-mizuki-nlp/semantic_specialization_for_wsd](https://github.com/s-mizuki-nlp/semantic_specialization_for_wsd)
## 8 Future Work
Given that the proposed method only necessitates lexical resources, it has the potential to effectively address the knowledge acquisition bottleneck problem (Pasini, 2020). Thus, we are interested in applying the proposed method to multilingual WSD using multilingual language models as contextualized encoders. One approach is the zero-shot
Figure 5: The relationship between the similarity characteristic metric \(\overline{\Delta\rho}\) and WSD performance in Table 5.
cross-lingual transfer, which involves learning embeddings adaptation using only English lexical resources. Another option is the joint training of all target languages using multilingual lexical resources such as BabelNet [20]. We are also interested in integrating the proposed method into supervised WSD and applying the transfer learning of the specialized embeddings to other NLP tasks.
## 9 Limitations
One limitation of this work is that it is specific to BERT. Although this is in line with the standard practice in previous studies, experimenting with other pre-trained language models would be preferable to assess the utility of the proposed method, or to improve the performance further. Another limitation is that it is evaluated on a single dataset and task. While we again followed the de-facto standard protocol, evaluating on rare senses [23] or the Word-in-Context task [18, 22] would bring more comprehensive insights into its effectiveness and applicability.
## 10 Ethics Statement
This work does not involve the presentation of a new dataset, nor the utilization of demographic or identity characteristics information. In this work, we propose a method for adapting contextualized embeddings for WSD using lexical resources. While the proposed method is not limited to a specific resource, we used WordNet as the source of semantic relation knowledge and the sense inventory. Therefore, adapted embeddings and sense disambiguation behavior may reflect the incomplete lexical diversity of WordNet in culture, language [13], and gender [17].
## 11 Acknowledgments
This work was supported by JSPS KAKENHI Grant Number 19H01118. We thank Marco Cognetta for his valuable input and for reviewing the manuscript.
|
2307.08341 | Dynamics of powerful radio galaxies | Analytical models describing the dynamics of lobed radio sources are
essential for interpretation of the tens of millions of radio sources that will
be observed by the Square Kilometre Array and pathfinder instruments. We
propose that historical models can be grouped into two classes in which the
forward expansion of the radio source is driven by either the jet momentum flux
or lobe internal pressure. The most recent generation of analytical models
combines these limiting cases for a more comprehensive description. We extend
the mathematical formalism of historical models to describe source expansion in
non-uniform environments, and directly compare different model classes with
each other, and with hydrodynamic numerical simulations. We quantify
differences in predicted observable characteristics for lobed radio sources due
to the different model assumptions for their dynamics. We make our code for the
historical models analysed in this review openly available to the community. | Ross J. Turner, Stanislav S. Shabala | 2023-07-17T09:27:08Z | http://arxiv.org/abs/2307.08341v1 | # Dynamics of powerful radio galaxies
###### Abstract
Analytical models describing the dynamics of lobed radio sources are essential for interpretation of the tens of millions of radio sources that will be observed by the Square Kilometre Array and pathfinder instruments. We propose that historical models can be grouped into two classes in which the forward expansion of the radio source is driven by either the jet momentum flux or lobe internal pressure. The most recent generation of analytical models combines these limiting cases for a more comprehensive description. We extend the mathematical formalism of historical models to describe source expansion in non-uniform environments, and directly compare different model classes with each other, and with hydrodynamic numerical simulations. We quantify differences in predicted observable characteristics for lobed radio sources due to the different model assumptions for their dynamics. We make our code for the historical models analysed in this review openly available to the community.
galaxies: active; galaxies: jets; radio continuum: galaxies +
Footnote †: journal: Journal of Physics A
## 1 Introduction
The first extragalactic radio sources were identified over seven decades ago by Bolton in the late 1940s [1]. Shortly after, in 1953, the first resolved image was captured of Cygnus A, now known as the archetypal "classical double" [2]. This breakthrough was followed by the pioneering efforts of radio survey groups in Australia [3] and the United Kingdom [4], which conducted the first large-scale radio surveys (for a comprehensive review, refer to 5). The first quasar, 3C273, was discovered a few years later in 1963 [6,7, see 8 for a historical review]. These pivotal developments laid the foundation for observational studies of radio galaxies. Subsequently, building upon Lynden-Bell's (1969) proposal that black holes are responsible for the extreme luminosities observed in quasars, the 1970s saw the development of the first models describing the dynamical evolution of radio galaxies.
While varying in specific details, the majority of models in the literature share a similar overarching framework. These models in general consider two initially conical jets composed of particles that have been accelerated to relativistic velocities. The interaction between the jets and the intracluster medium (ICM) surrounding their host galaxy determines the subsequent evolution. Jets which retain sufficient forward ram pressure during their initial propagation phase (typically on galaxy scales) will be collimated by pressure from the ambient medium, or more likely, a build-up of plasma shed by the jet in the early stages of lobe formation [10, 11]. Regardless, each collimated jet leads to the formation of a Mach disk (observable as a hotspot) and, as overpressured jet material flows back towards the equatorial plane, the inflation of a plasma lobe observable through synchrotron radiation; such objects are generally classified as having a Fanaroff and Riley Type-II (FR-II) lobe morphology.
On the other hand, jets which suffer substantial entrainment (e.g. from stellar winds, 13,14; or the interstellar medium, 15) will slow down to transonic speeds (with respect to the internal lobe sound speed, of order 0.1c) and be disrupted. In this scenario, the jet momentum thrust is not important to the evolution of the lobe, and the role of the jet is simply to supply energy to the synchrotron-emitting lobe. The subsequent expansion of
the lobes is determined by solving a set of fluid conservation equations; typically the lobes undergo an initial momentum-dominated supersonic phase, followed by an adiabatic-expansion-driven coasting phase, before ultimately rising buoyantly in the later phases of evolution.
A large number of analytical and numerical models describing the evolution of AGN jets and lobes have been published since the first models were introduced over five decades ago. In this review, we summarise the different classes of lobed radio galaxy models, and provide a common framework to facilitate comparison both between the model classes and to more detailed hydrodynamic simulations. The dynamics of jetted Fanaroff and Riley Type-I (FR-I) sources are not considered in this work; we refer the interested reader to the classical work of Bicknell [15].
The review is structured as follows. Section 2 presents early models by Rees [16] and Scheuer [17, Model A], in which the forward thrust of uncollimated jets is balanced by the ram pressure from the ambient medium. We extend the formalism of Scheuer [17] to non-uniform environments, enabling direct comparison with more sophisticated modern analytical models for the first time. While these early models capture the fundamental aspects of jet termination and lobe formation, they neglect jet collimation by sideways ram pressure from the lobe (or ambient medium). In Section 3, we describe the analytical models proposed by Falle [18] and Kaiser and Alexander [19], which link jet collimation to subsequent lobe expansion. The past decade has seen the advent of environment-sensitive radio galaxy models beyond the self-similar solutions of Kaiser and Alexander [19] and related models [e.g. 20; 21]. These models capture the evolution of lobe morphology in realistic environments [22; 23] as well as the transition between jet- and lobe-driven expansion [23; 24]. We describe these models in Section 4. In Section 5, we compare the consistency of predicted radio source dynamics between the main model classes, benchmark their evolutionary tracks against hydrodynamic simulations, and discuss their ability to generate synthetic AGN populations for parameter inversions. We conclude and suggest improvements to implement in next generation of analytical models in Section 6.
## 2 Early Jet-Lobe Models
Rees [16] proposed a model in which conical jets emanating from the central engine of active galactic nuclei (AGNs) are pressure-balanced by the ram pressure of the ambient medium. In this model, the jets are a beam of low-frequency electromagnetic waves (LFEMW), the quantum field equivalent of a pair-plasma. The radiation pressure of this beam upon absorption by the ambient medium is \(p_{rad}=Q/(\Omega R^{2}c)\) for jet kinetic power \(Q\) and beam cross-sectional area \(\Omega R^{2}\) at radius \(R\) from the active nucleus. The pressure is increased if the interaction between the beam and ambient medium results in pair production, leading to a reaction pressure up to double that of the radiation pressure; the
Figure 1: Schematic of the dynamical model for the Scheuer [17] Model A. We show a thin shocked gas shell between the contact discontinuity and bow shock as in Figure 1 of the original paper, however, the shocked gas is not explicitly considered in their model.
exact factor depends on the angle of reflected particles. The pressure contribution from the jet is therefore expressed as:
\[p_{jet}=\frac{\kappa_{1}Q}{\Omega R^{2}c}, \tag{1}\]
where \(1\leqslant\kappa_{1}<2\) is a dimensionless constant describing both the fraction of the beam power that interacts with the ambient medium as radiation or particles, and the angle of reflection of those particles.
The forward jet thrust is balanced by ram pressure from the ambient medium \(p_{ram}=\rho v^{2}\), where \(\rho\) is the gas density of the assumed constant density ambient medium and \(v=dR/dt\) is the advance speed of the jet-head (see Figure 1). Scheuer [17] evaluated the resulting first-order differential equation in \(R\) assuming a constant jet half-opening angle \(\theta_{j}\) (and thus solid angle \(\Omega\)) in their Model A. The jet length is related to source age \(t\), and jet and environment parameters as
\[R(t)=\left(\frac{\kappa_{1}Q}{\Omega\rho c}\right)^{1/4}(2t)^{1/2}. \tag{2}\]
The above approach assumes a constant density environment. However, the ambient medium on scales exceeding several kiloparsecs - typical of extended radio sources - is well represented by a symmetric power-law density profile of the form \(\rho=kr^{-\beta}\), where the density parameter \(k\equiv\rho_{0}r_{0}^{\beta}\) is a constant (e.g. [18; 22]). The ram pressure applied by the ambient medium onto the expanding jet consequently weakens with distance from the central nucleus. In this review, we extend the Scheuer [17] Model A to the more general case of a power-law density profile, yielding
\[R(t)=\left(\frac{\kappa_{1}Q}{\Omega kc}\right)^{1/(4-\beta)}\left(\frac{(4- \beta)t}{2}\right)^{2/(4-\beta)}, \tag{3}\]
which converges to Scheuer's original constant density form when \(\beta=0\), noting that \(\rho=k\) in this limiting case. We use this more complete version of the model in the remainder of this work.
The synchrotron-emitting lobes inflated by the jets are typically assumed to have ellipsoidal morphology, with the ratio of major (aligned with the jet) to minor axes defined by the axis ratio \(A=R/R_{\perp}\); we note that this differs (by a factor of 2) from the _axial_ ratio \(R_{\rm T}=A/2\) of Kaiser and Alexander [19]. Scheuer [17] derive the volume of the ellipsoidal radio lobe associated with their LFEMW jets by considering the work done in inflating the cavity. The total energy in the cavity, \(U\), increases over the time interval \(\delta t\) due to the input kinetic power \(Q\) as
\[\delta U=Q\delta t-p\delta V, \tag{4}\]
where \(\delta V\) is the differential increase in volume, and the lobe pressure is given by (see e.g. [19], their Equation 15)
\[p=\frac{U(\Gamma_{c}-1)(q+1)}{V}, \tag{5}\]
where \(\Gamma_{c}\) is the polytropic index (or adiabatic index for an adiabatic equation of state; EoS) of the lobe plasma, and \(q\ll 1\) is the ratio of energy in the magnetic field to that in the particles. This equation assumes the energy density (and thus pressure) is approximately uniform throughout the lobe, a reasonable assumption given the high (\(\sim\)0.1\(c\)) sound speeds in the lobes.
Equation 4 is a first-order differential equation describing the evolution of the total energy of the cavity. In Appendix A.1, we solve this differential equation assuming the
cavity volume expands with increasing jet length as \(V(R)=\kappa_{2}R^{\alpha}\); here \(\alpha,\kappa_{2}>0\) are constants. This yields an expression for the lobe pressure in terms of the source age,
\[p(t)=\frac{Q(\Gamma_{c}-1)(q+1)}{\kappa_{2}[\alpha(\Gamma_{c}-1)(q+1)+(4-\beta)/ 2]}\left(\frac{\Omega kc}{\kappa_{1}Q}\right)^{\alpha/(4-\beta)}\left(\frac{(4 -\beta)t}{2}\right)^{(4-\beta-2\alpha)/(4-\beta)}, \tag{6}\]
where the constants \(\alpha\) and \(\kappa_{2}\) are evaluated below by considering the lobe volume evolution.
A major limitation of the Scheuer [17] model concerns the sideways expansion of the lobe, which is assumed to occur at the same velocity at every point on the lobe surface at any given time \(t\). This expansion rate is derived by equating the lobe pressure to the ram pressure presented by the ambient medium as the lobe widens; i.e. \(\rho v_{\perp}^{2}=p(t)\) where the ambient gas density is reasonably approximated as \(\rho\sim kR^{-\beta}\). This sidewards expansion can only commence at locations already reached by the jet material. The half-width of the lobe at some location \(r\) along the jet axis is thus given by
\[R_{\perp}(r)=\int_{t(r)}^{t(R)}v_{\perp}(t^{*})dt^{*}, \tag{7}\]
where \(t(r)\) is the time when the jet-head reached the location \(r\) along the jet axis, and \(t(R)\) is the source age when the jet has its present length \(R\). This integral is evaluated in Appendix A.2.
The lobe volume at the source age \(t\equiv t(R)\) is found by integrating over all locations \(r\) along the jet axis,
\[\begin{split} V(R)\equiv\kappa_{2}R^{\alpha}&=\int_ {0}^{R}\pi R_{\perp}(r)^{2}dr\\ &\propto R^{(14-5\beta-2\alpha)/2},\end{split} \tag{8}\]
Dimensional analysis shows that the only possible solution is for \(\alpha=(14-5\beta)/4\). The lobe expansion is therefore _not_ self-similar (which would require \(V\propto R^{3}\) and hence \(\alpha=3\)) unless \(\beta=0.4\), i.e. in a gently declining density profile representative of cluster cores. For a uniform medium as considered by Scheuer [17], the exponent converges to \(\alpha=\frac{7}{2}\), while in a steep environment (\(\beta=2\)) representative of the outer regions of groups or clusters, the lobe volume scales linearly with jet length, leading to a rapid increase in the lobe axis ratio.
The constant of proportionality, \(\kappa_{2}\), is similarly found by comparing terms not involving \(R\), yielding
\[\kappa_{2}=\frac{16\pi^{1/2}(\Omega c)^{3/4}k^{1/4}}{[(14-5\beta)(18-5\beta)] ^{1/2}\kappa_{1}^{3/4}Q^{1/4}}\left[\frac{(\Gamma_{c}-1)(q+1)}{(14-5\beta)( \Gamma_{c}-1)(q+1)+2(4-\beta)}\right]^{1/2}, \tag{9}\]
which converges to the expression found by Scheuer [17, their Equation 10] in the limit of a uniform ambient medium and assuming \(\Gamma_{c}=\frac{4}{3}\).
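For reference, a minimal Python sketch of this generalized Scheuer Model A (Equations 3, 6, 8 and 9) is given below. SI units, \(\kappa_{1}=1\), \(\Gamma_{c}=\frac{4}{3}\), \(q\ll 1\) and a conical jet of solid angle \(\Omega=2\pi(1-\cos\theta_{j})\) are our assumptions here; this sketch is separate from the released model code.

```python
import numpy as np

def scheuer_model_a(t, Q, k, beta, theta_j, kappa1=1.0,
                    gamma_c=4.0/3.0, q=0.0, c=2.998e8):
    """Generalized Scheuer Model A: returns jet length R [m], lobe
    pressure p [Pa] and lobe volume V [m^3] at source age t [s], for jet
    power Q [W] and ambient density rho = k * r**(-beta) [SI units]."""
    Omega = 2.0 * np.pi * (1.0 - np.cos(theta_j))   # solid angle of one jet
    alpha = (14.0 - 5.0 * beta) / 4.0               # V ~ R^alpha (Eq. 8)
    gq = (gamma_c - 1.0) * (q + 1.0)

    # Jet length, Eq. 3
    R = ((kappa1 * Q / (Omega * k * c)) ** (1.0 / (4.0 - beta))
         * ((4.0 - beta) * t / 2.0) ** (2.0 / (4.0 - beta)))

    # Volume normalisation, Eq. 9
    kappa2 = (16.0 * np.pi ** 0.5 * (Omega * c) ** 0.75 * k ** 0.25
              / (((14.0 - 5.0 * beta) * (18.0 - 5.0 * beta)) ** 0.5
                 * kappa1 ** 0.75 * Q ** 0.25)
              * (gq / ((14.0 - 5.0 * beta) * gq + 2.0 * (4.0 - beta))) ** 0.5)
    V = kappa2 * R ** alpha

    # Lobe pressure, Eq. 6
    p = (Q * gq / (kappa2 * (alpha * gq + (4.0 - beta) / 2.0))
         * (Omega * k * c / (kappa1 * Q)) ** (alpha / (4.0 - beta))
         * ((4.0 - beta) * t / 2.0) ** ((4.0 - beta - 2.0 * alpha)
                                        / (4.0 - beta)))
    return R, p, V
```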
This simple model neglects the sidewards ram pressure of the ambient medium (or lobe at later times) acting on the jet, which will lead to reconfinement shocks and ultimately the collimation of the jet. Scheuer [17] proposed a second model (their Model B) in which the jet is smoothly compressed into a collimated beam by the ambient medium; however, this assumption leads to unphysically narrow jets and consequently significantly faster jet-head advance speeds, which scale with jet cross-section \(y\) as \(R(t)\propto y^{-4/9}t^{7/9}\). Scheuer subsequently proposed that, in some sources, the jet may precess on a timescale which is short compared to the evolutionary timescale of the lobe. The time-averaged momentum flux of the jet is effectively spread over a larger cross-sectional area (equivalent to a larger jet opening angle), resulting in a slower growth rate along the jet axis (e.g. 26). We return to this point in Section 5 when we compare the predictions of different models.
## 3 Lobe Expansion Models
The self-similar expansion model for the growth of quasar winds by Dyson _et al._ spurred a new generation of analytical models based on the adiabatic expansion of the lobe bubble along a power-law ambient gas density profile. In particular, Falle [18] related the geometry and internal pressure of the expanding lobe to the dynamics of the jet (Section 3.1), enabling the Dyson _et al._ model to be modified to consider the evolution of radio sources (Section 3.2). The Falle [18] model forms the basis for several models in the literature including those by Kaiser and Alexander [19], Blundell and Rawlings [20], and Manolakou and Kirk [21].
### Jet collimation
Falle [18] revisited the dynamical modelling of jet collimation in their 1991 work, considering an initially conical jet that reflects a strong shock off the surrounding medium upon reaching lateral pressure equilibrium. This reconfinement shock bounces between each side of the jet cavity preventing any further decay of the lateral thrust against the surrounding medium; this leads to a constant width, collimated jet with repeated cross-shaped structures of enhanced pressure and synchrotron emissivity. Falle [18] assumed that lobe formation occurs prior to jet collimation, and thus that it is the lobe pressure which opposes the sidewards component of the jet thrust, not the ambient medium. Alexander [10] showed that jets may in fact be collimated by the ambient medium prior to the formation of lobes if \(\sqrt{\Gamma_{x}}\sin\theta_{j}M_{x}<1\), where \(\Gamma_{x}\) is the polytropic index of the ambient medium, \(\theta_{j}\) is the half-opening angle of the conical jet, and \(M_{x}\) is the Mach number of the jet-head advance with respect to the sound speed of the ambient medium. Jet-head advance speeds are initially relativistic (e.g. VLBI observations of 28) and thus the external Mach number of the jet is expected to be of order several hundred; only for very small opening angles of \(<1\) degree would the jet be expected to be collimated by the ambient medium rather than the lobe.
The location of the initial reconfinement shock, \(z\), is found by applying the Rankine-Hugoniot jump conditions for a plane-parallel shock to the lateral component of the flow that travels along the jet edge. The pressure of the lobe plasma is related by these conditions to the lateral component of the jet thrust as
\[p(t)=\frac{2}{\Gamma_{j}+1}\rho_{j}(z_{1})v_{j}^{2}\sin^{2}\theta_{j}, \tag{10}\]
where the jet plasma has bulk velocity \(v_{j}\), polytropic index \(\Gamma_{j}\), and density \(\rho_{j}(z_{1})\) at the critical radius \(z_{1}\) at which the jet begins to collimate (i.e. location where the ram pressure first matches the lobe pressure). The density of the jet plasma will remain constant after this point until it reaches the jet-head. We can thus derive an expression for the pressure acting on the contact discontinuity between the jet-head and surrounding shocked gas using the shock jump conditions. That is,
\[\begin{split} p_{h}(t)&=\frac{2}{\Gamma_{j}+1}\rho _{j}(z_{1})v_{j}^{2}\\ &=\frac{p(t)}{\sin^{2}\theta_{j}},\end{split} \tag{11}\]
where the second equality is obtained upon substitution of Equation 10. This expression has the same form as found by Kaiser and Alexander [19, their Equation 36], and yields comparable pressure ratios to the numerically informed value obtained by Komissarov and Falle (1998).
The lobe and jet-head region are surrounded by a shell of swept-up ambient medium that has been overrun by the bow shock generated by the expanding jet. This shocked gas is in approximate pressure equilibrium with the proximate lobe/jet-head plasma, but
has significantly higher density and thus lower temperature (e.g. [30; 31]); i.e. \(p_{s}(t)\sim p_{h}(t)\), where \(p_{s}(t)\) is the pressure just inside the bow shock along the jet axis. Together with conservation equations, these relationships between the conditions in the lobe cavity, the bow shock and the ambient medium are sufficient for describing the evolution of the expanding radio source.
### Lobe adiabatic expansion
Falle [18] present their model in terms of the volume and pressure of the lobe; however, the surrounding shocked gas shell also receives a non-negligible fraction of the input energy from the central nucleus. We therefore express their equations in a more complete form considering both the lobe and shocked gas shell, consistent with later work by Hardcastle [23], Turner and Shabala [32] and Turner _et al._ [24]. These models assume that it is primarily the thermal pressure of the lobe plasma that drives the source expansion; this pressure is uniform throughout the lobes due to the high internal sound speed. The first law of thermodynamics relates the jet kinetic energy input to the thermal pressure \(p\) in the lobe and shell, and shocked gas volume \(V_{s}\) [see Equation 6 of 10]:
\[V_{s}\frac{dp}{dt}+\Gamma_{c}p\frac{dV_{s}}{dt}=(\Gamma_{c}-1)Q, \tag{12}\]
where \(\Gamma_{c}\) is the adiabatic index of the shocked gas and lobe plasma (\(\frac{5}{3}\) for a non-relativistic fluid), and \(Q\) is the power injected into the shocked shell by the jet. This equation can be rewritten in terms of the pressure at the interface of the shocked shell and ambient medium (along the jet axis) using Equation 11. That is,
\[V_{s}\frac{dp_{s}}{dt}+\Gamma_{c}p_{s}\frac{dV_{s}}{dt}=\frac{(\Gamma_{c}-1)Q}{ \sin^{2}\theta_{j}}. \tag{13}\]
Falle [18] showed that the shocked shell expands in a self-similar manner, leading to a constant scaling between the volume and cube of the jet length. That is,
\[V_{s}(R_{s})=\kappa_{3}R_{s}^{3}. \tag{14}\]
where \(R_{s}\) is the radius of the shocked gas shell along the jet axis, and \(\kappa_{3}\) is a constant of proportionality. Kaiser and Alexander [19], and some subsequent authors (e.g. [33]) modelled the lobe as cylindrical (see Figure 2), with the major axis of the shocked gas shell being a factor of \(A_{s}=1/\sin\theta_{j}\) longer than the minor axis. The volume of the shocked shell is then
\[V_{s}(R_{s})=\pi\sin^{2}\theta_{j}R_{s}^{3}\ \ (\text{cylinder}). \tag{15}\]
Figure 2: Schematic of the dynamical model proposed by Falle [18], and subsequently refined by others including Kaiser and Alexander [19] and Alexander [10]. The bow shock is assumed to expand in a self-similar manner (i.e. constant scaling to the lobe) but the energy associated with the shocked gas is not explicitly considered.
In this approach, expansion in the jet direction is driven by the ram pressure acting on the jet-head region, \(p_{h}(t)\), whilst sideways expansion is driven by shocked shell thermal pressure, \(p(t)\). The more realistic assumption of an ellipsoidal shocked shell adds a factor of \(\frac{2}{3}\) to the above equation; however, the sidewards ram pressure then becomes a strong function of distance along the jet axis. The resulting _average_ lobe and shocked shell pressures must be calculated numerically (see Section 4.1) rather than using the simple relation derived in Section 3.1.
The pressure of the shocked shell along the jet axis, \(p_{s}(t)\sim p_{h}(t)\), is related to the density of the ambient medium, \(\rho_{x}=kR_{s}^{-\beta}\) using the Rankine-Hugoniot shock jump conditions,
\[p_{s}(t)=\frac{2}{\Gamma_{x}+1}kR_{s}^{-\beta}\bigg{(}\frac{dR_{s}}{dt}\bigg{)} ^{2}, \tag{16}\]
where \(\Gamma_{x}\) is the adiabatic index of the ambient medium surrounding the shocked gas shell.
Substituting Equations 15 and 16 for the shocked shell volume and pressure along the jet axis into Equation 13 yields a second-order nonlinear differential equation for shell radius,
\[2R_{s}^{3-\beta}\frac{dR_{s}}{dt}\frac{d^{2}R_{s}}{dt^{2}}+(3\Gamma_{c}-\beta)R_{s}^{2-\beta}\bigg{(}\frac{dR_{s}}{dt}\bigg{)}^{3}=\frac{(\Gamma_{c}-1)(\Gamma_{x}+1)Q}{2k\pi\sin^{4}\theta_{j}}. \tag{17}\]
This equation can be solved by trialling another power-law solution, with the exponent again constrained by dimensional analysis. This yields
\[R_{s}(t)=\Bigg{[}\frac{(\Gamma_{c}-1)(\Gamma_{x}+1)(5-\beta)^{3}Q}{18(9\Gamma_{c}-4-\beta)k\pi\sin^{4}\theta_{j}}\Bigg{]}^{1/(5-\beta)}t^{3/(5-\beta)}. \tag{18}\]
Kaiser and Alexander [19] include an additional correction in the denominator of their equivalent expression due to the energy associated with the higher pressure jet-head region; the \((9\Gamma_{c}-4-\beta)\) term in Equation 18 becomes \((9[\Gamma_{c}+(\Gamma_{c}-1)/\sin^{2}\theta_{j}]-4-\beta)\). We note that this correction assumes that the differential increase in volume of the lobe and hotspot are equal as the source grows; this assumption is not particularly realistic for an ellipsoidal lobe geometry.
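A corresponding sketch for the self-similar solution of Equation 18 (same SI conventions as the Scheuer sketch above, and without the Kaiser and Alexander [19] jet-head correction) highlights the different growth index:

```python
import numpy as np

def falle_ka_length(t, Q, k, beta, theta_j, gamma_c=5.0/3.0, gamma_x=5.0/3.0):
    """Shocked shell radius R_s(t) [m] from Equation 18; note the
    t**(3/(5-beta)) growth versus t**(2/(4-beta)) for Scheuer Model A."""
    coeff = ((gamma_c - 1.0) * (gamma_x + 1.0) * (5.0 - beta) ** 3 * Q
             / (18.0 * (9.0 * gamma_c - 4.0 - beta)
                * k * np.pi * np.sin(theta_j) ** 4))
    return coeff ** (1.0 / (5.0 - beta)) * t ** (3.0 / (5.0 - beta))
```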
## 4 Semi-analytic Models
Improved computation has recently enabled a new generation of analytical models with added complexity. These models typically solve systems of differential equations which lack an analytic solution to describe the evolutionary history of the radio source. Below, we summarise the main developments, including atmospheres beyond power-law density profiles (Section 4.1), considering both the ram and thermal pressure contributions to the expansion along the jet axis (Section 4.2), and modelling the relativistic jet in a distinct expansion phase prior to the onset of lobe formation (Section 4.3).
### RAiSE (version 2015)
Turner and Shabala [22] developed a semi-analytic model, _Radio AGN in Semi-analytic Environments_ (RAiSE), based on the theory of the Falle [18] class of models (Section 3). These authors extended existing analytic approaches by considering piece-wise solutions to the governing differential equations in two dimensions. The RAiSE model included three key improvements over the earlier models: (1) ambient medium consistent with X-ray observations of clusters and semi-analytic galaxy formation models; (2) angular dependence of expansion velocity across the ellipsoidal contact surface; and (3) modelling of the morphological transition from supersonic to subsonic lobe expansion by using complete differential equations rather than limiting cases which yield analytic expressions.
The lobe and shocked shell in their model are constructed from an ensemble of small angular volume elements in assumed pressure equilibrium. Each element of fixed angular width \(d\theta\) is assumed to receive a constant fraction of the jet power as the cavity
expands. This assumption yields self-similar expansion at early times when the shocked shell is expanding in the strong-shock supersonic limit, as in the earlier models of Kaiser and Alexander [19]. The volume of each small angular element of the shocked shell, \([\theta-\delta\theta/2,\theta+\delta\theta/2)\), is given by
\[\delta V_{s}(\theta)=\frac{2\pi R_{\rm s}^{3}(\theta)}{3}\sin\theta\delta\theta, \tag{19}\]
where \(\theta\) is the angle between some location on the surface of the shocked shell and the jet axis, and \(R_{s}(\theta)\) is the radius of the _initially_ ellipsoidal shell at that location (see Figure 3). Importantly, the shocked gas shell does not expand self-similarly as the steepness of the ambient gas density profile encountered by the non-spherical shell will in general differ across its surface, leading to different growth rates; this prediction is consistent with the higher axis ratios observed in the largest radio sources (Mullin _et al._ 2008).
The initial radius of each volume element is related to that along the jet axis by a geometric factor as \(R_{\rm s}(\theta,t\to 0)=\eta_{s}(\theta)R_{s}(\theta=0,t\to 0)\), where \(\theta=0\) is aligned along the jet axis and \(\eta_{s}(\theta)\) is defined as
\[\eta_{s}(\theta)=\frac{1}{\sqrt{(\sin^{2}\theta/\sin^{2}\theta_{j})+\cos^{2} \theta}}, \tag{20}\]
where we have assumed the same relationship between the jet half-opening angle, \(\theta_{j}\), and axis ratio of the shocked shell, \(A_{\rm s}\), as discussed in Section 3.
Following Turner and Shabala [22], and later work by Turner and Shabala [32] and Turner _et al._ [24], the adiabatic expansion of each angular volume element is related to the pressure imparted on that element at the surface, \(p_{s}(\theta)\), its volume \(\delta V_{s}(\theta)\), and the fraction of the input jet power associated with that element, \(Q\delta\lambda(\theta)\). The function \(\delta\lambda(\theta)\) is defined in Equation 20 of Turner _et al._ [24]. The first law of thermodynamics in Equation 13 gives
\[\frac{dp_{s}(\theta)}{dt}\delta V_{s}(\theta)+\Gamma_{c}p_{s}(\theta)\frac{d[ \delta V_{s}(\theta)]}{dt}=(\Gamma_{c}-1)Q\delta\lambda(\theta). \tag{21}\]
Away from the contact surface, the pressure in the lobe is calculated as the spatial average of the surface pressures \(p_{s}(\theta)\).
Turner and Shabala [22] use a similar expression for the pressure at the contact surface to Equation 16, but additionally consider: (1) the orientation of the expanding surface as it impacts the ambient medium; and (2) terms describing evolution in the transonic and
Figure 3: Schematic of the Turner and Shabala [22] dynamical model for the lobe and shocked shell. This framework is also used by Turner _et al._ [24] for both their jet- and lobe-dominated expansion phases, albeit the lobe (shown in red) only forms once a critical length-scale is reached. Taken from Figure 1 of Turner _et al._ [24].
subsonic expansion regimes. That is, for expansion in the supersonic and transonic phases, we have
\[p_{s}(\theta)=\frac{2}{\Gamma_{x}+1}kR_{s}^{-\beta}(\theta)\bigg{(}\frac{\zeta_{s }(\theta)}{\eta_{s}(\theta)}\frac{dR_{s}(\theta)}{dt}\bigg{)}^{2}-\frac{\Gamma_ {x}-1}{\Gamma_{x}+1}(kl)R_{s}^{-\beta}(\theta), \tag{22}\]
where \(\zeta_{s}(\theta)\) is a further geometric factor defined in Equation 13 of Turner _et al._ [24]. The radial temperature profile of the ambient medium is defined by Turner and Shabala [22] as \(T=(\bar{m}/k_{B})lr^{-\zeta}\) for Boltzmann constant \(k_{B}\), and average particle mass \(\bar{m}\sim 0.6m_{p}\) (where \(m_{p}\) is the proton mass). Below, we present their results assuming an isothermal medium for consistency with other authors, i.e. adopt \(\zeta=0\). The pressure in the subsonic regime is equal to the ambient pressure (see Equation 4 of Turner and Shabala [22]).
The second-order non-linear differential equation that results from substituting Equations 19 and 22 into Equation 21 cannot in general be solved to yield an analytic solution. Turner and Shabala [22] instead rewrite the resulting equation as a system of two coupled first-order ordinary differential equations. These differential equations describe the velocity and acceleration at the contact surface of a given volume element \(\delta V_{s}(\theta)\),
\[\dot{R}_{s}(\theta) =v_{s} \tag{23}\] \[\dot{v}_{s}(\theta) =\frac{3(\Gamma_{x}+1)(\Gamma_{c}-1)QR_{s}^{\beta-3}\delta\lambda }{8\pi v_{s}(\zeta_{s}/\eta_{s})^{2}k\sin\theta\delta\theta}+\frac{(\beta-3 \Gamma_{c})v_{s}^{2}}{2R_{s}}\] \[\quad\quad+\frac{(\Gamma_{x}-1)(3\Gamma_{c}-\beta)l}{4R_{s}( \zeta_{s}/\eta_{s})^{2}},\]
where \(v_{s}\), \(R_{s}\), \(\delta\lambda\), \(\zeta_{s}\) and \(\eta_{s}\) are explicit functions of \(\theta\), whilst the properties of the ambient medium, \(k\), \(l\) and \(\beta\), are implicit functions of \(\theta\) as different sections of the contact surface reach a given distance into the spherically symmetric environment at different times. Turner and Shabala [22] use a standard fourth-order Runge-Kutta method to solve this system of equations, providing the analytic solution for the strong-shock limit as an initial condition.
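The structure of this scheme for a single angular element can be sketched as follows, using `scipy.integrate.solve_ivp` in place of a hand-written Runge-Kutta routine. The geometric factors \(\delta\lambda\) and \(\zeta_{s}/\eta_{s}\) are passed in as plain numbers, since their full angular definitions are given in [24]; every parameter value in the example call is an assumption for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def raise_rhs(t, y, Q, k, l, beta, gamma_c, gamma_x,
              dlam, zeta_over_eta, sin_theta, dtheta):
    """Right-hand side of Equation 23 for one angular volume element;
    y = [R_s, v_s]. Setting l = 0 recovers the strong-shock limit."""
    R, v = y
    dvdt = (3.0 * (gamma_x + 1.0) * (gamma_c - 1.0) * Q
            * R ** (beta - 3.0) * dlam
            / (8.0 * np.pi * v * zeta_over_eta ** 2 * k * sin_theta * dtheta)
            + (beta - 3.0 * gamma_c) * v ** 2 / (2.0 * R)
            + (gamma_x - 1.0) * (3.0 * gamma_c - beta) * l
            / (4.0 * R * zeta_over_eta ** 2))
    return [v, dvdt]

# Illustrative call: Q = 1e39 W, k = 5e7 (SI), beta = 1.5, starting from
# R_s ~ 1 kpc with v_s ~ 0.01c (assumed strong-shock initial condition)
sol = solve_ivp(raise_rhs, [1.0e13, 1.0e15], y0=[3.1e19, 3.0e6],
                args=(1e39, 5e7, 0.0, 1.5, 5.0/3.0, 5.0/3.0,
                      0.02, 1.0, 0.5, 0.1),
                method='RK45', rtol=1e-8)
```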
### Hardcastle model
Hardcastle [23] improves on the lobe-dominated expansion model of Turner and Shabala [22] by explicitly considering the momentum flux of the jet plasma. This model considers expansion for two angles across the surface of the shocked shell, \(\theta=0\) and \(\frac{\pi}{2}\) (see Figure 4). The choice of these two angles is sufficient to model changes to the axis ratio of the lobe and shocked shell in lobed FR-IIs, with the larger number of angles considered by Turner and Shabala [22] only important in the transonic and subsonic expansion phases when the lobe deforms from its initial ellipsoidal shape.
The shocked shell pressures are derived from the ram pressure component along the jet axis (Equation 1), and a component due to the internal energy of the relativistic lobe
Figure 4: Schematic of the dynamical model proposed by Hardcastle [23]. We depict a conical jet, noting that in this model the cross-sectional area at the jet-head is related to the lobe volume/radius by a constant scaling factor \(\kappa_{1}\), and hence the dynamics of the jet are not critical to model behaviour.
plasma acting along both axes (Equation 5, with \(U=Qt\) and \(q\ll 1\)). The lobe and shocked gas are assumed to be in pressure equilibrium, as discussed in Section 3. The shocked shell pressures along the major and minor axes are then given by
\[\begin{split} p_{s}(\theta=0)&=\frac{\varepsilon QR_{s}(\theta=0)}{2cV}+\frac{(\Gamma_{c}-1)\xi Qt}{V}\\ p_{s}(\theta=\frac{\pi}{2})&=\frac{(\Gamma_{c}-1)\xi Qt}{V}.\end{split} \tag{24}\]
where \(V/R_{s}(\theta=0)\) is the cross-sectional area of the jet-head region for a cylindrical lobe of volume \(V\), \(\varepsilon\sim 4\) here acts as a geometric correction factor reflecting more realistic lobe shapes, and \(\xi\sim\frac{1}{2}\) is the fraction of the input jet kinetic power found in hydrodynamic simulations to be stored as internal energy of the relativistic lobe plasma; the remainder is stored as thermal and kinetic energy in the shocked gas shell.
Hardcastle [23] relates the volume of the lobe, \(V\), to that of the shocked shell, \(V_{s}\) (including the interior lobe), by considering the ratio of total internal energies,
\[\frac{V}{V_{s}}=\frac{(\Gamma_{c}-1)\xi Qt}{[\xi\Gamma_{c}+(1-\xi)\Gamma_{s}- 1]Qt+f(N,T,v_{s})}, \tag{25}\]
where \(f(N,T,v_{s})\) is a function describing the internal energy of the ambient medium swept-up by the bow shock. This function depends on the total number of swept-up particles, \(N\), their temperature, \(T\), and their bulk velocity due to the expansion of the shocked shell, \(v_{s}=dR_{s}/dt\)[23, their Equations 5 and 7]. For young sources, when the thermal energy of these particles is lower than the energy supplied by the jet to the forming shocked gas shell, the ratio of lobe to shocked shell volumes tends to \(\xi\) for \(\Gamma_{c}=\Gamma_{s}=\frac{5}{3}\), or \(\xi/(2-\xi)\) if the lobe is assumed to have a relativistic plasma (\(\Gamma_{c}=\frac{4}{3}\)). The volume of the shocked gas shell is of course also directly related to the lengths of the major and minor axes; for an ellipsoidal geometry this gives
\[V_{s}=\frac{2\pi R_{s}(\theta=0)R_{s}^{2}(\theta=\frac{\pi}{2})}{3}. \tag{26}\]
We can therefore express the pressure along the major and minor axes of the shocked shell (Equation 24) as a function of _both_ axis lengths upon substitution of Equations 25 and 26.
Hardcastle [23] rewrites the Rankine-Hugoniot shock jump conditions (Equation 22) in terms of the velocities along the major and minor axes of the lobe. However, unlike the previously discussed models, Hardcastle [23] expresses the jump conditions in terms of the sound speed, \(c_{s}\), and adiabatic index, \(\Gamma_{s}\), of the shocked gas shell surrounding the lobe. The expansion rates along the major and minor axes are then given by
\[\begin{split}\frac{dR_{s}(\theta=0)}{dt}&=c_{s} \sqrt{\frac{(\Gamma_{s}+1)[p_{s}(\theta=0)/p_{x}(R_{s}(\theta=0))]-(\Gamma_{s} -1)}{2\Gamma_{s}}}\\ \frac{dR_{s}(\theta=\frac{\pi}{2})}{dt}&=c_{s} \sqrt{\frac{(\Gamma_{s}+1)[p_{s}(\theta=\frac{\pi}{2})/p_{x}(R_{s}(\theta= \frac{\pi}{2}))]-(\Gamma_{s}-1)}{2\Gamma_{s}}},\end{split} \tag{27}\]
where \(p_{x}(r)\) is the spherically symmetric ambient gas pressure profile. This coupled system of non-linear ordinary differential equations can be solved using a standard fourth-order Runge-Kutta method, with initial conditions of \(R_{s}(\theta=0)=ct_{0}\) and \(R_{s}(\theta=\frac{\pi}{2})=ct_{0}\) for some small time \(t_{0}\).
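The structure of this calculation can be sketched as below; the simplification \(V=\xi V_{s}\) is our own shortcut corresponding to the young-source limit of Equation 25 (i.e. neglecting the swept-up internal energy term \(f(N,T,v_{s})\)), and all numerical values in the example are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

C = 2.998e8  # speed of light [m/s]

def hardcastle_rhs(t, y, Q, p_x, c_s, gamma_s=5.0/3.0, gamma_c=5.0/3.0,
                   eps=4.0, xi=0.5):
    """RHS for Equations 24-27; y = [R_s(0), R_s(pi/2)]. p_x(r) is the
    ambient pressure profile and c_s the shocked shell sound speed."""
    R1, R2 = y
    V_s = 2.0 * np.pi * R1 * R2 ** 2 / 3.0        # Eq. 26
    V = xi * V_s                                  # early-time limit of Eq. 25
    p_thermal = (gamma_c - 1.0) * xi * Q * t / V
    p_major = eps * Q * R1 / (2.0 * C * V) + p_thermal   # Eq. 24
    p_minor = p_thermal

    def shock_speed(p_s, r):                      # Eq. 27
        arg = ((gamma_s + 1.0) * p_s / p_x(r)
               - (gamma_s - 1.0)) / (2.0 * gamma_s)
        return c_s * np.sqrt(max(arg, 0.0))       # clipped once subsonic

    return [shock_speed(p_major, R1), shock_speed(p_minor, R2)]

# Illustrative integration from R = c*t0 along both axes
t0 = 3.0e10                                       # ~1000 yr [s]
p_ambient = lambda r: 1.0e-12 * (r / 3.1e19) ** -1.2   # assumed profile [Pa]
sol = solve_ivp(hardcastle_rhs, [t0, 1.0e15], [C * t0, C * t0],
                args=(1e39, p_ambient, 1.5e6), rtol=1e-8)
```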
### RAiSE (version 2023)
The RAiSE [22] model discussed in Section 4.1 was first extended to make predictions for the spatial distribution of emission at radio [35] and X-ray wavelengths [32], and most recently to incorporate important changes to jet and lobe dynamics. The Turner _et al._ [24]
model includes: (1) a relativistic jet expansion phase modelled prior to the formation of a lobe; (2) formation of lobes within a surrounding bow shock; and (3) a separation of the ram and thermal components of the jet and lobe pressure.
The Turner _et al._ [24] model is implemented using the same computational framework as the original RAiSE model discussed in Section 4.1; specifically, using coupled differential equations which are solved for small angular volume elements of the lobe and shocked shell. Below, we present a concise derivation of their solution for the expansion of the relativistic jet (Section 4.3.1), summarise their methodology to model the subsequent lobe formation/inflation (Section 4.3.2), and finally present their method to separate the ram and thermal components of the jet and lobe pressure (Section 4.3.3).
#### 4.3.1 Relativistic jet expansion
The relativistic hydrodynamic conservation equations relate the properties of fluids upstream and downstream of a shock discontinuity via the stress-energy tensor. The conservation equations for a relativistic fluid are expressed in terms of comoving quantities including gas density \(\rho\), gas pressure \(p\), dimensionless specific enthalpy \(h\) (i.e. enthalpy divided by \(c^{2}\)), and the non-zero spatial component of the four-velocity \(u=\gamma v/c\) (hereafter shortened to four-velocity) relative to the shock front. The conservation equations for a relativistic fluid are (e.g. 24, and references therein):
\[\rho\gamma v=\rho_{1}\gamma_{1}v_{1}\ \ \ \text{(continuity)} \tag{28a}\] \[\rho h\gamma^{2}v^{2}+p=\rho_{1}h_{1}\gamma_{1}^{2}v_{1}^{2}+p_{ 1}\ \ \text{(momentum)}\] (28b) \[\rho(h\gamma-1)\gamma v=\rho_{1}(h_{1}\gamma_{1}-1)\gamma_{1}v_{1} \ \ \text{(energy)}. \tag{28c}\]
The fluid downstream of the shock is represented by the subscript '1'; no subscript refers to the upstream fluid.
The conservation of energy expression in Equation 28c is related to the rate of energy input by the jet, \(Q\), by multiplying by the cross-sectional area of the jet (cf. 36, their Equation 26). That is,
\[Q=\rho_{j}(h_{j}\gamma_{j}-1)\gamma_{j}v_{j}c^{2}\Omega r^{2}, \tag{29}\]
where the factor of \(c^{2}\) is added to convert the dimensionless enthalpy, \(h_{j}\), to the specific enthalpy; \(v_{j}\) is the bulk velocity of the jet plasma and \(\gamma_{j}\) is the corresponding Lorentz factor. We can therefore obtain an expression for the density of the jet plasma some distance \(r\) along the jet in terms of the dimensions and energetics of the jet as follows:
\[\rho_{j}(r)=\frac{Q}{\gamma_{j}v_{j}c^{2}(h_{j}\gamma_{j}-1)\Omega r^{2}}, \tag{30}\]
where the density at the jet-head, \(\rho_{j}\equiv\rho_{j}(R_{s})\), is of particular interest for the radio source dynamics.
We now derive the Rankine-Hugoniot jump conditions relating the density and velocity of both the jet plasma and the ambient medium. The bulk velocity of the ambient medium in the observer frame is zero at all times for random particle motions. As a result, the bulk velocity of these particles in the frame of the shock front, \(v_{1}\), is exactly equal to the expansion rate of the shock in the observer frame, \(v_{s}\); i.e. \(v_{1}\equiv-v_{s}\). By contrast, the bulk velocity of the upstream fluid particles in the jet is non-zero, defined as \(v_{j}\) in the observer frame. Following Turner _et al._ [24], the conservation of momentum flux equation can therefore be rewritten as
\[\rho_{j}h_{j}\gamma_{j}^{2}\gamma_{s}^{2}(v_{j}-v_{s})^{2}=\rho_{x}h_{x}\gamma _{s}^{2}v_{s}^{2}, \tag{31}\]
where \(h_{j}\) is the dimensionless specific enthalpy of the jet, and \(\rho_{x}\) and \(h_{x}\) are the density and dimensionless specific enthalpy of the (external) ambient medium respectively. Rearranging
yields a relationship between the jet-head advance speed and the bulk velocity of the jet [cf. 37-39, and subsequent authors]:
\[v_{s}\equiv\frac{dR_{s}}{dt}=\frac{v_{j}}{1+[\rho_{j}h_{j}\gamma_{j}^{2}/(\rho_{ x}h_{x})]^{-1/2}}, \tag{32}\]
where the dimensionless quantity \(\eta_{R}=\rho_{j}h_{j}\gamma_{j}^{2}/(\rho_{x}h_{x})\) is a function of properties of the jet and ambient medium. That is,
\[\eta_{R}(r)=\frac{Qh_{j}\gamma_{j}}{kh_{x}v_{j}c^{2}(h_{j}\gamma_{j}-1)\Omega r^{2-\beta}}, \tag{33}\]
where we have made use of the power-law approximation for the local density of the ambient medium, \(\rho=kr^{-\beta}\), and Equation 30 for the jet plasma density.
The jet length is found by integrating Equation 32 with respect to time; however, an analytical solution is only possible in the limits \(\eta_{R}\to 0\) and \(\eta_{R}\rightarrow\infty\) (e.g. [40,41]). Turner _et al._ [24] solve this integral numerically using a fourth-order Runge-Kutta method on the following system of three ordinary differential equations:
\[\dot{R}_{s} =v_{s}\] \[\dot{v}_{s} =\frac{(\beta-2)v_{j}v_{s}}{2R_{s}\eta_{R}^{1/2}[1+\eta_{R}^{-1/2 }]^{2}} \tag{34}\] \[\dot{\gamma}_{s} =\frac{\gamma_{s}^{3}v_{s}\dot{v}_{s}}{c^{2}}.\]
We note that in the interests of clarity we have omitted the transverse density and velocity structures of the flow along the jet from the above derivation; we refer the reader to Section 2.2.2 of Turner _et al._ [24] for a complete description.
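The system above is straightforward to reproduce with standard numerical machinery. The following minimal Python sketch (not the authors' RAiSE implementation; all parameter values are illustrative assumptions) integrates the first-order form of Equation 32 with \(\eta_{R}\) from Equation 33; because the chosen parameters give \(\eta_{R}\ll 1\) on kiloparsec scales, the familiar \(R\propto t^{2/(4-\beta)}\) limit is recovered.

```python
# Minimal sketch (not the RAiSE code) of the jet expansion of Section 4.3.1.
# All inputs (Q, gamma_j, h_j, h_x, theta_j, beta, k) are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

c, kpc, Myr = 2.998e8, 3.086e19, 3.156e13           # SI units
Q = 3e38                                             # jet kinetic power [W]
gamma_j = 5.0                                        # jet bulk Lorentz factor
v_j = c*np.sqrt(1.0 - 1.0/gamma_j**2)
h_j, h_x = 4.0, 1.0                                  # dimensionless enthalpies (assumed)
Omega = 2*np.pi*(1.0 - np.cos(np.radians(10.0)))     # jet solid angle, theta_j = 10 deg
beta = 1.5                                           # ambient slope, rho = k r^-beta
k = 1e-23*(10*kpc)**beta                             # 1e-23 kg/m^3 at 10 kpc (assumed)

def eta_R(r):
    """Equation 33."""
    return Q*h_j*gamma_j/(k*h_x*v_j*c**2*(h_j*gamma_j - 1.0)*Omega*r**(2.0 - beta))

def rhs(t, y):
    # Equation 32; this single ODE is equivalent to the three-ODE system (34)
    return [v_j/(1.0 + eta_R(y[0])**-0.5)]

sol = solve_ivp(rhs, (1e-3*Myr, 30*Myr), [1e-2*kpc], dense_output=True, rtol=1e-8)
t = np.geomspace(0.1, 30.0, 4)*Myr
R = sol.sol(t)[0]
print(R/kpc)                                    # jet length in kpc
print(np.gradient(np.log(R), np.log(t)))        # -> 2/(4 - beta) = 0.8 for eta_R << 1
```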
#### 4.3.2 Lobe formation
The energy supplied by the central nucleus is initially focussed over a small range of angles within the half-opening angle of the jet. Beyond some lobe formation length-scale the energy must be distributed across the \(2\pi\) steradians of the shocked shell. The source expansion in these two phases is described by the differential equations for the relativistic jet (Section 4.3.1) and lobe and shocked shell (Section 4.1). Turner _et al._ [24] combine these frameworks by modelling the expansion of the radio source as a two-phase fluid, where each angular volume element is assumed to comprise a fraction \(\Lambda(t)\) of lobe plasma at any given time \(t\). Turner _et al._ [24] relate the acceleration of the ellipsoidal bow shock surrounding the lobe to the acceleration in the jet- and lobe-dominated expansion phases, \(\dot{v}_{s,\mathrm{jet}}\) (Equation 34) and \(\dot{v}_{s,\mathrm{lobe}}\) (Equation 23) respectively, as follows:
\[\dot{v}_{s}(\theta)=[1-\Lambda]\dot{v}_{s,\mathrm{jet}}\eta(\theta)+\Lambda\dot{v}_{s,\mathrm{lobe}}(\theta), \tag{35}\]
where \(\Lambda\) is the fractional contribution of the lobe plasma to the acceleration of the bow shock at a given time. The other two coupled ordinary differential equations (for velocity and derivative of the Lorentz factor) are identical for both fluids and thus do not require any modification.
Turner _et al._ [24] define the transition from a jet-dominated to a lobe-dominated flow based on the length-scale at which lobe formation commences. This length-scale is calculated by equating the densities of the jet plasma and ambient medium (e.g. [10,11]). Turner _et al._ [24] parametrise the transition from jet- to lobe-dominated expansion by using the ratio of these densities,
\[\mathcal{L}(t)=\frac{\rho_{j}}{\rho_{x}}=\frac{\eta_{R}(R_{s}(\theta=0,t))}{\gamma_{j}^{2}}, \tag{36}\]
where \(\eta_{R}(r)\) is defined in Equation 33 and is evaluated for the length of the jet at time \(t\). Turner _et al._ [24] use this ratio to calculate the fractional contribution of the lobe to source expansion,
\[\Lambda(t)=e^{-\mathcal{L}^{2}(t)/(2\log 2)}, \tag{37}\]
where \(\Lambda(t)\to 0\) in the jet-dominated expansion phase and \(\Lambda(t)\to 1\) in the lobe-dominated phase.
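The transition itself is cheap to evaluate. The sketch below implements Equations 35-37; reading the logarithm in Equation 37 as the natural logarithm, and the trial values of \(\eta_{R}\), are our assumptions.

```python
# Illustrative two-phase blending of Equations 35-37.
import numpy as np

def lobe_fraction(eta_R_jet, gamma_j=5.0):
    """Equations 36-37: fractional lobe contribution Lambda(t)."""
    L = eta_R_jet/gamma_j**2                  # jet-to-ambient density ratio
    return np.exp(-L**2/(2.0*np.log(2.0)))    # log interpreted as natural log

def blended_acceleration(vdot_jet, vdot_lobe, eta_theta, Lam):
    """Equation 35 for a single angular volume element."""
    return (1.0 - Lam)*vdot_jet*eta_theta + Lam*vdot_lobe

# Lambda -> 0 while the jet is overdense (jet-dominated expansion) and
# -> 1 once it becomes underdense (lobe-dominated expansion):
for eR in (1e4, 1e2, 25.0, 1.0):
    print(f"eta_R = {eR:8.1f} -> Lambda = {lobe_fraction(eR):.4f}")
```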
This two-phase fluid model describes the evolution of the bow shock across the transition from a jet-dominated to lobe-dominated flow. A more complete description requires consideration of lobe formation inside the shock front. We refer the interested reader to Section 2.4.2 of Turner _et al._ [24] for these details.
#### 4.3.3 Thermal pressure
The Turner _et al._ [24] relativistic jet model (Section 4.3.1) and their earlier lobe-dominated expansion model (Section 4.1) derive the jet and lobe length evolution by considering conservation of momentum flux (Equation 31); however, the relative magnitudes of the ram and thermal pressure components after the interaction are not explicitly calculated. These pressure components are difficult to separate directly using the conservation equations, but we know the lobe evolution is driven entirely by the thermal component in the limit \(t\to\infty\). Turner _et al._ [24] therefore find the thermal pressure at earlier times by iteratively solving (backwards in time) the relevant differential equations with the initial condition at \(t\to\infty\). We refer the interested reader to Section 2.3.4 of Turner _et al._ [24] for a complete description of the separation of the ram and thermal components of the lobe internal pressure.
## 5 Discussion
In preceding sections, we have presented the theory underpinning the key classes of analytical models describing the dynamics of kiloparsec-scale radio AGN jets and lobes. The same physical principles are considered in each of these models, notably ram pressure against the ambient medium and an adiabatic equation of state; however, their implementation differs greatly between model classes, as we discuss in Section 5.1. We compare the accuracy of predictions for each model type relative to the outputs of a three-dimensional relativistic hydrodynamic simulation in Section 5.2. We then assess for which regions of parameter space the different model classes yield comparable results, and conversely those where large differences are expected, in Section 5.3.
### Similarity of key model classes
The four key classes of analytical models examined in this review share common physical principles to explain the dynamics of kiloparsec-scale radio sources (see Table 1). Scheuer [17] models the forward expansion of the source based on the momentum flux of the jet and invokes internal energy to calculate the sidewards expansion of the lobe. Falle [18] instead models the forward expansion by considering the adiabatic expansion of the lobe due to an increase in internal energy while using the jet momentum flux to relate the shape of the lobe to the opening angle of the jet. Meanwhile, the Hardcastle [23] and Turner _et al._ [24] models smoothly transition their dynamics between the jet- and lobe-dominated expansion phases predicted by these earlier models.
#### 5.1.1 Early-time evolution
The source length evolution in the Scheuer [17] model, and the (jet-dominated) early-time expansion phases of the Hardcastle [23] and Turner _et al._[24] models, are derived considering the relative amplitudes of the momentum flux of the jet and the thermal pressure of the ambient medium (Equation 28b). The relativistic hydrodynamic equations used in the theory of Turner _et al._[24] can be simplified to obtain the expressions found by the Scheuer [17] class of models. In particular, the jet-head advance speed derived in Equation 32 is integrated to yield the source length as follows:
\[\begin{split} R(t)&=\int_{0}^{t}\frac{dR_{s}}{dt}dt\\ &\approx\left(\frac{Qh_{j}\gamma_{j}v_{j}}{\Omega kh_{x}c^{2}(h_{j}\gamma_{j}-1)}\right)^{1/(4-\beta)}\left(\frac{(4-\beta)t}{2}\right)^{2/(4-\beta)},\end{split} \tag{38}\]
where the second equality is valid for \(\eta_{R}(r)\ll 1\), which coincides with the formation of lobes on a length-scale of order \(1\,\mathrm{kpc}\)[24]. This equation converges to that proposed by Rees [16] and Scheuer [17, their Model A] by taking their limit of massless particles with velocity \(c\) (i.e. \(h_{j,x}\to 1\) and \(\gamma_{j}\rightarrow\infty\)),
\[R_{s}(t)=\left(\frac{Q}{\Omega kc}\right)^{1/(4-\beta)}\left(\frac{(4-\beta)t }{2}\right)^{2/(4-\beta)}, \tag{39}\]
where for their assumption of a uniform ambient medium we set \(\beta=0\) and \(k=\rho\).
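Equation 39 is a convenient closed form against which numerical schemes can be validated; the short sketch below (illustrative parameter values, matching the earlier snippet) confirms the \(t^{2/(4-\beta)}\) scaling.

```python
# Direct evaluation of the Rees/Scheuer limit, Equation 39 (illustrative inputs).
import numpy as np

Myr, kpc = 3.156e13, 3.086e19

def R_early(t, Q, Omega, k, beta, c=2.998e8):
    """Equation 39: R_s(t) = (Q/(Omega k c))^(1/(4-beta)) ((4-beta) t / 2)^(2/(4-beta))."""
    return (Q/(Omega*k*c))**(1.0/(4.0 - beta))*((4.0 - beta)*t/2.0)**(2.0/(4.0 - beta))

t = np.array([0.1, 1.0, 10.0])*Myr
R = R_early(t, Q=3e38, Omega=0.1, k=1e-23*(10*kpc)**1.5, beta=1.5)
print(R/kpc)                                   # source length in kpc
print(np.gradient(np.log(R), np.log(t)))       # -> 2/(4 - beta) = 0.8 exactly
```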
The similarity of the early-time evolution predicted by the Hardcastle [23] model and Scheuer [17] class of models is immediately apparent by comparing their expression for the
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Model** & **Type** & **Early-time evolution** & **Late-time evolution** \\ \hline Scheuer (1974; Model A) & analytical _(constant density)_ & momentum flux _(Bernoulli equation)_, \(R(t)\propto t^{2/(4-\beta)}\) & – \\ Falle (1991) & analytical _(power-law density profile)_ & – & internal pressure _(first law of thermodynamics)_, \(R(t)\propto t^{3/(5-\beta)}\) \\ Hardcastle (2018) & semi-analytic _(spherically symmetric density profile)_ & momentum flux _(non-relativistic shock-jump conditions)_, \(R(t)\propto t^{2/(4-\beta)}\) & internal pressure _(first law of thermodynamics)_, \(R(t)\propto t^{3/(5-\beta)}\) \\ Turner _et al._ (2023; RAiSE) & semi-analytic _(spherically symmetric density profile)_ & momentum flux _(relativistic hydrodynamics)_, \(R(t)\propto t\) & internal pressure _(first law of thermodynamics)_, \(R(t)\propto t^{3/(5-\beta)}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the model assumptions for each of the four key classes of analytical models considered in this review. First column: analytical model. Second column: type of solution (analytical or semi-analytic), together with a note on the assumed ambient gas density profile. Third and fourth columns: physical process driving the jet and lobe length expansion at early and late times, respectively. The fundamental equations used to derive the source evolution, and the time dependence of the length expansion, are also listed.
jet-head pressure (Equation 24). In the limit \(t\to 0\), their expression converges to the ram pressure component as follows:
\[\begin{split} p_{s}(\theta=0)&=\frac{\varepsilon QR_{s} (\theta=0)}{2cV}\ \ \ \text{(Hardcastle)}\\ &=\frac{\kappa_{1}Q}{\Omega R_{s}^{2}(\theta=0)c}\ \ \ \text{(Scheuer)},\end{split} \tag{40}\]
where the geometric factor assumed by Hardcastle [23] is identified as \(\varepsilon=2\kappa_{1}/\Omega\) using the terminology of Scheuer [17]; in other words, this geometric factor largely corresponds to the solid angle of the jet and the jet-head region. Hardcastle [23] derives the jet-head advance speed using the non-relativistic Rankine-Hugoniot shock jump conditions (Equation 27), largely equivalent to the ram pressure argument employed by Scheuer [17]. Because of this, the source length evolution predicted by the Hardcastle [23] model early in the source lifetime matches that of the Scheuer [17] class of models for the same input parameters; i.e. \(R\propto t^{1/2}\) for a constant density ambient medium. By contrast, the jet-dominated expansion phase of the Turner _et al._ [24] model yields a very different evolutionary history to these two model types, with \(R\propto t\) prior to lobe formation, as those authors do not take the non-relativistic limit of the hydrodynamic equations.
#### 5.1.2 Late-time evolution
The lobe length expansion in the Falle [18] class of models, and the (lobe-dominated) late-time expansion phases of the Hardcastle [23] and Turner _et al._ [24] models, are calculated by considering the internal energy of the relativistic lobe plasma evolving under an adiabatic equation of state. The first law of thermodynamics (Equation 12) applied to a self-similar ellipsoidal shell yields lobe length growth of the form \(R\propto t^{3/5}\) (Equation 18; the constant gas density form is stated here for simplicity), consistent with findings for supernova remnants. The Falle [18] class of models use this dependence directly, while the lobe-dominated expansion phase of the Turner _et al._ [24] model also permits an evolving lobe axis ratio due to a non-power law ambient medium. The late-time evolution of the Turner and Shabala [22] model (and subsequent versions of RAiSE) ultimately transitions from the supersonic to subsonic regime as the jet-head advance speed slows (\(R\propto t^{1/3}\)). This is not critical when modelling powerful lobed radio sources which drive strong shocks - the focus of this review - but is essential when considering the coasting (inactive) phase of remnant radio sources.
Hardcastle [23] does not make any explicit assumptions about the lobe axis ratio; however, in the limit \(t\to\infty\) their expressions for the expansion rate along both the major and minor axes converge, leading to a spherical lobe (at least in the special case of a constant density ambient medium). The expression for their growth rate at late times (Equation 27) becomes:
\[\frac{dR_{s}}{dt}=\sqrt{\frac{(\Gamma_{s}+1)([\xi\Gamma_{c}+(1-\xi)\Gamma_{s}- 1]Qt+f(N,T,v_{s}))}{2\rho_{x}V_{s}}}, \tag{41}\]
where we have assumed that the pressure along both axes is dominated by the internal energy component (Equation 24), that the sound speed in the shocked gas is comparable to that of the ambient medium (i.e. \(c_{s}\sim\sqrt{\Gamma_{s}p_{x}/\rho_{x}}\)), and applied the relationship for the ratio of the lobe and shocked shell volumes (Equation 25). This differential equation again yields a solution of the form \(R\propto t^{3/5}\) for a constant density ambient medium, if the internal energy contribution from the shocked gas, \(f(N,T,v_{s})\), is ignored. Such an assumption is reasonable for high jet powers or moderate ambient densities; however, the late-time evolution diverges significantly from this prediction for weak jets or high density environments. We examine this point further in Section 5.2.2.
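This limiting behaviour is straightforward to verify numerically. The sketch below integrates Equation 41 with \(f(N,T,v_{s})\) set to zero, a constant-density medium, and a spherical shocked shell \(V_{s}=4\pi R_{s}^{3}/3\); the adiabatic indices and the lobe energy fraction \(\xi\) are illustrative assumptions.

```python
# Late-time limit of Equation 41 with the shocked-gas term ignored (sketch).
import numpy as np
from scipy.integrate import solve_ivp

Gamma_c, Gamma_s, xi = 4.0/3.0, 5.0/3.0, 0.5     # indices and energy split (assumed)
Q, rho_x = 3e38, 1e-23                           # [W], [kg m^-3] (assumed)
Myr, kpc = 3.156e13, 3.086e19

def rhs(t, y):
    R = y[0]
    V_s = 4.0*np.pi*R**3/3.0                     # spherical shocked shell (assumed)
    num = (Gamma_s + 1.0)*(xi*Gamma_c + (1.0 - xi)*Gamma_s - 1.0)*Q*t
    return [np.sqrt(num/(2.0*rho_x*V_s))]

sol = solve_ivp(rhs, (1e-2*Myr, 100*Myr), [0.1*kpc], dense_output=True, rtol=1e-8)
t = np.geomspace(1.0, 100.0, 5)*Myr
R = sol.sol(t)[0]
print(R/kpc)
print(np.gradient(np.log(R), np.log(t)))   # -> 3/5 once the initial condition is forgotten
```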
#### 5.1.3 Source morphology
The lobe volume, and hence axis ratio, are calculated using the remaining equations not previously invoked in the calculation of the source length expansion. For example, in the Scheuer [17] model, source expansion along the jet axis is modelled based on ram pressure arguments, while the sidewards expansion is derived considering the internal energy of the injected lobe plasma; Hardcastle [23] makes similar arguments, and Turner _et al._ [24] invoke internal energy to describe the formation and inflation of their lobe within the confines of a surrounding shocked gas shell. By contrast, the self-similar expansion assumed in the Falle [18] class of models, and the lobe-dominated expansion phase of the Turner _et al._ [24] model, implicitly sets the lobe volume based on its length, which is calculated from internal energy. These models use ram pressure arguments to relate the internal conditions of the lobe to those of the ambient medium (Equation 11), and to relate the jet half-opening angle to the source axis ratio.
### Comparison to hydrodynamic simulations
To test the analytical models, we compare the dynamics of the four key model classes introduced above to the results of hydrodynamic simulations run using the PLUTO code (Mignone _et al._, 2007). Below, we describe the existing simulations run by Yates-Jones _et al._ [31] for powerful FR-II radio galaxies, and test the analytical models by comparing the predicted evolution of the source length, lobe axis ratio, and jet-head pressure throughout the source lifetime to the simulation results.
#### 5.2.1 Hydrodynamic simulation dynamics
The hydrodynamic simulations of Yates-Jones _et al._ [31] consider an initially conical jet of half-opening angle \(\theta_{j}=10\) degrees and bulk flow with Lorentz factor \(\gamma_{j}=5\). Their high-powered jet (\(Q=3\times 10^{38}\,\mathrm{W}\)) expands into a spherically symmetric King profile with core density of \(\rho_{c}=2.41\times 10^{-24}\,\mathrm{kg}\,\mathrm{m}^{-3}\), core radius of \(r_{c}=144\,\mathrm{kpc}\), and slope described by the coefficient \(\beta^{\prime}=0.38\) (for details, see [31]). Those authors consider both jets located at the centre of the gas distribution, as well as offset jets. In this review, we will only consider their cluster-centred jet simulation as the theory underpinning the analytical models assumes a spherically symmetric environment. Their simulations result in lobed Fanaroff and Riley (1974) Type-II sources forming on a timescale of \(\sim 1\,\mathrm{Myr}\), and consider the late-time evolution up to an age of 35.1 Myr.
We extract time series for the evolution of the source length, the lobe axis ratio, and the jet-head pressure from the hydrodynamic simulation outputs (for details, see Yates-Jones _et al._ [31]). These are critical dynamical quantities in the calculation of both the source evolution and the radio-frequency synchrotron emission, and thus should be considered to assess the accuracy of the analytical models. The potentially more informative lobe volume and volume-weighted pressure are poorly constrained prior to lobe formation as large regions near the core remain partially occupied by ambient gas; the calculation of these quantities in the hydrodynamical simulation is highly dependent on the threshold used to separate ambient gas from the jet plasma.
#### 5.2.2 Accuracy of analytical models
The critical intrinsic parameters characterising radio source evolution in analytical models are the jet kinetic power, source age and properties of the ambient medium. The spherically symmetric King profile used by Yates-Jones _et al._ [31] for the ambient medium in their simulations is readily modelled by both the Hardcastle [23] and Turner _et al._ [24] models, but not the older models. The original form of the Scheuer [17] model assumes a constant density ambient medium, while the Falle [18] model employs a slightly more general power-law of the form \(\rho=kr^{-\beta}\); in Section 2 we similarly derived the Scheuer [17] model for a power-law gas density profile. To facilitate a meaningful comparison between all four model classes, we modify the Falle [18] and Scheuer [17] models following the approach of Turner and Shabala [22], by approximating the gas density profile as a
series of contiguous power-laws with piecewise solutions. The implementation of these models for a general ambient medium is available on our GitHub online repository1.
Footnote 1: [https://github.com/rosjturner/analytic_models](https://github.com/rosjturner/analytic_models)
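A minimal version of this piecewise power-law construction is sketched below for the King profile of Section 5.2.1; the bin edges and end-point fitting convention are our own choices and need not match the repository implementation exactly.

```python
# Piecewise power-law approximation of a King profile (illustrative sketch).
import numpy as np

kpc = 3.086e19
rho_c, r_c, beta_p = 2.41e-24, 144*kpc, 0.38     # King parameters of Section 5.2.1

def king(r):
    return rho_c*(1.0 + (r/r_c)**2)**(-3.0*beta_p/2.0)

edges = np.geomspace(0.1*kpc, 1000*kpc, 9)       # contiguous radial bins
for r_lo, r_hi in zip(edges[:-1], edges[1:]):
    # local slope and normalisation of rho = k r^-beta from the bin end-points
    beta = -np.log(king(r_hi)/king(r_lo))/np.log(r_hi/r_lo)
    k = king(r_lo)*r_lo**beta
    print(f"{r_lo/kpc:7.1f}-{r_hi/kpc:7.1f} kpc : beta = {beta:5.3f}, k = {k:.3g}")
```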
Source length
The source length evolution for the four analytical models is shown in the top panel of Figure 5. The jet power, source age and ambient gas density profile are in all cases identical to the inputs to the hydrodynamic simulation of Yates-Jones _et al._ [31]; however, some of the more minor model parameters are varied to obtain the best representation of each model class. Specifically, the Scheuer [17] and Falle [18] model evolutionary tracks are shown for three plausible values of the jet half-opening angle \(\theta_{j}\), while the Hardcastle [23] model is shown for three values of their equivalent geometric factor \(\varepsilon\). The free parameters in the Turner _et al._ [24] model have previously been calibrated based on this hydrodynamic simulation, and thus results for this model are shown for only a single set of parameters. The resulting evolutionary tracks are consistent with the discussion in Section 5.1: the Scheuer [17] and (jet-dominated) early-time Hardcastle [23] models follow an approximately \(R\propto t^{1/2}\) growth rate (the dependence expected for a flat atmosphere), while the Falle [18] and (lobe-dominated) late-time Hardcastle [23] models follow an \(R\propto t^{3/5}\) expansion rate (also see Figure 6). By contrast, the relativistic hydrodynamic equations used by Turner _et al._ [24] yield quite different evolutionary tracks in the jet-dominated expansion phase, and are more consistent with the hydrodynamic simulation. At late times (\(>10\) Myr), the Turner _et al._ [24] model predicts slower expansion close to an \(R\propto t^{1/2}\) relationship, converging towards the same limit as the other models at later times. We explore this in more detail in Section 5.3.
Lobe axis ratio
The evolution of the lobe axis ratio is shown in the middle panel of Figure 5. The Turner _et al._ [24] model captures axis ratio evolution prior to lobe formation as it considers a two-phase fluid with an initially low, but non-zero, fraction of lobe plasma in the region between the jet and bow shock. By contrast, the other analytical models initially disagree with the hydrodynamic simulation as they assume a plasma-filled lobe structure throughout the source lifetime. Upon lobe formation, the Hardcastle [23] model agrees well with the hydrodynamic simulation assuming a compact jet-head region with a geometric factor \(\varepsilon\gtrsim 40\) (corresponding to a jet-head of radius \(12.6\) kpc for a typical \(100\) kpc jet). The Falle [18] class of models of course yields a constant lobe axis ratio (which is why these models are often referred to as "self-similar") throughout the source evolutionary history, while the Scheuer [17] model predicts a rapidly increasing axis ratio as the lobe encounters the steeper sections of the ambient gas density profile (\(A\propto t^{(5\beta-2)/(16-4\beta)}\); see Section 2). These last two models do not explicitly separate the lobe and shell material, so for a fairer comparison, we are guided by the numerical results of Turner and Shabala [22] in assuming that the lobe axis ratio scales to that of the shell as \(A=A_{s}^{1.7}\).
Jet-head pressure
We finally compare the predicted evolution of the jet-head pressure to that measured from the hydrodynamic simulation (Figure 5, bottom panel). The majority of the evolutionary history probed by the hydrodynamic simulation (\(\lesssim 10\) Myr) is associated with a significant ram pressure component at the jet-head, in addition to a thermal component that scales approximately in proportion to the ram pressure component. The Scheuer [17] and Hardcastle [23] models directly consider the ram pressure component, and hence predict a power-law evolution of jet head pressure (\(p\propto t^{-1}\) for a constant gas density ambient medium; see Sections 2 and 4.2) which is broadly similar to the thermal jet head pressure component measured by the hydrodynamic simulation. The self-similar expansion model of Falle [18] yields a flatter relationship of the form \(p\propto t^{-4/5}\). Turner _et al._ [24] instead
Figure 5: Comparison of the four classes of analytical models to the hydrodynamic simulation (grey shading) used by Turner _et al._ [24] (their Figure 3) to assess the success of their model near the commencement of lobe formation. Top panel: source length evolution. Middle panel: lobe axis ratio. Bottom panel: jet head pressure. The Scheuer [17] and Falle [18] class models are shown for a range of jet half-opening angles \(\theta_{j}\), while the Hardcastle [23] model is shown for a range of jet-head cross-sections \(\varepsilon\). The RAiSE model [24] is shown for its optimised set of parameters.
explicitly model both the thermal and ram pressure components throughout the source evolutionary history; this model accurately captures the steepening in the rate of change of jet-head pressure during the transition between these two limiting cases.
### Parameter space exploration
In this section, we compare the consistency of observable predictions between the four model classes. We select three parameters for our comparison. Source length and axis ratio are directly measurable model predictions, while synchrotron luminosity integrated over the entire lobe is a quantity which can be approximated from source dynamics - noting that detailed consideration of particle acceleration and loss processes is required for a full calculation (e.g. [44]). These radio source attributes are critical for estimating the jet energy budget through a parameter inversion of observables [22; 45].
We investigate the behaviour of the different model classes for a range of input parameters, specifically the single-jet kinetic power \(Q\), core density \(\rho_{0}\), and scale radius \(r_{c}\) of the ambient gas density profile. The base-case set of parameters is chosen to match the hydrodynamic simulation of Yates-Jones _et al._[31], but with \(\rho_{0}=10^{-23}\,\mathrm{kg\,m^{-3}}\) and \(r_{c}=100\,\mathrm{kpc}\). The jet half-opening angles for the Scheuer [17] and Falle [18] models are set to \(\theta_{j}=10^{\circ}\) and \(\theta_{j}=20^{\circ}\) respectively, as these closely match the hydrodynamic simulation evolution for the three key dynamical parameters of source length, lobe axis ratio and jet-head pressure (cf. Figure 5). For the same reasons, we set the geometric factor to \(\varepsilon=40\) in the Hardcastle [23] model. The evolutionary tracks for each model class are considered both for the base set of parameters and for a factor of ten variation to one of either the jet power (\(Q=3\times 10^{37}\), \(3\times 10^{38}\) or \(3\times 10^{39}\,\mathrm{W}\)), core density (\(\rho_{0}=10^{-24}\), \(10^{-23}\) or \(10^{-22}\,\mathrm{kg\,m^{-3}}\)), or scale radius (\(r_{c}=10\), \(100\) or \(1000\,\mathrm{kpc}\)).
Source length
The source length evolution for the four model classes are shown in Figure 6. Changes in jet power and core density largely result in a constant offset (in logarithmic space) to the source length, consistent with the expected scalings in flat atmospheres of \(R\propto Q^{1/4}\) and \(R\propto\rho_{0}^{-1/4}\) for the jet-dominated [17] model, and \(R\propto Q^{1/5}\) and \(R\propto\rho_{0}^{-1/5}\) for the lobe-dominated [18] model class. The more sophisticated models of Hardcastle [23] and Turner _et al._[24] capture the transition between these limiting cases. For the same input parameters, the Hardcastle [23] and Turner _et al._[24] models predict similar dynamical evolution for the majority of simulated sources. However, the jet-dominant phase lasts longer for high-powered jets and in denser environments in the Turner _et al._[24] model, making this model more sensitive to changes in these parameters than the other models. Variations to the scale radius of the ambient gas density profile yield qualitatively similar behaviour between the model classes, with faster expansion occurring for steeper atmospheres.
Lobe axis ratio
The evolution in the lobe axis ratio is shown in Figure 7. The Hardcastle [23] and Turner _et al._[24] models both predict the lobe axis ratio evolutionary tracks shift horizontally along the source age axis in response to variations in the jet power and core density, with lobe formation occurring earlier for less powerful jets and/or denser ambient media (e.g. see [24]). The Scheuer [17] model predicts a similar response to changes in the jet power and core density for the approximately constant ambient gas density section of the lobe axis ratio evolutionary tracks (i.e. \(\lesssim 1\,\mathrm{Myr}\)), but is inconsistent at later times once the ambient density profile begins to steepen (see Section 5.2). By contrast, variations to the scale radius produce lobe axis ratio evolution that is largely inconsistent between the different model classes. The self-similar model of Falle [18] yields a constant lobe axis ratio for all input parameters by construction.
Figure 6: Source length evolution predicted by the four classes of analytical models for a range of input intrinsic parameters. The evolutionary tracks for the base set of parameters, i.e. \(Q=3\times 10^{38}\) W, \(\rho_{0}=10^{-23}\) kg m\({}^{-3}\) and \(r_{c}=100\) kpc, are shown as solid lines. The source evolution for either a lower (\(Q=3\times 10^{37}\) W) or higher (\(Q=3\times 10^{39}\) W) jet power is shown in the top panel with dashed and dot-dashed lines respectively. The middle panel shows evolutionary tracks for either a lower (\(\rho_{0}=10^{-24}\) kg m\({}^{-3}\)) or higher (\(\rho_{0}=10^{-22}\) kg m\({}^{-3}\)) core density. The bottom panel plots the evolution for either a lower (\(r_{c}=10\) kpc) or higher (\(r_{c}=1000\) kpc) scale radius of the ambient gas density profile.
Figure 7: Lobe axis ratio evolution predicted by the four classes of analytical models for a range of input intrinsic parameters. Panels and line styles are as in Figure 6.
Figure 8: Lossless synchrotron luminosity at 151 MHz (see text for details). Panels and line styles are as in Figure 6.
Synchrotron luminosity
Radio galaxies are detectable through their synchrotron emission. In analytical radio source models, this emission is typically calculated by assuming a scaling between the lobe pressure and magnetic field, acceleration of emitting particles at the jet termination shock, and subsequent losses due to adiabatic, synchrotron and inverse-Compton losses due to upscattering of cosmic microwave background photons.
A full calculation of the radio-frequency luminosity of each lobe is beyond the scope of this review (see e.g. 22; 23; 44; 45). However, a useful estimate can be obtained by considering only adiabatic losses, which are directly related to the evolution of lobe pressure. The "lossless" luminosity calculated in this approach is related to the lobe volume and pressure as \(L_{\nu}\propto p^{(\alpha+3)/2}V\), where \(\alpha\sim 0.7\) is the spectral index of the non-aged radio spectrum. The total internal energy of the lobe, \(U\sim pV\), represents the fraction of the jet energy that is transferred to the lobe, and hence is very similar in all models for a fixed time and jet power. In this case, the synchrotron luminosity scales with pressure as \(L_{\nu}\propto p^{(\alpha+1)/2}U\).
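The scaling itself is trivial to apply; a minimal helper is sketched below, where only relative luminosities are meaningful at this level of approximation.

```python
# "Lossless" luminosity scaling, L_nu ~ p^((alpha+3)/2) V (illustrative helper).
def lossless_luminosity(p, V, alpha=0.7):
    return p**((alpha + 3.0)/2.0)*V

# At fixed internal energy U = p V, halving the lobe volume doubles the
# pressure and boosts the luminosity by 2^((alpha+1)/2) ~ 1.8:
print(lossless_luminosity(2.0, 0.5)/lossless_luminosity(1.0, 1.0))
```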
The evolution in the lossless synchrotron luminosity for each of the model classes is shown in Figure 8 at a rest-frame frequency of 151 MHz. The predictions of the Scheuer [17] model are once again inconsistent with the later models, with radio luminosities up to a factor of 100 greater than the other models at late times due to their higher lobe axis ratios, and subsequently smaller volumes and higher pressures. The remaining three models are consistent at late times - when their lobe pressures are largely derived based on changes in lobe internal energy - for all sets of input jet powers, core densities and scale radii. The Falle [18] and Hardcastle [23] model classes also agree at early times, in contrast to the Turner _et al._ [24] model which predicts higher luminosities as a result of significantly higher pressures prior to lobe formation when the system is dominated by the momentum flux of the relativistic jet. At these early times, the luminosities are a factor of 100-1000 higher than in the lobe-only models of Falle [18] and Hardcastle [23].
## 6 Concluding remarks
We have summarised and compared the main classes of analytical models describing the dynamics of kiloparsec-scale lobed radio galaxies. These models can be separated into two main classes, depending on whether the expansion of the radio source is driven by the momentum flux from the jet, or by the internal lobe pressure. We presented the Scheuer [17] and Falle [18] models, respectively, to describe the general characteristics of other literature models in either of these two classes. We also examined separately the more recent models proposed by Hardcastle [23] and Turner _et al._ [24], which combine aspects of both jet momentum flux and lobe pressure.
We compared the different model classes against each other, and with high-resolution hydrodynamic simulations, for a range of realistic input parameters. Our key findings are as follows:
* Jet momentum flux and lobe internal pressure dominate the early- and late-time radio source evolution, respectively. Both must be considered for a complete radio source model describing source dynamics after the lobe formation phase (\(\sim\)1 Myr; Section 5.2.2).
* Realistic ambient gas density profiles (i.e. not constant or power-law) produce radio sources which are inconsistent with the self-similar lobe evolution predicted by the Falle [18] class of models (Sections 4.1 and 4.2). This naturally explains the large axis ratios seen in giant radio galaxies.
* Relativistic jet dynamics is important for an accurate description of early source evolution, before the lobe formation phase (Section 5.2.2).
We make three of the four models considered in this review openly available. The code extending the Scheuer [17] and Falle [18] models to general atmospheres, and RAiSE
(version 2023), are available in our GitHub online repositories2,3. Hardcastle [23] has also made their code available; we refer the interested reader to their paper.
Footnote 2: [https://github.com/rosjturner/analytic_models](https://github.com/rosjturner/analytic_models)
Footnote 3: [https://github.com/rosjturner/RAiSEHD](https://github.com/rosjturner/RAiSEHD)
We conclude with a brief reflection on the next generation of analytical models. Existing analytical models neglect the interaction between the jet and the multi-phase interstellar medium of the host galaxy. Hydrodynamic simulations that model this interaction predict that the jet can spend \(\gtrsim 1\,\mathrm{Myr}\) in the galaxy (e.g. [46]). The majority of observed radio sources are compact and short-lived [47,48], and hence these processes are likely to be relevant to the bulk of the radio source populations. At the other end of the radio galaxy evolution scale, Hardcastle [23] pointed out that for extreme losses, such as expected in large sources at high redshift, it is possible for the majority or even all of the jet energy to be radiated away. This mechanism can potentially limit the maximum size to which a radio source can grow. Existing analytical models decouple source dynamics from the synchrotron and inverse Compton radiative loss mechanisms, and hence are not currently capable of tackling this issue.
Conceptualisation, R.T. and S.S.; methodology, R.T.; software, R.T.; validation, R.T.; formal analysis, n/a; investigation, R.T.; resources, R.T.; data curation, n/a; writing--original draft preparation, R.T. and S.S.; writing--review and editing, R.T. and S.S.; visualisation, R.T.; supervision, n/a; project administration, n/a; funding acquisition, n/a. All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
The authors declare no conflict of interest.
## Appendix A Early Jet-Lobe Models
We present a complete derivation for the lobe pressure and volume evolution of the Scheuer [17] model assuming a power-law ambient density profile (rather than their assumed constant density medium), as outlined in Section 2.
### Lobe pressure
The increase in the total energy of the cavity, \(U\), over the time interval \(\delta t\) due to the input kinetic energy \(Q\) is given in Equation 4. Scheuer [17] rewrites this first-order differential equation in terms of the jet length by defining a constant scaling between the lobe volume and this length of the form \(V(R)=\kappa_{2}R^{\alpha}\); here \(\alpha,\kappa_{2}>0\) are constants. That is,
\[\delta U=\bigg{[}Q\frac{dt}{dR}-\alpha\frac{U(\Gamma_{c}-1)(q+1)}{R}\bigg{]}\delta R, \tag{A1}\]
where the time derivative of Equation 3 gives the jet-head advance speed,
\[\begin{split}\frac{dR}{dt}&=\bigg{(}\frac{\kappa_{1}Q}{\Omega kc}\bigg{)}^{1/(4-\beta)}\bigg{(}\frac{(4-\beta)t}{2}\bigg{)}^{(\beta-2)/(4-\beta)}\\ &=\bigg{(}\frac{\kappa_{1}Q}{\Omega kc}\bigg{)}^{1/2}R^{(\beta-2)/2}.\end{split} \tag{A2}\]
The solution to Equation A1, upon substituting the second expression above for the jet-head advance speed and assuming the initial condition \(U(0)=0\) (i.e. initially zero energy in the lobe), is
\[U(R)=\frac{(Q\Omega kc)^{1/2}}{[\alpha(\Gamma_{c}-1)(q+1)+(4-\beta)/2]\kappa_{1}^{1/2}}R^{(4-\beta)/2}. \tag{A3}\]
The average lobe pressure is meanwhile related to total energy in the lobe cavity and its volume (i.e. Equation 5, or e.g. Equation 15 of Kaiser _et al._[44]). We can therefore rewrite the above expression for the total energy in terms of the lobe pressure, recalling \(V(R)=\kappa_{2}R^{\alpha}\), as
\[p(R)=\frac{(Q\Omega kc)^{1/2}(\Gamma_{c}-1)(q+1)}{[\alpha(\Gamma_{c}-1)(q+1)+(4-\beta)/2]\kappa_{1}^{1/2}\kappa_{2}}R^{(4-\beta-2\alpha)/2}. \tag{A4}\]
This relationship is presented in Equation 6 of Section 2 as a function of the source age upon the further substitution of Equation 3.
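As a consistency check (our addition, not part of the original derivation), the following snippet verifies symbolically that the energy of Equation A3 solves Equation A1 with the advance speed of Equation A2.

```python
# Symbolic check that Equation A3 satisfies Equation A1 (sketch using sympy).
import sympy as sp

R, Q, Om, k, c, k1, al, Gc, q, b = sp.symbols(
    'R Q Omega k c kappa_1 alpha Gamma_c q beta', positive=True)

g = (Gc - 1)*(q + 1)                    # shorthand for (Gamma_c - 1)(q + 1)
s = (4 - b)/2
U = sp.sqrt(Q*Om*k*c)/((al*g + s)*sp.sqrt(k1))*R**s        # Equation A3
dt_dR = sp.sqrt(Om*k*c/(k1*Q))*R**((2 - b)/2)              # from Equation A2
residual = sp.diff(U, R) + al*g*U/R - Q*dt_dR              # Equation A1 rearranged
print(sp.simplify(residual))                               # -> 0
```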
### Lobe volume
The sidewards expansion rate of the lobe is derived by equating the internal pressure to the ram pressure presented by the ambient medium as the lobe widens; i.e. \(\rho v_{\perp}^{2}=p(t)\) where the ambient gas density is reasonably approximated as \(\rho\sim kR^{-\beta}\) for somewhat spherical lobes. As discussed in Section 2, the width of the lobe at some location \(r\) along the jet axis is
\[\begin{split} R_{\perp}(r)&=\int_{t(r)}^{t(R)}v_{\perp}(t^{*})dt^{*}\\ &=\int_{r}^{R}v_{\perp}(R^{*})\frac{dt}{dR^{*}}dR^{*},\end{split} \tag{A5}\]
where \(t(r)\) is the time when the jet-head reached the location \(r\) along the jet axis, and \(t(R)\) is the current time (i.e. the jet-head has length \(R\)). The change of variables in the second equality allows the width of the lobe to be evaluated in terms of known limits and the pressure in Equation A4. The integral is evaluated upon substitution of Equations A2 and A4,
\[\begin{split} R_{\perp}(r)&=\frac{4(\Omega c)^{3/4}k^{1/4}}{(12-5\beta-2\alpha)\kappa_{1}^{3/4}\kappa_{2}^{1/2}Q^{1/4}}\bigg{[}\frac{(\Gamma_{c}-1)(q+1)}{\alpha(\Gamma_{c}-1)(q+1)+(4-\beta)/2}\bigg{]}^{1/2}\\ &\qquad\qquad\times(R^{(12-5\beta-2\alpha)/4}-r^{(12-5\beta-2\alpha)/4}).\end{split} \tag{A6}\]
The lobe volume is derived in Equation 8 of Section 2 by integrating this expression over all locations \(r\) along the jet axis.
|
2302.07874 | C14 Automatic Imaging Telescope Photometry of GJ1214 | GJ1214b is the highest signal-to-noise sub-Neptune for atmospheric studies. Although most previous transmission spectroscopy measurements have revealed a frustratingly featureless spectrum, JWST observations are expected to give new insights to this benchmark planet. We have performed photometric monitoring of GJ1214 (the host star) to provide context for these observations. We find that GJ1214 entered a period of relatively high brightness during 2021 and 2022. This implies that the JWST MIRI/LRS phase curve observation of GJ1214b in July 2022 was obtained during an epoch of low activity for the spot-dominated host star. Like previous works, we are unable to definitively identify the star's rotation period. Nevertheless, we confirm that it is likely >50 days. | Gregory W. Henry, Jacob L. Bean | 2023-02-14T19:17:04Z | http://arxiv.org/abs/2302.07874v1 | # C14 Automatic Imaging Telescope Photometry of GJ 1214
###### Abstract
GJ 1214b is the highest signal-to-noise sub-Neptune for atmospheric studies. Although most previous transmission spectroscopy measurements have revealed a frustratingly featureless spectrum, _JWST_ observations are expected to give new insights to this benchmark planet. We have performed photometric monitoring of GJ 1214 (the host star) to provide context for these observations. We find that GJ 1214 entered a period of relatively high brightness during 2021 and 2022. This implies that the _JWST_ MIRI/LRS phase curve observation of GJ 1214b in July 2022 was obtained during an epoch of low activity for the spot-dominated host star. Like previous works, we are unable to definitively identify the star's rotation period. Nevertheless, we confirm that it is likely \(\gtrsim\)50 days.
Planet hosting stars (1242) -- Stellar rotation (1629)
Gregory W. Henry, Jacob L. Bean
## 1 Introduction
The transiting exoplanet GJ 1214b was discovered by Charbonneau et al. (2009) with the MEarth Project array of eight 0.40 m automated telescopes designed to monitor a large number of nearby M dwarfs for transiting exoplanets. They found GJ 1214b to have a planetary mass of 6.55 M\({}_{\oplus}\), a radius of 2.68 R\({}_{\oplus}\), and an orbital period of 1.58 days. Originally classified as a super-Earth, consideration of GJ 1214b in light of the _Kepler_ planet demographics (Fulton et al., 2017; Van Eylen et al., 2018) suggests it is better thought of as a sub-Neptune (Bean et al., 2021). Its low density of 1.9 g/cc implies the presence of a substantial atmosphere (Rogers & Seager, 2010). There has been extensive effort to detect this atmosphere using transmission spectroscopy (e.g., Bean et al., 2010; Croll et al., 2011; Bean et al., 2011; Desert et al., 2011; Berta et al., 2012; Fraine et al., 2013; Kreidberg et al., 2014; Kasper et al., 2020; Orell-Miquel et al., 2022; Spake et al., 2022), with the consensus being that the planet has a featureless spectrum due to high-altitude aerosols.
The 2009 MEarth photometry found the star to vary in brightness by 2% on a timescale of several weeks (with a dominant period of 83 days). Charbonneau et al. (2009) concluded that starspots carried around the star by its rotation was the most likely explanation. Carter et al. (2011) and Kreidberg et al. (2014) observed 31 transits between 2009 and 2013 and found that four transits exhibited brightness anomalies as the planet occulted a starspot, confirming the presence of dark spots as the cause of the star's brightness variability.
Subsequent studies have confirmed the low-amplitude stellar variability but have not been very successful at pinning down the true stellar rotation period. Berta et al. (2011) analyzed new 2010 MEarth photometry with better sampling and cadence than the 2009 MEarth discovery observations and found a best period of 53 days. However, they cautioned that if GJ 1214 has well-spaced active longitudes, its true rotation period may be a higher multiple of 53 days (e.g., \(\approx\)100 days).
Narita et al. (2013) monitored GJ 1214 for stellar variability over the relatively short timespan of 78 days in 2012 with the MITSuME 0.50 m telescope in Japan and found a shorter period of 44.3 days. Additional photometric monitoring in 2012 and 2013 was reported by Nascimbeni et al. (2015), who used the 1.2 m twin robotic telescopes STELLA (STELLar Activity) located on Tenerife in the Canary Islands. They found possible periods of 83.0, 69.0, and 79.6 days. Mallonn et al. (2018) continued long-term monitoring of GJ 1214 with STELLA to create light curves from 2012 through 2016 primarily in the Johnson \(BV\) pass bands. Their most
significant signal was \(125\pm 5\) days for the 2014-2016 \(B+V\) data set, which they claimed overrules previous suggestions of a significantly shorter stellar rotation period.
## 2 Observations
In an attempt to determine the correct rotation period of GJ 1214, we conducted our own photometric observations with the Tennessee State University Celestron 14-inch (C14) automated imaging telescope (AIT) located at Fairborn Observatory in southern Arizona. We acquired 329 good nightly photometric observations (excluding occasional transit observations) during the five observing seasons 2018 through 2022. The observations were made through a Cousins \(R\) filter with an SBIG STL-1001E CCD camera. Each nightly observation consists of 3-5 consecutive exposures of the GJ 1214 field of view. The individual frames are co-added and reduced to differential magnitudes in the sense GJ 1214 minus the mean brightness of 13 constant comparison stars in the same field. Further details of our observing and data reduction procedures can be found in Sing et al. (2015).
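For concreteness, the sketch below illustrates the differential-magnitude step of this reduction; the flux values and array shapes are illustrative assumptions rather than the actual AIT pipeline.

```python
# Illustrative nightly reduction: co-add exposures and form differential mags.
import numpy as np

def nightly_differential_mag(target_fluxes, comp_fluxes):
    """GJ 1214 minus the mean magnitude of the 13 comparison stars.

    target_fluxes : (n_exp,) target counts for 3-5 consecutive exposures
    comp_fluxes   : (n_exp, 13) counts for the comparison stars
    """
    m_target = -2.5*np.log10(np.sum(target_fluxes))
    m_comps = -2.5*np.log10(np.sum(comp_fluxes, axis=0))
    return m_target - np.mean(m_comps)

rng = np.random.default_rng(1)
target = 1.0e3*(1 + 0.003*rng.standard_normal(4))          # fake counts, 4 exposures
comps = 1.0e4*(1 + 0.003*rng.standard_normal((4, 13)))
print(f"differential magnitude = {nightly_differential_mag(target, comps):+.4f}")
```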
The nightly observations are plotted as small filled circles in Figure 1. Since GJ 1214 comes to opposition with the Sun on June 11, much of the observing season occurs during our annual Summer Shutdown when all telescopes must be closed from early July to early September due to the "monsoon season" in southern Arizona. Thus, there are gaps in each year's light curve when no observations can be collected. The 2018 observing season is an exception since our initial observations did not take place until after the 2018 Shutdown. The yearly mean differential observations are also plotted in Figure 1 as the large filled circles and include the observations on both sides of the Summer Shutdown. The uncertainties in the seasonal means are roughly the size of the plot symbols.
The observations are summarized by season in Table 1. The standard deviations of the individual observations from their respective seasonal means are given in column 4 and range from 6.44 to 9.96 mmag. The typical precision of a single nightly observation with the C14 AIT is 2-3 mmag on good nights (e.g., Fu et al., 2021), so the standard deviations given in column 4 indicate the presence of low-level, night-to-night brightness variability in GJ 1214 during each observing season. The seasonal means given in column 5 cover a range of 68 mmag, showing a general brightening trend in GJ 1214 of several percent over our five years of observation. Finally, we performed period analyses of the individual yearly light curves using the procedure described in Wong et al. (2022). The resulting best periods are given in column 6 and range between 56.4 and 99.6 days. The period listed for 2018 is particularly uncertain due to the low number of observations. Phase curves of the five observing seasons are plotted with their five individual periods in Figure 2. Peak-to-peak amplitudes are given in each panel and range from 10 to 20 mmag. Like previously published results, we are unable to confidently identify the true stellar rotation period of GJ 1214.
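For readers wishing to experiment, the sketch below runs a simple seasonal period search on synthetic data; a generalised Lomb-Scargle periodogram is used here as a stand-in for the Wong et al. (2022) procedure.

```python
# Illustrative seasonal period search on synthetic spot-modulated photometry.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 180.0, 70))           # one observing season [days]
mag = 0.008*np.sin(2*np.pi*t/60.0) + 0.003*rng.standard_normal(t.size)

frequency, power = LombScargle(t, mag).autopower(minimum_frequency=1/150.0,
                                                 maximum_frequency=1/20.0)
print(f"best period = {1.0/frequency[np.argmax(power)]:.1f} d")   # ~60 d injected
```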
## 3 Conclusion
We can, however, use our photometric results to predict the starspot coverage at the time of the _JWST_ MIRI/LRS phase curve observation of GJ 1214b that was taken in July 2022 (Bean et al., 2021). Radick et al. (2018) examined the patterns of brightness variation for 72 Sun-like stars and demonstrated that the brightness variability in young, active (and therefore convective) stars is driven by dark spots, in the sense that a star is fainter when it is more active (spotted). The combination of Figure 1 and the bottom panel of Figure 2 shows that GJ 1214 was near a long-term as well as a short-term brightness maximum. In other words, the star was near starspot minimum at the time of the _JWST_ phase curve observation. Continued photometric monitoring would be valuable to provide context for the upcoming NIRCam transmission spectroscopy observations of Greene et al. (2017), which are currently scheduled for July and August 2023.
|
2308.03491 | p-Summing Bloch mappings on the complex unit disc | The notion of $p$-summing Bloch mapping from the complex unit open disc $\mathbb{D}$ into a complex Banach space $X$ is introduced for any $1\leq p\leq\infty$. It is shown that the linear space of such mappings, equipped with a natural seminorm $\pi^{\mathbb{B}}_p$, is M\"obius-invariant. Moreover, its subspace consisting of all those mappings which preserve the zero is an injective Banach ideal of normalized Bloch mappings. Bloch versions of the Pietsch's domination/factorization Theorem and the Maurey's extrapolation Theorem are presented. We also introduce the spaces of $X$-valued Bloch molecules on $\mathbb{D}$ and identify the spaces of normalized $p$-summing Bloch mappings from $\mathbb{D}$ into $X^*$ under the norm $\pi^{\mathbb{B}}_p$ with the duals of such spaces of molecules under the Bloch version of the $p$-Chevet--Saphar tensor norms $d_p$. | M. G. Cabrera-Padilla, A. Jiménez-Vargas, D. Ruiz-Casternado | 2023-08-07T11:33:07Z | http://arxiv.org/abs/2308.03491v2 | # \(p\)-summing Bloch mappings on the complex unit disc
###### Abstract.
The notion of \(p\)-summing Bloch mapping from the complex unit open disc \(\mathbb{D}\) into a complex Banach space \(X\) is introduced for any \(1\leq p\leq\infty\). It is shown that the linear space of such mappings, equipped with a natural seminorm \(\pi_{p}^{\mathcal{B}}\), is Mobius-invariant. Moreover, its subspace consisting of all those mappings which preserve the zero is an injective Banach ideal of normalized Bloch mappings. Bloch versions of the Pietsch's domination/factorization Theorem and the Maurey's extrapolation Theorem are presented. We also introduce the spaces of \(X\)-valued Bloch molecules on \(\mathbb{D}\) and identify the spaces of normalized \(p\)-summing Bloch mappings from \(\mathbb{D}\) into \(X^{*}\) under the norm \(\pi_{p}^{\mathcal{B}}\) with the duals of such spaces of molecules under the Bloch version of the \(p\)-Chevet-Saphar tensor norms \(d_{p}\).
Key words and phrases: Vector-valued Bloch mapping, compact Bloch mapping, Banach-valued Bloch molecule, Bloch-free Banach space.
2020 Mathematics Subject Classification: 30H30, 46E15, 46E40, 47B38.
Research of the first two authors was partially supported by Junta de Andalucía grant FQM194, and by grant PID2021-122126NB-C31 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe".
Motivated by the theory of absolutely \(p\)-summing linear operators, given \(1\leq p<\infty\), a Bloch mapping \(f\colon\mathbb{D}\to X\) is said to be _\(p\)-summing_ if there exists a constant \(c\geq 0\) such that
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|f^{\prime}(z_{i})\right\|^{p}\right)^{\frac{1}{p}}\leq c\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|g^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}}\]
for all \(n\in\mathbb{N}\), \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{C}\) and \(z_{1},\ldots,z_{n}\in\mathbb{D}\) (with the sums replaced by maxima when \(p=\infty\)). The least of all the constants \(c\) for which such an inequality holds, denoted \(\pi_{p}^{\mathcal{B}}(f)\), defines a seminorm on the linear space, denoted \(\Pi_{p}^{\mathcal{B}}(\mathbb{D},X)\), of all \(p\)-summing Bloch mappings \(f\colon\mathbb{D}\to X\). Furthermore, this seminorm becomes a norm on the subspace \(\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X)\) consisting of all those mappings \(f\in\Pi_{p}^{\mathcal{B}}(\mathbb{D},X)\) so that \(f(0)=0\).
These spaces enjoy nice properties in both complex and functional analytical frameworks. In the former setting, we show that the space \((\Pi_{p}^{\mathcal{B}}(\mathbb{D},X),\pi_{p}^{\mathcal{B}})\) is invariant by Mobius transformations of \(\mathbb{D}\). In the latter context and in a clear parallelism with the theory of absolutely \(p\)-summing linear operators (see [6, Chapter 2]), we prove that \([\Pi_{p}^{\widehat{\mathcal{B}}},\pi_{p}^{\mathcal{B}}]\) is an injective Banach ideal of normalized Bloch mappings whose elements can be characterized by means of Pietsch domination/factorization. Applying this Pietsch domination, we present a Bloch version of Maurey's extrapolation Theorem [10].
On the other hand, the known duality of the Bloch spaces (see [1, 3, 15]) is extended to the spaces \((\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*}),\pi_{p}^{\mathcal{B}})\) by identifying them with the duals of the spaces of the so-called _\(X\)-valued Bloch molecules on \(\mathbb{D}\)_, equipped with the Bloch versions of the \(p\)-Chevet-Saphar tensor norms \(d_{p}\). We finish the paper with some open problems.
The proofs of some of our results are similar to those of their corresponding linear versions, but a detailed reading of them shows that the adaptation of the linear techniques to the Bloch setting is far from being simple. Our approach depends mainly on the application of some concepts and results concerning the theory on a strongly unique predual of the space \(\widehat{\mathcal{B}}(\mathbb{D})\), called _Bloch-free Banach space over \(\mathbb{D}\)_ that was introduced in [9].
**Notation.** For two normed spaces \(X\) and \(Y\), \(\mathcal{L}(X,Y)\) denotes the normed space of all bounded linear operators from \(X\) to \(Y\), equipped with the operator canonical norm. In particular, the topological dual space \(\mathcal{L}(X,\mathbb{C})\) is denoted by \(X^{*}\). For \(x\in X\) and \(x^{*}\in X^{*}\), we will sometimes write \(\langle x^{*},x\rangle=x^{*}(x)\). As usual, \(B_{X}\) and \(S_{X}\) stand for the closed unit ball of \(X\) and the unit sphere of \(X\), respectively. Let \(\mathbb{T}\) and \(\mathbb{D}\) denote the unit sphere and the unit open disc of \(\mathbb{C}\), respectively.
Given \(1\leq p\leq\infty\), let \(p^{*}\) denote the _conjugate index of \(p\)_ defined by
\[p^{*}=\left\{\begin{array}{ccc}\infty&\text{if}&p=1,\\ p/(p-1)&\text{if}&1<p<\infty,\\ 1&\text{if}&p=\infty.\end{array}\right.\]
## 1. \(p\)-Summing Bloch mappings on the unit disc
This section gathers the most important properties of \(p\)-summing Bloch mappings on \(\mathbb{D}\). From now on, unless otherwise stated, \(X\) will denote a complex Banach space.
### Inclusions
We will first establish some useful inclusion relations. Compare to [13, Satz 5].
The following class of Bloch functions will be used throughout the paper. For each \(z\in\mathbb{D}\), the function \(f_{z}\colon\mathbb{D}\to\mathbb{C}\) defined by
\[f_{z}(w)=\frac{(1-|z|^{2})w}{1-\overline{z}w}\qquad(w\in\mathbb{D}),\]
belongs to \(\widehat{\mathcal{B}}(\mathbb{D})\) with \(p_{\mathcal{B}}(f_{z})=1=(1-|z|^{2})f_{z}^{\prime}(z)\) (see [9, Proposition 2.2]).
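As a quick numerical illustration (our addition, not needed for the proofs), the snippet below checks the normalisation \((1-|z|^{2})f_{z}^{\prime}(z)=1\) at a few sample points.

```python
# Numerical check that (1-|z|^2) f_z'(z) = 1 for f_z(w) = (1-|z|^2) w / (1 - conj(z) w).
import numpy as np

def fz_prime(z, w):
    # f_z'(w) = (1-|z|^2)/(1 - conj(z) w)^2
    return (1.0 - abs(z)**2)/(1.0 - np.conj(z)*w)**2

for z in (0.3 + 0.4j, -0.7j, 0.9 + 0.0j):
    value = (1.0 - abs(z)**2)*abs(fz_prime(z, z))
    print(f"|z| = {abs(z):.2f} : (1 - |z|^2)|f_z'(z)| = {value:.12f}")
```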
**Proposition 1.1**.: _Let \(1\leq p<q\leq\infty\). Then \(\Pi_{p}^{\mathcal{B}}(\mathbb{D},X)\subseteq\Pi_{q}^{\mathcal{B}}(\mathbb{D},X)\) with \(\pi_{q}^{\mathcal{B}}(f)\leq\pi_{p}^{\mathcal{B}}(f)\) for all \(f\in\Pi_{p}^{\mathcal{B}}(\mathbb{D},X)\). Moreover, \(\Pi_{\infty}^{\mathcal{B}}(\mathbb{D},X)=\mathcal{B}(\mathbb{D},X)\) with \(\pi_{\infty}^{\mathcal{B}}(f)=p_{\mathcal{B}}(f)\) for all \(f\in\Pi_{\infty}^{\mathcal{B}}(\mathbb{D},X)\)._
Proof.: Let \(n\in\mathbb{N}\), \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{C}\) and \(z_{1},\ldots,z_{n}\in\mathbb{D}\). We will first prove the second assertion. Let \(f\in\Pi^{\mathcal{B}}_{\infty}(\mathbb{D},X)\). For all \(z\in\mathbb{D}\), we have
\[(1-|z|^{2})\left\|f^{\prime}(z)\right\|\leq\pi^{\mathcal{B}}_{\infty}(f)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}(1-|z|^{2})\left|g^{\prime}(z)\right|=\pi^{\mathcal{B}}_{\infty}(f),\]
hence \(f\in\mathcal{B}(\mathbb{D},X)\) with \(p_{\mathcal{B}}(f)\leq\pi^{\mathcal{B}}_{\infty}(f)\). Conversely, let \(f\in\mathcal{B}(\mathbb{D},X)\). For \(i=1,\ldots,n\), we have
\[|\lambda_{i}|\left\|f^{\prime}(z_{i})\right\|\leq\frac{|\lambda_{i}|}{1-|z_{i}|^{2}}p_{\mathcal{B}}(f)=|\lambda_{i}|\left|f^{\prime}_{z_{i}}(z_{i})\right|p_{\mathcal{B}}(f)\leq p_{\mathcal{B}}(f)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}|\lambda_{i}|\left|g^{\prime}(z_{i})\right|,\]
this implies that
\[\max_{1\leq i\leq n}|\lambda_{i}|\left\|f^{\prime}(z_{i})\right\|\leq p_{\mathcal{B}}(f)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\max_{1\leq i\leq n}|\lambda_{i}|\left|g^{\prime}(z_{i})\right|\right),\]
and thus \(f\in\Pi^{\mathcal{B}}_{\infty}(\mathbb{D},X)\) with \(\pi^{\mathcal{B}}_{\infty}(f)\leq p_{\mathcal{B}}(f)\).
To prove the first assertion, let \(f\in\Pi^{\mathcal{B}}_{p}(\mathbb{D},X)\). Assume \(q<\infty\). Taking \(\mu_{i}=|\lambda_{i}|^{q/p}\left\|f^{\prime}(z_{i})\right\|^{(q/p)-1}\) for \(i=1,\ldots,n\), we have
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{q}\left\|f^{\prime}(z_{i})\right\|^{q}\right)^{\frac{1}{p}}=\left(\sum_{i=1}^{n}|\mu_{i}|^{p}\left\|f^{\prime}(z_{i})\right\|^{p}\right)^{\frac{1}{p}}\leq\pi^{\mathcal{B}}_{p}(f)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\mu_{i}|^{p}\left|g^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}}.\]
Since \(q/p>1\) and \((q/p)^{*}=q/(q-p)\), Holder Inequality yields
\[\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\mu_{i}|^{p}\left|g^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}} =\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}(|\lambda_{i}|\left\|f^{\prime}(z_{i})\right\|)^{q-p}(|\lambda_{i}|\left|g^{\prime}(z_{i})\right|)^{p}\right)^{\frac{1}{p}}\] \[\leq\left(\sum_{i=1}^{n}|\lambda_{i}|^{q}\left\|f^{\prime}(z_{i})\right\|^{q}\right)^{\frac{1}{p}-\frac{1}{q}}\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{q}\left|g^{\prime}(z_{i})\right|^{q}\right)^{\frac{1}{q}},\]
and thus we obtain
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{q}\left\|f^{\prime}(z_{i})\right\|^{q}\right)^{\frac{1}{q}}\leq\pi^{\mathcal{B}}_{p}(f)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{q}\left|g^{\prime}(z_{i})\right|^{q}\right)^{\frac{1}{q}}.\]
This shows that \(f\in\Pi^{\mathcal{B}}_{q}(\mathbb{D},X)\) with \(\pi^{\mathcal{B}}_{q}(f)\leq\pi^{\mathcal{B}}_{p}(f)\) if \(q<\infty\). For the case \(q=\infty\), note that
\[(1-|z|^{2})\left\|f^{\prime}(z)\right\|\leq\pi^{\mathcal{B}}_{p}(f)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}(1-|z|^{2})\left|g^{\prime}(z)\right|=\pi^{\mathcal{B}}_{p}(f)\]
for all \(z\in\mathbb{D}\), and therefore \(f\in\mathcal{B}(\mathbb{D},X)=\Pi^{\mathcal{B}}_{\infty}(\mathbb{D},X)\) with \(\pi^{\mathcal{B}}_{\infty}(f)=p_{\mathcal{B}}(f)\leq\pi^{\mathcal{B}}_{p}(f)\).
### Injective Banach ideal property
Let us recall (see [9, Definition 5.11]) that a _normalized Bloch ideal_ is a subclass \(\mathcal{I}^{\widetilde{\mathcal{B}}}\) of the class of all normalized Bloch mappings \(\widehat{\mathcal{B}}\) such that for every complex Banach space \(X\), the components
\[\mathcal{I}^{\widetilde{\mathcal{B}}}(\mathbb{D},X):=\mathcal{I}^{\widetilde{ \mathcal{B}}}\cap\widehat{\mathcal{B}}(\mathbb{D},X),\]
satisfy the following properties:
1. \(\mathcal{I}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\) is a linear subspace of \(\widehat{\mathcal{B}}(\mathbb{D},X)\),
2. For every \(g\in\widehat{\mathcal{B}}(\mathbb{D})\) and \(x\in X\), the mapping \(g\cdot x\colon z\mapsto g(z)x\) from \(\mathbb{D}\) to \(X\) is in \(\mathcal{I}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\),
3. The _ideal property_: if \(f\in\mathcal{I}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\), \(h\colon\mathbb{D}\to\mathbb{D}\) is a holomorphic function with \(h(0)=0\) and \(T\in\mathcal{L}(X,Y)\) where \(Y\) is a complex Banach space, then \(T\circ f\circ h\) belongs to \(\mathcal{I}^{\widetilde{\mathcal{B}}}(\mathbb{D},Y)\).
A normalized Bloch ideal \(\mathcal{I}^{\widetilde{\mathcal{B}}}\) is said to be _normed (Banach)_ if there is a function \(\left\|\cdot\right\|_{\mathcal{I}^{\widetilde{\mathcal{B}}}}\colon\mathcal{I}^{\widetilde{\mathcal{B}}}\to\mathbb{R}_{0}^{+}\) such that for every complex Banach space \(X\), the following three conditions are satisfied:
1. \((\mathcal{I}^{\widetilde{\mathcal{B}}}(\mathbb{D},X),\left\|\cdot\right\|_{ \mathcal{I}^{\widetilde{\mathcal{B}}}})\) is a normed (Banach) space with \(p_{\mathcal{B}}(f)\leq\left\|f\right\|_{\mathcal{I}^{\widetilde{\mathcal{B}}}}\) for all \(f\in\mathcal{I}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\),
2. \(\left\|g\cdot x\right\|_{\mathcal{I}^{\widetilde{\mathcal{B}}}}=p_{\mathcal{ B}}(g)\left\|x\right\|\) for all \(g\in\widetilde{\mathcal{B}}(\mathbb{D})\) and \(x\in X\),
3. If \(h\colon\mathbb{D}\to\mathbb{D}\) is a holomorphic function with \(h(0)=0\), \(f\in\mathcal{I}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\) and \(T\in\mathcal{L}(X,Y)\) where \(Y\) is a complex Banach space, then \(\left\|T\circ f\circ h\right\|_{\mathcal{I}^{\widetilde{\mathcal{B}}}}\leq \left\|T\right\|\left\|f\right\|_{\mathcal{I}^{\widetilde{\mathcal{B}}}}\).
A normed normalized Bloch ideal \([\mathcal{I}^{\widetilde{\mathcal{B}}},\left\|\cdot\right\|_{\mathcal{I}^{ \widetilde{\mathcal{B}}}}]\) is said to be:
1. _Injective_ if for any mapping \(f\in\widetilde{\mathcal{B}}(\mathbb{D},X)\), any complex Banach space \(Y\) and any isometric linear embedding \(\iota\colon X\to Y\), we have that \(f\in\mathcal{I}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\) with \(\left\|f\right\|_{\mathcal{I}^{\widetilde{\mathcal{B}}}}=\left\|\iota\circ f \right\|_{\mathcal{I}^{\widetilde{\mathcal{B}}}}\) whenever \(\iota\circ f\in\mathcal{I}^{\widetilde{\mathcal{B}}}(\mathbb{D},Y)\).
We are now ready to establish the following result, which can be compared to [13, Sätze 1–4].
**Proposition 1.2**.: \([\Pi_{p}^{\widetilde{\mathcal{B}}},\pi_{p}^{\mathcal{B}}]\) _is an injective Banach normalized Bloch ideal for any \(1\leq p\leq\infty\)._
Proof.: Given a complex Banach space \(X\), note that \(\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\subseteq\widehat{\mathcal{B} }(\mathbb{D},X)\) with \(p_{\mathcal{B}}(f)\leq\pi_{p}^{\mathcal{B}}(f)\) for all \(f\in\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\) by Proposition 1.1.
We will only prove the case \(1<p<\infty\). The cases \(p=1\) and \(p=\infty\) follow similarly. Let \(n\in\mathbb{N}\), \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{C}\) and \(z_{1},\ldots,z_{n}\in\mathbb{D}\).
(N1) If \(f\in\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\) and \(\pi_{p}^{\mathcal{B}}(f)=0\), then \(p_{\mathcal{B}}(f)=0\), and so \(f=0\). Given \(f_{1},f_{2}\in\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\), we have
\[\left(\sum_{i=1}^{n}\left|\lambda_{i}\right|^{p}\left\|(f_{1}+f_{2})^{\prime}(z_{i})\right\|^{p}\right)^{\frac{1}{p}}\leq\left(\sum_{i=1}^{n}\left|\lambda_{i}\right|^{p}\left(\left\|f_{1}^{\prime}(z_{i})\right\|+\left\|f_{2}^{\prime}(z_{i})\right\|\right)^{p}\right)^{\frac{1}{p}}\leq\left(\sum_{i=1}^{n}\left|\lambda_{i}\right|^{p}\left\|f_{1}^{\prime}(z_{i})\right\|^{p}\right)^{\frac{1}{p}}+\left(\sum_{i=1}^{n}\left|\lambda_{i}\right|^{p}\left\|f_{2}^{\prime}(z_{i})\right\|^{p}\right)^{\frac{1}{p}}\leq\left(\pi_{p}^{\mathcal{B}}(f_{1})+\pi_{p}^{\mathcal{B}}(f_{2})\right)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}\left|\lambda_{i}\right|^{p}\left|g^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}},\]
and therefore \(f_{1}+f_{2}\in\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\) with \(\pi_{p}^{\mathcal{B}}(f_{1}+f_{2})\leq\pi_{p}^{\mathcal{B}}(f_{1})+\pi_{p}^{ \mathcal{B}}(f_{2})\).
Let \(\lambda\in\mathbb{C}\) and \(f\in\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\). We have
\[\left(\sum_{i=1}^{n}\left|\lambda_{i}\right|^{p}\left\|(\lambda f)^{\prime}(z_{ i})\right\|^{p}\right)^{\frac{1}{p}}=\left|\lambda\right|\left(\sum_{i=1}^{n} \left|\lambda_{i}\right|^{p}\left\|f^{\prime}(z_{i})\right\|^{p}\right)^{\frac{1 }{p}}\leq\left|\lambda\right|\pi_{p}^{\mathcal{B}}(f)\sup_{g\in\widetilde{B}_{ \widetilde{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}\left|\lambda_{i} \right|^{p}\left|g^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}}\]
and thus \(\lambda f\in\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\) with \(\pi_{p}^{\mathcal{B}}(\lambda f)\leq\left|\lambda\right|\pi_{p}^{\mathcal{B}}(f)\). This implies that \(\pi_{p}^{\mathcal{B}}(\lambda f)=0=\left|\lambda\right|\pi_{p}^{\mathcal{B}}(f)\) if \(\lambda=0\). For \(\lambda\neq 0\), we have \(\pi_{p}^{\mathcal{B}}(f)=\pi_{p}^{\mathcal{B}}(\lambda^{-1}(\lambda f))\leq\left| \lambda\right|^{-1}\pi_{p}^{\mathcal{B}}(\lambda f)\), hence \(\left|\lambda\right|\pi_{p}^{\mathcal{B}}(f)\leq\pi_{p}^{\mathcal{B}}(\lambda f)\), and so \(\pi_{p}^{\mathcal{B}}(\lambda f)=\left|\lambda\right|\pi_{p}^{\mathcal{B}}(f)\). Thus we have proved that \(\left(\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},X),\pi_{p}^{\mathcal{B}}\right)\) is a normed space.
To show that it is a Banach space, it is enough to see that every absolutely convergent series is convergent. So let \((f_{n})_{n\geq 1}\) be a sequence in \(\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\) such that \(\sum\pi_{p}^{\mathcal{B}}(f_{n})\) converges. Since \(p_{\mathcal{B}}(f_{n})\leq\pi_{p}^{\mathcal{B}}(f_{n})\) for all \(n\in\mathbb{N}\) and \(\left(\widehat{\mathcal{B}}(\mathbb{D},X),p_{\mathcal{B}}\right)\) is a Banach space, then \(\sum f_{n}\) converges in \(\left(\widehat{\mathcal{B}}(\mathbb{D},X),p_{\mathcal{B}}\right)\) to
a function \(f\in\widehat{\mathcal{B}}(\mathbb{D},X)\). Given \(m\in\mathbb{N}\), \(z_{1},\ldots,z_{m}\in\mathbb{D}\) and \(\lambda_{1},\ldots,\lambda_{m}\in\mathbb{C}\), we have
\[\left(\sum_{k=1}^{m}|\lambda_{k}|^{p}\left\|\sum_{i=1}^{n}f_{i}^{ \prime}(z_{k})\right\|^{p}\right)^{\frac{1}{p}} \leq\pi_{p}^{\mathcal{B}}\left(\sum_{i=1}^{n}f_{i}\right)\sup_{g \in B_{\widehat{\mathbb{B}}(\mathbb{D})}}\left(\sum_{k=1}^{m}|\lambda_{k}|^{p} \left|g^{\prime}(z_{k})\right|^{p}\right)^{\frac{1}{p}}\] \[\leq\sum_{i=1}^{n}\pi_{p}^{\mathcal{B}}(f_{i})\sup_{g\in B_{ \widehat{\mathbb{B}}(\mathbb{D})}}\left(\sum_{k=1}^{m}|\lambda_{k}|^{p}\left| g^{\prime}(z_{k})\right|^{p}\right)^{\frac{1}{p}}\]
for all \(n\in\mathbb{N}\); taking the limit as \(n\to\infty\) yields
\[\left(\sum_{k=1}^{m}|\lambda_{k}|^{p}\left\|\sum_{i=1}^{\infty}f_{i}^{\prime }(z_{k})\right\|^{p}\right)^{\frac{1}{p}}\leq\sum_{i=1}^{\infty}\pi_{p}^{ \mathcal{B}}(f_{i})\sup_{g\in B_{\widehat{\mathbb{B}}(\mathbb{D})}}\left(\sum _{k=1}^{m}|\lambda_{k}|^{p}\left|g^{\prime}(z_{k})\right|^{p}\right)^{\frac{1} {p}}.\]
Hence \(f\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X)\) with \(\pi_{p}^{\mathcal{B}}(f)\leq\sum_{n=1}^{\infty}\pi_{p}^{\mathcal{B}}(f_{n})\). Moreover, we have
\[\pi_{p}^{\mathcal{B}}\left(f-\sum_{i=1}^{n}f_{i}\right)=\pi_{p}^{\mathcal{B}} \left(\sum_{i=n+1}^{\infty}f_{i}\right)\leq\sum_{i=n+1}^{\infty}\pi_{p}^{ \mathcal{B}}(f_{i})\]
for all \(n\in\mathbb{N}\), and thus \(f\) is the \(\pi_{p}^{\mathcal{B}}\)-limit of the series \(\sum f_{n}\).
(N2) Let \(g\in\widehat{\mathcal{B}}(\mathbb{D})\) and \(x\in X\). Let us recall that \(g\cdot x\in\widehat{\mathcal{B}}(\mathbb{D},X)\) with \(p_{\mathcal{B}}(g\cdot x)=p_{\mathcal{B}}(g)\left\|x\right\|\) by [9, Proposition 5.13]. If \(g=0\), there is nothing to prove. Assume \(g\neq 0\). We have
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|(g\cdot x)^{\prime}(z_ {i})\right\|^{p}\right)^{\frac{1}{p}} =\left\|x\right\|p_{\mathcal{B}}(g)\left(\sum_{i=1}^{n}|\lambda_{i }|^{p}\left|\left(\frac{g}{p_{\mathcal{B}}(g)}\right)^{\prime}(z_{i})\right| ^{p}\right)^{\frac{1}{p}}\] \[\leq\left\|x\right\|p_{\mathcal{B}}(g)\sup_{h\in B_{\widehat{ \mathbb{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|h^{\prime }(z_{i})\right|^{p}\right)^{\frac{1}{p}},\]
and thus \(g\cdot x\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X)\) with \(\pi_{p}^{\mathcal{B}}(g\cdot x)\leq p_{\mathcal{B}}(g)\left\|x\right\|\). Conversely, we have
\[p_{\mathcal{B}}(g)\left\|x\right\|=p_{\mathcal{B}}(g\cdot x)\leq\pi_{p}^{ \mathcal{B}}(g\cdot x).\]
(N3) Let \(h\colon\mathbb{D}\to\mathbb{D}\) be a holomorphic function with \(h(0)=0\), \(f\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X)\) and \(T\in\mathcal{L}(X,Y)\) where \(Y\) is a complex Banach space. Note that \(T\circ f\circ h\in\widehat{\mathcal{B}}(\mathbb{D},Y)\) by [9, Proposition 5.13]. We
have
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|(T\circ f\circ h)^{\prime}(z_{i})\right\|^{p}\right)^{\frac{1}{p}}=\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|T(f^{\prime}(h(z_{i}))h^{\prime}(z_{i}))\right\|^{p}\right)^{\frac{1}{p}}\leq\left\|T\right\|\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|h^{\prime}(z_{i})\right|^{p}\left\|f^{\prime}(h(z_{i}))\right\|^{p}\right)^{\frac{1}{p}}\leq\left\|T\right\|\pi_{p}^{\mathcal{B}}(f)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|h^{\prime}(z_{i})\right|^{p}\left|g^{\prime}(h(z_{i}))\right|^{p}\right)^{\frac{1}{p}}=\left\|T\right\|\pi_{p}^{\mathcal{B}}(f)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|(g\circ h)^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}}\leq\left\|T\right\|\pi_{p}^{\mathcal{B}}(f)\sup_{k\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|k^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}},\]
where we have used that \(p_{\mathcal{B}}(g\circ h)\leq p_{\mathcal{B}}(g)\) by [9, Proposition 3.6]. Therefore \(T\circ f\circ h\in\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},Y)\) with \(\pi_{p}^{\mathcal{B}}(T\circ f\circ h)\leq\left\|T\right\|\pi_{p}^{\mathcal{B} }(f)\).
(I) Let \(f\in\widehat{\mathcal{B}}(\mathbb{D},X)\) and let \(\iota\colon X\to Y\) be a linear (not necessarily surjective) isometry. Assume that \(\iota\circ f\in\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},Y)\). We have
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|f^{\prime}(z_{i})\right\|^{p}\right)^{\frac{1}{p}}=\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|\iota(f^{\prime}(z_{i}))\right\|^{p}\right)^{\frac{1}{p}}=\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|(\iota\circ f)^{\prime}(z_{i})\right\|^{p}\right)^{\frac{1}{p}}\leq\pi_{p}^{\mathcal{B}}(\iota\circ f)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|g^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}}\]
and thus \(f\in\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},X)\) with \(\pi_{p}^{\mathcal{B}}(f)\leq\pi_{p}^{\mathcal{B}}(\iota\circ f)\). The reverse inequality follows from (N3).
### Mobius invariance
The _Mobius group of \(\mathbb{D}\)_, denoted \(\operatorname{Aut}(\mathbb{D})\), is formed by all biholomorphic bijections \(\phi\colon\mathbb{D}\to\mathbb{D}\). Each \(\phi\in\operatorname{Aut}(\mathbb{D})\) has the form \(\phi=\lambda\phi_{a}\) with \(\lambda\in\mathbb{T}\) and \(a\in\mathbb{D}\), where
\[\phi_{a}(z)=\frac{a-z}{1-\overline{a}z}\qquad(z\in\mathbb{D}).\]
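For later use, we record two standard identities, both verified by direct computation:

\[\phi_{a}^{\prime}(z)=\frac{|a|^{2}-1}{(1-\overline{a}z)^{2}},\qquad 1-|\phi_{a}(z)|^{2}=\frac{(1-|a|^{2})(1-|z|^{2})}{|1-\overline{a}z|^{2}}\qquad(z\in\mathbb{D}),\]

so that \((1-|z|^{2})\,|\phi_{a}^{\prime}(z)|=1-|\phi_{a}(z)|^{2}\). This identity is what makes the Bloch seminorm invariant under composition with elements of \(\operatorname{Aut}(\mathbb{D})\).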
Given a complex Banach space \(X\), let us recall (see [2]) that a linear space \(\mathcal{A}(\mathbb{D},X)\) of holomorphic mappings from \(\mathbb{D}\) into \(X\), endowed with a seminorm \(p_{\mathcal{A}}\), is _Mobius-invariant_ if it holds:
* \(\mathcal{A}(\mathbb{D},X)\subseteq\mathcal{B}(\mathbb{D},X)\) and there exists \(c>0\) such that \(p_{\mathcal{B}}(f)\leq cp_{\mathcal{A}}(f)\) for all \(f\in\mathcal{A}(\mathbb{D},X)\),
* \(f\circ\phi\in\mathcal{A}(\mathbb{D},X)\) with \(p_{\mathcal{A}}(f\circ\phi)=p_{\mathcal{A}}(f)\) for all \(\phi\in\operatorname{Aut}(\mathbb{D})\) and \(f\in\mathcal{A}(\mathbb{D},X)\).
By Proposition 1.1, each \(p\)-summing Bloch mapping \(f\colon\mathbb{D}\to X\) is Bloch with \(p_{\mathcal{B}}(f)\leq\pi_{p}^{\mathcal{B}}(f)\). Moreover, following the argument of the proof of (N3) in Proposition 1.2, it is easy to prove that if \(f\colon\mathbb{D}\to X\) is \(p\)-summing Bloch and \(\phi\in\operatorname{Aut}(\mathbb{D})\), then \(f\circ\phi\) is \(p\)-summing Bloch with \(\pi_{p}^{\mathcal{B}}(f\circ\phi)\leq\pi_{p}^{\mathcal{B}}(f)\), and using this fact we also deduce that \(\pi_{p}^{\mathcal{B}}(f)=\pi_{p}^{\mathcal{B}}((f\circ\phi)\circ\phi^{-1})\leq\pi_{p}^{\mathcal{B}}(f\circ\phi)\). In this way we have proved the following.
**Proposition 1.3**.: \((\Pi_{p}^{\mathcal{B}}(\mathbb{D},X),\pi_{p}^{\mathcal{B}})\) _is a Mobius-invariant space for any \(1\leq p\leq\infty\)._
### Pietsch domination
We establish a version for \(p\)-summing Bloch mappings on \(\mathbb{D}\) of the known Pietsch domination Theorem for \(p\)-summing linear operators between Banach spaces [13, Theorem 2].
Let us recall that \(\widehat{\mathcal{B}}(\mathbb{D})\) is a dual Banach space (see [1]) and therefore we can consider this space equipped with its weak* topology. Let \(\mathcal{P}(B_{\widehat{\mathcal{B}}(\mathbb{D})})\) denote the set of all Borel regular probability measures \(\mu\) on \((B_{\widehat{\mathcal{B}}(\mathbb{D})},w^{*})\).
**Theorem 1.4**.: _Let \(1\leq p<\infty\) and \(f\in\widehat{\mathcal{B}}(\mathbb{D},X)\). The following statements are equivalent:_
1. \(f\) _is_ \(p\)_-summing Bloch._
2. _(Pietsch domination). There is a constant_ \(c\geq 0\) _and a Borel regular probability measure_ \(\mu\) _on_ \((B_{\widehat{\mathcal{B}}(\mathbb{D})},w^{*})\) _such that_ \[\|f^{\prime}(z)\|\leq c\left(\int_{B_{\widehat{\mathcal{B}}(\mathbb{D})}}|g^{ \prime}(z)|^{p}\,d\mu(g)\right)^{\frac{1}{p}}\] _for all_ \(z\in\mathbb{D}\)_._
_In this case, \(\pi^{\mathcal{B}}_{p}(f)\) is the infimum of all constants \(c\geq 0\) satisfying the preceding inequality, and this infimum is attained._
Proof.: (\(i\)) \(\Rightarrow\) (\(ii\)): We will apply a unified abstract version of the Pietsch domination Theorem (see [5, 12]). For it, consider the functions
\[S\colon\widehat{\mathcal{B}}(\mathbb{D},X)\times\mathbb{D}\times\mathbb{C}\rightarrow[0,\infty[,\qquad S(f,z,\lambda)=|\lambda|\,\|f^{\prime}(z)\|\]
and
\[R\colon B_{\widehat{\mathcal{B}}(\mathbb{D})}\times\mathbb{D}\times\mathbb{C} \rightarrow[0,\infty[,\qquad R(g,z,\lambda)=|\lambda|\,|g^{\prime}(z)|\,.\]
Note first that for any \(z\in\mathbb{D}\) and \(\lambda\in\mathbb{C}\), the function \(R_{z,\lambda}\colon B_{\widehat{\mathcal{B}}(\mathbb{D})}\rightarrow[0,\infty[\), given by
\[R_{z,\lambda}(g)=R(g,z,\lambda),\]
is continuous. For every \(n\in\mathbb{N}\), \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{C}\) and \(z_{1},\ldots,z_{n}\in\mathbb{D}\), we have
\[\left(\sum_{i=1}^{n}S(f,z_{i},\lambda_{i})^{p}\right)^{\frac{1}{ p}} =\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,\|f^{\prime}(z_{i})\|^{p}\right)^{\frac{1}{ p}}\] \[\leq\pi^{\mathcal{B}}_{p}(f)\sup_{g\in B_{\widehat{\mathcal{B}}( \mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,|g^{\prime}(z_{i})|^{p} \right)^{\frac{1}{p}}=\pi^{\mathcal{B}}_{p}(f)\sup_{g\in B_{\widehat{\mathcal{ B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}R(g,z_{i},\lambda_{i})^{p}\right)^{ \frac{1}{p}},\]
and therefore \(f\) is \(R-S\)-abstract \(p\)-summing. Hence, by applying [12, Theorem 3.1], there are a constant \(c\geq 0\) and a measure \(\mu\in\mathcal{P}(B_{\widehat{\mathcal{B}}(\mathbb{D})})\) such that
\[S(f,z,\lambda)\leq c\left(\int_{B_{\widehat{\mathcal{B}}(\mathbb{D})}}R(g,z, \lambda)^{p}\,d\mu(g)\right)^{\frac{1}{p}}\]
for all \(z\in\mathbb{D}\) and \(\lambda\in\mathbb{C}\), and therefore
\[\|f^{\prime}(z)\|\leq c\left(\int_{B_{\widehat{\mathcal{B}}(\mathbb{D})}}|g^{ \prime}(z)|^{p}\,d\mu(g)\right)^{\frac{1}{p}}\]
for all \(z\in\mathbb{D}\). Furthermore, we have
\[\|f^{\prime}(z)\|=\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,\|f^{\prime}(z_{i})\|^{ p}\right)^{\frac{1}{p}}\leq\pi_{p}^{\mathcal{B}}(f)\left(\int_{B_{\widehat{ \mathbb{BD}}(\mathbb{D})}}|g^{\prime}(z)|^{p}\,d\mu(g)\right)^{\frac{1}{p}}\]
for every \(z\in\mathbb{D}\) by taking, for example, \(n\in\mathbb{N}\), \(\lambda_{1}=1\), \(\lambda_{2}=\cdots=\lambda_{n}=0\) and \(z_{1}=\cdots=z_{n}=z\).
\((ii)\Rightarrow(i)\): Given \(n\in\mathbb{N}\), \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{C}\) and \(z_{1},\ldots,z_{n}\in\mathbb{D}\), we have
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,\|f^{\prime}(z_{i})\|^{p}\right)^{\frac{1}{p}}\leq c\left(\sum_{i=1}^{n}\int_{B_{\widehat{\mathcal{B}}(\mathbb{D})}}|\lambda_{i}|^{p}\,|g^{\prime}(z_{i})|^{p}\,d\mu(g)\right)^{\frac{1}{p}}\leq c\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,|g^{\prime}(z_{i})|^{p}\right)^{\frac{1}{p}}.\]
Hence \(f\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X)\) with \(\pi_{p}^{\mathcal{B}}(f)\leq c\).
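As a sketch of how the domination can arise concretely (for illustration only), take \(f=g\cdot x\) with \(0\neq g\in\widehat{\mathcal{B}}(\mathbb{D})\) and \(x\in X\), and let \(\mu\) be the Dirac measure at \(g/p_{\mathcal{B}}(g)\in B_{\widehat{\mathcal{B}}(\mathbb{D})}\). Then

\[\|f^{\prime}(z)\|=|g^{\prime}(z)|\,\|x\|=p_{\mathcal{B}}(g)\,\|x\|\left(\int_{B_{\widehat{\mathcal{B}}(\mathbb{D})}}|h^{\prime}(z)|^{p}\,d\mu(h)\right)^{\frac{1}{p}}\qquad(z\in\mathbb{D}),\]

so the Pietsch domination holds with constant \(c=p_{\mathcal{B}}(g)\,\|x\|\), in agreement with the equality \(\pi_{p}^{\mathcal{B}}(g\cdot x)=p_{\mathcal{B}}(g)\,\|x\|\) from the proof of Proposition 1.2.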
### Pietsch factorization
We now present the analogue for \(p\)-summing Bloch mappings of Pietsch factorization theorem for \(p\)-summing operators (see [13, Theorem 3], also [6, Theorem 2.13]).
Given \(\mu\in\mathcal{P}(B_{\widehat{\mathcal{B}}(\mathbb{D})})\) and \(1\leq p<\infty\), \(I_{\infty,p}\colon L_{\infty}(\mu)\to L_{p}(\mu)\) and \(j_{\infty}\colon C(B_{\widehat{\mathcal{B}}(\mathbb{D})})\to L_{\infty}(\mu)\) denote the formal inclusion operators. We will also use the mapping \(\iota_{\mathbb{D}}\colon\mathbb{D}\to C(B_{\widehat{\mathcal{B}}(\mathbb{D})})\) defined by
\[\iota_{\mathbb{D}}(z)(g)=g^{\prime}(z)\quad\left(z\in\mathbb{D},\ g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}\right),\]
and for a complex Banach space \(X\), the isometric linear embedding \(\iota_{X}\colon X\to\ell_{\infty}(B_{X^{*}})\) given by
\[\langle\iota_{X}(x),x^{*}\rangle=x^{*}(x)\quad(x^{*}\in B_{X^{*}},\ x\in X).\]
The following easy fact will be applied below.
**Lemma 1.5**.: _Let \(\mu\in\mathcal{P}(B_{\widehat{\mathcal{B}}(\mathbb{D})})\). Then there exists a mapping \(h\in\widehat{\mathcal{B}}(\mathbb{D},L_{\infty}(\mu))\) with \(p_{\mathcal{B}}(h)=1\) such that \(h^{\prime}=j_{\infty}\circ\iota_{\mathbb{D}}\). In fact, \(h\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},L_{\infty}(\mu))\) with \(\pi_{p}^{\mathcal{B}}(h)=1\) for any \(1\leq p<\infty\)._
Proof.: Note that \(j_{\infty}\circ\iota_{\mathbb{D}}\in\mathcal{H}(\mathbb{D},L_{\infty}(\mu))\) with \((j_{\infty}\circ\iota_{\mathbb{D}})^{\prime}=j_{\infty}\circ(\iota_{\mathbb{D}})^{\prime}\), where \((\iota_{\mathbb{D}})^{\prime}(z)(g)=g^{\prime\prime}(z)\) for all \(z\in\mathbb{D}\) and \(g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}\). By [9, Lemma 2.9], there exists a mapping \(h\in\mathcal{H}(\mathbb{D},L_{\infty}(\mu))\) with \(h(0)=0\) such that \(h^{\prime}=j_{\infty}\circ\iota_{\mathbb{D}}\). In fact, \(h\in\widehat{\mathcal{B}}(\mathbb{D},L_{\infty}(\mu))\) with \(p_{\mathcal{B}}(h)=1\) since
\[(1-|z|^{2})\,\|h^{\prime}(z)\|_{L_{\infty}(\mu)}=(1-|z|^{2})\,\|j_{\infty}( \iota_{\mathbb{D}}(z))\|_{L_{\infty}(\mu)}=(1-|z|^{2})\,\|\iota_{\mathbb{D}}( z)\|_{\infty}=1\]
for all \(z\in\mathbb{D}\). For the second assertion, given \(1\leq p<\infty\), it suffices to note that
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,\|h^{\prime}(z_{i})\|_{L_{\infty}(\mu)}^{p}\right)^{\frac{1}{p}}=\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,\|j_{\infty}(\iota_{\mathbb{D}}(z_{i}))\|_{L_{\infty}(\mu)}^{p}\right)^{\frac{1}{p}}=\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,\|\iota_{\mathbb{D}}(z_{i})\|_{\infty}^{p}\right)^{\frac{1}{p}}\leq\left(\sum_{i=1}^{n}\frac{|\lambda_{i}|^{p}}{(1-|z_{i}|^{2})^{p}}\right)^{\frac{1}{p}}=\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|f_{z_{i}}^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}}\leq\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,|g^{\prime}(z_{i})|^{p}\right)^{\frac{1}{p}}\]
for any \(n\in\mathbb{N}\), \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{C}\) and \(z_{1},\ldots,z_{n}\in\mathbb{D}\).
**Theorem 1.6**.: _Let \(1\leq p<\infty\) and \(f\in\widehat{\mathcal{B}}(\mathbb{D},X)\). The following assertions are equivalent:_
1. \(f\) _is_ \(p\)_-summing Bloch._
_
2. _(Pietsch factorization). There exist a regular Borel probability measure \(\mu\) on \((B_{\widehat{\mathcal{B}}(\mathbb{D})},w^{*})\), an operator \(T\in\mathcal{L}(L_{p}(\mu),\ell_{\infty}(B_{X^{*}}))\) and a mapping \(h\in\widehat{\mathcal{B}}(\mathbb{D},L_{\infty}(\mu))\) such that the factorization \(\iota_{X}\circ f^{\prime}=T\circ I_{\infty,p}\circ h^{\prime}\) holds._
_In this case, \(\pi_{p}^{\mathcal{B}}(f)=\inf\left\{\|T\|\,p_{\mathcal{B}}(h)\right\}\), where the infimum is taken over all such factorizations of \(\iota_{X}\circ f^{\prime}\) as above, and this infimum is attained._
Proof.: \((i)\Rightarrow(ii)\): If \(f\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X)\), then Theorem 1.4 gives a measure \(\mu\in\mathcal{P}(B_{\widehat{\mathcal{B}}(\mathbb{D})})\) such that
\[\|f^{\prime}(z)\|\leq\pi_{p}^{\mathcal{B}}(f)\left(\int_{B_{\widehat{\mathcal{ B}}(\mathbb{D})}}|g^{\prime}(z)|^{p}\,d\mu(g)\right)^{\frac{1}{p}}\]
for all \(z\in\mathbb{D}\). By Lemma 1.5, there is a mapping \(h\in\widehat{\mathcal{B}}(\mathbb{D},L_{\infty}(\mu))\) with \(p_{\mathcal{B}}(h)=1\) such that \(h^{\prime}=j_{\infty}\circ\iota_{\mathbb{D}}\). Consider the linear subspace \(S_{p}:=\overline{\operatorname{lin}}(I_{\infty,p}(h^{\prime}(\mathbb{D})))\subseteq L_{p}(\mu)\) and the operator \(T_{0}\in\mathcal{L}(S_{p},\ell_{\infty}(B_{X^{*}}))\) defined by \(T_{0}(I_{\infty,p}(h^{\prime}(z)))=\iota_{X}(f^{\prime}(z))\) for all \(z\in\mathbb{D}\). Note that \(\|T_{0}\|\leq\pi_{p}^{\mathcal{B}}(f)\) since
\[\left\|T_{0}\left(\sum_{i=1}^{n}\alpha_{i}I_{\infty,p}(h^{\prime }(z_{i}))\right)\right\|_{\infty} =\left\|\sum_{i=1}^{n}\alpha_{i}T_{0}(I_{\infty,p}(h^{\prime}(z_{ i})))\right\|_{\infty}=\left\|\sum_{i=1}^{n}\alpha_{i}\iota_{X}(f^{\prime}(z_{ i}))\right\|_{\infty}\] \[\leq\sum_{i=1}^{n}|\alpha_{i}|\left\|\iota_{X}(f^{\prime}(z_{i}) )\right\|_{\infty}=\sum_{i=1}^{n}|\alpha_{i}|\left\|f^{\prime}(z_{i})\right\|\] \[\leq\pi_{p}^{\mathcal{B}}(f)\sum_{i=1}^{n}|\alpha_{i}|\left(\int _{B_{\widehat{\mathcal{B}}(\mathbb{D})}}|g^{\prime}(z_{i})|^{p}\,d\mu(g) \right)^{\frac{1}{p}}\leq\pi_{p}^{\mathcal{B}}(f)\sum_{i=1}^{n}\frac{|\alpha_ {i}|}{1-|z_{i}|^{2}}\]
and
\[\sum_{i=1}^{n}\frac{|\alpha_{i}|}{1-|z_{i}|^{2}} =\left|\sum_{i=1}^{n}\alpha_{i}\frac{\overline{\alpha_{i}}}{| \alpha_{i}|}f^{\prime}_{z_{i}}(z_{i})\right|=\sup_{g\in\mathcal{B}_{\widehat {\mathcal{B}}(\mathbb{D})}}\left|\sum_{i=1}^{n}\alpha_{i}g^{\prime}(z_{i}) \right|=\sup_{g\in\mathcal{B}_{\widehat{\mathcal{B}}(\mathbb{D})}}\left|\sum _{i=1}^{n}\alpha_{i}\iota_{\mathbb{D}}(z_{i})(g)\right|\] \[=\left\|\sum_{i=1}^{n}\alpha_{i}\iota_{\mathbb{D}}(z_{i})\right\| _{\infty}=\left\|\sum_{i=1}^{n}\alpha_{i}j_{\infty}(\iota_{\mathbb{D}}(z_{i}) )\right\|_{\infty}=\left\|\sum_{i=1}^{n}\alpha_{i}h^{\prime}(z_{i})\right\|_{\infty}\] \[=\left\|I_{\infty,p}\left(\sum_{i=1}^{n}\alpha_{i}h^{\prime}(z_{i} )\right)\right\|_{p}=\left\|\sum_{i=1}^{n}\alpha_{i}I_{\infty,p}(h^{\prime}(z _{i}))\right\|_{p}\]
for any \(n\in\mathbb{N}\), \(\alpha_{1},\ldots,\alpha_{n}\in\mathbb{C}^{*}\) and \(z_{1},\ldots,z_{n}\in\mathbb{D}\). By the injectivity of the Banach space \(\ell_{\infty}(B_{X^{*}})\) (see [6, p. 45]), there exists \(T\in\mathcal{L}(L_{p}(\mu),\ell_{\infty}(B_{X^{*}}))\) such that \(T|_{S_{p}}=T_{0}\) with \(\|T\|=\|T_{0}\|\). This tells us that \(\iota_{X}\circ f^{\prime}=T\circ I_{\infty,p}\circ h^{\prime}\) with \(\|T\|\,p_{\mathcal{B}}(h)\leq\pi_{p}^{\mathcal{B}}(f)\).
\((ii)\Rightarrow(i)\): By (ii), we have \(\iota_{X}\circ f^{\prime}=T\circ I_{\infty,p}\circ h^{\prime}\). Given \(n\in\mathbb{N}\), \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{C}\) and \(z_{1},\ldots,z_{n}\in\mathbb{D}\), it holds
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,\|f^{\prime}(z_{i})\|^{p}\right)^{\frac{1}{p}}=\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,\|\iota_{X}(f^{\prime}(z_{i}))\|^{p}\right)^{\frac{1}{p}}=\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,\|T(I_{\infty,p}(h^{\prime}(z_{i})))\|^{p}\right)^{\frac{1}{p}}\leq\left\|T\right\|\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,\|I_{\infty,p}(h^{\prime}(z_{i}))\|_{L_{p}(\mu)}^{p}\right)^{\frac{1}{p}}\leq\left\|T\right\|\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,\|h^{\prime}(z_{i})\|_{L_{\infty}(\mu)}^{p}\right)^{\frac{1}{p}}\leq\left\|T\right\|p_{\mathcal{B}}(h)\left(\sum_{i=1}^{n}\frac{|\lambda_{i}|^{p}}{(1-|z_{i}|^{2})^{p}}\right)^{\frac{1}{p}}=\left\|T\right\|p_{\mathcal{B}}(h)\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|f_{z_{i}}^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}}\leq\left\|T\right\|p_{\mathcal{B}}(h)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\,|g^{\prime}(z_{i})|^{p}\right)^{\frac{1}{p}},\]
and therefore \(f\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X)\) with \(\pi_{p}^{\mathcal{B}}(f)\leq\|T\|\,p_{\mathcal{B}}(h)\).
\(\pi_{p}^{\mathcal{B}}(f)\leq c\pi_{q}^{\mathcal{B}}(f)\) for all \(f\in\Pi_{q}^{\widehat{\mathcal{B}}}(\mathbb{D},\ell_{q})\). Since \(L_{q}(\mu)\) is an \(\mathcal{L}_{q,\lambda}\)-space for each \(\lambda>1\), we can ensure that, given \(n\in\mathbb{N}\) and \(z_{1},\ldots,z_{n}\in\mathbb{D}\), the subspace
\[E=\operatorname{lin}\left(\left\{I_{\infty,q}(h_{\mu}^{\prime}(z_{1})),\ldots,I_{\infty,q}(h_{\mu}^{\prime}(z_{n}))\right\}\right)\subseteq L_{q}(\mu)\]
embeds \(\lambda\)-isomorphically into \(\ell_{q}\), that is, \(E\) is contained in a subspace \(F\subseteq L_{q}(\mu)\) for which there exists an isomorphism \(T\colon F\to\ell_{q}\) with \(\|T\|\,\|T^{-1}\|<\lambda\).
Since \(T\circ I_{\infty,q}\circ h_{\mu}\in\Pi_{q}^{\widetilde{\mathcal{B}}}(\mathbb{ D},\ell_{q})=\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},\ell_{q})\) and \((T\circ I_{\infty,q}\circ h_{\mu})^{\prime}=T\circ I_{\infty,q}\circ h_{\mu}^ {\prime}\), we have
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|(I_{\infty,q}\circ h_{\mu})^{\prime}(z_{i})\right\|_{L_{q}(\mu)}^{p}\right)^{\frac{1}{p}}\leq\left\|T^{-1}\right\|\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|T(I_{\infty,q}(h_{\mu}^{\prime}(z_{i})))\right\|_{\ell_{q}}^{p}\right)^{\frac{1}{p}}\leq\left\|T^{-1}\right\|c\,\pi_{q}^{\mathcal{B}}(T\circ I_{\infty,q}\circ h_{\mu})\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|g^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}}\leq c\left\|T^{-1}\right\|\left\|T\right\|\pi_{q}^{\mathcal{B}}(I_{\infty,q}\circ h_{\mu})\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|g^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}},\]
therefore \(\pi_{p}^{\mathcal{B}}(I_{\infty,q}\circ h_{\mu})\leq c\lambda\) for all \(\lambda>1\), and thus \(\pi_{p}^{\mathcal{B}}(I_{\infty,q}\circ h_{\mu})\leq c\). Now, by Theorem 1.4, there exists a measure \(\widehat{\mu}\in\mathcal{P}(B_{\widehat{\mathcal{B}}(\mathbb{D})})\) such that
\[\left\|(I_{\infty,q}\circ h_{\mu})^{\prime}(z)\right\|_{L_{q}(\mu)}\leq c\left(\int_{B_{\widehat{\mathcal{B}}(\mathbb{D})}}|g^{\prime}(z)|^{p}\,d\widehat{\mu}(g)\right)^{\frac{1}{p}}=c\left\|(I_{\infty,q}\circ h_{\widehat{\mu}})^{\prime}(z)\right\|_{L_{p}(\widehat{\mu})}\]
for all \(z\in\mathbb{D}\). In the last equality, we have used that
\[(I_{\infty,q}\circ h_{\widehat{\mu}})^{\prime}(z)(g)=I_{\infty,q}(h_{\widehat{\mu}}^{\prime}(z))(g)=h_{\widehat{\mu}}^{\prime}(z)(g)=j_{\infty}(\iota_{\mathbb{D}}(z))(g)=\iota_{\mathbb{D}}(z)(g)=g^{\prime}(z)\]
for all \(z\in\mathbb{D}\) and \(g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}\).
Take a complex Banach space \(X\) and let \(f\in\Pi_{q}^{\widehat{\mathcal{B}}}(\mathbb{D},X)\). In view of Proposition 1.1, we only need to show that \(f\in\Pi_{1}^{\widehat{\mathcal{B}}}(\mathbb{D},X)\). Theorem 1.4 provides again a measure \(\mu_{0}\in\mathcal{P}(B_{\widehat{\mathcal{B}}(\mathbb{D})})\) such that
\[\left\|f^{\prime}(z)\right\|\leq\pi_{q}^{\mathcal{B}}(f)\left\|(I_{\infty,q}\circ h_{\mu_{0}})^{\prime}(z)\right\|_{L_{q}(\mu_{0})}\]
for all \(z\in\mathbb{D}\). We claim that there is a constant \(C>0\) and a measure \(\lambda\in\mathcal{P}(B_{\widehat{\mathcal{B}}(\mathbb{D})})\) such that
\[\left\|(I_{\infty,q}\circ h_{\mu_{0}})^{\prime}(z)\right\|_{L_{q}(\mu_{0})}\leq C\left\|(I_{\infty,q}\circ h_{\lambda})^{\prime}(z)\right\|_{L_{1}(\lambda)}\]
for all \(z\in\mathbb{D}\). Indeed, define \(\lambda=\sum_{n=0}^{\infty}(1/2^{n+1})\mu_{n}\in\mathcal{P}(B_{\widehat{\mathcal{B}}(\mathbb{D})})\), where \((\mu_{n})_{n\geq 0}\) is the sequence in \(\mathcal{P}(B_{\widehat{\mathcal{B}}(\mathbb{D})})\) given by \(\mu_{n+1}=\widehat{\mu_{n}}\) for all \(n\in\mathbb{N}_{0}\). Since \(1<p<q\), there exists \(\theta\in(0,1)\) such that \(1/p=\theta+(1-\theta)/q\), and applying Hölder's inequality with exponents \(1/(\theta p)\) and \(1/(1-\theta p)\) (note that \((1-\theta)p/(1-\theta p)=q\)), we have
\[\left\|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}(z)\right\|_{L_{p}(\mu_{n})}=\left(\int_{B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}(z)(g)\right|^{\theta p}\left|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}(z)(g)\right|^{(1-\theta)p}\,d\mu_{n}(g)\right)^{\frac{1}{p}}\leq\left(\int_{B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}(z)(g)\right|\,d\mu_{n}(g)\right)^{\theta}\left(\int_{B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}(z)(g)\right|^{q}\,d\mu_{n}(g)\right)^{\frac{1-\theta}{q}}=\left\|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}(z)\right\|_{L_{1}(\mu_{n})}^{\theta}\left\|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}(z)\right\|_{L_{q}(\mu_{n})}^{1-\theta}\]
for each \(n\in\mathbb{N}_{0}\) and all \(z\in\mathbb{D}\). Using Hölder's inequality and the reindexing inequality
\[\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h_{\mu_{n+1}})^{\prime}(z)\right\|_{L_{q}(\mu_{n+1})}=\sum_{n=1}^{\infty}\frac{1}{2^{n}}\left\|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}(z)\right\|_{L_{q}(\mu_{n})}\leq 2\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}(z)\right\|_{L_{q}(\mu_{n})},\]
we now obtain
\[\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}(z)\right\|_{L_{q}(\mu_{n})}\leq c\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h_{\mu_{n+1}})^{\prime}(z)\right\|_{L_{p}(\mu_{n+1})}\leq c\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h_{\mu_{n+1}})^{\prime}(z)\right\|_{L_{1}(\mu_{n+1})}^{\theta}\left\|(I_{\infty,q}\circ h_{\mu_{n+1}})^{\prime}(z)\right\|_{L_{q}(\mu_{n+1})}^{1-\theta}\leq c\left(\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h_{\mu_{n+1}})^{\prime}(z)\right\|_{L_{1}(\mu_{n+1})}\right)^{\theta}\left(\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h_{\mu_{n+1}})^{\prime}(z)\right\|_{L_{q}(\mu_{n+1})}\right)^{1-\theta}\leq c\left(\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h_{\mu_{n+1}})^{\prime}(z)\right\|_{L_{1}(\mu_{n+1})}\right)^{\theta}\left(2\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}(z)\right\|_{L_{q}(\mu_{n})}\right)^{1-\theta}\]
for all \(z\in\mathbb{D}\), and thus
\[\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h _{\mu_{n}})^{\prime}(z)\right\|_{L_{q}(\mu_{n})} \leq c^{\frac{1}{\theta}}2^{\frac{1-\theta}{\theta}}\left(\sum_{n= 0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h_{\mu_{n+1}})^{\prime}( z)\right\|_{L_{1}(\mu_{n+1})}\right)\] \[\leq c^{\frac{1}{\theta}}2^{\frac{1-\theta}{\theta}}2\left(\sum_ {n=0}^{\infty}\frac{1}{2^{n+1}}\left\|(I_{\infty,q}\circ h_{\mu_{n}})^{\prime}( z)\right\|_{L_{1}(\mu_{n})}\right)\] \[=(2c)^{\frac{1}{\theta}}\left\|(I_{\infty,q}\circ h_{\lambda})^{ \prime}(z)\right\|_{L_{1}(\lambda)}\]
for all \(z\in\mathbb{D}\). From above, we deduce that
\[\frac{1}{2}\left\|(I_{\infty,q}\circ h_{\mu_{0}})^{\prime}(z)\right\|_{L_{q}( \mu_{0})}\leq(2c)^{\frac{1}{\theta}}\left\|(I_{\infty,q}\circ h_{\lambda})^{ \prime}(z)\right\|_{L_{1}(\lambda)}\]
for all \(z\in\mathbb{D}\), and this proves our claim taking \(C=2(2c)^{\frac{1}{\theta}}\). Therefore we can write
\[\left\|f^{\prime}(z)\right\|\leq C\pi_{q}^{\mathcal{B}}(f)\left\|(I_{\infty,q}\circ h_{\lambda})^{\prime}(z)\right\|_{L_{1}(\lambda)}=C\pi_{q}^{\mathcal{B}}(f)\int_{B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left|g^{\prime}(z)\right|\,d\lambda(g)\]
for all \(z\in\mathbb{D}\). Hence \(f\in\Pi_{1}^{\widehat{\mathcal{B}}}(\mathbb{D},X)\) with \(\pi_{1}^{\mathcal{B}}(f)\leq C\pi_{q}^{\mathcal{B}}(f)\) by Theorem 1.4.
## 2. Banach-valued Bloch molecules on the unit disc
Our aim in this section is to study the duality of the spaces of \(p\)-summing Bloch mappings from \(\mathbb{D}\) into \(X^{*}\). We begin by recalling some concepts and results stated in [9] on the Bloch-free Banach space over \(\mathbb{D}\).
For each \(z\in\mathbb{D}\), the _Bloch atom of \(\mathbb{D}\) at \(z\)_ is the bounded linear functional \(\gamma_{z}\colon\widehat{\mathcal{B}}(\mathbb{D})\to\mathbb{C}\) given by
\[\gamma_{z}(f)=f^{\prime}(z)\qquad(f\in\widehat{\mathcal{B}}(\mathbb{D})).\]
The elements of \(\operatorname{lin}(\{\gamma_{z}\colon z\in\mathbb{D}\})\) in \(\widehat{\mathcal{B}}(\mathbb{D})^{*}\) are called _Bloch molecules of \(\mathbb{D}\)_. The _Bloch-free Banach space over \(\mathbb{D}\)_, denoted \(\mathcal{G}(\mathbb{D})\), is the norm-closed linear hull of \(\{\gamma_{z}\colon z\in\mathbb{D}\}\) in \(\widehat{\mathcal{B}}(\mathbb{D})^{*}\). The mapping \(\Gamma\colon\mathbb{D}\to\mathcal{G}(\mathbb{D})\), defined by \(\Gamma(z)=\gamma_{z}\) for all \(z\in\mathbb{D}\), is holomorphic with \(\left\|\gamma_{z}\right\|=1/(1-\left|z\right|^{2})\) for all \(z\in\mathbb{D}\) (see [9, Proposition 2.7]).
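The norm formula for the atoms can be checked directly: for every \(f\in\widehat{\mathcal{B}}(\mathbb{D})\),

\[|\gamma_{z}(f)|=|f^{\prime}(z)|\leq\frac{p_{\mathcal{B}}(f)}{1-|z|^{2}},\qquad\text{while}\qquad\gamma_{z}(f_{z})=f_{z}^{\prime}(z)=\frac{1}{1-|z|^{2}},\]

where \(f_{z}\) is the normalized Bloch function with \(p_{\mathcal{B}}(f_{z})=1\) used repeatedly in the previous section; together these give \(\|\gamma_{z}\|=1/(1-|z|^{2})\).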
Let \(X\) be a complex Banach space. Given \(z\in\mathbb{D}\) and \(x\in X\), it is immediate that the functional \(\gamma_{z}\otimes x\colon\widehat{\mathcal{B}}(\mathbb{D},X^{*})\to\mathbb{C}\) defined by
\[(\gamma_{z}\otimes x)\left(f\right)=\left\langle f^{\prime}(z),x\right\rangle \qquad\left(f\in\widehat{\mathcal{B}}(\mathbb{D},X^{*})\right),\]
is linear and continuous with \(\left\|\gamma_{z}\otimes x\right\|\leq\left\|x\right\|/(1-\left|z\right|^{2})\). In fact, \(\left\|\gamma_{z}\otimes x\right\|=\left\|x\right\|/(1-\left|z\right|^{2})\). Indeed, take any \(x^{*}\in S_{X^{*}}\) such that \(x^{*}(x)=\left\|x\right\|\) and consider \(f_{z}\cdot x^{*}\in\widehat{\mathcal{B}}(\mathbb{D},X^{*})\). Since \(p_{\mathcal{B}}(f_{z}\cdot x^{*})=1\), it follows that
\[\left\|\gamma_{z}\otimes x\right\|\geq\left|(\gamma_{z}\otimes x)(f_{z}\cdot x ^{*})\right|=\left|\left\langle(f_{z}\cdot x^{*})^{\prime}(z),x\right\rangle \right|=\left|\left\langle f_{z}^{\prime}(z)x^{*},x\right\rangle\right|=\left| f_{z}^{\prime}(z)\right|\left|x^{*}(x)\right|=\frac{\left\|x\right\|}{1-\left|z \right|^{2}}.\]
We now present a tensor product space whose elements, according to [9, Definition 2.6], could be referred to as _\(X\)-valued Bloch molecules on \(\mathbb{D}\)_.
**Definition 2.1**.: Let \(X\) be a complex Banach space. Define the linear space
\[\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X:=\operatorname{lin}\{\gamma_ {z}\otimes x\colon z\in\mathbb{D},\;x\in X\}\subseteq\widehat{\mathcal{B}}( \mathbb{D},X^{*})^{*}.\]
Note that each element \(\gamma\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) is of the form
\[\gamma=\sum_{i=1}^{n}\lambda_{i}(\gamma_{z_{i}}\otimes x_{i})=\sum_{i=1}^{n} \lambda_{i}\gamma_{z_{i}}\otimes x_{i}=\sum_{i=1}^{n}\gamma_{z_{i}}\otimes \lambda_{i}x_{i}\]
where \(n\in\mathbb{N}\), \(\lambda_{i}\in\mathbb{C}\), \(z_{i}\in\mathbb{D}\) and \(x_{i}\in X\) for \(i=1,\ldots,n\), but such a representation of \(\gamma\) is not unique.
The action of the functional \(\gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\in\operatorname{lin }(\Gamma(\mathbb{D}))\otimes X\) on a mapping \(f\in\widehat{\mathcal{B}}(\mathbb{D},X^{*})\) can be described as
\[\gamma(f)=\sum_{i=1}^{n}\lambda_{i}\left\langle f^{\prime}(z_{i}),x_{i}\right\rangle.\]
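For instance, on an elementary mapping \(f=g\cdot x^{*}\) with \(g\in\widehat{\mathcal{B}}(\mathbb{D})\) and \(x^{*}\in X^{*}\), the action reduces to

\[(\gamma_{z}\otimes x)(g\cdot x^{*})=\left\langle g^{\prime}(z)x^{*},x\right\rangle=g^{\prime}(z)\,x^{*}(x),\]

an evaluation that reappears at the end of the proof of Theorem 2.6 below.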
### Pairing
The space \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) is a linear subspace of \(\widehat{\mathcal{B}}(\mathbb{D},X^{*})^{*}\) and, in fact, we have:
**Proposition 2.2**.: \(\left\langle\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X,\widehat{\mathcal{ B}}(\mathbb{D},X^{*})\right\rangle\) _is a dual pair, via the bilinear form given by_
\[\left\langle\gamma,f\right\rangle=\sum_{i=1}^{n}\lambda_{i}\left\langle f^{ \prime}(z_{i}),x_{i}\right\rangle\]
_for \(\gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\in\operatorname{lin }(\Gamma(\mathbb{D}))\otimes X\) and \(f\in\widehat{\mathcal{B}}(\mathbb{D},X^{*})\)._
Proof.: Note that \(\left\langle\cdot,\cdot\right\rangle\) is a well-defined bilinear map on \((\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X)\times\widehat{\mathcal{B}}(\mathbb{D},X^{*})\) since \(\left\langle\gamma,f\right\rangle=\gamma(f)\). On the one hand, if \(\gamma\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) and \(\left\langle\gamma,f\right\rangle=0\) for all \(f\in\widehat{\mathcal{B}}(\mathbb{D},X^{*})\), then \(\gamma=0\), and thus \(\widehat{\mathcal{B}}(\mathbb{D},X^{*})\) separates points of \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\). On the other hand, if \(f\in\widehat{\mathcal{B}}(\mathbb{D},X^{*})\) and \(\left\langle\gamma,f\right\rangle=0\) for all \(\gamma\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\), then \(\left\langle f^{\prime}(z),x\right\rangle=\left\langle\gamma_{z}\otimes x,f\right\rangle=0\) for all \(z\in\mathbb{D}\) and \(x\in X\), hence \(f^{\prime}(z)=0\) for all \(z\in\mathbb{D}\); therefore \(f\) is constant on \(\mathbb{D}\), and since \(f(0)=0\) we get \(f=0\). Thus \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) separates points of \(\widehat{\mathcal{B}}(\mathbb{D},X^{*})\).
Since \(\left\langle\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X,\widehat{\mathcal{B}}(\mathbb{D},X^{*})\right\rangle\) is a dual pair, we can identify \(\widehat{\mathcal{B}}(\mathbb{D},X^{*})\) with a linear subspace of \((\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X)^{\prime}\) by means of the following easy result.
**Corollary 2.3**.: _For each \(f\in\widehat{\mathcal{B}}(\mathbb{D},X^{*})\), the functional \(\Lambda_{0}(f)\colon\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\to\mathbb{C}\), given by_
\[\Lambda_{0}(f)(\gamma)=\sum_{i=1}^{n}\lambda_{i}\left\langle f^{\prime}(z_{i}),x_{i}\right\rangle\]
_for \(\gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\), is linear. We will say that \(\Lambda_{0}(f)\) is the linear functional on \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) associated to \(f\). Furthermore, the map \(f\mapsto\Lambda_{0}(f)\) is a linear monomorphism from \(\widehat{\mathcal{B}}(\mathbb{D},X^{*})\) into \((\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X)^{\prime}\)._
### Projective norm
As usual (see [14]), given two linear spaces \(E\) and \(F\), the tensor product space \(E\otimes F\) equipped with a norm \(\alpha\) will be denoted by \(E\otimes_{\alpha}F\), and the completion of \(E\otimes_{\alpha}F\) by \(E\widehat{\otimes}_{\alpha}F\). An important example of a tensor norm is the projective norm \(\pi\), defined for \(u\in E\otimes F\) by
\[\pi(u)=\inf\left\{\sum_{i=1}^{n}\|x_{i}\|\,\|y_{i}\|:\,n\in\mathbb{N},\,x_{1}, \ldots,x_{n}\in E,\,y_{1},\ldots,y_{n}\in F,\,u=\sum_{i=1}^{n}x_{i}\otimes y_{ i}\right\},\]
where the infimum is taken over all the representations of \(u\) as above.
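As a quick orientation for what follows: taking \(E=\operatorname{lin}(\Gamma(\mathbb{D}))\) and \(F=X\), each representation \(\gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\) by Bloch atoms contributes the amount

\[\sum_{i=1}^{n}\|\lambda_{i}\gamma_{z_{i}}\|\,\|x_{i}\|=\sum_{i=1}^{n}\frac{|\lambda_{i}|}{1-|z_{i}|^{2}}\,\|x_{i}\|\]

to the infimum, since \(\|\gamma_{z}\|=1/(1-|z|^{2})\); this is the source of the expression for \(\pi(\gamma)\) appearing in Proposition 2.4 below.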
It is useful to know that the projective norm and the canonical operator norm coincide on the space \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\).
**Proposition 2.4**.: _Given \(\gamma\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\), we have \(\|\gamma\|=\pi(\gamma)\), where_
\[\|\gamma\|=\sup\left\{|\gamma(f)|:\,f\in\widehat{\mathcal{B}}(\mathbb{D},X^{*} ),\,\,p_{\mathcal{B}}(f)\leq 1\right\}\]
_and_
\[\pi(\gamma)=\inf\left\{\sum_{i=1}^{n}\frac{|\lambda_{i}|}{1-|z_{i}|^{2}}\,\|x_ {i}\|:\,\gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\right\}.\]
Proof.: Let \(\gamma\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) and let \(\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\) be a representation of \(\gamma\). Since \(\gamma\) is linear and
\[|\gamma(f)|=\left|\sum_{i=1}^{n}\lambda_{i}\left\langle f^{\prime}(z_{i}),x_{ i}\right\rangle\right|\leq\sum_{i=1}^{n}|\lambda_{i}|\,\|f^{\prime}(z_{i})\|\,\|x_{i} \|\leq p_{\mathcal{B}}(f)\sum_{i=1}^{n}|\lambda_{i}|\,\frac{\|x_{i}\|}{1-|z_{i} |^{2}}\]
for all \(f\in\widehat{\mathcal{B}}(\mathbb{D},X^{*})\), we deduce that \(\|\gamma\|\leq\sum_{i=1}^{n}|\lambda_{i}|\|x_{i}\|/(1-|z_{i}|^{2})\). Since this holds for each representation of \(\gamma\), it follows that \(\|\gamma\|\leq\pi(\gamma)\) and thus \(\|\cdot\|\leq\pi\) on \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\).
To prove the reverse inequality, suppose by contradiction that \(\|\mu\|<1<\pi(\mu)\) for some \(\mu\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\). Denote \(B=\{\gamma\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\colon\pi(\gamma)\leq 1\}\). Clearly, \(B\) is a closed convex subset of \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes_{\pi}X\). Applying the Hahn-Banach Separation Theorem to \(B\) and \(\{\mu\}\), we obtain a functional \(\eta\in(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes_{\pi}X)^{*}\) such that
\[1=\|\eta\|=\sup\{\operatorname{Re}(\eta(\gamma))\colon\gamma\in B\}< \operatorname{Re}(\eta(\mu)).\]
Define \(F_{\eta}\colon\mathbb{D}\to X^{*}\) by
\[\left\langle F_{\eta}(z),x\right\rangle=\eta\left(\gamma_{z}\otimes x\right) \qquad(x\in X,\,\,z\in\mathbb{D}).\]
We now show that \(F_{\eta}\) is holomorphic. By [11, Exercise 8.D], it suffices to prove that for each \(x\in X\), the function \(F_{\eta,x}\colon\mathbb{D}\to\mathbb{C}\) defined by
\[F_{\eta,x}(z)=\eta(\gamma_{z}\otimes x)\qquad(z\in\mathbb{D})\]
is holomorphic. Let \(a\in\mathbb{D}\). Since \(\Gamma\colon\mathbb{D}\to\operatorname{lin}(\Gamma(\mathbb{D}))\) is holomorphic, there exists \(D\Gamma(a)\in\mathcal{L}(\mathbb{C},\operatorname{lin}(\Gamma(\mathbb{D})))\) such that
\[\lim_{z\to a}\frac{\gamma_{z}-\gamma_{a}-D\Gamma(a)(z-a)}{|z-a|}=0.\]
Consider the function \(T(a)\colon\mathbb{C}\to\mathbb{C}\) given by
\[T(a)(z)=\eta(D\Gamma(a)(z)\otimes x)\qquad(z\in\mathbb{C})\,.\]
Clearly, \(T(a)\in\mathcal{L}(\mathbb{C},\mathbb{C})\) and since
\[F_{\eta,x}(z)-F_{\eta,x}(a)-T(a)(z-a) =\eta(\gamma_{z}\otimes x)-\eta(\gamma_{a}\otimes x)-\eta(D \Gamma(a)(z-a)\otimes x)\] \[=\eta\left((\gamma_{z}-\gamma_{a}-D\Gamma(a)(z-a))\otimes x \right),\]
it follows that
\[\lim_{z\to a}\frac{F_{\eta,x}(z)-F_{\eta,x}(a)-T(a)(z-a)}{|z-a|}=\lim_{z\to a} \eta\left(\frac{\gamma_{z}-\gamma_{a}-D\Gamma(a)(z-a)}{|z-a|}\otimes x\right) =0.\]
Hence \(F_{\eta,x}\) is holomorphic at \(a\) with \(DF_{\eta,x}(a)=T(a)\), as desired.
By [9, Lemma 2.9], there exists a mapping \(f_{\eta}\in\mathcal{H}(\mathbb{D},X^{*})\) with \(f_{\eta}(0)=0\) such that \(f_{\eta}^{\prime}=F_{\eta}\). Given \(z\in\mathbb{D}\), we have
\[(1-|z|^{2})\left|\left\langle f_{\eta}^{\prime}(z),x\right\rangle\right|=(1-|z|^{2})\left|\eta\left(\gamma_{z}\otimes x\right)\right|\leq(1-|z|^{2})\,\|\eta\|\,\pi(\gamma_{z}\otimes x)=\left\|x\right\|\]
for all \(x\in X\), and thus \((1-|z|^{2})\left\|f_{\eta}^{\prime}(z)\right\|\leq 1\). Hence \(f_{\eta}\in\widehat{\mathcal{B}}(\mathbb{D},X^{*})\) with \(p_{\mathcal{B}}(f_{\eta})\leq 1\). Moreover, \(\gamma(f_{\eta})=\eta(\gamma)\) for all \(\gamma\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\). Therefore \(\left\|\mu\right\|\geq\left|\mu(f_{\eta})\right|\geq\operatorname{Re}(\mu(f_{ \eta}))=\operatorname{Re}(\eta(\mu))\), so \(\left\|\mu\right\|>1\), and this is a contradiction.
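As an immediate consequence of Proposition 2.4 and the norm computation for \(\gamma_{z}\otimes x\) made before Definition 2.1, we have the worked identity

\[\pi(\gamma_{z}\otimes x)=\|\gamma_{z}\otimes x\|=\frac{\|x\|}{1-|z|^{2}}\qquad(z\in\mathbb{D},\ x\in X),\]

so the trivial one-term estimate \(\pi(\gamma_{z}\otimes x)\leq\|\gamma_{z}\|\,\|x\|\) is in fact sharp for single atoms.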
### \(p\)-Chevet-Saphar Bloch norms
The \(p\)-Chevet-Saphar norms \(d_{p}\) on the tensor product of two Banach spaces \(E\otimes F\) are well known (see, for example, [14, Section 6.2]).
Our study of the duality of the spaces \(\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*})\) requires the introduction of the following Bloch versions of such norms defined now on \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\).
The \(p\)_-Chevet-Saphar Bloch norms_ \(d_{p}^{\widehat{\mathcal{B}}}\) for \(1\leq p\leq\infty\) are defined on an \(X\)-valued Bloch molecule \(\gamma\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) as
\[d_{1}^{\widehat{\mathcal{B}}}(\gamma)=\inf\left\{\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\max_{1\leq i\leq n}|\lambda_{i}|\,|g^{\prime}(z_{i})|\right)\left(\sum_{i=1}^{n}\left\|x_{i}\right\|\right)\right\},\] \[d_{p}^{\widehat{\mathcal{B}}}(\gamma)=\inf\left\{\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p^{*}}\,|g^{\prime}(z_{i})|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)\left(\sum_{i=1}^{n}\left\|x_{i}\right\|^{p}\right)^{\frac{1}{p}}\right\}\quad(1<p<\infty),\] \[d_{\infty}^{\widehat{\mathcal{B}}}(\gamma)=\inf\left\{\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\sum_{i=1}^{n}|\lambda_{i}|\,|g^{\prime}(z_{i})|\right)\left(\max_{1\leq i\leq n}\left\|x_{i}\right\|\right)\right\},\]
where the infimum is taken over all such representations of \(\gamma\) as \(\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\).
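As a quick check of these definitions, for a single atom the one-term representation \(\gamma=1\cdot\gamma_{z}\otimes x\) gives, for \(1<p<\infty\),

\[d_{p}^{\widehat{\mathcal{B}}}(\gamma_{z}\otimes x)\leq\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}|g^{\prime}(z)|\right)\|x\|=\frac{\|x\|}{1-|z|^{2}}=\|\gamma_{z}\|\,\|x\|,\]

which is precisely the estimate used in the proof of Theorem 2.6 to verify condition (1) of Definition 2.5.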
Motivated by the analogue concept on the tensor product space (see [14, p. 127]), we introduce the following.
**Definition 2.5**.: Let \(X\) be a complex Banach space. A norm \(\alpha\) on \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) is said to be a _Bloch reasonable crossnorm_ if it has the following properties:
1. \(\alpha(\gamma_{z}\otimes x)\leq\left\|\gamma_{z}\right\|\left\|x\right\|\) for all \(z\in\mathbb{D}\) and \(x\in X\),
2. For every \(g\in\widehat{\mathcal{B}}(\mathbb{D})\) and \(x^{*}\in X^{*}\), the linear functional \(g\otimes x^{*}\colon\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\to\mathbb{C}\) defined by \((g\otimes x^{*})(\gamma_{z}\otimes x)=g^{\prime}(z)x^{*}(x)\) is bounded with \(\left\|g\otimes x^{*}\right\|\leq p_{\mathcal{B}}(g)\left\|x^{*}\right\|\).
**Theorem 2.6**.: \(d_{p}^{\widetilde{\mathcal{B}}}\) _is a Bloch reasonable crossnorm on \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) for any \(1\leq p\leq\infty\)._
Proof.: We will only prove it for \(1<p<\infty\). The other cases follow similarly.
Let \(\gamma\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) and let \(\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\) be a representation of \(\gamma\). Clearly, \(d_{p}^{\widetilde{\mathcal{B}}}(\gamma)\geq 0\). Given \(\lambda\in\mathbb{C}\), since \(\sum_{i=1}^{n}(\lambda\lambda_{i})\gamma_{z_{i}}\otimes x_{i}\) is a representation of \(\lambda\gamma\), we have
\[d_{p}^{\widehat{\mathcal{B}}}(\lambda\gamma)\leq\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda\lambda_{i}|^{p^{*}}\left|g^{\prime}(z_{i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)\left(\sum_{i=1}^{n}\|x_{i}\|^{p}\right)^{\frac{1}{p}}=|\lambda|\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p^{*}}\left|g^{\prime}(z_{i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)\left(\sum_{i=1}^{n}\|x_{i}\|^{p}\right)^{\frac{1}{p}}.\]
If \(\lambda=0\), we obtain \(d_{p}^{\widetilde{\mathcal{B}}}(\lambda\gamma)=0=|\lambda|\,d_{p}^{\widetilde {\mathcal{B}}}(\gamma)\). For \(\lambda\neq 0\), since the preceding inequality holds for every representation of \(\gamma\), we deduce that \(d_{p}^{\widetilde{\mathcal{B}}}(\lambda\gamma)\leq|\lambda|\,d_{p}^{\widetilde {\mathcal{B}}}(\gamma)\). For the converse inequality, note that \(d_{p}^{\widetilde{\mathcal{B}}}(\gamma)=d_{p}^{\widetilde{\mathcal{B}}}( \lambda^{-1}(\lambda\gamma))\leq|\lambda^{-1}|d_{p}^{\widetilde{\mathcal{B}}} (\lambda\gamma)\) by using the proved inequality, thus \(|\lambda|\,d_{p}^{\widetilde{\mathcal{B}}}(\gamma)\leq d_{p}^{\widetilde{ \mathcal{B}}}(\lambda\gamma)\) and hence \(d_{p}^{\widetilde{\mathcal{B}}}(\lambda\gamma)=|\lambda|\,d_{p}^{\widetilde {\mathcal{B}}}(\gamma)\).
We now prove the triangle inequality for \(d_{p}^{\widehat{\mathcal{B}}}\). Let \(\gamma_{1},\gamma_{2}\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) and let \(\varepsilon>0\). If \(\gamma_{1}=0\) or \(\gamma_{2}=0\), there is nothing to prove. Assume \(\gamma_{1}\neq 0\neq\gamma_{2}\). We can choose representations
\[\gamma_{1}=\sum_{i=1}^{n}\lambda_{1,i}\gamma_{z_{1,i}}\otimes x_{1,i},\qquad \gamma_{2}=\sum_{i=1}^{m}\lambda_{2,i}\gamma_{z_{2,i}}\otimes x_{2,i},\]
so that
\[\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}\left|\lambda_{1,i}\right|^{p^{*}}\left|g^{\prime}(z_{1,i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)\left(\sum_{i=1}^{n}\left\|x_{1,i}\right\|^{p}\right)^{\frac{1}{p}}\leq d_{p}^{\widehat{\mathcal{B}}}(\gamma_{1})+\varepsilon\]
and
\[\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{m}\left|\lambda_{2,i}\right|^{p^{*}}\left|g^{\prime}(z_{2,i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)\left(\sum_{i=1}^{m}\left\|x_{2,i}\right\|^{p}\right)^{\frac{1}{p}}\leq d_{p}^{\widehat{\mathcal{B}}}(\gamma_{2})+\varepsilon.\]
Fix arbitrary \(r,s\in\mathbb{R}^{+}\) and define
\[\lambda_{3,i}\gamma_{z_{3,i}} =\left\{\begin{array}{ll}r^{-1}\lambda_{1,i}\gamma_{z_{1,i}}& \text{if }i=1,\ldots,n,\\ s^{-1}\lambda_{2,i-n}\gamma_{z_{2,i-n}}&\text{if }i=n+1,\ldots,n+m,\end{array}\right.\] \[x_{3,i} =\left\{\begin{array}{ll}rx_{1,i}&\text{if }i=1,\ldots,n,\\ sx_{2,i-n}&\text{if }i=n+1,\ldots,n+m.\end{array}\right.\]
It is clear that \(\gamma_{1}+\gamma_{2}=\sum_{i=1}^{n+m}\lambda_{3,i}\gamma_{z_{3,i}}\otimes x_{3,i}\) and thus we have
\[d_{p}^{\widehat{\mathcal{B}}}(\gamma_{1}+\gamma_{2})\leq\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n+m}\left|\lambda_{3,i}\right|^{p^{*}}\left|g^{\prime}(z_{3,i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)\left(\sum_{i=1}^{n+m}\left\|x_{3,i}\right\|^{p}\right)^{\frac{1}{p}}.\]
An easy verification gives
\[\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n+m}\left|\lambda_{3,i}\right|^{p^{*}}\left|g^{\prime}(z_{3,i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)^{p^{*}}\leq\left(r^{-1}\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}\left|\lambda_{1,i}\right|^{p^{*}}\left|g^{\prime}(z_{1,i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)^{p^{*}}+\left(s^{-1}\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{m}\left|\lambda_{2,i}\right|^{p^{*}}\left|g^{\prime}(z_{2,i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)^{p^{*}}\]
and
\[\sum_{i=1}^{n+m}\left\|x_{3,i}\right\|^{p}=r^{p}\sum_{i=1}^{n}\left\|x_{1,i} \right\|^{p}+s^{p}\sum_{i=1}^{m}\left\|x_{2,i}\right\|^{p}.\]
Using Young's Inequality, it follows that
\[d_{p}^{\widehat{\mathcal{B}}}(\gamma_{1}+\gamma_{2})\leq\frac{1}{p^{*}}\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n+m}\left|\lambda_{3,i}\right|^{p^{*}}\left|g^{\prime}(z_{3,i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)^{p^{*}}+\frac{1}{p}\sum_{i=1}^{n+m}\left\|x_{3,i}\right\|^{p}\leq\frac{r^{-p^{*}}}{p^{*}}\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}\left|\lambda_{1,i}\right|^{p^{*}}\left|g^{\prime}(z_{1,i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)^{p^{*}}+\frac{r^{p}}{p}\sum_{i=1}^{n}\left\|x_{1,i}\right\|^{p}+\frac{s^{-p^{*}}}{p^{*}}\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{m}\left|\lambda_{2,i}\right|^{p^{*}}\left|g^{\prime}(z_{2,i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right)^{p^{*}}+\frac{s^{p}}{p}\sum_{i=1}^{m}\left\|x_{2,i}\right\|^{p}.\]
Since \(r,s\) were arbitrary in \(\mathbb{R}^{+}\), taking above
\[r=(d_{p}^{\widehat{\mathcal{B}}}(\gamma_{1})+\varepsilon)^{-\frac{1}{p^{*}}}\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}\left|\lambda_{1,i}\right|^{p^{*}}\left|g^{\prime}(z_{1,i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right),\qquad s=(d_{p}^{\widehat{\mathcal{B}}}(\gamma_{2})+\varepsilon)^{-\frac{1}{p^{*}}}\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{m}\left|\lambda_{2,i}\right|^{p^{*}}\left|g^{\prime}(z_{2,i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\right),\]
we obtain that \(d_{p}^{\widehat{\mathcal{B}}}(\gamma_{1}+\gamma_{2})\leq d_{p}^{\widehat{ \mathcal{B}}}(\gamma_{1})+d_{p}^{\widehat{\mathcal{B}}}(\gamma_{2})+2\varepsilon\), and thus \(d_{p}^{\widehat{\mathcal{B}}}(\gamma_{1}+\gamma_{2})\leq d_{p}^{\widehat{ \mathcal{B}}}(\gamma_{1})+d_{p}^{\widehat{\mathcal{B}}}(\gamma_{2})\) by the arbitrariness of \(\varepsilon\). Hence \(d_{p}^{\widehat{\mathcal{B}}}\) is a seminorm. To prove that it is a norm, note first that
\[\left|\sum_{i=1}^{n}\lambda_{i}h^{\prime}(z_{i})x^{*}(x_{i})\right|\leq\sum_{i=1}^{n}\left|\lambda_{i}\right|\left|h^{\prime}(z_{i})\right|\left\|x_{i}\right\|\leq\left(\sum_{i=1}^{n}\left|\lambda_{i}\right|^{p^{*}}\left|h^{\prime}(z_{i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\left(\sum_{i=1}^{n}\left\|x_{i}\right\|^{p}\right)^{\frac{1}{p}}\leq\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}\left|\lambda_{i}\right|^{p^{*}}\left|g^{\prime}(z_{i})\right|^{p^{*}}\right)^{\frac{1}{p^{*}}}\left(\sum_{i=1}^{n}\left\|x_{i}\right\|^{p}\right)^{\frac{1}{p}},\]
for any \(h\in B_{\widehat{\mathcal{B}}(\mathbb{D})}\) and \(x^{*}\in B_{X^{*}}\), by applying Hölder's inequality. Since \(\left|\sum_{i=1}^{n}\lambda_{i}h^{\prime}(z_{i})x^{*}(x_{i})\right|\) does not depend on the representation of \(\gamma\) because
\[\sum_{i=1}^{n}\lambda_{i}h^{\prime}(z_{i})x^{*}(x_{i})=\left(\sum_{i=1}^{n} \lambda_{i}\gamma_{z_{i}}\otimes x_{i}\right)(h\cdot x^{*})=\gamma(h\cdot x^{*}),\]
taking the infimum over all representations of \(\gamma\) we deduce that
\[\left|\sum_{i=1}^{n}\lambda_{i}h^{\prime}(z_{i})x^{*}(x_{i})\right|\leq d^{ \widehat{\mathcal{B}}}_{p}(\gamma)\]
for any \(h\in B_{\widehat{\mathcal{B}}(\mathbb{D})}\) and \(x^{*}\in B_{X^{*}}\). Now, if \(d^{\widehat{\mathcal{B}}}_{p}(\gamma)=0\), the preceding inequality yields
\[\left(\sum_{i=1}^{n}\lambda_{i}x^{*}(x_{i})\gamma_{z_{i}}\right)(h)=\sum_{i=1 }^{n}\lambda_{i}x^{*}(x_{i})h^{\prime}(z_{i})=0\]
for all \(h\in B_{\widehat{\mathcal{B}}(\mathbb{D})}\) and \(x^{*}\in B_{X^{*}}\). For each \(x^{*}\in B_{X^{*}}\), this implies that \(\sum_{i=1}^{n}\lambda_{i}x^{*}(x_{i})\gamma_{z_{i}}=0\), and since \(\Gamma(\mathbb{D})\) is a linearly independent subset of \(\mathcal{G}(\mathbb{D})\) by [9, Remark 2.8], it follows that \(\lambda_{i}x^{*}(x_{i})=0\) for all \(i\in\{1,\ldots,n\}\) and every \(x^{*}\in B_{X^{*}}\). Since \(B_{X^{*}}\) separates the points of \(X\), this gives \(\lambda_{i}x_{i}=0\) for all \(i\in\{1,\ldots,n\}\), and thus \(\gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}=0\).
Finally, we will show that \(d^{\widehat{\mathcal{B}}}_{p}\) is a Bloch reasonable crossnorm on \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\). Firstly, given \(z\in\mathbb{D}\) and \(x\in X\), we have
\[d^{\widehat{\mathcal{B}}}_{p}(\gamma_{z}\otimes x)\leq\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}|g^{\prime}(z)|^{p^{*}}\right)^{\frac{1}{p^{*}}}\left\|x\right\|\leq\frac{\left\|x\right\|}{1-\left|z\right|^{2}}=\left\|\gamma_{z}\right\|\left\|x\right\|.\]
Secondly, given \(g\in\widehat{\mathcal{B}}(\mathbb{D})\) and \(x^{*}\in X^{*}\), we have
\[|(g\otimes x^{*})(\gamma)| =\left|\sum_{i=1}^{n}\lambda_{i}(g\otimes x^{*})(\gamma_{z_{i}} \otimes x_{i})\right|=\left|\sum_{i=1}^{n}\lambda_{i}g^{\prime}(z_{i})x^{*}(x _{i})\right|\] \[\leq\sum_{i=1}^{n}\left|\lambda_{i}\right|\left|g^{\prime}(z_{i} )\right|\left|x^{*}(x_{i})\right|\leq p_{\mathcal{B}}(g)\left\|x^{*}\right\| \sum_{i=1}^{n}\frac{\left|\lambda_{i}\right|}{1-\left|z_{i}\right|^{2}}\left\| x_{i}\right\|.\]
Taking infimum over all the representations of \(\gamma\), we deduce that \(|(g\otimes x^{*})(\gamma)|\leq p_{\mathcal{B}}(g)\left\|x^{*}\right\|\pi(\gamma)\) and thus \(|(g\otimes x^{*})(\gamma)|\leq p_{\mathcal{B}}(g)\left\|x^{*}\right\|\left\| \gamma\right\|\) by Proposition 2.4. Hence \(g\otimes x^{*}\in(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X)^{*}\) with \(\left\|g\otimes x^{*}\right\|\leq p_{\mathcal{B}}(g)\left\|x^{*}\right\|\).
The next result shows that \(d^{\widehat{\mathcal{B}}}_{p}\) can be computed using a simpler formula in the cases \(p=1\) and \(p=\infty\). In fact, the 1-Chevet-Saphar Bloch norm is just the projective norm.
**Proposition 2.7**.: _For \(\gamma\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\), we have_
\[d^{\widehat{\mathcal{B}}}_{1}(\gamma)=\inf\left\{\sum_{i=1}^{n}\frac{\left| \lambda_{i}\right|}{1-\left|z_{i}\right|^{2}}\left\|x_{i}\right\|\right\}\]
_and_
\[d^{\widehat{\mathcal{B}}}_{\infty}(\gamma)=\inf\left\{\sup_{g\in B_{\widehat{ \mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}\left|\lambda_{i}\right|\left| g^{\prime}(z_{i})\right|\left\|x_{i}\right\|\right)\right\},\]
_where the infimum is taken over all representations of \(\gamma\) of the form \(\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\)._
Proof.: Let \(\gamma\in\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) and let \(\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\) be a representation of \(\gamma\). We have
\[\pi(\gamma) \leq\sum_{i=1}^{n}\frac{|\lambda_{i}|}{1-|z_{i}|^{2}}\left\|x_{i} \right\|=\sum_{i=1}^{n}|\lambda_{i}|\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}|g^{\prime}(z_{i})|\right)\left\|x_{i} \right\|\] \[\leq\sum_{i=1}^{n}\max_{1\leq j\leq n}\left(|\lambda_{j}|\sup_{g \in B_{\widehat{\mathcal{B}}(\mathbb{D})}}|g^{\prime}(z_{j})|\right)\left\|x _{i}\right\|=\left(\max_{1\leq j\leq n}\left(|\lambda_{j}|\sup_{g\in B_{ \widehat{\mathcal{B}}(\mathbb{D})}}|g^{\prime}(z_{j})|\right)\right)\sum_{i= 1}^{n}\left\|x_{i}\right\|\] \[=\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\max_{1 \leq i\leq n}\left(|\lambda_{i}|\left|g^{\prime}(z_{i})\right|\right)\right) \sum_{i=1}^{n}\left\|x_{i}\right\|\]
and therefore \(\pi(\gamma)\leq d_{1}^{\widehat{\mathcal{B}}}(\gamma)\). Conversely, since \(d_{1}^{\widehat{\mathcal{B}}}\) is a Bloch reasonable crossnorm, we have
\[d_{1}^{\widehat{\mathcal{B}}}(\gamma)\leq\sum_{i=1}^{n}|\lambda_{i}|\,d_{1}^ {\widehat{\mathcal{B}}}(\gamma_{z_{i}}\otimes x_{i})=\sum_{i=1}^{n}|\lambda_{ i}|\left\|\gamma_{z_{i}}\right\|\left\|x_{i}\right\|=\sum_{i=1}^{n}\frac{| \lambda_{i}|}{1-|z_{i}|^{2}}\left\|x_{i}\right\|,\]
and thus \(d_{1}^{\widehat{\mathcal{B}}}(\gamma)\leq\pi(\gamma)\).
On the other hand, we have
\[\inf\left\{\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}} \left(\sum_{i=1}^{n}|\lambda_{i}|\left|g^{\prime}(z_{i})\right|\left\|x_{i}\right\| \right):\,\gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\right\} \leq\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum _{i=1}^{n}|\lambda_{i}|\left|g^{\prime}(z_{i})\right|\left\|x_{i}\right\|\right)\] \[\leq\left(\max_{1\leq i\leq n}\left\|x_{i}\right\|\right)\sup_{g \in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}| \left|g^{\prime}(z_{i})\right|\right),\]
and taking the infimum over all representations of \(\gamma\) gives
\[\inf\left\{\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1} ^{n}|\lambda_{i}|\left|g^{\prime}(z_{i})\right|\left\|x_{i}\right\|\right):\, \gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\right\}\leq d_{ \infty}^{\widehat{\mathcal{B}}}(\gamma).\]
Conversely, we can assume without loss of generality that \(x_{i}\neq 0\) for all \(i\in\{1,\ldots,n\}\) and since \(\gamma=\sum_{i=1}^{n}\lambda_{i}\left\|x_{i}\right\|\gamma_{z_{i}}\otimes(x_{ i}/\left\|x_{i}\right\|)\), we obtain
\[d_{\infty}^{\widehat{\mathcal{B}}}(\gamma)\leq\sup_{g\in B_{\widehat{ \mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|\left\|x_{i}\right\| \left|g^{\prime}(z_{i})\right|\right),\]
and taking the infimum over all representations of \(\gamma\), we conclude that
\[d_{\infty}^{\widehat{\mathcal{B}}}(\gamma)\leq\inf\left\{\sup_{g\in B_{ \widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|\left|g^{ \prime}(z_{i})\right|\left\|x_{i}\right\|\right):\,\gamma=\sum_{i=1}^{n} \lambda_{i}\gamma_{z_{i}}\otimes x_{i}\right\}.\]
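As a quick illustration we add here (it is not part of the original proof), both formulas of Proposition 2.7 agree on an elementary tensor: since the two defining estimates of a Bloch reasonable crossnorm force the value \(\left\|\gamma_{z}\right\|\left\|x\right\|\) on \(\gamma_{z}\otimes x\), we have
\[d_{1}^{\widehat{\mathcal{B}}}(\gamma_{z}\otimes x)=d_{\infty}^{\widehat{\mathcal{B}}}(\gamma_{z}\otimes x)=\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}|g^{\prime}(z)|\right)\left\|x\right\|=\frac{\left\|x\right\|}{1-|z|^{2}}.\]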
### Duality
Given \(p\in[1,\infty]\), we will show that the dual of the space \(\mathcal{G}(\mathbb{D})\widehat{\otimes}_{d_{p^{*}}^{\widehat{\mathcal{B}}}}X\) can be canonically identified with the space of \(p\)-summing Bloch mappings from \(\mathbb{D}\) to \(X^{*}\).
**Theorem 2.8**.: _Let \(1\leq p\leq\infty\). Then \(\Pi_{p}^{\widetilde{\mathbb{B}}}(\mathbb{D},X^{*})\) is isometrically isomorphic to \((\mathcal{G}(\mathbb{D})\widehat{\otimes}_{d_{p^{*}}^{\widetilde{\mathbb{B}}}} X)^{*}\), via the mapping \(\Lambda\colon\Pi_{p}^{\widetilde{\mathbb{B}}}(\mathbb{D},X^{*})\to(\mathcal{G}( \mathbb{D})\widehat{\otimes}_{d_{p^{*}}^{\widetilde{\mathbb{B}}}}X)^{*}\) defined by_
\[\Lambda(f)(\gamma)=\sum_{i=1}^{n}\lambda_{i}\left\langle f^{\prime}(z_{i}),x_{i}\right\rangle\]
_for \(f\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*})\) and \(\gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\in\operatorname{lin}(\Gamma( \mathbb{D}))\otimes X\). Furthermore, its inverse is given by_
\[\left\langle\Lambda^{-1}(\varphi)(z),x\right\rangle=\left\langle\varphi,\gamma_ {z}\otimes x\right\rangle\]
_for \(\varphi\in(\mathcal{G}(\mathbb{D})\widehat{\otimes}_{d_{p^{*}}^{\widehat{ \mathcal{B}}}}X)^{*}\), \(z\in\mathbb{D}\) and \(x\in X\)._
_Moreover, on the unit ball of \(\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*})\) the weak* topology coincides with the topology of pointwise \(\sigma(X^{*},X)\)-convergence._
Proof.: We prove it for \(1<p<\infty\). The cases \(p=1\) and \(p=\infty\) follow similarly.
Let \(f\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*})\) and let \(\Lambda_{0}(f)\colon\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\to\mathbb{C}\) be its associated linear functional given by
\[\Lambda_{0}(f)(\gamma)=\sum_{i=1}^{n}\lambda_{i}\left\langle f^{\prime}(z_{i} ),x_{i}\right\rangle\]
for \(\gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\in\operatorname{lin}(\Gamma( \mathbb{D}))\otimes X\). Note that \(\Lambda_{0}(f)\in(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes_{d_{p^{*}}^{\widehat{ \mathcal{B}}}}X)^{*}\) with \(\|\Lambda_{0}(f)\|\leq\pi_{p}^{\mathcal{B}}(f)\) since
\[|\Lambda_{0}(f)(\gamma)| =\left|\sum_{i=1}^{n}\lambda_{i}\left\langle f^{\prime}(z_{i}),x _{i}\right\rangle\right|\leq\sum_{i=1}^{n}|\lambda_{i}|\|f^{\prime}(z_{i})\| \|x_{i}\|\] \[\leq\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|f^{\prime}(z_{i}) \right\|^{p}\right)^{\frac{1}{p}}\left(\sum_{i=1}^{n}\|x_{i}\|^{p^{*}}\right)^ {\frac{1}{p^{*}}}\] \[\leq\pi_{p}^{\mathcal{B}}(f)\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|g^{\prime}(z_{i}) \right|^{p}\right)^{\frac{1}{p}}\left(\sum_{i=1}^{n}\|x_{i}\|^{p^{*}}\right)^ {\frac{1}{p^{*}}},\]
and taking the infimum over all the representations of \(\gamma\), we deduce that \(|\Lambda_{0}(f)(\gamma)|\leq\pi_{p}^{\mathcal{B}}(f)d_{p^{*}}^{\widehat{ \mathcal{B}}}(\gamma)\). Since \(\gamma\) was arbitrary, \(\Lambda_{0}(f)\) is continuous on \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes_{d_{p^{*}}^{\widehat{\mathcal{B}}}}X\) with \(\|\Lambda_{0}(f)\|\leq\pi_{p}^{\mathcal{B}}(f)\).
Since \(\operatorname{lin}(\Gamma(\mathbb{D}))\) is a norm-dense linear subspace of \(\mathcal{G}(\mathbb{D})\) and \(d_{p^{*}}^{\widehat{\mathcal{B}}}\) is a norm on \(\mathcal{G}(\mathbb{D})\otimes X\), \(\operatorname{lin}(\Gamma(\mathbb{D}))\otimes X\) is a dense linear subspace of \(\mathcal{G}(\mathbb{D})\otimes_{d_{p^{*}}^{\widehat{\mathcal{B}}}}X\) and therefore also of its completion \(\mathcal{G}(\mathbb{D})\widehat{\otimes}_{d_{p^{*}}^{\widehat{\mathcal{B}}}}X\). Hence there is a unique continuous mapping \(\Lambda(f)\) from \(\mathcal{G}(\mathbb{D})\widehat{\otimes}_{d_{p^{*}}^{\widehat{\mathcal{B}}}}X\) to \(\mathbb{C}\) that extends \(\Lambda_{0}(f)\). Further, \(\Lambda(f)\) is linear and \(\|\Lambda(f)\|=\|\Lambda_{0}(f)\|\).
Let \(\Lambda\colon\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*})\to(\mathcal{G}( \mathbb{D})\widehat{\otimes}_{d_{p^{*}}^{\widehat{\mathcal{B}}}}X)^{*}\) be the map so defined. Since \(\Lambda_{0}\) is a linear monomorphism from \(\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*})\) to \((\mathcal{G}(\mathbb{D})\otimes X)^{*}\) by Corollary 2.3, it follows easily that so is \(\Lambda\). To prove that \(\Lambda\) is a surjective isometry, let \(\varphi\in(\mathcal{G}(\mathbb{D})\widehat{\otimes}_{d_{p^{*}}^{\widehat{ \mathcal{B}}}}X)^{*}\) and define \(F_{\varphi}\colon\mathbb{D}\to X^{*}\) by
\[\left\langle F_{\varphi}(z),x\right\rangle=\varphi(\gamma_{z}\otimes x)\qquad \left(z\in\mathbb{D},\,\,x\in X\right).\]
As in the proof of Proposition 2.4, one proves that \(F_{\varphi}\in\mathcal{H}(\mathbb{D},X^{*})\) and that there exists a mapping \(f_{\varphi}\in\widehat{\mathcal{B}}(\mathbb{D},X^{*})\) with \(p_{\mathcal{B}}(f_{\varphi})\leq\|\varphi\|\) such that \(f_{\varphi}^{\prime}=F_{\varphi}\).
We now prove that \(f_{\varphi}\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*})\). Fix \(n\in\mathbb{N}\), \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{C}\) and \(z_{1},\ldots,z_{n}\in\mathbb{D}\). Let \(\varepsilon>0\). For each \(i\in\{1,\ldots,n\}\), there exists \(x_{i}\in X\) with \(\|x_{i}\|\leq 1+\varepsilon\) such that \(\left\langle f_{\varphi}^{\prime}(z_{i}),x_{i}\right\rangle=\left\|f_{\varphi}^{ \prime}(z_{i})\right\|\). It is elementary that the map \(T\colon\mathbb{C}^{n}\to\mathbb{C}\), defined by
\[T(t_{1},\ldots,t_{n})=\sum_{i=1}^{n}t_{i}\lambda_{i}\left\|f_{\varphi}^{\prime}(z _{i})\right\|,\quad\forall(t_{1},\ldots,t_{n})\in\mathbb{C}^{n},\]
is linear and continuous on \((\mathbb{C}^{n},\|\cdot\|_{p^{*}})\) with \(\|T\|=\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|f_{\varphi}^{\prime}(z_{i}) \right\|^{p}\right)^{\frac{1}{p}}\). For any \((t_{1},\ldots,t_{n})\in\mathbb{C}^{n}\) with \(\|(t_{1},\ldots,t_{n})\|_{p^{*}}\leq 1\), we have
\[|T(t_{1},\ldots,t_{n})| =\left|\varphi\left(\sum_{i=1}^{n}t_{i}\lambda_{i}\gamma_{z_{i}} \otimes x_{i}\right)\right|\leq\|\varphi\|d_{p^{*}}^{\widehat{\mathcal{B}}} \left(\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes t_{i}x_{i}\right)\] \[\leq\|\varphi\|\left(\sup_{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|g^{\prime}(z_{i}) \right|^{p}\right)^{\frac{1}{p}}\right)\left(\sum_{i=1}^{n}\|t_{i}x_{i}\|^{p^ {*}}\right)^{\frac{1}{p^{*}}}\] \[\leq(1+\varepsilon)\left\|\varphi\right\|\sup_{g\in B_{ \widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p} \left|g^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}},\]
therefore
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|f_{\varphi}^{\prime}(z_{i}) \right\|^{p}\right)^{\frac{1}{p}}\leq(1+\varepsilon)\left\|\varphi\right\|\sup _{g\in B_{\widehat{\mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_ {i}|^{p}\left|g^{\prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}},\]
and since \(\varepsilon\) was arbitrary, we have
\[\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left\|f_{\varphi}^{\prime}(z_{i}) \right\|^{p}\right)^{\frac{1}{p}}\leq\|\varphi\|\sup_{g\in B_{\widehat{ \mathcal{B}}(\mathbb{D})}}\left(\sum_{i=1}^{n}|\lambda_{i}|^{p}\left|g^{ \prime}(z_{i})\right|^{p}\right)^{\frac{1}{p}},\]
and we conclude that \(f_{\varphi}\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*})\) with \(\pi_{p}^{\mathcal{B}}(f_{\varphi})\leq\|\varphi\|\).
Finally, for any \(\gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\in\operatorname{lin}(\Gamma( \mathbb{D}))\otimes X\), we get
\[\Lambda(f_{\varphi})(\gamma)=\sum_{i=1}^{n}\lambda_{i}\left\langle f_{\varphi }^{\prime}(z_{i}),x_{i}\right\rangle=\sum_{i=1}^{n}\lambda_{i}\varphi(\gamma_ {z_{i}}\otimes x_{i})=\varphi\left(\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}} \otimes x_{i}\right)=\varphi(\gamma).\]
Hence \(\Lambda(f_{\varphi})=\varphi\) on a dense subspace of \(\mathcal{G}(\mathbb{D})\widehat{\otimes}_{d_{p^{*}}^{\widehat{\mathcal{B}}}}X\) and, consequently, \(\Lambda(f_{\varphi})=\varphi\), which shows that \(\Lambda\) is surjective and establishes the stated formula for its inverse. Moreover, \(\pi_{p}^{\mathcal{B}}(f_{\varphi})\leq\|\varphi\|=\left\|\Lambda(f_{\varphi})\right\|\).
For the last assertion of the statement, let \((f_{i})_{i\in I}\) be a net in \(\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*})\) and \(f\in\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*})\). Assume that \((f_{i})_{i\in I}\to f\) weak* in \(\Pi_{p}^{\widehat{\mathcal{B}}}(\mathbb{D},X^{*})\); this means that \((\Lambda(f_{i}))_{i\in I}\to\Lambda(f)\) weak* in \((\mathcal{G}(\mathbb{D})\widehat{\otimes}_{d_{p^{*}}^{\widehat{\mathcal{B}}}}X)^{*}\), that is, \((\Lambda(f_{i})(\gamma))_{i\in I}\to\Lambda(f)(\gamma)\) for all \(\gamma\in\mathcal{G}(\mathbb{D})\widehat{\otimes}_{d_{p^{*}}^{\widehat{\mathcal{B}}}}X\). In particular,
\[(\langle f_{i}^{\prime}(z),x\rangle)_{i\in I}=(\Lambda(f_{i})(\gamma_{z}\otimes x))_{i\in I }\to\Lambda(f)(\gamma_{z}\otimes x)=\langle f^{\prime}(z),x\rangle\]
for every \(z\in\mathbb{D}\) and \(x\in X\). Given \(z\in\mathbb{D}\) and \(x\in X\), we have
\[|\langle f_{i}(z)-f(z),x\rangle| =\left|\int_{[0,z]}\left\langle f_{i}^{\prime}(w)-f^{\prime}(w),x \right\rangle\ dw\right|\] \[\leq|z|\max\left\{\left|\langle f_{i}^{\prime}(w)-f^{\prime}(w),x \rangle\right|:\,w\in[0,z]\right\}\] \[=|z|\left|\left\langle f_{i}^{\prime}(w_{z})-f^{\prime}(w_{z}),x\right\rangle\right|\]
for all \(i\in I\) and some \(w_{z}\in[0,z]\), and thus \((\langle f_{i}(z),x\rangle)_{i\in I}\to\langle f(z),x\rangle\). This tells us that \((f_{i})_{i\in I}\) converges to \(f\) in the topology of pointwise \(\sigma(X^{*},X)\)-convergence. Hence the identity on \(\Pi_{p}^{\widetilde{\mathcal{B}}}(\mathbb{D},X^{*})\) is a continuous bijection from the weak* topology to the topology of pointwise \(\sigma(X^{*},X)\)-convergence. On the unit ball, the first topology is compact and the second one is Hausdorff, and so they must coincide.
In particular, in view of Theorem 2.8 and taking into account Propositions 1.1, 2.4 and 2.7, we can identify the space \(\widehat{\mathcal{B}}(\mathbb{D},X^{*})\) with the dual space of \(\mathcal{G}(\mathbb{D})\widehat{\otimes}X\subseteq\widehat{\mathcal{B}}( \mathbb{D},X^{*})^{*}\).
**Corollary 2.9**.: \(\widehat{\mathcal{B}}(\mathbb{D},X^{*})\) _is isometrically isomorphic to \((\mathcal{G}(\mathbb{D})\widehat{\otimes}X)^{*}\), via \(\Lambda\colon\widehat{\mathcal{B}}(\mathbb{D},X^{*})\to(\mathcal{G}(\mathbb{ D})\widehat{\otimes}X)^{*}\) given by_
\[\Lambda(f)(\gamma)=\sum_{i=1}^{n}\lambda_{i}\left\langle f^{\prime}(z_{i}),x_ {i}\right\rangle\]
_for \(f\in\widehat{\mathcal{B}}(\mathbb{D},X^{*})\) and \(\gamma=\sum_{i=1}^{n}\lambda_{i}\gamma_{z_{i}}\otimes x_{i}\in\mathcal{G}( \mathbb{D})\otimes X\). Furthermore, its inverse is given by_
\[\left\langle\Lambda^{-1}(\varphi)(z),x\right\rangle=\left\langle\varphi, \gamma_{z}\otimes x\right\rangle\]
_for \(\varphi\in(\mathcal{G}(\mathbb{D})\widehat{\otimes}X)^{*}\), \(z\in\mathbb{D}\) and \(x\in X\). _
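As an illustrative special case (a remark we add here, not used later), taking \(X=\mathbb{C}\) in Corollary 2.9 gives \(\mathcal{G}(\mathbb{D})\widehat{\otimes}\mathbb{C}=\mathcal{G}(\mathbb{D})\), so the corollary recovers the identification of \(\widehat{\mathcal{B}}(\mathbb{D})\) with the dual of \(\mathcal{G}(\mathbb{D})\).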
We conclude this paper with some open questions we hope researchers will take up. In Theorem 1.6, note that if \(f\in\Pi_{2}^{\widehat{\mathbb{B}}}(\mathbb{D},X)\), then
\[\iota_{X}\circ f^{\prime}=T\circ I_{\infty,2}\circ h^{\prime}\colon\mathbb{D} \overset{h^{\prime}}{\to}L_{\infty}(\mu)\overset{I_{\infty,2}}{\to}L_{2}(\mu) \overset{T}{\to}\ell_{\infty}(B_{X^{*}}).\]
Hence \(\iota_{X}\circ f^{\prime}\) factors in this way through the Hilbert space \(L_{2}(\mu)\). It would be interesting to introduce and study the class of Bloch mappings whose derivatives factor through a Hilbert space.
Motivated by the seminal paper of Farmer and Johnson [7] that raised a similar question in the setting of Lipschitz \(p\)-summing mappings, what results about \(p\)-summing linear operators have analogues for \(p\)-summing Bloch mappings?
|
2307.13007 | Sparse-firing regularization methods for spiking neural networks with
time-to-first spike coding | The training of multilayer spiking neural networks (SNNs) using the error
backpropagation algorithm has made significant progress in recent years. Among
the various training schemes, the error backpropagation method that directly
uses the firing time of neurons has attracted considerable attention because it
can realize ideal temporal coding. This method uses time-to-first spike (TTFS)
coding, in which each neuron fires at most once, and this restriction on the
number of firings enables information to be processed at a very low firing
frequency. This low firing frequency increases the energy efficiency of
information processing in SNNs, which is important not only because of its
similarity with information processing in the brain, but also from an
engineering point of view. However, only an upper limit has been provided for
TTFS-coded SNNs, and the information-processing capability of SNNs at lower
firing frequencies has not been fully investigated. In this paper, we propose
two spike timing-based sparse-firing (SSR) regularization methods to further
reduce the firing frequency of TTFS-coded SNNs. The first is the membrane
potential-aware SSR (M-SSR) method, which has been derived as an extreme form
of the loss function of the membrane potential value. The second is the firing
condition-aware SSR (F-SSR) method, which is a regularization function obtained
from the firing conditions. Both methods are characterized by the fact that
they only require information about the firing timing and associated weights.
The effects of these regularization methods were investigated on the MNIST,
Fashion-MNIST, and CIFAR-10 datasets using multilayer perceptron networks and
convolutional neural network structures. | Yusuke Sakemi, Kakei Yamamoto, Takeo Hosomi, Kazuyuki Aihara | 2023-07-24T11:55:49Z | http://arxiv.org/abs/2307.13007v1 | # Sparse-firing regularization methods for spiking neural networks with time-to-first spike coding
###### Abstract
The training of multilayer spiking neural networks (SNNs) using the error backpropagation algorithm has made significant progress in recent years. Among the various training schemes, the error backpropagation method that directly uses the firing time of neurons has attracted considerable attention because it can realize ideal temporal coding. This method uses time-to-first spike (TTFS) coding, in which each neuron fires at most once, and this restriction on the number of firings enables information to be processed at a very low firing frequency. This low firing frequency increases the energy efficiency of information processing in SNNs, which is important not only because of its similarity with information processing in the brain, but also from an engineering point of view. However, only an upper limit has been provided for TTFS-coded SNNs, and the information-processing capability of SNNs at lower firing frequencies has not been fully investigated. In this paper, we propose two spike timing-based sparse-firing (SSR) regularization methods to further reduce the firing frequency of TTFS-coded SNNs. The first is the membrane potential-aware SSR (M-SSR) method, which has been derived as an extreme form of the loss function of the membrane potential value. The second is the firing condition-aware SSR (F-SSR) method, which is a regularization function obtained from the firing conditions. Both methods are characterized by the fact that they only require information about the firing timing and associated weights. The effects of these regularization methods were investigated on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using multilayer perceptron networks and convolutional neural network structures.
## Introduction
Spiking neural networks (SNNs) can process information in the form of spikes in a manner similar to the way information is processed in the brain. SNNs are thereby expected to be able to achieve both high computational functionality and energy efficiency [1]. The spikes are represented as all-or-none binary values, and how information is represented by spikes is closely related to the information-processing mechanism in SNNs. The spike-based information representation methods are divided into two major categories, rate coding and temporal coding [2, 3]. In rate coding, information is contained in the average number of spikes generated by a neuron. In this case, the firing frequency can take approximately continuous values as a function of the input intensities; therefore, the resulting SNNs can be treated as differentiable models similar to an artificial neural network (ANN). Using rate coding, ANNs can be converted to SNNs, and the high learning ability of ANNs has been successfully transferred to SNNs [4, 5, 6]. However, when rate coding is used, information processing in the SNNs is just an approximation of that in ANNs. Furthermore, the precise approximation of an ANN requires many spikes, which reduces energy efficiency when implemented in neuromorphic hardware [7]. It has been experimentally shown that physiologically, neurons in certain brain regions or specific neuron types exhibit extremely sparse firing characteristics [8], and it is thought that temporal coding using not only the firing frequency but also the firing time is realized in at least some brain regions [9, 10, 11, 12]. |
2303.16488 | Schematic model for induced fission in a configuration-interaction
approach | We model fission at barrier-top energies in a simplified model space that
permits comparison of different components of the residual nucleon-nucleon
interaction. The model space is built on particle-hole excitations of reference
configurations. These are Slater determinants of uniformly spaced orbitals
characterized only by their quantum numbers and orbital energies. The residual
interaction in the Hamiltonian includes the diabatic interaction connecting
similar orbitals at different deformations, the pairing interaction between
like nucleons, and a schematic off-diagonal neutron-proton interaction. We find
that the fission reaction probability is sensitive to the off-diagonal
neutron-proton interaction much more than to the pairing and the diabatic
interactions. In particular, the transmission coefficients become insensitive
to the strength of the pairing interaction when the neutron-proton interaction
is large. We also find that the branching ratio is insensitive to the
final-state scission dynamics, as is assumed in the well-known Bohr-Wheeler
theory. | K. Uzawa, K. Hagino | 2023-03-29T06:47:26Z | http://arxiv.org/abs/2303.16488v2 | # Schematic model for induced fission in a configuration-interaction approach
###### Abstract
We model fission at barrier-top energies in a simplified model space that permits comparison of different components of the residual nucleon-nucleon interaction. The model space is built on particle-hole excitations of reference configurations. These are Slater determinants of uniformly spaced orbitals characterized only by their quantum numbers and orbital energies. The residual interaction in the Hamiltonian includes the diabatic interaction connecting similar orbitals at different deformations, the pairing interaction between like nucleons, and a schematic off-diagonal neutron-proton interaction. We find that the fission reaction probability is sensitive to the off-diagonal neutron-proton interaction much more than to the pairing and the diabatic interactions. In particular, the transmission coefficients become insensitive to the strength of the pairing interaction when the neutron-proton interaction is large. We also find that the branching ratio is insensitive to the final-state scission dynamics, as is assumed in the well-known Bohr-Wheeler theory.
## I Introduction
Nuclear fission was discovered about 80 years ago [1; 2]. Many phenomenological models have been proposed since then and have successfully explained the observed behaviors. A well-known model is that of Bohr and Wheeler [3], in which a statistical treatment is implemented under the transition-state hypothesis. In addition to this model, the statistical models based on the Hauser-Feshbach theory [4] as well as dynamical models based on a transport theory [5; 6; 7] have also played an important role [8]. In contrast, a microscopic understanding of induced fission is still far from complete. This has been regarded as one of the most challenging subjects in many-fermion quantum dynamics; in fact, in a recent review on future directions of fission theory [9], the authors omitted this topic "because there has been virtually no coherent microscopic theory addressing this question up to now."
In this paper, we apply the configuration-interaction (CI) approach [10; 11] to a schematic model in order to discuss the role of various types of nucleon-nucleon interaction. In this approach, many-particle-many-hole configurations at different nuclear deformations are coupled by residual interactions. Those many-body configurations are constructed in a constrained mean-field potential at each deformation. The configuration space includes particle-hole excitations of the reference configurations and thus greatly extends the space accessed by the collective coordinates defined in the usual generator coordinate method (GCM)[12]. See Ref. [13] for a similar approach.
In a recent publication [11], the CI approach was applied to semi-realistic calculations based on the Skyrme energy functional. However, several simplifying assumptions were introduced. In particular, the model space was restricted to neutron excitations only, with seniority zero. As a consequence, only two types of interaction were needed, namely the pairing and the diabatic interactions. In nuclear structure the off-diagonal neutron-proton interaction is important as well, but its role in low-energy nuclear fission has not yet been clarified.
In this paper, we apply the CI approach to a schematic model with uniformly spaced single-particle orbitals. A preliminary version of the work can be found in Ref. [14]; some of the supplementary material of that work is included in Appendix A of this paper. While the model presented here is still far from realistic, our schematic treatment of the configuration space and the details of the Hamiltonian may be useful for focusing attention on aspects of those ingredients in a more quantitative theory. This is especially needed in light of the huge CI spaces required to describe the large changes of deformation that occur in fission.
The paper is organized as follows. Sec. II presents the theoretical framework and the model Hamiltonian based on uniformly spaced orbital energies. In Sec. III we apply the model to transmission across a barrier. There are three kinds of residual interaction that can mediate the transmission dynamics, and we examine their relative importance. The interaction types are diabatic, pairing, and the fully off-diagonal nucleon-nucleon interaction. Taking the model as a schematic treatment of fission, we also examine there the branching ratio between fission and capture. It is shown that one of the tenets of the Bohr-Wheeler theory (insensitivity to fission partial widths) can be achieved with the model Hamiltonian. We then summarize the paper in Sec. IV.
## II CI approach to induced fission
### Transmission coefficient
In the present approach, the reference configurations are defined at discrete points along the fission path. Many-particle-many-hole excited states are then generated from those reference configurations to form subspaces in the configuration space which we call \(Q\)-blocks. In general, the states in different \(Q\)-blocks are not orthogonal
to each other, and one needs to consider the norm kernel in the form of \(N_{n,n^{\prime}}(q,q^{\prime})=\langle nq|n^{\prime}q^{\prime}\rangle\), where \(q\) and \(q^{\prime}\) label the \(Q\)-block and \(n,n^{\prime}\) are labels for the configurations within a \(Q\)-block. Similarly, the Hamiltonian kernel reads \(H_{n,n^{\prime}}(q,q^{\prime})=\langle nq|H|n^{\prime}q^{\prime}\rangle\). Note that in the usual GCM one takes only the local ground state at each \(q\). In contrast, induced fission is a decay process of an excited nucleus, and it is essential to include excited configurations. In addition, the \(S\)-matrix reaction theory requires matrices for the decay widths to the entrance and exit channels. For the present model, the matrices are \(\Gamma_{\rm n}\), \(\Gamma_{\rm cap}\), and \(\Gamma_{\rm fis}\), to treat a neutron emission channel, a radiative capture channel, and a fission channel, respectively. Specific forms of those matrices are given in Sec. IID below.
Based on the Datta formula in reaction theory [15; 16; 17], we evaluate the transmission coefficient from the incoming channel \(a\) to a decay channel \(b\) at energy \(E\) as
\[T_{a,b}(E)=\sum_{i\in a,j\in b}|S_{i,j}(E)|^{2}=\mathrm{Tr}[\Gamma_{a}G(E) \Gamma_{b}G(E)^{\dagger}], \tag{1}\]
where
\[G(E)=(H-i(\Gamma_{n}+\Gamma_{\rm cap}+\Gamma_{\rm fis})/2-NE)^{-1} \tag{2}\]
is the Green's function with the total width \(\Gamma=\Gamma_{n}+\Gamma_{\rm cap}+\Gamma_{\rm fis}\). In a low-energy induced fission, the channel \(a\) corresponds to the incident channel and thus \(\Gamma_{a}=\Gamma_{n}\), while the exit channel \(b\) is either the capture channel or the fission channel.
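To make the linear algebra of Eqs. (1) and (2) concrete, the following is a minimal numerical sketch (an illustration added here, not the code used in the paper; the dimension, the toy Hamiltonian, and the choice of channel-coupled states are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 30  # number of many-body configurations (illustrative)

# Toy Hamiltonian: uniformly spaced diagonal plus a weak random mixing.
A = rng.normal(size=(dim, dim))
H = np.diag(np.linspace(0.0, 5.0, dim)) + 0.05 * (A + A.T) / 2
N = np.eye(dim)  # norm kernel of an orthogonal basis

def channel_width(gamma, idx):
    """Diagonal width matrix coupling the states `idx` with strength gamma."""
    W = np.zeros((dim, dim))
    W[idx, idx] = gamma
    return W

Gamma_n   = channel_width(0.001, np.arange(5))             # entrance channel
Gamma_cap = channel_width(0.010, np.arange(5))             # capture channel
Gamma_fis = channel_width(0.100, np.arange(dim - 5, dim))  # fission channel
Gamma_tot = Gamma_n + Gamma_cap + Gamma_fis

def transmission(E, Ga, Gb):
    """T_{a,b}(E) = Tr[Gamma_a G(E) Gamma_b G(E)^dagger], Eq. (1)."""
    G = np.linalg.inv(H - 0.5j * Gamma_tot - N * E)  # Green's function, Eq. (2)
    return np.trace(Ga @ G @ Gb @ G.conj().T).real

print(transmission(2.5, Gamma_n, Gamma_fis))
```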
In GCM calculations for nuclear spectroscopy, it is well known that the non-orthogonality of a basis set often leads to a numerical instability [18; 12]. One can largely avoid this problem in reaction calculations, as a rather coarse mesh along the fission path provides an acceptable accuracy for estimating the transmission coefficients [10].
### Model Hamiltonian
The Hamiltonian for each \(Q\)-block is constructed as
\[H_{q}=V(q)+H_{\rm ph}+H_{\rm pair}+H_{\rm ran}, \tag{3}\]
where \(V(q)\) is the energy of the local ground state at \(q\), and is ideally calculated by the constrained Hartree-Fock method or the density functional theory (DFT). \(H_{\rm ph}\), \(H_{\rm pair}\), and \(H_{\rm ran}\) are the single-particle Hamiltonian, the pairing interaction, and the random neutron-proton interaction, respectively.
The configuration space is built in the usual way, defining configurations as Slater determinants of nucleon orbitals. The orbitals are envisioned as eigenstates of an axially deformed single-particle potential. In this paper, we employ a model with a uniform spectrum of orbital energies having the same spacing \(d\) for protons and neutrons. The ladder of orbital states extends infinitely in both directions above and below the Fermi surface. The operator for the particle-hole excitation energy \(E_{ph}\) is given by
\[H_{ph}=d\sum_{\alpha:n_{\alpha}>0}n_{\alpha}a_{\alpha}^{\dagger}a_{\alpha}-d\sum_{ \alpha:n_{\alpha}<0}n_{\alpha}a_{\alpha}a_{\alpha}^{\dagger}. \tag{4}\]
The label \(\alpha\) includes \(q\) and all quantum numbers associated with the orbital, \(\alpha=(q,n,K,t)\). Here \(n\) indexes the orbital position in the ladder, with \(n=0\) corresponding to the Fermi level, and \(K\) is the angular momentum about the symmetry axis. To keep the model as transparent as possible, we restrict \(K\) to \(\pm 1/2\). The isospin label \(t=\pm 1/2\) distinguishes neutrons (n) and protons (p). The orbital excitation energies of many-particle configurations are integral multiples of \(d\), given by \(E_{ph}=kd\). As a function of \(k\), the multiplicity of configurations having \(\sum K=0\) and \(\sum t=0\) is \(N_{k}=(1,4,16,48,133,332,784,\cdots)\) for \(k=(0,1,2,3,4,5,6,\cdots)\). The spectrum up to \(k=6\) is shown in Fig. 1. The orange curve shows a smoothed level density fitted to the leading-order dependence on energy as derived from statistical theory. This will provide a way to fit the parameter \(d\) to experimental level densities: the single-particle level spacing \(d\) sets the energy scale in the model, and other energy parameters will be expressed in units of \(d\). Even though we will not specify the value of \(d\) in this paper, \(d\) is estimated to be around \(0.5\) MeV for nuclei in the actinide region [14] (see Appendix A-1).
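As a small consistency check added here, the quoted multiplicities can be compared with the smoothed fit \(N_{k}=\exp(a\sqrt{k}+b)\) shown in Fig. 1; as expected for a statistical-model fit, the agreement improves with increasing \(k\):

```python
import numpy as np

# Multiplicities quoted above for the Sigma K = 0, Sigma t = 0 configurations
N_k = np.array([1, 4, 16, 48, 133, 332, 784])
k = np.arange(len(N_k))

# Smoothed fit N_k = exp(a sqrt(k) + b) quoted in Fig. 1 (a = 3.97, b = -3.06)
fit = np.exp(3.97 * np.sqrt(k) - 3.06)
for ki, exact, smooth in zip(k, N_k, fit):
    print(f"k = {ki}:  exact = {exact:4d},  fit = {smooth:7.1f}")
```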
For residual interactions, both particle-particle (pp) and particle-hole (ph) interactions appear. For the pp residual interaction, we employ a monopole pairing
Figure 1: Spectrum of many-body configurations in the uniform spacing model. \(N_{k}\) denotes the number of configurations at the excitation energy \(E=kd\). The green circles show the non-interacting spectrum, while the orange curve shows its fit to the functional form of \(N_{k}=\exp(a\sqrt{k}+b)\) with \(a=3.97\) and \(b=-3.06\). The blue filled histograms show the interacting spectrum, obtained by diagonalizing the Hamiltonian \(H_{q}\) with \(G_{\rm pair}=0\) and \(v_{np}=0.03d\).
interaction between identical nucleons,
\[H_{\rm pair}=-G_{\rm pair}\sum_{\mu\neq\nu}a^{\dagger}_{\nu}a^{\dagger}_{\bar{ \nu}}a_{\bar{\mu}}a_{\mu}. \tag{5}\]
Here \(a^{\dagger}_{\nu}\) is the creation operator of the state \(\nu\), and \(\bar{\nu}\) denotes the time-reversal state of \(\nu\). The strength of the pairing interaction \(G_{\rm pair}\) is around 0.1 MeV in the actinide region [19], corresponding to \(G_{\rm pair}\approx 0.2d\) in the energy units in the present model. In this paper we take \(G_{\rm pair}=0.3d\) as the baseline value, to be varied to study how the observables depend on the interaction types.
When the monopole pairing interaction is used in the uniform spacing model, an unphysical behavior may appear in the transmission coefficients due to the high degeneracy of the spectrum. To avoid this problem, we shall add a small random number to the diagonal part of the Hamiltonian kernel as [20]
\[kd\to kd+0.1rd,\]
where \(r\) is a random number of unit variance taken from a Gaussian ensemble.
For the ph-type residual interaction, we employ a random interaction in the form of
\[H_{\rm ran}=-v_{np}\sum^{\prime}r\,a^{\dagger}_{\alpha_{1}}a^{\dagger}_{\alpha_{2}}a_{\alpha_{4}}a_{\alpha_{3}}, \tag{6}\]
where the parameter \(v_{np}\) is the strength of the interaction and \(r\) is a random number, as before. The sum is restricted to the combinations \((\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})\) satisfying \(K_{1}+K_{2}=K_{3}+K_{4}\). An early study suggested that the neutron-proton interaction is dominant in a diffusion process compared to that between identical particles [21]. We therefore assume that the interaction \(H_{\rm ran}\) acts only on neutron-proton pairs.
Following Appendix A.3, we take \(v_{np}=0.03d\) as a base value in the following calculations. The assumption that the neutron-proton interaction is Gaussian distributed is certainly not justified for the low-energy states in a \(Q\)-block where collective excitations can be built up. However, high in the spectrum the mixing approaches the random matrix limit. Note that the pairing interaction acts coherently while the random interaction acts incoherently. Our interest is to clarify the role of these two different types of interaction in the transmission process.
Because of the random component in the Hamiltonian, one needs to take an ensemble average to obtain physical quantities. In the following calculations, we take many samples so that the standard deviation becomes smaller than 1%.
### Off-diagonal couplings
The interaction between different \(Q\)-blocks is responsible for a shape change and is thus crucial to the modeling. It is clear that the interaction is somewhat suppressed due to the imperfect overlap of orbitals built on different mean-field reference states. The size of the suppression is determined by the overlap kernel, \(N_{n,n^{\prime}}(q,q^{\prime})\), which is given by a determinant of orbital overlaps. For simplicity, we assume that the change of the single-particle orbitals between nearby reference configurations is small. With this assumption, the overlap kernel reads,
\[N_{nn^{\prime}}(q,q^{\prime})=N(q,q^{\prime})\delta_{n,n^{\prime}}, \tag{7}\]
where \(N(q,q^{\prime})\) is the overlap between the reference configurations. Based on the idea of the Gaussian Overlap Approximation (GOA) [12], we parameterize it as
\[N(q,q^{\prime})=\exp(-\lambda(q-q^{\prime})^{2}). \tag{8}\]
In the main calculations below, we take the value \(\lambda=1.0\) for the overlap between neighboring \(Q\)-blocks. This sets the numerical scale for \(q\) as a distance measure along the fission path. We also consider the model in which the configurations are all orthogonal.
The Hamiltonian kernel \(H_{n,n^{\prime}}(q,q^{\prime})\) can be calculated in a similar manner by assuming that the orbital wave functions are nearly the same in the two reference configurations. To take into account the imperfect overlap of the reference states, we multiply the bare matrix elements by the suppression factor \(N(q,q^{\prime})\). In addition, one has to take into account the diabatic interaction between those configurations which are connected diabatically. Based on the GOA, we parameterize it as [22]
\[\frac{\langle nq|v_{db}|nq^{\prime}\rangle}{\langle q|q^{\prime}\rangle}=\frac {E(nq)+E(nq^{\prime})}{2}-h_{2}(q-q^{\prime})^{2}, \tag{9}\]
where \(E(nq)=k_{n}d+V(q)\) is the energy of the configuration \((nq)\). The first term on the right-hand side of this equation ensures that the Green's function (2) transforms properly under a shift in energy scale \(E^{\prime}=E-\epsilon\), that is, \(G^{\prime}(E^{\prime})=G(E)\).
### Width matrices
The matrices \(\Gamma_{a}\) (\(a=n\), cap, and fis) in Eq. (2) can in principle be derived with the generalized Fermi Golden Rule [23]
\[(\Gamma_{a})_{kk^{\prime}}=2\pi\sum_{j\in a}\langle k|v|j\rangle\langle k^{ \prime}|v|j\rangle\delta(E_{j}-E) \tag{10}\]
where \(j\) labels states in the decay channel \(a\). Due to the non-orthogonality of the configurations, the matrix \(\Gamma_{a}\) is in general non-diagonal. In this work, we take a separable approximation and parameterize it as\({}^{1}\)
Footnote 1: In Ref. [10], we used \(N\) instead of \(N^{1/2}\) in the decay matrices. We consider that \(N^{1/2}\) is a more physical choice because of the connection to orthogonal bases as we discuss in Appendix B.
\[(\Gamma_{a})_{kk^{\prime}}=\gamma_{a}\sum_{j\in a}(N^{1/2})_{k,j}(N^{1/2})_{k^ {\prime},j}, \tag{11}\]
where \((N^{1/2})_{k,j}\) is the square root of the norm kernel and \(\gamma_{a}\) is the mean decay width. Here, the indices \(k\) and \(j\) label both the deformation \(Q\) and the excitation \(n\). See Appendix B for a derivation of Eq. (11).
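As an illustration of how Eqs. (7), (8), and (11) fit together, here is a hedged sketch (the block sizes and decay strengths are assumed for demonstration; Eq. (11) is rewritten in matrix form as \(\Gamma_{a}=\gamma_{a}N^{1/2}P_{a}N^{1/2}\), with \(P_{a}\) the projector onto the channel-coupled states):

```python
import numpy as np
from scipy.linalg import sqrtm

lam = 1.0                        # GOA overlap parameter of Eq. (8)
qs = np.array([-1.0, 0.0, 1.0])  # Q-block coordinates (as in Sec. III)
n_cfg = 8                        # configurations per Q-block (assumed)
dim = len(qs) * n_cfg

# Norm kernel, Eq. (7): N_{(nq),(n'q')} = exp(-lam (q-q')^2) delta_{nn'}
block = np.exp(-lam * (qs[:, None] - qs[None, :]) ** 2)
N = np.kron(block, np.eye(n_cfg))
N_half = sqrtm(N).real           # N is symmetric positive definite here

def width_matrix(gamma, coupled):
    """Gamma_a = gamma_a N^{1/2} P_a N^{1/2}, a matrix form of Eq. (11)."""
    P = np.zeros((dim, dim))
    P[coupled, coupled] = 1.0
    return gamma * N_half @ P @ N_half

Gamma_n   = width_matrix(0.001, np.arange(n_cfg))             # q = -1 block
Gamma_fis = width_matrix(0.100, np.arange(dim - n_cfg, dim))  # q = +1 block
```

These width matrices can then be inserted directly into the Green's function of Eq. (2).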
## III Results
Let us now numerically evaluate the transmission coefficients and discuss the dynamics of induced fission. To this end, we consider a chain of three \(Q\)-blocks, \(q=q_{-1},q_{0}\), and \(q_{1}\), with the same spacing \(\Delta q\), that is, \(q_{\pm 1}=q_{0}\pm\Delta q\). We set them to \(q=-1,0\), and \(1\) for convenience. Thus the overlap between adjacent \(Q\)-blocks is \(N(q,q\pm 1)=e^{-1}\) by Eq. (8) with the chosen value of \(\lambda\). For the barrier, we set \(V(q=\pm 1)=0\) and \(V(q=0)=4d\), giving a barrier height \(B_{h}=4d\). In each \(Q\)-block, the energy cutoff for the many-body configurations is set to be \(E_{\rm cut}=V(q)+5d\). The neutron absorption and the gamma decay occur prior to the fission barrier, so the incident and the capture channels couple to the internal states by Eq. (11) at \(q=-1\). Likewise, the fission channel is coupled at \(q=1\). All the states at these end points are coupled to individual decay channels. Since the relation \(\Gamma_{n}<\Gamma_{\rm cap}<\Gamma_{\rm fis}\) is known empirically in the actinide region [24], we set \(\gamma_{n}=0.001d,\gamma_{\rm cap}=0.01d\), and \(\gamma_{\rm fis}=0.1d\) in the following calculations. As we will show in Sec. IIIC below, the transmission dynamics is not sensitive to the value of \(\gamma_{\rm fis}\).
### Orthogonal basis
We first consider the limit \(\lambda\to\infty\), that is, assuming all configurations are orthogonal. This is a useful limit to study the role of the pairing interaction, since the diabatic interaction does not contribute.
It is a well-known fact that the pairing correlation drastically modifies the dynamics of spontaneous fission, particularly through a reduction of the collective mass [25; 26; 27]. Another important aspect of the pairing correlation is that it is responsible for the hopping of Cooper pairs from one configuration to a neighboring one [28]. On the other hand, the role of the pairing correlation in induced fission has not yet been well understood, partly because the pairing correlation is considered to be effective only in the vicinity of the ground state. However, odd-even staggerings have been observed in fission fragments in low-energy induced fission, which suggests that the pairing correlation cannot be completely ignored.
Fig. 2 shows the transmission coefficients for the fission channel, calculated with two different values of \(G_{\rm pair}\). The strength of the neutron-proton random interaction is set to be \(v_{np}=0.03d\). One can see that the pairing correlation enhances the transmission probabilities far below the barrier, while its effect is not important at the barrier top and above. This is to be expected, since the number of configurations with high seniority numbers increases as the excitation energy increases and the pairing correlation becomes weaker.
To study systematically the role of pairing in induced fission, we introduce an energy-averaged transmission coefficient. It is defined as
\[\langle T_{n,fis}(E)\rangle=\frac{1}{\Delta E}\int_{E-\Delta E/2}^{E+\Delta E/ 2}dE^{\prime}\,T_{n,fis}(E^{\prime}). \tag{12}\]
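Numerically, this energy average is a one-line quadrature; the sketch below (with a made-up placeholder transmission curve, not the model itself) shows the bookkeeping:

```python
import numpy as np

def averaged_T(T_of_E, E, dE=1.0, n_grid=201):
    """Energy average of Eq. (12), approximated on a uniform grid."""
    Ep = np.linspace(E - dE / 2, E + dE / 2, n_grid)
    return np.mean(T_of_E(Ep))

# Example with an assumed placeholder curve, not the CI model:
T_demo = lambda E: 1.0 / (1.0 + np.exp(-2.0 * (E - 4.0)))
print(averaged_T(T_demo, E=4.0))  # window Delta E = d around E = 4d
```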
Table 1 summarizes the energy averaged transmission coefficient at \(E=B_{h}=4d\) for several sets of \((v_{np},G_{\rm pair})\). The energy window for the energy average is set to be \(\Delta E=d\). Without the neutron-proton interaction, that is, \(v_{np}=0\), the fission probability increases as the pairing strength increases. Note especially that the transmission coefficient \(\langle T_{n,fis}(E)\rangle\) is zero when there is no interaction at all. As the value of \(v_{np}\) increases, the dependence of \(\langle T_{n,fis}(E)\rangle\) on \(G_{\rm pair}\) becomes milder. For \(v_{np}=0.06d\), the transmission coefficient is almost insensitive to the value of \(G_{\rm pair}\). This suggests that induced fission is more sensitive to the neutron-proton random interaction, as compared to the coherent pairing interaction.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline & & \multicolumn{3}{c}{\(G_{\rm pair}\)} \\ \cline{3-5} Model & \(v_{np}\) & 0 & 0.1\(d\) & 0.2\(d\) \\ \hline I & 0 & 0 & 0.0441 & 0.0589 \\ II & 0.03\(d\) & 0.107 & 0.161 & 0.173 \\ III & 0.06\(d\) & 0.318 & 0.331 & 0.331 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The averaged transmission coefficient for a fission process, \(\langle T_{n,fis}\rangle\), for several sets of the interaction parameters and assuming that the configurations are orthogonal. The barrier height and the incident energy are both set to be \(4d\).
Figure 2: The transmission coefficients from the incident channel to the fission channel as a function of the excitation energy, \(E\), in the model with an orthogonal configuration space. The solid and the dashed lines are obtained with \(G_{\rm pair}=0\) and \(0.1d\), respectively, for the strength of the pairing interaction. The strength of the neutron-proton interaction and the barrier height are set to be \(v_{np}=0.03d\) and \(B_{h}=4d\), respectively.
### Non-orthogonal basis
Let us now examine the dependence on the interactions for a model having a non-orthogonal basis. This will automatically introduce a diabatic interaction on top of that given by the \(h_{2}\) interaction. To avoid an artifact due to the degeneracy of the single-particle energies, we introduce an offset energy to the \(q=1\) block, taking \(\mathbf{V}(q)=(0,4,0.5)\,d\) in Eq. (3). We mention that this problem appears much more prominently with the non-orthogonal basis as compared to calculations with the orthogonal basis.
Figure 3 shows the transmission probability for fission with two different values of \(v_{np}\). In these calculations, the pairing interaction is switched off by setting \(G_{\rm pair}=0\), while the parameter \(h_{2}\) for the diabatic transitions is set to be \(3d\). From the figure, one notices that the peaks are lowered and broadened as the value of \(v_{np}\) increases. This can be understood easily since the random interaction spreads the spectrum in each \(Q\)-block as is indicated in Fig. 1. The effect of \(v_{np}\) is not only to broaden the peaks in the transmission coefficients but also to increase the energy averaged transmission coefficients, as will be discussed in Table 2 below.
Figure 4 shows an average fission-to-capture branching ratio \(\alpha^{-1}\) as a function of the energy \(E\). We define the average as
\[\alpha^{-1}=\frac{\int dE^{\prime}\,T_{n,fis}(E^{\prime})}{\int dE^{\prime}\,T _{n,cap}(E^{\prime})}, \tag{13}\]
where the range of the integration is the same as that in Eq. (12). To simplify the discussion, we once again set the pairing interaction to be zero. The solid line is obtained by taking into account both the neutron-proton interaction and the diabatic interactions with \(v_{np}=0.03d\) and \(h_{2}=3d\). In this case, the branching ratio increases with the excitation energy, as would be expected from a quantum barrier transmission. On the other hand, if the neutron-proton interaction is switched off, the branching ratio is rather insensitive to the energy except at \(E=d\), as is indicated by the dashed and the dot-dashed lines. The transmission in this case is due solely to the diabatic transitions. One sees that the implicit diabatic interaction associated with the overlap matrix destroys any simple relationship between the strength \(h_{2}\) and the calculated \(\alpha^{-1}\).
Table 2 summarizes the transmission coefficients and the branching ratios for several parameter sets. The results of the models I, II, III indicate that both the neutron-proton interaction and the pairing interaction enhance the transmission coefficients as well as the branching ratios. They also indicate that the transmission coefficients are more sensitive to the neutron-proton interaction than to the pairing interaction. This is consistent with the results of the orthogonal basis shown in
\begin{table}
\begin{tabular}{c|c c c|c c} \hline \hline Model & \(v_{np}\) & \(G_{\rm pair}\) & \(h_{2}\) & \(\left\langle T_{n,fis}\right\rangle\) & \(\alpha^{-1}\) \\ \hline base & 0.03 & 0.3 & 3 & 0.084 & 0.13 \\ I & & 0.0 & & 0.413 & 1.49 \\ II & 0.0 & & & 0.372 & 1.14 \\ III & 0.05 & & & 0.429 & 2.27 \\ IV & & & 0.0 & 0.294 & 0.739 \\ V & 0.0 & 0.0 & 0.0 & 0.172 & 0.291 \\ VI & 0.0 & 0.0 & 0.0 & 0.000 & 0.000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The transmission coefficient for fission \(\left\langle T_{n,fis}\right\rangle\) and the branching ratio \(\alpha^{-1}\) for several sets of interactions. The parameters shown for models I-VI are the only ones that differ from the base model. The overlap parameter is \(\lambda=1.0\) and the averaged observables are calculated at a central energy \(E=4d\). Interaction strength parameters are in units of \(d\). For the model VI, the GCM Hamiltonian is constructed such that the orthogonal physical Hamiltonian Eq. (14) is diagonal.
Figure 3: The transmission coefficient \(T_{n,fis}(E)\) with two different values of \(v_{np}\). The pairing interaction is set to zero, i.e., \(G_{\rm pair}=0\). The other parameters are \(\lambda=1.0,\ h_{2}=3d\), and \(B_{h}=4d\).
Table 1, even though the degree of enhancement is smaller here due to the overlap factor \(N_{n,n}(q,q^{\prime})\) in the off-diagonal matrix elements. In the model IV, the value of \(h_{2}\) is set to be zero. The result indicates that the transmission coefficient and the branching ratio decrease significantly without the diabatic transitions, as has already been observed in Ref. [11]. See also Fig. 4 for the sensitivity of the branching ratios to the value of \(h_{2}\). Finally, in the model V, all the interaction strengths, \(v_{np},~{}G_{\rm pair}\), and \(h_{2}\), are set to be zero. Even in this case, the transmission coefficient is not zero, because the corresponding Hamiltonian in the orthogonal physical basis is not diagonal. As we show in Appendix B, one can actually construct the GCM Hamiltonian which is diagonal with the orthogonal basis. With such a GCM Hamiltonian, we have confirmed that the transmission coefficient becomes zero within the numerical error (see the model VI in the table).
### Validity of the transition state hypothesis
In the Bohr-Wheeler theory for induced fission [3], the decay width is calculated as a sum of transmission coefficients \(T_{i}\) across the barrier via transition states \(i\),
\[\Gamma_{\rm BW}=\frac{1}{2\pi\rho}\sum_{i}T_{i}, \tag{14}\]
where \(\rho\) is the level density of a compound nucleus. The formula indicates that the transition states entirely determine the decay rate, and that the details of the dynamics after crossing the barrier are unimportant. The branching ratio in the Bohr-Wheeler theory would be expressed as
\[\alpha^{-1}(E)=\frac{1}{2\pi\rho(E)\Gamma_{\rm cap}}\sum_{i}T_{i}. \tag{15}\]
The solid and the dashed lines in Fig. 5 show the branching ratios at \(E=4d\) as a function of \(\gamma_{\rm fis}\) for a model with the pairing interaction switched off. For the calculations with the orthogonal basis shown by the solid line, the branching ratio is almost independent of the fission decay width \(\gamma_{\rm fis}\), in agreement with the insensitivity property of the Bohr-Wheeler theory. On the other hand, with the non-orthogonal basis, the branching ratio increases gradually as a function of \(\gamma_{\rm fis}\), even though the insensitivity property may be realized at large values of \(\gamma_{\rm fis}\). To check the dependence on the number of \(Q\)-blocks, we repeat the calculations with 7 \(Q\)-blocks, parameterizing \(V\) as \(V(q)/d=4-4q^{2}/9\) ranging from \(q=-3\) to \(q=3\) with \(\Delta q=1\). In this case, the branching ratio changes by less than a factor of two while the fission decay width varies by an order of magnitude. All of these results indicate that the hypothesis used in the Bohr-Wheeler theory is easily realized in the present microscopic theory. See also Ref. [29] for a similar study with random matrices.
## IV Summary
In this article, we have applied the CI methodology to a schematic model for neutron-induced fission. The model Hamiltonian contains the pairing interaction, the diabatic interaction, and a schematic off-diagonal neutron-proton interaction. The model appears to be sufficiently detailed to examine the sensitivity of the fission transmission probabilities to the different types of interaction, as well as the validity of transition state theory in a microscopic framework. We have shown that the transmission coefficients are mainly sensitive to the neutron-proton interaction, while the sensitivity to the pairing interaction is much milder. The diabatic transitions were also found to play a role. Depending on the interaction and the deformation-dependent configuration space, one achieves conditions in which branching ratios depend largely on barrier-top dynamics and are insensitive to properties closer to the scission point. This insensitivity property is one of the main assumptions in the well-known Bohr-Wheeler formula for induced fission, but until now it has had no microscopic justification.
The results in this paper indicate that the neutron-proton interaction is an important part of a microscopic theory for induced fission. To include it in realistic calculations based on the density functional theory will require a large model space, however. See Table 3 in Ref. [11] for some estimates of the dimensional requirements. Moreover, single-particle energies are in general not degenerate in contrast to the schematic model employed in this paper. This might require a different energy cutoff, further enlarging the model space. To carry out such
Figure 5: The branching ratios at \(E=4d\) as a function of \(\gamma_{\rm fis}\). The interaction strengths are \((v_{np},G_{\rm pair},h_{2})=(0.03,0,3)d\). A parabolic fission barrier is employed with a barrier height of \(4d\). The solid and the dashed lines show the results of the \(3Q\) model, while the dot-dashed line shows the results of the \(7Q\) model. While the non-orthogonality of the configurations is neglected in the solid line, it is taken into account in the other lines with \(\lambda=1.0\). For the sake of presentation, the branching ratios are multiplied by a factor of 0.04 for the dashed line.
large-scale calculations for induced fission, one will have to either validate an efficient truncation scheme or develop an efficient numerical method for inverting large matrices. We leave this for future work.
###### Acknowledgements.
This work was supported in part by JSPS KAKENHI Grants No. JP19K03861 and No. JP21H00120. This work was supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2123. The numerical calculations were performed with the computer facility at the Yukawa Institute for Theoretical Physics, Kyoto University.
## Appendix A Estimation of physical parameters
### Orbital energy spacing
The single-particle level spacing \(d\) sets the energy scale of the uniform model but does not otherwise appear explicitly. However, it is required to determine the other energy parameters, which are expressed in units of \(d\). Several estimates of \(d\) for \({}^{236}\)U are given in Table 3. The first is based on orbital energies in a deformed Woods-Saxon potential with the parameters given in Ref. [30]; see Table 4 for the calculated orbital energies.
In more realistic theory, the momentum dependence of the potential tends to increase the spacing, but the coupling to many-particle degrees of freedom decreases the spacing of the quasiparticle poles. The combined effect seems to somewhat decrease the spacing.\({}^{2}\)
Footnote 2: We note that an energy density functional fitted to fission data[33] obtained an effective mass in the single-particle Hamiltonian very close to 1.
### Level density
It is important to know the composition of the levels in the compound nucleus to construct microscopic models that involve those levels. For a concrete example, consider the levels at the neutron threshold energy \(S_{n}=6.5\) MeV in \({}^{236}\)U. The predominating configurations at this energy should be the \(k\)-subblocks at \(k\approx S_{n}/d\) in the independent quasiparticle approximation. Another approach that is less sensitive to the residual interaction is to estimate the total number of states below \(S_{n}\) and compare it to the number obtained by summing the \(N_{k}\) degeneracies in the \(Q\)-block spectrum. In the \({}^{236}\)U example, the combined level spacing of \(J^{\pi}=3^{-}\) and \(4^{-}\) is about 0.45 eV at \(S_{n}\) [34]. At that excitation energy the level density is the same for even and odd parities, and it varies with angular momentum as \(2J+1\). The inferred level spacing of \(J^{\pi}=0^{+}\) levels is thus about 7 eV. The cumulative number of levels can be approximated by \(N=\rho T\) where \(T\) is the nuclear temperature, defined by \(T^{-1}=d\log(\rho(E))/dE\). A typical estimate for our example is \(T=0.65\) MeV, giving \(N\approx 1.0\times 10^{8}\). To estimate the level density in the present model, we start with the set of quasiparticle configurations including both parities and all \(K\) values. The resulting \(k\)-blocks have multiplicities that are well fit by the formula
\[N_{k}\approx\exp(-3.23+4.414k^{1/2}). \tag{10}\]
Projection on good parity decreases this by a factor of two. The projection on angular momentum \(J=0\) is more subtle. The \(J=0\) states are constructed by projection from \(K=0\) configurations; other configurations do not contribute. However, there may be two distinct configurations that project to the same \(J=0\) state. This gives another factor of nearly two reduction in the multiplicity. The remaining task is to estimate the fraction of \(K=0\) configurations in the unprojected quasiparticle space. The distribution of \(K\) values is approximately Gaussian with a variance given by
\[\langle K^{2}\rangle=\langle n_{\rm qp}\rangle\langle K^{2}\rangle_{\rm sp} \tag{A2}\]
where \(\langle n_{\rm qp}\rangle\approx 8\) is the average number of quasiparticles in the \(k\)-block and \(\langle K^{2}\rangle_{\rm sp}\approx 6\) is an average over the orbital \(K\)'s near the Fermi level. Including these projection factors, the integrated number of levels up to \(S_{n}\) is achieved by including all \(k\)-subblocks up to \(k=17\) in the entry \(Q\)-block.

Table 3: Estimated orbital level spacing in \({}^{236}\)U. The first two entries are from potential models; the last is extracted from the Fermi gas formula and measured level densities.

| \(d\) (MeV) | Source |
| --- | --- |
| 0.45 | Woods-Saxon well |
| 0.51 | FRLDM [31] |
| 0.33 | FGM [32] |

Table 4: Characteristics of single-particle orbitals in a deformed Woods-Saxon potential corresponding to \({}^{236}\)U at deformation \((\beta_{2},\beta_{4})=(0.274,0.168)\). The first three columns are for protons, the last three for neutrons; the Fermi level lies between the third and fourth rows.

| \(2K\) | \(\pi\) | \(\varepsilon_{K\pi}\) (MeV) | \(2K\) | \(\pi\) | \(\varepsilon_{K\pi}\) (MeV) |
| --- | --- | --- | --- | --- | --- |
| 3 | \(-1\) | \(-3.39\) | 5 | \(-1\) | \(-4.15\) |
| 5 | \(-1\) | \(-3.80\) | 1 | \(-1\) | \(-4.25\) |
| 5 | \(1\) | \(-4.93\) | 7 | \(-1\) | \(-4.40\) |
| 1 | \(1\) | \(-5.43\) | 1 | \(1\) | \(-5.07\) |
| 9 | \(-1\) | \(-5.53\) | 5 | \(1\) | \(-5.75\) |
| 3 | \(1\) | \(-5.74\) | 5 | \(-1\) | \(-5.82\) |
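Two of the numerical ingredients above are easy to reproduce. The sketch below (Python; all inputs are the values quoted in the text, and the discrete Gaussian treatment of \(K\) is our own minimal assumption) recovers the inferred \(0^{+}\) level spacing and the fraction of \(K=0\) configurations implied by Eq. (A2).

```python
import math

# Inferred 0+ level spacing from the combined 3-/4- spacing of 0.45 eV [34],
# using the (2J+1) dependence of the level density at fixed parity.
rho_34 = 1.0 / 0.45                               # levels/eV, J^pi = 3^- and 4^-
rho_unit = rho_34 / ((2 * 3 + 1) + (2 * 4 + 1))   # density per unit of (2J+1)
print(f"0+ spacing ~ {1.0 / rho_unit:.1f} eV")    # text quotes about 7 eV

# Fraction of K = 0 configurations from the Gaussian of Eq. (A2),
# sampled on integer K values.
sigma2 = 8.0 * 6.0                                # <n_qp> * <K^2>_sp = 48
norm = sum(math.exp(-K * K / (2.0 * sigma2)) for K in range(-100, 101))
print(f"K = 0 fraction ~ {1.0 / norm:.3f}")       # ~ 1/sqrt(2 pi sigma2) ~ 0.058
```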
### Neutron-proton interaction
To set the scale for our neutron-proton interaction parameter \(v_{\rm np}\), we compare it with phenomenological contact interactions that have been used to model nuclear spectra. The matrix element of the neutron-proton interaction is
\[\langle n_{1}p_{1}|v|n_{2}p_{2}\rangle=-v_{0}I \tag{A3}\]
where
\[I=\int d^{3}r\phi^{*}_{n_{1}}(\mathbf{r})\phi^{*}_{p_{1}}(\mathbf{r})\phi_{n_{2}}(\mathbf{r})\phi_{p_{2}}(\mathbf{r}). \tag{A4}\]
The parameter \(v_{0}\) is the strength of the interaction, typically expressed in units of MeV fm\({}^{3}\). Some values of \(v_{0}\) from the literature are tabulated in Table 5.
We shall adopt the value \(v_{0}=500\) MeV fm\({}^{3}\) to estimate the value of \(v_{\rm np}\).
If the wave functions of the eigenstates approach the compound-nucleus limit, the only characteristic of the interaction we need to know is its mean-square average among the active orbitals. We have used the Woods-Saxon model to calculate the integral of Eq. (A4) for all the fully off-diagonal matrix elements among the orbitals within 2 MeV of the Fermi energy. Fig. 6 shows a histogram of their distribution.\({}^{3}\) The root-mean-square value of the distribution is \(\langle I^{2}\rangle^{1/2}=5.22\times 10^{-5}\) fm\({}^{-3}\). Combining this with our estimate of \(v_{0}\), we find \(\left(\overline{\langle n_{1}p_{1}|v|n_{2}p_{2}\rangle^{2}}\right)^{1/2}=0.025\) MeV. This implies \(v_{\rm np}\sim 0.05d\) with our estimated single-particle level spacing.

Footnote 3: If the orbitals are restricted only to those in Table 4, the histogram is more structured.

Table 5: Estimates of the neutron-proton interaction strength.

| Basis of estimate | \(v_{0}\) (MeV fm\({}^{3}\)) | Citation |
| --- | --- | --- |
| \(G\)-matrix | 530 | [35] |
| \(sd\)-shell spectra | 490 | [36] |
| \(\beta\)-decay | 395, 320 | [37] |

Figure 6: Integrals \(I\) of Eq. (A4) for orbitals near the Fermi energy.
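The final arithmetic is a one-line check; a minimal restatement in Python, using only the numbers adopted above:

```python
v0 = 500.0        # adopted interaction strength, MeV fm^3
I_rms = 5.22e-5   # rms of the overlap integrals I of Eq. (A4), fm^-3
d = 0.45          # Woods-Saxon estimate of the level spacing, MeV (Table 3)

v_np = v0 * I_rms                              # rms matrix element of Eq. (A3)
print(f"rms matrix element ~ {v_np:.4f} MeV")  # text quotes 0.025 MeV
print(f"v_np / d ~ {v_np / d:.3f}")            # text quotes ~0.05
```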
## Appendix B Reaction theory in a non-orthogonal basis
The space of configurations used in this work is not orthogonal. This causes some conceptual issues, but it does not impose a significant computational burden in CI-based reaction theory. The theory is based on calculating the resolvent of \(H\); in an orthogonal basis it is given by
\[G=(H-E\mathds{1})^{-1} \tag{B1}\]
where \(\mathds{1}\) is the unit matrix, \(H\) is the Hamiltonian, and \(E\) is the energy of the reaction. Non-orthogonal bases also arise in the theory of spontaneous decays [38], and in electron transport theory when wave functions are built from atomic orbitals. See for example Refs. [39, 40, 41, 42, 43] for the formulation of the resolvent as commonly used in chemistry and condensed matter physics.
In a non-orthogonal basis the time-dependent Schrödinger equation reads
\[H\Psi=i\hbar N\,\frac{d}{dt}\,\Psi \tag{B2}\]
where \(N\) is the overlap matrix between basis states \(N_{ij}=\langle i|j\rangle\). The corresponding resolvent is
\[G=(H-EN)^{-1}. \tag{B3}\]
From a computational point of view there is hardly any difference from Eq. (B1). However, the couplings to the reaction channels must be treated with care.
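To illustrate the point, the following sketch (Python with NumPy/SciPy; the random \(H\) and \(N\) are synthetic stand-ins for the GCM kernels, not the matrices of this work) solves the stationary limit of Eq. (B2) as a generalized eigenvalue problem and checks it against the orthogonalized Hamiltonian \(N^{-1/2}HN^{-1/2}\) introduced below.

```python
import numpy as np
from scipy.linalg import eigh

def mpow(M, p):
    """Power of a symmetric positive-definite matrix via its eigenbasis."""
    w, U = np.linalg.eigh(M)
    return (U * w**p) @ U.T

rng = np.random.default_rng(0)
n = 5
H = rng.normal(size=(n, n)); H = 0.5 * (H + H.T)          # toy symmetric Hamiltonian
A = rng.normal(size=(n, n)); N = A @ A.T + n * np.eye(n)  # SPD overlap matrix

# Stationary states of Eq. (B2): H c = E N c, a generalized eigenvalue problem.
E_gen, C = eigh(H, N)

# The same energies from the Hamiltonian transformed to an orthogonal basis
# (Eq. (B7) below).
E_orth = np.linalg.eigvalsh(mpow(N, -0.5) @ H @ mpow(N, -0.5))
print(np.allclose(E_gen, E_orth))   # True
```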
To understand the couplings, we define a certain orthogonal basis which we call the physical basis. We denote the vector representing a wave function in that basis by \(\mathbf{v}_{\rm ph}\) and in the non-orthogonal basis by \(\mathbf{v}_{\rm gcm}\). In the GCM the dot products of the basis elements satisfy
\[\mathbf{v}_{\rm gcm}(i)^{*}\cdot\mathbf{v}_{\rm gcm}(j)=N_{ij} \tag{B4}\]
while those in the physical basis satisfy
\[\mathbf{v}_{\rm ph}(i)^{*}\cdot\mathbf{v}_{\rm ph}(j)=\delta_{ij}. \tag{B5}\]
A physical basis consistent with Eqs. (B4) and (B5) can then be defined by setting
\[\mathbf{v}_{\rm ph}(i)=\sum_{j}\left(N^{-1/2}\right)_{ij}\mathbf{v}_{\rm gcm}(j). \tag{B6}\]
This definition is not unique since the dot products are invariant under a unitary transformation of the physical basis. Indeed, an orthogonal basis is usually constructed in the GCM by diagonalizing \(N\) and using its eigenvectors as the basis. However, those basis states are not well localized with respect to the GCM coordinate.
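The construction of Eqs. (B4)-(B6) can be checked numerically. In the sketch below (Python/NumPy, with a synthetic overlap matrix), the arbitrary orthogonal factor \(Q\) makes the non-uniqueness noted above explicit.

```python
import numpy as np

def mpow(M, p):
    """Power of a symmetric positive-definite matrix via its eigenbasis."""
    w, U = np.linalg.eigh(M)
    return (U * w**p) @ U.T

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n))
N = A @ A.T + n * np.eye(n)
N /= np.sqrt(np.outer(np.diag(N), np.diag(N)))   # unit diagonal, like true overlaps

# Any set of vectors with v(i)* . v(j) = N_ij, Eq. (B4): the columns of Q N^{1/2}
# for an arbitrary orthogonal Q.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
V = Q @ mpow(N, 0.5)

# Physical basis of Eq. (B6); its columns satisfy the orthonormality of Eq. (B5).
W = V @ mpow(N, -0.5)
print(np.allclose(W.T @ W, np.eye(n)))   # True
```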
The relationship between the Hamiltonians in the physical and GCM bases can be expressed as
\[\tilde{H}=N^{-1/2}HN^{-1/2} \tag{B7}\]
or
\[H=N^{1/2}\tilde{H}N^{1/2}. \tag{B8}\]
The physical resolvent is related to the GCM resolvent by
\[\tilde{G}=\left(N^{-1/2}HN^{-1/2}-E\mathds{1}\right)^{-1}=N^{1/2}\left(H-EN\right)^{-1}N^{1/2}. \tag{B9}\]
One sees that the matrix inversion is the same as in Eq. (B1) except for the replacement \(\mathds{1}\to N\). However, the matrix \(N^{1/2}\) appears as a pre- and post-factor.
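The identity in Eq. (B9) is easily verified numerically; a minimal sketch (Python/NumPy, with a random symmetric \(H\), a synthetic positive-definite \(N\), and a complex energy to keep both inverses well defined):

```python
import numpy as np

def mpow(M, p):
    """Power of a symmetric positive-definite matrix via its eigenbasis."""
    w, U = np.linalg.eigh(M)
    return (U * w**p) @ U.T

rng = np.random.default_rng(2)
n = 5
H = rng.normal(size=(n, n)); H = 0.5 * (H + H.T)
A = rng.normal(size=(n, n)); N = A @ A.T + n * np.eye(n)
E = 1.3 + 0.05j   # small imaginary part keeps both resolvents non-singular

G_left = np.linalg.inv(mpow(N, -0.5) @ H @ mpow(N, -0.5) - E * np.eye(n))
G_right = mpow(N, 0.5) @ np.linalg.inv(H - E * N) @ mpow(N, 0.5)
print(np.allclose(G_left, G_right))   # True: the two forms in Eq. (B9) agree
```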
In our applications of CI-based reaction theory we assume that each channel is coupled to a single state (the "doorway" state) in the internal space. Taking that state to be the basis state \(d\) in the physical representation, the decay coupling matrix \(\Gamma\) has elements\({}^{4}\)
Footnote 4: A somewhat similar formula was used in Ref. [10, Eq. 16].
\[\Gamma(i,j)=N_{id}^{1/2}\,N_{jd}^{1/2}\,\tilde{\Gamma} \tag{B10}\]
where \(\tilde{\Gamma}\) is the decay width of the physical state \(d\) into the channel. Note that with this construction the transmission coefficient in the physical basis
\[T_{a,b}=\mathrm{Tr}[\tilde{\Gamma}_{a}\tilde{G}(E)\tilde{\Gamma}_{b}\tilde{G}^{\dagger}(E)] \tag{B11}\]
is transformed to
\[T_{a,b}=\mathrm{Tr}\left[\Gamma_{a}G(E)\Gamma_{b}G^{\dagger}(E)\right] \tag{B12}\]
in the GCM basis.
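The equality of Eqs. (B11) and (B12) under the construction of Eq. (B10) can be checked the same way (Python/NumPy; the matrices and the doorway widths 0.2 and 0.3 are arbitrary synthetic choices):

```python
import numpy as np

def mpow(M, p):
    """Power of a symmetric positive-definite matrix via its eigenbasis."""
    w, U = np.linalg.eigh(M)
    return (U * w**p) @ U.T

rng = np.random.default_rng(3)
n = 4
H = rng.normal(size=(n, n)); H = 0.5 * (H + H.T)
A = rng.normal(size=(n, n)); N = A @ A.T + n * np.eye(n)
E = 0.7 + 0.1j

def gamma_phys(d, width):
    """Doorway coupling in the physical basis: all the width on state d."""
    G = np.zeros((n, n)); G[d, d] = width
    return G

def gamma_gcm(d, width):
    """Eq. (B10): Gamma(i,j) = (N^{1/2})_id (N^{1/2})_jd * width."""
    s = mpow(N, 0.5)[:, d]
    return width * np.outer(s, s)

Gt = np.linalg.inv(mpow(N, -0.5) @ H @ mpow(N, -0.5) - E * np.eye(n))  # physical
G = np.linalg.inv(H - E * N)                                           # GCM, Eq. (B3)

T_phys = np.trace(gamma_phys(0, 0.2) @ Gt @ gamma_phys(n - 1, 0.3) @ Gt.conj().T)
T_gcm = np.trace(gamma_gcm(0, 0.2) @ G @ gamma_gcm(n - 1, 0.3) @ G.conj().T)
print(np.allclose(T_phys, T_gcm))   # True: Eqs. (B11) and (B12) agree
```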
There is another reason for constructing the physical basis explicitly. The distinction between the GCM and physical bases must be taken into account in Sec. III B, where we assessed the relative importance of different interaction types. There we want to start from a Hamiltonian \(\tilde{H}^{0}\) for which the transmission probability vanishes. One cannot simply set the off-diagonal elements of \(H\) to zero if the overlap matrix \(N\) connects the entrance and exit channels, even if the connection is indirect. It is the physical Hamiltonian \(\tilde{H}\) that must be diagonal. In two dimensions the construction is straightforward. Given the diagonal elements \(H(i,i)=E_{i}\), the Hamiltonian that is diagonal in the physical basis is
\[H^{0}=\begin{pmatrix}E_{1}&(E_{1}+E_{2})N_{12}/2\\ (E_{1}+E_{2})N_{12}/2&E_{2}\end{pmatrix}. \tag{B13}\]
Eq. (B13) can be viewed as a justification for the first term in Eq. (9). The construction can be carried out in higher dimensions using only linear-algebra operations, but we have no simple formula for the off-diagonal elements of \(H^{0}\). For the base Hamiltonian treated in Sec. III B, \(N\) is given by
\[N=\begin{pmatrix}1.0&e^{-1}&e^{-4}\\ e^{-1}&1.0&e^{-1}\\ e^{-4}&e^{-1}&1.0\end{pmatrix}. \tag{B14}\]
Keeping only \(V(q)\) in \(H\), \(H^{0}\) is numerically found to be
\[H^{0}=\begin{pmatrix}0.0000&0.7571&0.1537\\ 0.7571&4.0000&0.8547\\ 0.1537&0.8547&0.5000\end{pmatrix}. \tag{B15}\]
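As a consistency check (Python/NumPy), one can verify that the \(H^{0}\) of Eq. (B15) is indeed diagonal in the physical basis, to within the rounding of the printed matrix elements:

```python
import numpy as np

def mpow(M, p):
    """Power of a symmetric positive-definite matrix via its eigenbasis."""
    w, U = np.linalg.eigh(M)
    return (U * w**p) @ U.T

e = np.exp
N = np.array([[1.0,   e(-1), e(-4)],
              [e(-1), 1.0,   e(-1)],
              [e(-4), e(-1), 1.0  ]])        # Eq. (B14)
H0 = np.array([[0.0000, 0.7571, 0.1537],
               [0.7571, 4.0000, 0.8547],
               [0.1537, 0.8547, 0.5000]])    # Eq. (B15)

# Transform to the physical basis via Eq. (B7); the off-diagonal elements
# should vanish to the precision of the quoted matrix.
print(np.round(mpow(N, -0.5) @ H0 @ mpow(N, -0.5), 3))
```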